Merged
2 changes: 1 addition & 1 deletion content/_index.md
Original file line number Diff line number Diff line change
@@ -208,7 +208,7 @@ sections:
2i2c aims to support JupyterHubs on any cloud provider that offers a managed Kubernetes service.
To start, we are focusing on the major commercial cloud providers listed below.
If you would like a hub hosted on a different cloud provider, please [give us your feedback](mailto:hello@2i2c.org).
-See [our Organizational Strategy and Goals](https://compass.2i2c.org/organization/strategy.html) to learn more about our plans.
+See [our Organizational Strategy and Goals](https://compass.2i2c.org/organization/strategy) to learn more about our plans.
items:
- icon: google-cloud
2 changes: 1 addition & 1 deletion content/about/funding/index.md
@@ -33,7 +33,7 @@ To sustain and grow our operations, 2i2c receives funding from the following sou

{{% callout %}}

-[2i2c's Financial and Sustainability Strategy page](https://compass.2i2c.org/finance/strategy.html) has our full financial sustainability strategy.
+[2i2c's Financial and Sustainability Strategy page](https://compass.2i2c.org/finance/strategy) has our full financial sustainability strategy.

[Our accounting dashboards](https://2i2c.org/kpis/finances) have all our latest costs and revenue.

6 changes: 3 additions & 3 deletions content/blog/2021/q3-update/index.md
@@ -29,8 +29,8 @@ We focused on a few major areas for work, outlined below:

- **Automation across cloud providers**. We wish to serve communities that run on any of the major commercial cloud providers. We can standardize some of our infrastructure through abstractions like Kubernetes, but must still create cloud-specific deployment infrastructure as well (that Kubernetes cluster has to come from somewhere first!). In the last four months we've worked on automating Kubernetes and JupyterHub deployments on [AWS](https://github.com/2i2c-org/infrastructure/issues/627) as well as [Azure](https://github.com/2i2c-org/infrastructure/issues/512) to complement our Google Cloud deployments. We would soon like to run more hubs on this infrastructure to test how well it scales.
- **Monitoring and reporting infrastructure**. We have worked on the JupyterHub [`grafana-dashboards` project](https://github.com/jupyterhub/grafana-dashboards) to improve dashboarding around JupyterHub deployments in general, and will soon automatically deploy Grafana dashboards for our hubs so that communities have insight into what is going on in their hubs.
-- **User environment management**. We want communities to have control over the environments that are available on their hubs. We also want to encourage that our communities follow community standards for reproducible environments that can be re-used elsewhere. For this reason, we've improved the [repo2docker GitHub action](https://github.com/jupyterhub/repo2docker-action) to work with more image registries, and created a [2i2c user image template repository](https://github.com/2i2c-org/hub-user-image-template) for users to re-use for their hubs. See [the User Environment docs](https://docs.2i2c.org/admin/howto/environment.html#bring-your-own-docker-image) for more information.
-- **Support and collaboration roles**. In addition to technology changes, we have developed an alpha-level support and collaboration model for the communities we serve. Most relevant for our communities is the **community representative** role, who acts as the main point of contact with 2i2c engineers, and leads administrators on the hub to guide its customization for the community it serves. See [the user roles documentation](https://docs.2i2c.org/about/roles.html) for more information. We have also begun prototyping a [FreshDesk support model](https://docs.2i2c.org/support.html) and team processes around monitoring our support channels and responding to requests and incidents.
+- **User environment management**. We want communities to have control over the environments that are available on their hubs. We also want to encourage that our communities follow community standards for reproducible environments that can be re-used elsewhere. For this reason, we've improved the [repo2docker GitHub action](https://github.com/jupyterhub/repo2docker-action) to work with more image registries, and created a [2i2c user image template repository](https://github.com/2i2c-org/hub-user-image-template) for users to re-use for their hubs. See [the User Environment docs](https://docs.2i2c.org/admin/environment/hub-user-image-template-guide) for more information.
+- **Support and collaboration roles**. In addition to technology changes, we have developed an alpha-level support and collaboration model for the communities we serve. Most relevant for our communities is the **community representative** role, who acts as the main point of contact with 2i2c engineers, and leads administrators on the hub to guide its customization for the community it serves. See [the user roles documentation](https://docs.2i2c.org/community-lead/about/shared-responsibility) for more information. We have also begun prototyping a [FreshDesk support model](https://docs.2i2c.org/support) and team processes around monitoring our support channels and responding to requests and incidents.
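The repo2docker-action setup mentioned above can be sketched roughly as follows. This is a hypothetical minimal workflow, not the template repository's actual configuration: the image name and secret names are placeholders, and the input names should be double-checked against the action's own documentation.

```yaml
# Sketch of a GitHub Actions workflow that uses repo2docker-action to
# build and push a hub user image on every push to main.
# "myorg/my-hub-image" and the secret names are placeholders.
name: Build user image
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push with repo2docker
        uses: jupyterhub/repo2docker-action@master
        with:
          DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
          DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
          IMAGE_NAME: "myorg/my-hub-image"
```

The pushed image tag can then be set as the hub's user environment.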

### Pangeo

@@ -50,7 +50,7 @@ Finally, in addition to our major development and projects, we have also made a

We **began [a fiscal sponsorship with Code for Science and Society](https://2i2c.org/posts/2021/css-announce/)**. This provides a new organizational and legal home for 2i2c after spending nearly a year receiving [critical strategic and start-up support](https://www.icsi.berkeley.edu/icsi/news/2021/08/2i2c-new-chapter) from our previous host, [ICSI](https://www.icsi.berkeley.edu). We are excited to work with CS&S to create the business infrastructure that will power our managed JupyterHubs service.

-The 2i2c team has also been **improving our team planning and coordination processes**, so that we can more effectively execute on our mission. As a distributed team, we have the challenge of building processes for team communication, coordination, and planning that are distributed and asynchronous. If you're curious, you can learn more about our coordination processes in [our Team Compass](https://compass.2i2c.org/practices/development.html).
+The 2i2c team has also been **improving our team planning and coordination processes**, so that we can more effectively execute on our mission. As a distributed team, we have the challenge of building processes for team communication, coordination, and planning that are distributed and asynchronous.

We have **improved our organization-wide documentation** in order to make it easier to navigate between 2i2c's various sources of information. We hope that this provides more transparency into what 2i2c is up to and how it is structured, and that it allows us to build more connections between our projects and the broader community. Check out the new documentation landing site at [docs.2i2c.org](https://docs.2i2c.org).

6 changes: 3 additions & 3 deletions content/blog/2021/six-month-update/index.md
@@ -71,10 +71,10 @@ Here's a bit about each new team member.
## Governance and a code of conduct

Finally, while it's easy to get lost in technology and collaborations, 2i2c has also made important steps towards defining a stable and transparent organizational model moving forward.
-2i2c now [has a Steering Council](https://compass.2i2c.org/about/structure.html#steering-council) and an [early organizational structure](https://compass.2i2c.org/about/structure.html).
-In addition, [we've defined a one-year bootstrap strategy](https://compass.2i2c.org/organization/strategy.html) that we'll use to guide our path in the first year of 2i2c's existence.
+2i2c now [has a Steering Council](https://compass.2i2c.org/about/structure#steering-council) and an [early organizational structure](https://compass.2i2c.org/about/structure).
+In addition, [we've defined a one-year bootstrap strategy](https://compass.2i2c.org/organization/strategy) that we'll use to guide our path in the first year of 2i2c's existence.

-Finally, one of the first acts of the Steering Council has been to [adopt a Code of Conduct](https://compass.2i2c.org/code-of-conduct/index.html).
+Finally, one of the first acts of the Steering Council has been to [adopt a Code of Conduct](https://compass.2i2c.org/code-of-conduct/).
This is a set of guidelines, and a process for resolving incidents, that makes our community more inclusive, equitable, and enjoyable for all.
Creating a Code of Conduct is a crucial part of defining our organizational and community culture, and we're excited to have some explicit guidelines to support our interactions in the future!

2 changes: 1 addition & 1 deletion content/blog/2022/eddy-symposium-report/index.md
@@ -40,7 +40,7 @@ The Symposium focused on three disciplinary areas (**Exoplanets**; **Sun-Climate

Our experience with the Symposium taught 2i2c a few things.

-We learned that our engineering team can rapidly deploy interactive computing resources to support a research and education community. Along the way, we confirmed what we've been learning from Pangeo and the neuroscience communities: flexible methods to customize the software environment are necessary. We confirmed that our developing [shared responsibility model](https://docs.2i2c.org/about/service/shared-responsibility.html?highlight=shared%20responsibility), enabling domain-specific experts to provide curated toolchains for their communities while leveraging 2i2c's infrastructure expertise, is the right approach.
+We learned that our engineering team can rapidly deploy interactive computing resources to support a research and education community. Along the way, we confirmed what we've been learning from Pangeo and the neuroscience communities: flexible methods to customize the software environment are necessary. We confirmed that our developing [shared responsibility model](https://docs.2i2c.org/community-lead/about/shared-responsibility), enabling domain-specific experts to provide curated toolchains for their communities while leveraging 2i2c's infrastructure expertise, is the right approach.

We learned that managing access to the hub using membership in a GitHub organization works but involves some toil, since organizers had to work through the GitHub invitation process for each participant. We are exploring other ways to systematically grant event participants access to a hub.
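Restricting hub access by GitHub organization membership, as described above, is typically wired up through JupyterHub's GitHub authenticator. A minimal sketch follows; the organization name, callback URL, and credentials are placeholders rather than details from this event, and option names should be verified against the oauthenticator documentation.

```python
# jupyterhub_config.py -- sketch only; "my-event-org" and the URLs are
# placeholders, not details from the events described in this post.
c.JupyterHub.authenticator_class = "github"

c.GitHubOAuthenticator.oauth_callback_url = "https://hub.example.org/hub/oauth_callback"
c.GitHubOAuthenticator.client_id = "<github-oauth-app-client-id>"
c.GitHubOAuthenticator.client_secret = "<github-oauth-app-client-secret>"

# Only members of this GitHub organization can log in -- which is why
# organizers must first invite every participant to the organization.
c.GitHubOAuthenticator.allowed_organizations = {"my-event-org"}
```

This is the configuration pattern that creates the invitation toil: each participant must accept an organization invite before the authenticator will let them in.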

@@ -46,7 +46,7 @@ With that in mind, here are a few ideas we have in mind for goals that will driv

## How does 2i2c provide mentorship/onboarding?

-You can find [our onboarding process in our Team Compass](https://compass.2i2c.org/get-started.html).
+You can find [our onboarding process in our Team Compass](https://compass.2i2c.org/get-started).
This roughly comes down to choosing an "Onboarding Champion" for the new team member, to help walk them through our team processes and get them access to the right information and accounts.
However, 2i2c is quite young, so it has only had a few iterations of onboarding new team members.
We look forward to improving this process further via this new hire.
@@ -89,7 +89,7 @@ In this meeting, we discussed two major challenges we continue to iterate on:
2. **Building a distributed organization from scratch**. The other major challenge we've faced is simply the act of creating an organization from the ground up.
We have worked together for many years in open source communities, but there's a new degree of complexity when you're all working on the _same_ service and development efforts.
Throw in a largely asynchronous team split across many time zones, and there are a lot of coordination and planning challenges to overcome.
-We have tried many things over the past year (see [our latest team practices in the team compass](https://compass.2i2c.org/practices/index.html), but there is still a lot of improvement to make.
+We have tried many things over the past year (see [our latest team practices in the team compass](https://compass.2i2c.org/practices/)), but there is still a lot of improvement to make.

## How crucial is a deep-seated knowledge of Jupyter for this role?

2 changes: 1 addition & 1 deletion content/blog/2022/q1-update/index.md
@@ -33,7 +33,7 @@ Here are a few highlights:

## Communities we've served and lessons learned

-As described in [our Managed Hub Services strategy](https://compass.2i2c.org/organization/strategy.html), our goals for this phase of our organization are to balance _serving communities of practice_ and _learning where we can improve our infrastructure and practices_.
+As described in [our Managed Hub Services strategy](https://compass.2i2c.org/organization/strategy), our goals for this phase of our organization are to balance _serving communities of practice_ and _learning where we can improve our infrastructure and practices_.
With that in mind, here are a few highlights of communities we've served, and what we've learned from it:

- **We grew a hub for [the University of Toronto](https://jupyter.utoronto.ca/) to around 4000 monthly users**. This has taught us a lot about where our support and operations can and cannot scale, and where we have gaps in our sustainability / pricing model.
10 changes: 3 additions & 7 deletions content/blog/2022/q3-update/index.md
@@ -49,11 +49,7 @@ We also ran hubs for several **community events**:
- Eddy Symposium: [infrastructure#467](https://github.com/2i2c-org/team-compass/issues/467)
- Allen Institute Summer Workshop on the Dynamic Brain [infrastructure#1621](https://github.com/2i2c-org/infrastructure/issues/1621)

For a recap of one of these events, see our recent [blog post on the Jack Eddy symposium](https://2i2c.org/blog/2022/eddy-symposium-report).

{{% callout note %}}
If you are interested in partnering with 2i2c to have your own managed JupyterHub, please contact us at `partnerships@2i2c.org`.
-We have a shared cluster on Google Cloud, with plans to deploy one on AWS soon, and dedicated clusters can be run on any major cloud provider. Please see [our service documentation](https://docs.2i2c.org/about/service/index.html) for more details.
+Please see [our service documentation](https://docs.2i2c.org/community-lead/about/service-model) for more details.
{{% /callout %}}

## Organization wide updates
@@ -84,7 +80,7 @@ Here's a brief breakdown:

**We expanded our shared clusters to new cloud providers and regions**. We now have shared clusters deployed on Google Cloud Platform in `us-central1-b` and `europe-west2`.

-**We defined an incident commander process**. This will allow us to coordinate and respond to major outages in our cloud infrastructure more efficiently. See [our incident response documentation](https://compass.2i2c.org/projects/managed-hubs/incidents.html) for more information.
+**We defined an incident commander process**. This will allow us to coordinate and respond to major outages in our cloud infrastructure more efficiently. See [our incident response documentation](https://compass.2i2c.org/projects/managed-hubs/incidents) for more information.

**We improved our cloud usage monitoring infrastructure**. We've deployed [a centralized Grafana Dashboard](https://github.com/2i2c-org/infrastructure/issues/328) that aggregates cloud usage across all of our partner communities, and allows us to keep track of any unexpected behavior or outages across them all.

@@ -107,5 +103,5 @@ Many thanks to the 2i2c team, our partner communities, our funders, and the many

{{% callout note %}}
If you are interested in partnering with 2i2c to have your own managed JupyterHub, please contact us at `partnerships@2i2c.org`.
-We have a shared cluster on Google Cloud, with plans to deploy one on AWS soon, and dedicated clusters can be run on any major cloud provider. Please see [our service documentation](https://docs.2i2c.org/about/service/index.html) for more details.
+We have a shared cluster on Google Cloud, with plans to deploy one on AWS soon, and dedicated clusters can be run on any major cloud provider. Please see [our service documentation](https://docs.2i2c.org/community-lead/about/service-model) for more details.
{{% /callout %}}
4 changes: 2 additions & 2 deletions content/blog/2023/2022-year-in-review/index.md
@@ -74,15 +74,15 @@ You can [read a write-up about these improvements in this blog post](https://2i2

Our goal is to frame each community hub as a partnership with a clear breakdown of responsibility to give communities more agency over the infrastructure and service.
The Shared Responsibility Model provides a framework for assigning responsibility for various tasks with our partner communities.
-See [our Shared Responsibility Model docs here](https://docs.2i2c.org/about/service/shared-responsibility.html).
+See [our Shared Responsibility Model docs here](https://docs.2i2c.org/community-lead/about/shared-responsibility).

### We defined a formal Incident Response process

Cloud infrastructure inevitably degrades over time, and running ongoing services is largely about responding to issues and resolving them quickly.
To do so, we need clear processes to follow in order to quickly identify and respond to major incidents in the infrastructure.
Our Incident Response process defines formal team roles and alerting mechanisms that are served by [PagerDuty](https://www.pagerduty.com/), following best-practices in industry.
This will make our service more reliable and make our processes more transparent for our partner communities.
-[Here's our current incident response process](https://compass.2i2c.org/projects/managed-hubs/incidents.html).
+[Here's our current incident response process](https://compass.2i2c.org/projects/managed-hubs/incidents).

### We expanded our service offerings to include community and workflow guidance

2 changes: 1 addition & 1 deletion content/blog/2023/open-source-funding-principles/index.md
@@ -15,7 +15,7 @@ draft: false
---

_This is a brainstorm to consider the principles and guidelines that 2i2c should follow in defining its strategy towards open source communities.
-See [our open source policy documentation](https://compass.2i2c.org/open-source/index.html) for the product of this brainstorm._
+See [our open source policy documentation](https://compass.2i2c.org/open-source/) for the product of this brainstorm._

Over the past year the 2i2c team has focused its efforts on deploying, configuring, running, and managing cloud infrastructure that supports open source workflows in research and education. We've also done a lot of _upstream contribution_ as a part of our work.

6 changes: 3 additions & 3 deletions content/blog/2024/aws-cost-attribution/index.md
@@ -21,7 +21,7 @@ Note that this feature is currently available to AWS hosted hubs only and will b

## Accessing the cloud cost dashboard

-Community Champions can view the Cloud Cost dashboard from their Grafana instance (please see the [Service Guide](https://docs.2i2c.org/admin/howto/monitoring/grafana-dashboards/#getting-a-grafana-account) for how to gain access).
+Community Champions can view the Cloud Cost dashboard from their Grafana instance (please see the [Service Guide](https://docs.2i2c.org/admin/monitoring/grafana-dashboards#getting-a-grafana-account) for how to gain access).

From the main menu of Grafana, navigate to *Dashboards > Cloud cost dashboards > Cloud cost attribution* to view the dashboard.

@@ -38,13 +38,13 @@ The dashboard is made of several panels:

{{< video autoplay="true" loop="true" src="demo.mp4" >}}

-For more detailed information on the data that each panel displays, please consult our [Service Guide](https://docs.2i2c.org/admin/howto/monitoring/cost-attribution/#understanding-the-cloud-cost-dashboard) for reference.
+For more detailed information on the data that each panel displays, please consult our [Service Guide](https://docs.2i2c.org/admin/monitoring/cost-users#understanding-the-cloud-cost-dashboard) for reference.

## Sharing cost reports

The dashboard can be shared with other community members and stakeholders so they can understand usage and cost patterns. Community Champions can export data to a CSV file, or they can generate a snapshot of the Grafana dashboard and share a public link.

-For instructions on how to export data from the dashboard, please see our [Service Guide](https://docs.2i2c.org/admin/howto/monitoring/cost-attribution/#sharing-cost-reports) for reference.
+For instructions on how to export data from the dashboard, please see our [Service Guide](https://docs.2i2c.org/admin/monitoring/cost-users#sharing-cost-reports) for reference.

## Next steps
