
docs/self-host: add k8s requirements & other fixes #2222

Merged
merged 5 commits into master from k8s_cluster_requirements on Oct 14, 2021

Conversation

guidoiaquinti (Contributor):

Changes

  • add Kubernetes cluster requirement snippet
  • attach the new snippet to all the relevant pages
  • use alphabetical order for the cloud providers
  • refactor the DigitalOcean page

Checklist

- [AWS](/docs/self-host/deploy/aws)
- [Azure](/docs/self-host/deploy/azure)
- [Using Helm Charts](/docs/self-host/deploy/other)
- [DigitalOcean](/docs/self-host/deploy/digital-ocean)
Contributor Author:

Sort alphabetically

Contributor:

We actually sorted them that way on purpose. DigitalOcean is easiest to get started with and most transparent about pricing, but maybe we should explicitly say this here instead.

Contributor Author:

I think this could be subjective rather than objective. It's possible that some of our users find the pricing and setup of Azure or Oracle Cloud easier than DigitalOcean (maybe because they are already familiar with the platform).

I’m inclined to keep our docs as impartial as possible if you agree.

fuziontech (Member) left a comment:

:shipit:

import InstallingSnippet from './snippets/installing'
import UpgradingSnippet from './snippets/upgrading'
import UninstallingSnippet from './snippets/uninstalling'
import TryUnsecureSnippet from './snippets/tryunsecure'

First, we need to set up a Kubernetes Cluster, see [Setup EKS - eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Follow the "Managed nodes - Linux" guide. The default nodes (2x m5.large) work well for running PostHog.
First, we need to set up a Kubernetes cluster (see the official AWS [documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) for more info).
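For reference, this is roughly what the eksctl route boils down to. A minimal sketch, assuming illustrative cluster/nodegroup names and region, with the 2x m5.large sizing from the previous wording of this page (none of these values are prescribed by the docs change itself):

```bash
# Sketch: create an EKS cluster with a managed node group of 2x m5.large.
# Cluster/nodegroup names and the region are placeholders.
eksctl create cluster \
  --name posthog \
  --region us-east-1 \
  --nodegroup-name posthog-nodes \
  --node-type m5.large \
  --nodes 2 \
  --managed
```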
Contributor:

We are missing the part about how to get the expandable volume. This should be documented to make users' lives easier (& we should test that we can make this work).

Contributor Author:

I don't think we should go down the route of copying and pasting volume-type-specific how-to steps from third-party documentation, as they'll likely be out of date the second we merge this PR.

These are the volume types currently supported:

  • gcePersistentDisk
  • awsElasticBlockStore
  • Cinder
  • glusterfs
  • rbd
  • Azure File
  • Azure Disk
  • Portworx
  • FlexVolume
  • CSI

Should we document, and keep up to date, the procedures for enabling the setting for all of them?

I think it's important to note that for PostHog we recommend using storage classes with expandable volumes, but it's then up to our users to decide if and how they want to implement that.
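For context, checking what a cluster currently offers is a one-liner; a minimal sketch using standard kubectl output options (not part of the docs change itself):

```bash
# Sketch: list storage classes with their provisioner and whether they allow volume expansion.
kubectl get storageclass \
  -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,EXPANDABLE:.allowVolumeExpansion
```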

Contributor:

This is in the same spirit as the suggestion for users to use DO: we want to avoid making it seem complicated to get PostHog up and running; if someone knows what they are doing, they will do it anyway.

So in that spirit I do think it's worth it for us to document this explicitly for each platform, but we don't have to copy their documentation; we can just link to the right place, as we did for cluster creation. That can potentially get out of date, in which case a user will ask about it in the users Slack and we'll update it.

Btw, one of our goals is to minimize the amount of time it takes for someone to spin up & maintain their self-hosted instance (we haven't defined metrics for this yet, but <10 min average install time and <15 min average monthly maintenance time seem like good goals).

Member:

This sounds like it could be a separate PR to outline the volume types we support, with documentation.
The only volume types I would support here are:

  • gcePersistentDisk
  • awsElasticBlockStore
  • Azure Disk

Anything beyond these and the user is using a stack that is going to be pretty custom.

Let's land this PR and add a todo to document these. No reason to hold up shipping this.
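As an illustration of what such a follow-up doc could contain for the awsElasticBlockStore case, here is a minimal sketch of an expandable EBS-backed storage class (the class name "posthog-expandable" and the gp2 volume type are assumptions for the example, not chart defaults):

```bash
# Sketch: create an expandable EBS-backed storage class.
# The class name and volume type are illustrative only.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: posthog-expandable
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true
EOF
```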

import InstallingSnippet from './snippets/installing'
import UpgradingSnippet from './snippets/upgrading'
import UninstallingSnippet from './snippets/uninstalling'
import TryUnsecureSnippet from './snippets/tryunsecure'

First, we need to set up a Kubernetes Cluster, see [Creation with Azure portal](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal#create-an-aks-cluster). Make sure your cluster has enough resources to run PostHog (total minimum 4 vcpu & 8GiB RAM).
First, we need to set up a Kubernetes cluster (see the official Azure [documentation](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal#create-an-aks-cluster) for more info).
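For reference, a minimal sketch of an az CLI call that would meet the 4 vCPU / 8 GiB minimum quoted above (resource group, cluster name, and VM size are assumptions; two Standard_D2s_v3 nodes give 4 vCPU / 16 GiB in total):

```bash
# Sketch: create an AKS cluster sized above the stated minimum.
# Resource group and cluster name are placeholders.
az aks create \
  --resource-group posthog-rg \
  --name posthog \
  --node-count 2 \
  --node-vm-size Standard_D2s_v3 \
  --generate-ssh-keys
```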
Contributor:

Same question here: how can we get a qualifying cluster? Is following the instructions good enough or not? cc @fuziontech to set up an Azure account for the PostHog team.

Contributor Author:

What's your suggestion on this? Should we add more documentation on top of the official Azure documentation?

Contributor:

My suggestion, very specifically, is for every platform-specific guide:

  1. Test what the default is.
  2. Add to the docs a sentence along the lines of "At the time of writing, by default this platform <uses/doesn't use> expandable volumes."
  3. If expandable volumes are not the default, figure out how to enable them and expand the docs on how to do it.

Additionally, in the "other platforms" guide I'd call out that this is something users would want to check explicitly, as many platforms' default is non-expandable.
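For step 3, when a platform's default class turns out to be non-expandable, the change can often be as small as patching the class in place. A minimal sketch (the class name "default" is a placeholder, not a verified platform default):

```bash
# Sketch: enable volume expansion on an existing storage class.
# Replace "default" with the class name reported by `kubectl get storageclass`.
kubectl patch storageclass default -p '{"allowVolumeExpansion": true}'
```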

Member:

Let's simply state that for all platforms outside DigitalOcean you must ensure that Kubernetes supports expandable volumes. We only have so many resources to spend on documentation here, and the only platform I think we can all agree should be as turnkey as possible is DigitalOcean. For the rest you are going to need some sort of understanding of what is going on in the K8s infrastructure, or you will have a bad time.

Options for deploying PostHog, sorted by how familiar with K8s you are:

  1. Cloud
  2. DigitalOcean
    -----Should have baseline understanding of K8s beyond here-----
  3. AWS/GCP/Azure
  4. Rest of World

Comment on lines +2 to +4
export INGRESS_IP=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.status.loadBalancer.ingress[0].ip}") && \
echo "\n-----\n" && \
echo "Your PostHog installation is available at: http://$INGRESS_IP" && \
Contributor:

We need to get the URL here, and I don't think this does that; but if nothing else, the variable name is confusing.

Contributor Author:

Which URL? This command will output something like:

-----

Your PostHog installation is available at: http://213.195.116.118

-----

Except for the output format, I didn't change variable names or anything from the previous version. What do you mean by "we need to get the URL here"?

Contributor:

If they have already set up TLS they should access PostHog via the hostname, not the IP, e.g. http://app.posthog.com instead of http://104.22.58.181, especially because it's possible to forbid direct IP access.

Contributor:

So there are two different things, and separately: accessing your PostHog instance if TLS was set up, where arguably we can just say to navigate to your hostname.

Contributor Author:

👍 but from what I saw we never rendered the hostname, even before this PR.
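If we did want to surface it, a minimal sketch of how the snippet could prefer the hostname and fall back to the IP (assuming the ingress rule actually sets a host; the variable names are illustrative):

```bash
# Sketch: print the ingress hostname when one is configured, otherwise the load balancer IP.
export INGRESS_HOST=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.spec.rules[0].host}")
export INGRESS_IP=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo "Your PostHog installation is available at: http://${INGRESS_HOST:-$INGRESS_IP}"
```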

Member:

This would be a nice-to-have, definitely not a blocker for getting this PR in.

guidoiaquinti (Contributor, Author) commented Oct 14, 2021:

@tiina303 I think I've addressed most (if not all) of your comments. Let me know what you think about this latest version. Thank you! 🙇

tiina303 (Contributor) commented Oct 14, 2021:

Let's land this and follow up with separate PRs (see attached below):

tiina303 merged commit 7396b80 into master on Oct 14, 2021
tiina303 deleted the k8s_cluster_requirements branch on October 14, 2021 at 18:00
fuziontech (Member):
I'm a slowpoke on this one, but I added some opinions in case you happen to be curious about my take.
