docs/self-host: add k8s requirements & other fixes #2222
Conversation
- [AWS](/docs/self-host/deploy/aws)
- [Azure](/docs/self-host/deploy/azure)
- [Using Helm Charts](/docs/self-host/deploy/other)
- [DigitalOcean](/docs/self-host/deploy/digital-ocean)
Sort alphabetically
We actually sorted them that way on purpose. DigitalOcean is easiest to get started with and most transparent about pricing, but maybe we should explicitly say this here instead.
I think this could be subjective rather than objective. Is it possible that our users find the pricing and setup of Azure or Oracle Cloud easier than DigitalOcean (maybe because they are already familiar with those platforms)?
I'm inclined to keep our docs as impartial as possible, if you agree.
contents/docs/self-host/deploy/snippets/cluster-requirements.mdx
import InstallingSnippet from './snippets/installing'
import UpgradingSnippet from './snippets/upgrading'
import UninstallingSnippet from './snippets/uninstalling'
import TryUnsecureSnippet from './snippets/tryunsecure'

- First, we need to set up a Kubernetes Cluster, see [Setup EKS - eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Follow the "Managed nodes - Linux" guide. The default nodes (2x m5.large) work well for running PostHog.
+ First, we need to set up a Kubernetes cluster (see the official AWS [documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) for more info).
We are missing the part about how to get an expandable volume. This should be documented to make users' lives easier (and we should test that we can make this work).
I don't think we should go the route of copying and pasting volume-type-specific how-to steps from third-party documentation, as they'll likely be out of date the second after we merge this PR.
These are the volume types currently supported:
- gcePersistentDisk
- awsElasticBlockStore
- Cinder
- glusterfs
- rbd
- Azure File
- Azure Disk
- Portworx
- FlexVolume
- CSI

Should we document, and keep up to date, the procedure for enabling the setting on all of them?
I think it’s important to note that for PostHog we recommend the use of storage classes with expandable volumes, but then it’s up to our users to decide if and how they want to implement that.
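To make that recommendation concrete, here's a minimal sketch of what a storage class with expansion enabled looks like (the class name and provisioner are illustrative assumptions, not PostHog guidance; the line that matters on any platform is `allowVolumeExpansion: true`):

```yaml
# Illustrative sketch, not platform-specific guidance: a StorageClass whose
# PersistentVolumeClaims can be resized after creation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd              # hypothetical name
provisioner: kubernetes.io/aws-ebs  # substitute your platform's provisioner
allowVolumeExpansion: true          # the setting being discussed here
```

Whether and how each platform exposes this would then be left to the user to decide.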
In the same spirit as the suggestion for users to use DO: we want to avoid making it seem complicated to get PostHog up and running; if someone knows what they are doing, they will do it anyway.
So in that spirit I do think it's worth documenting this explicitly for each platform, but we don't have to copy their documentation; we can just link to the right place, similarly to what we did for cluster creation. Yes, that will potentially get out of date, and then a user will ask about it in the users Slack and we'll update it.
Btw, one of our goals is to minimize the amount of time it takes for someone to spin up and maintain their self-hosted instance (we haven't defined metrics for this yet, but <10 min average install time and <15 min average monthly maintenance time seem like good goals).
This sounds like it could be a separate PR to outline the Volumes we support with documentation. The only Volume types I would support here are:
- gcePersistentDisk
- awsElasticBlockStore
- Azure Disk

Anything beyond these and the user is using a stack that is going to be pretty custom.
Let's land this PR and add a todo to document these. No reason to hold up shipping this.
import InstallingSnippet from './snippets/installing'
import UpgradingSnippet from './snippets/upgrading'
import UninstallingSnippet from './snippets/uninstalling'
import TryUnsecureSnippet from './snippets/tryunsecure'

- First, we need to set up a Kubernetes Cluster, see [Creation with Azure portal](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal#create-an-aks-cluster). Make sure your cluster has enough resources to run PostHog (total minimum 4 vcpu & 8GiB RAM).
+ First, we need to set up a Kubernetes cluster (see the official Azure [documentation](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal#create-an-aks-cluster) for more info).
Same here: how can we get a qualifying cluster? Is following the instructions good enough or not? cc @fuziontech to set up an Azure account for the PostHog team.
What's your suggestion on this? Should we add more documentation on top of the official Azure documentation?
My suggestion, very specifically, is for every platform-specific guide:
- Test what the default is.
- Add a sentence to the docs along the lines of "At the time of writing, by default this <uses/doesn't use> expandable volumes".
- If it's not the default, figure out how to enable it and expand on how to do it.
Additionally, in the "other platforms" guide I'd call out that this is something they'd want to check explicitly, as many platforms' default is non-expandable.
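As a hedged sketch of what that "test what the default is" step could look like, the snippet below reports whether each storage class allows volume expansion. It assumes `jq` is installed, and the inline sample JSON stands in for real `kubectl get storageclass -o json` output:

```shell
# Hypothetical check: list storage classes and whether they allow expansion.
# In a real cluster you would pipe `kubectl get storageclass -o json`
# instead of this sample document.
SC_JSON='{"items":[{"metadata":{"name":"gp2"},"allowVolumeExpansion":true},{"metadata":{"name":"standard"}}]}'
echo "$SC_JSON" | jq -r '.items[] | "\(.metadata.name) expandable=\(.allowVolumeExpansion // false)"'
```

Classes that omit `allowVolumeExpansion` are reported as `expandable=false`, which is exactly the non-expandable default being warned about.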
Let's simply state that for all platforms outside DigitalOcean you must ensure that Kubernetes supports expandable volumes. We only have so many resources to spend on documentation here, and the only platform I think we can all agree should be as turnkey as possible is DigitalOcean. For the rest, you are going to need some understanding of what is going on in the K8s infrastructure or you will have a bad time.
Options for deploying PostHog, sorted by how familiar with K8s you need to be:
- Cloud
- DigitalOcean
- ----- Should have a baseline understanding of K8s beyond here -----
- AWS/GCP/Azure
- Rest of World
```shell
export INGRESS_IP=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.status.loadBalancer.ingress[0].ip}") && \
echo "\n-----\n" && \
echo "Your PostHog installation is available at: http://$INGRESS_IP" && \
```
we need to get the URL here & I don't think this does that, but if nothing else the variable name is confusing
Which URL? This command will output something like:

```
-----
Your PostHog installation is available at: http://213.195.116.118
-----
```

Except for the output format I didn't change variable names or anything from the previous version. What do you mean by "we need to get the URL here"?
If they've already set up TLS, they should access PostHog via the hostname, not the IP, e.g. http://app.posthog.com instead of http://104.22.58.181, especially because it's possible to forbid direct IP access.
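For what it's worth, a hedged sketch of preferring the hostname with an IP fallback (it assumes `jq` is available, and the inline JSON stands in for real `kubectl get --namespace posthog ingress posthog -o json` output):

```shell
# Hypothetical sketch: use the ingress hostname if one is published,
# otherwise fall back to the load balancer IP. The sample JSON below
# stands in for real `kubectl get ingress` output.
INGRESS_JSON='{"status":{"loadBalancer":{"ingress":[{"ip":"104.22.58.181"}]}}}'
INGRESS_ENDPOINT=$(echo "$INGRESS_JSON" | jq -r '.status.loadBalancer.ingress[0] | .hostname // .ip')
echo "Your PostHog installation is available at: http://$INGRESS_ENDPOINT"
```

With a TLS-enabled ingress that publishes a hostname, the same expression would print the hostname instead of the IP.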
So there are two different things: getting the ingress address right after install, and separately accessing your PostHog instance if TLS was set up, where arguably we can just say "navigate to your hostname".
👍 but from what I saw, we never rendered the hostname, even before this PR.
This would be a nice to have - definitely not a blocker for getting this PR in
@tiina303 I think I've addressed most (if not all) of your comments. Let me know what you think about this last version. Thank you! 🙇
Let's land this and follow up with separate PRs (see attached below):
I'm a slowpoke on this one, but I added some opinions in case you happened to be curious about my take.