diff --git a/docs/index.md b/docs/index.md
index 798519ee..01d4f84e 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -48,10 +48,11 @@ auto-scaling deployment with simple APIs.
 
 ### Features Coming Soon
 
-**Kubernetes Installation Documentation**: We are working hard to document the installation and
-maintenance of inference and fine-tuning functionality on your infrastructure.
-For now, our documentation covers using our client libraries to access Scale's
-hosted infrastructure.
+**Kubernetes Installation Enhancements**: We are working hard to enhance the
+installation and maintenance of inference and fine-tuning functionality on
+your infrastructure. For now, our documentation covers _experimental_ libraries
+to [deploy language models on your infrastructure](guides/self_hosting)
+and libraries to access Scale's [hosted infrastructure](https://spellbook.scale.com).
 
 **Fast Cold-Start Times**: To prevent GPUs from idling, LLM Engine automatically
 scales your model to zero when it's not in use and scales up