diff --git a/content/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama-frontend.md b/content/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama-frontend.md
index ffae9d430a..8f2442978b 100644
--- a/content/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama-frontend.md
+++ b/content/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama-frontend.md
@@ -74,3 +74,24 @@ Collecting usage statistics. To deactivate, set browser.gatherUsageStats to fals
 Open the local URL from the link above in a browser and you should see the chatbot running:
 
 ![Chatbot](images/chatbot.png)
+
+{{% notice Note %}}
+If you are running the server on a cloud instance, the local URL might not be reachable from your browser when you start the frontend server. If this happens, stop the frontend server and reconnect to your instance using SSH port forwarding, as shown below. After reconnecting, activate the `venv` and start the Streamlit frontend server again.
+
+```sh
+# Replace /path/to/your/key.pem with your .pem file and <public-ip> with your machine's public IP.
+# The -L flag forwards local port 8501 (Streamlit's default) to the same port on the instance.
+ssh -i /path/to/your/key.pem -L 8501:localhost:8501 ubuntu@<public-ip>
+source torch_env/bin/activate
+cd torchchat
+streamlit run browser/browser.py
+```
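+
+If local port 8501 is already in use on your machine, you can map a different local port to the instance instead. This is a minimal sketch that assumes Streamlit is serving on its default port 8501 on the remote side; the local port 9000 is an arbitrary example:
+
+```sh
+# Hypothetical example: forward local port 9000 to Streamlit's default port 8501 on the instance
+ssh -i /path/to/your/key.pem -L 9000:localhost:8501 ubuntu@<public-ip>
+# Then open http://localhost:9000 in your local browser
+```
+{{% /notice %}}
\ No newline at end of file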