From 8aa2626d60e1e315e0116554822f44411e4e54f7 Mon Sep 17 00:00:00 2001
From: Christopher Moroney
Date: Mon, 20 Oct 2025 10:35:23 -0700
Subject: [PATCH] solution to localhost not connecting

---
 .../pytorch-llama/pytorch-llama-frontend.md  | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/content/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama-frontend.md b/content/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama-frontend.md
index ffae9d430a..8f2442978b 100644
--- a/content/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama-frontend.md
+++ b/content/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama-frontend.md
@@ -74,3 +74,15 @@ Collecting usage statistics. To deactivate, set browser.gatherUsageStats to fals
 Open the local URL from the link above in a browser and you should see the chatbot running:
 
 ![Chatbot](images/chatbot.png)
+
+{{% notice Note %}}
+If you are running the server on a cloud instance, the local URL may not connect because Streamlit listens only on the remote machine's localhost. If this happens, stop the frontend server and reconnect to your instance using SSH port forwarding, as shown below. After reconnecting, activate the virtual environment and restart the Streamlit frontend server.
+
+```sh
+# Replace the key path and <public-ip> with your .pem file and your instance's public IP address
+ssh -i /path/to/your/key.pem -L 8501:localhost:8501 ubuntu@<public-ip>
+source torch_env/bin/activate
+cd torchchat
+streamlit run browser/browser.py
+```
+{{% /notice %}}
\ No newline at end of file