Experiment with Ollama and LangChain.
Enter development shell:
devbox shell
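devbox reads its environment from a devbox.json at the repository root. A hypothetical sketch of what that file might look like for this setup (package names, versions, and app.py are assumptions; only the start-chatgpt script name comes from this document):

```json
{
  "packages": ["ollama@latest", "python@3.11"],
  "shell": {
    "scripts": {
      "start-chatgpt": ["streamlit run app.py"]
    }
  }
}
```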
Start the Ollama server (in a separate terminal):
ollama serve
Pull the model (first time only):
ollama pull llama3.2:3b
Start the Streamlit app:
devbox run start-chatgpt
The application will be available at http://localhost:8501
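The Streamlit app talks to the model through LangChain, but the Ollama server also exposes a plain REST API on its default port 11434, which is handy for quick checks. A minimal stdlib-only sketch (the payload shape follows Ollama's /api/generate endpoint; the model tag matches the pull step above):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def build_generate_request(model, prompt):
    # Payload for Ollama's /api/generate endpoint; stream=False asks
    # for a single JSON object instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model, prompt):
    # Requires `ollama serve` to be running and the model to be pulled.
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (only with the server running):
#   print(generate("llama3.2:3b", "Say hello in one word."))
```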