The goal of this project is to create an autonomous LLM system that can gather information about any company listed on the NYSE and then, using that information, provide coherent, well-reasoned justification for a particular trade on a stock. This is implemented through a series of prompt-engineering strategies that sequentially guide the model through its research.
This code uses Ollama with open-source models for inference. It can be reworked to call a hosted API instead, or to use whatever model you prefer, by modifying the `query_local_llm()` function to supply an API key, the appropriate request payload, and the endpoint address.
- Nvidia GPU with at least 8 GB of VRAM (RTX 3060 or better recommended)
- Ubuntu 20.04+ on WSL2
- Nvidia CUDA:
```sh
sudo apt install nvidia-cuda-toolkit
```
- Ollama:
```sh
curl -fsSL https://ollama.com/install.sh | sh
```
For help, see the install guide: Guide
Install the model of your choice: Models
This will download the model and run it in the terminal:
```sh
ollama run llama3
```
```sh
./run.py --mode=2
./run.py --mode=1
```
- You will need to create a directory:
  `/Stonks/LLM_bot/symbols`
- Then add a file called `positions.txt` with a list of ticker symbols in it.

Or run with explicit options:
```sh
python /LLM_bot/run.py --mode=2 --timeline="two month" --bias="energy sector"
```
- Note: only mode 2 uses the `--bias` flag.
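The exact `positions.txt` format is not spelled out beyond "a list of ticker symbols"; the sketch below assumes one symbol per line, with blank lines ignored and symbols upper-cased. The loader function name and the relative `symbols` directory (standing in for `/Stonks/LLM_bot/symbols`) are illustrative, not the project's actual code.

```python
from pathlib import Path

def load_symbols(path) -> list[str]:
    """Read ticker symbols from positions.txt, one per line.

    Blank lines are skipped and symbols are upper-cased; the precise
    format the bot expects is an assumption here.
    """
    lines = Path(path).read_text().splitlines()
    return [line.strip().upper() for line in lines if line.strip()]

# Example: build the expected layout and read it back.
base = Path("symbols")  # stands in for /Stonks/LLM_bot/symbols
base.mkdir(parents=True, exist_ok=True)
(base / "positions.txt").write_text("AAPL\nmsft\n\nNVDA\n")
print(load_symbols(base / "positions.txt"))  # -> ['AAPL', 'MSFT', 'NVDA']
```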
Contributions are welcome! Please run the tests to ensure you are not breaking the existing code base.
Minimum testing:
```sh
python run.py --testing
```
- Ensure the above script execution completes without crashing.
- Note: this could take up to 30 minutes.
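This smoke check can also be scripted. The sketch below is an assumption about how a contributor might wrap it, treating a clean exit status as "pass"; the script path, timeout, and function name are all hypothetical.

```python
import subprocess
import sys

def smoke_test(script: str = "run.py", timeout: int = 1800) -> bool:
    """Run the bot's self-test mode; return True if it exits cleanly.

    The 30-minute timeout mirrors the documented worst-case runtime.
    """
    try:
        result = subprocess.run(
            [sys.executable, script, "--testing"],
            timeout=timeout,
        )
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False
    return result.returncode == 0
```

Wiring this into CI would make "completes without crashing" an enforced check rather than a manual step.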
Open source under the MIT License: link