LLMs store knowledge as a static representation in their weights. Our experiments show that smaller (7B) models can compete with higher-capacity models such as ChatGPT and Llama 2 in perceived utility and F1 score when they access search engines to augment their reasoning during in-context learning.
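The retrieval-augmentation step can be sketched as follows. This is a minimal illustration, not the repo's actual pipeline: the snippet list stands in for results from a search engine (e.g. via SerpAPI), and the function only shows how retrieved evidence is packed into an in-context prompt before it is sent to the model.

```python
def build_augmented_prompt(question, snippets):
    """Prepend retrieved search snippets to the question so the model can
    ground its answer in fresh evidence rather than its static weights."""
    # Number each snippet so the model can cite the evidence it used.
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer the question using the search results below.\n\n"
        f"Search results:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example with hand-written snippets in place of live search results.
snippets = [
    "The Eiffel Tower is 330 metres tall.",
    "It was completed in 1889 for the World's Fair.",
]
prompt = build_augmented_prompt("How tall is the Eiffel Tower?", snippets)
print(prompt)
```

The resulting prompt would then be passed to the LLM's completion API; the numbered-snippet layout is one common convention, not the only option.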
- Create a `.env` file with your API keys:

```
touch .env
```

`.env` contents:

```
openai_api_key = ''
serpapi_api_key = ''
```
- Run with:

```
pip install -r requirements.txt
python run/run_online.py
```