This project explores applications of LLMs, using a "library (bibliotheca)" as its context.
- Python 3.11
```shell
# clone the repository
git clone https://github.com/dujm/library.git

# remove my git directory
rm -rf .git/

# create a new git repository if you need one
# git init
```
```shell
# create an env with Python 3.11 (see file `environment.yml`)
conda env create --name library --file=environment.yml

# activate env
conda activate library

# add the conda environment to Jupyter Lab
ipython kernel install --user --name=library

# open Jupyter Lab
jupyter lab
```
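Before opening a notebook, a quick sanity check can confirm the active interpreter matches the version the environment expects. This is a minimal sketch; the `check_python` helper is illustrative, and the actual version pin lives in `environment.yml`:

```python
import sys

def check_python(required=(3, 11)):
    """Return True if the running interpreter is at least the required (major, minor)."""
    return sys.version_info[:2] >= required

# Prints e.g. (3, 11) True when run inside the `library` environment.
print(sys.version_info[:2], check_python())
```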
- Below is for macOS. For other operating systems, see the instructions on the Ollama website.
- Download the installer from the Ollama website
- Open the Ollama app
- Select a model from the model library. I selected the `llama2` model.
- Download it in the terminal:
```shell
# pull the llama2 model
ollama pull llama2
```
- Open the Ollama app
- Or run the bash script in the terminal:

```shell
bash scripts/ollama_serve.sh

# if you want to stop Ollama in the Mac terminal
pkill ollama
```
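Once the server is running, notebooks can reach it over Ollama's local REST API (by default `http://localhost:11434/api/generate`). A minimal sketch, which only builds the request body so it runs without a live server; the `build_generate_payload` helper is illustrative, not part of this repo:

```python
import json

# Ollama's default local endpoint for text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model, prompt, stream=False):
    """Build the JSON body for a POST to /api/generate."""
    return {"model": model, "prompt": prompt, "stream": stream}

payload = build_generate_payload("llama2", "What is a bibliotheca?")
print(json.dumps(payload))
# Send with e.g. requests.post(OLLAMA_URL, json=payload) while `ollama serve` is running.
```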
- Go to `notebooks/`
- Open a notebook
- Select the `library` kernel
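Inside a notebook, Ollama's streaming mode returns newline-delimited JSON chunks whose `response` fields concatenate into the full answer. A hedged sketch of stitching them together; the sample lines below are made up for illustration:

```python
import json

def join_stream(lines):
    """Concatenate the `response` fields of newline-delimited JSON chunks."""
    return "".join(json.loads(line)["response"] for line in lines if line.strip())

# Example chunks in the shape Ollama streams back (fabricated here).
sample = [
    '{"response": "A library ", "done": false}',
    '{"response": "is a bibliotheca.", "done": true}',
]
print(join_stream(sample))  # → A library is a bibliotheca.
```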
```
├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data/              <- Data directory
│
├── docs/              <- A default Sphinx project; see sphinx-doc.org for details
│
├── models/            <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks/         <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description.
│
├── reports/           <- Generated analysis as HTML, PDF, LaTeX, etc.
│
├── requirements.txt   <- The requirements file for reproducing the Python environment
│
├── environment.yml    <- The environment file for reproducing the conda environment
│
├── setup.py           <- Makes the project pip installable (`pip install -e .`) so src can be imported
│
├── src/               <- Source code for use in this project.
│
└── tox.ini            <- tox file with settings for running tox; see tox.readthedocs.io
```