Hello World with different LLMs
| Term | What It Is |
|---|---|
| LLaMA | Meta's open-source language model family (LLaMA 2, LLaMA 3...) |
| Ollama | A local LLM runtime — makes it easy to run models like LLaMA, Mistral, etc. |
| OpenAI | Cloud-based LLMs like GPT-3.5, GPT-4, Codex. Requires internet + API key. |
| LangChain | A Python (or JS) framework for building LLM-powered apps (RAG, agents, chatbots, etc.) |
| LangSmith | A dev/debugging platform built by LangChain to trace and visualize LLM calls |

| Use Case | Recommended Tool |
|---|---|
| Run LLaMA locally | Ollama |
| Use GPT-4/GPT-3.5 | OpenAI |
| Build LLM apps | LangChain |
| Debug & trace LLM apps | LangSmith |
LLM stands for Large Language Model, a type of artificial intelligence designed to understand and generate human-like text based on vast amounts of data.
- Install Python - Download Python
- Install the OpenAI Python package: `pip install openai`
- Set the API key in an environment variable: `export OPENAI_API_KEY=<your_api_key>`
- Use the OpenAI library to call the OpenAI APIs
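The last step can be sketched with only the Python standard library (the `openai` package wraps the same HTTP call under the hood; the model name here is just an example):

```python
# Minimal "hello world" against the OpenAI chat-completions endpoint.
# Assumes OPENAI_API_KEY is set in the environment.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(question: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the request body for a single-question chat."""
    return {"model": model, "messages": [{"role": "user", "content": question}]}

def ask(question: str) -> str:
    """Send the question and return the assistant's reply text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]

# Usage (needs a valid key and network access):
#   print(ask("What is the capital of India?"))
```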
OpenAI is an AI research and deployment company. Its mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.
- Create an account - OpenAI API Keys
- Get API key
- Set API key based on your settings
- Create a virtual environment: `python -m venv venv` or `python3 -m venv venv`
- Activate the virtual environment:
  - On Windows: `venv\Scripts\activate`
  - On macOS/Linux: `source venv/bin/activate`
- Install the required packages: `pip install -r requirements.txt` or `pip install -r helloworld/python/requirements.txt`
- Deactivate the virtual environment with `deactivate` when you want to exit
- Run the Python script: `python helloworld/python/openai-helloworld.py`
Ollama is a platform that provides access to various versions of the Llama language model, including Llama2 and Llama3.
- It allows users to download, install, and run these models locally or in any environment.
- Ollama simplifies the process of working with large language models by offering easy-to-use commands for pulling and running models, as well as checking available models.
- It supports both terminal-based interactions and API calls, making it versatile for different use cases.
- Download Llama3 or Llama2 based on your need. Some of the models are free to use and some are paid. Llama Downloads
- Choose a model as per your need and download it
- Install Ollama
- Pull the model: `ollama pull llama3:70b`, `ollama pull llama3.2`, `ollama pull llama2`, or `ollama pull llama3`
- Check the available models: `ollama list`
- Run the model: `ollama run llama3`
- You can ask in the terminal: `What is the capital of India?`
You can also query the local Ollama API directly:

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "What is the capital of India?"
    }
  ],
  "stream": false
}'
```

- Install the required packages: `pip install -r requirements.txt` or `pip install -r helloworld/python/requirements.txt`
- Run: `python helloworld/python/ollama-helloworld.py`
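A sketch of what such an Ollama script can look like: the same chat request as the curl example, sent from Python with only the standard library. It assumes an Ollama server is running locally on the default port 11434 and that `llama3` has been pulled:

```python
# Call a local Ollama server's chat endpoint without extra dependencies.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"

def build_payload(question: str, model: str = "llama3") -> dict:
    """Assemble the non-streaming chat request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }

def ask(question: str) -> str:
    """Send the question to the local Ollama server and return the reply."""
    request = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["message"]["content"]

# Usage (needs a running Ollama server):
#   print(ask("What is the capital of India?"))
```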
- Install the required packages: `pip install -r requirements.txt` or `pip install -r helloworld/python/requirements.txt`
- Run: `python helloworld/python/langchain-helloworld.py` - this wrapper uses llama3
Langchain is a powerful framework designed to simplify the integration of language models into your applications. It provides a seamless interface for managing and orchestrating multiple language models, making it easier to build, deploy, and maintain complex LLM-based systems. With Langchain, you can leverage the strengths of different models, optimize their usage, and create robust, scalable solutions for various natural language processing tasks.
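A hedged sketch of what such a wrapper can look like with a local llama3 model. The `langchain-ollama` package and its `ChatOllama` class are assumptions about the project's dependencies (install with `pip install langchain-ollama`); the import is kept inside the function so the sketch reads even without the package installed:

```python
# Hypothetical LangChain wrapper around a llama3 model served by Ollama.
# Assumes `pip install langchain-ollama` and a running Ollama server.
def ask(question: str, model: str = "llama3") -> str:
    """Send one question through LangChain's Ollama chat model."""
    from langchain_ollama import ChatOllama  # imported lazily on purpose

    llm = ChatOllama(model=model)
    # invoke() accepts a plain string and returns an AIMessage
    return llm.invoke(question).content

# Usage (needs the package installed and Ollama running):
#   print(ask("What is the capital of India?"))
```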
- Install the required packages: `pip install -r requirements.txt` or `pip install -r helloworld/python/requirements.txt`
- Run: `python helloworld/python/langchain-helloworld.py` - this wrapper uses llama3
- Create a free account - Langsmith
- Get API key
- Set API key based on your settings
If you have a development background or are familiar with tools like Splunk, Dynatrace, OpenTelemetry, or New Relic, you can understand why Langsmith is important.
Langsmith provides powerful tools for monitoring and debugging your LLM applications. It offers detailed insights into model performance, helps identify bottlenecks, and ensures your applications run smoothly. With Langsmith, you can track usage, manage API keys, and optimize your LLM workflows efficiently.
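In practice, enabling Langsmith tracing for a LangChain script is mostly configuration: LangChain reads a handful of environment variables. The variable names below follow the Langsmith documentation; the project name is an example, and the key placeholder must be replaced with your own:

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your_langsmith_api_key>
export LANGCHAIN_PROJECT=llm-helloworld   # optional; groups traces by project
```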