Let's quote the authors =)
AI_Devs is a 5-week course, the largest in Poland, on combining Generative AI tools
(in particular OpenAI models) with application logic and automation tools.
We completely abandon ChatGPT in favor of connecting to the models directly through
the API, building tailored tools that increase the efficiency of everyday tasks.
This repository contains everything related to the course that I produced while learning.
It consists of a few modules:
Module | Description |
---|---|
examples_python | Python implementations of the optional course examples covering OpenAI, LangChain, vector databases, and similarity search. The goal is to recreate the functionality presented in the TS examples in Python. |
api_tasks | Python implementations of the tasks required to complete this course. |
chat_tasks | Tasks completed on the course platform (which simulates a modified OpenAI playground). Contains task descriptions and a list of my attempts for future reference. |
docs | Additional descriptions and theory researched and documented while completing the course. Generally things useful for future reference in upcoming examples, produced while I was processing the material. |
own_testing | A sandbox with additional tests, comparisons of libraries and methods, and different ways to achieve the same goals, kept for reference. |
Lesson | Name | Description |
---|---|---|
C01L04 | PYTHON 🐍 different_connections_to_openai.py | - Different ways to connect to OpenAI models using the OpenAI library and the requests library - Streaming example that prints the content of the response chunk by chunk as it is generated - Example of using the LangChain library to initialize a default model (gpt-3.5-turbo) |
C01L04 | PYTHON 🐍 langchain_conversationchain.py | - Two ways to create a conversation using the LangChain library with the OpenAI chat model - The first method manually creates a list to hold message objects (HumanMessage, SystemMessage, AIMessage) and appends them to the list as the conversation progresses - The second method uses the ConversationChain class implemented in LangChain, which automatically manages the conversation history using ConversationBufferMemory - Both methods allow for sending messages to the model and receiving responses, maintaining the context of the conversation |
C03L01 | MARKDOWN 📜 async.md | - The event loop in Python and its usage in Jupyter/interactive mode vs normal scripts - Document objects in the langchain library - Sync (model.invoke) vs async (model.agenerate) methods for chat models - Code examples of an async workflow with asyncio.gather |
C03L01 | MARKDOWN 📜 simulating_max_cocurrency.md | - Python's langchain library doesn't have a direct equivalent of the maxConcurrency parameter for the ChatOpenAI class - Concurrency can be controlled using Python's asyncio library and semaphores - The text provides a possible solution using asyncio.Semaphore to limit concurrency when generating descriptions for multiple documents |
C03L03 | MARKDOWN 📜 FAISS_vetor_storing.md | - FAISS (Facebook AI Similarity Search) is a library for efficient similarity search and clustering of dense vectors - It converts data into high-dimensional vectors using embeddings or feature extraction techniques - FAISS creates an index to enable fast similarity search by organizing the vectors for efficient retrieval - The pre-built index allows for quick similarity search when a query vector is provided, using optimized algorithms and parallelization to speed up the process |
C03L03 | JUPYTER NOTEBOOK 🐍+📜 function_calling.ipynb | - How to prepare functions and their schemas to be used with LLMs for function calling - Different ways to initialize models with function schemas using dictionaries, pydantic BaseModel or convert_to_openai_tool - Extracting function names and arguments from the model's response - Executing the selected function with the provided arguments - Comparison of function calling and prompt-with-examples approaches for solving a specific task |
C04L04 | MARKDOWN 📜 C04L04_README.md | - How to host a Flask API and use ngrok to tunnel traffic to our local host - The process involves preparing a function to generate answers using an LLM, creating a Flask API to handle user requests and return answers, and using ngrok to make the locally running app accessible over the internet - The text provides step-by-step instructions on how to set up the Flask API, configure ngrok, and complete the task by executing parts of the C04L04_ownapi.py file while the API is running |
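The concurrency-limiting idea from simulating_max_cocurrency.md can be sketched with plain asyncio, no LangChain required. The `describe` coroutine below is a hypothetical stand-in for the real model call; the semaphore caps how many of them run at once:

```python
import asyncio

async def describe(doc: str, sem: asyncio.Semaphore) -> str:
    # Hypothetical stand-in for a model call; the semaphore caps
    # how many "requests" are in flight at the same time.
    async with sem:
        await asyncio.sleep(0.01)  # simulate network latency
        return f"description of {doc}"

async def describe_all(docs: list[str], max_concurrency: int = 2) -> list[str]:
    sem = asyncio.Semaphore(max_concurrency)
    # asyncio.gather preserves input order regardless of completion order
    return await asyncio.gather(*(describe(d, sem) for d in docs))

results = asyncio.run(describe_all(["a.txt", "b.txt", "c.txt"]))
print(results)
```

Swapping `asyncio.sleep` for an awaited `model.agenerate` call gives roughly the behavior that maxConcurrency provides in the TypeScript version.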
Originally available in TypeScript at https://github.com/i-am-alice/2nd-devs/
Name | Python version | TypeScript snapshot | TypeScript original | Status |
---|---|---|---|---|
01_langchain_init | Python | snapshot | original | ✅DONE |
02_langchain_format | Python | snapshot | original | ✅DONE |
03_langchain_stream | Python | snapshot | original | ✅DONE |
04_tiktoken | Python | snapshot | original | ✅DONE |
05_conversation | Python | snapshot | original | ✅DONE |
06_external | Python | snapshot | original | ✅DONE |
07_output | Python | snapshot | original | ✅DONE |
08_cot | Python | snapshot | original | ✅DONE |
09_context | Python | snapshot | original | ✅DONE |
10_switching | Python | snapshot | original | ✅DONE |
11_docs | Python | snapshot | original | ✅DONE |
12_web | Python | snapshot | original | ✅DONE |
13_functions | Python | snapshot | original | ✅DONE |
14_agent | Python | snapshot | original | ✅DONE |
15_tasks | Python | snapshot | original | ✅DONE |
16_nocode | Python | snapshot | original | ✅DONE |
17_tree | Python | snapshot | original | ✅DONE |
18_knowledge | Python | snapshot | original | ✅DONE |
19_llama | Python | snapshot | original | ❌WAITING |
20_catch | Python | snapshot | original | ✅DONE |
21_similarity | Python | snapshot | original | ✅DONE |
22_simple | Python | snapshot | original | ✅DONE |
23_fragmented | Python | snapshot | original | ✅DONE |
24_files | Python | snapshot | original | ✅DONE |
25_correct | Python | snapshot | original | ✅DONE |
26_summarize | Python | snapshot | original | ✅DONE |
27_qdrant | Python | snapshot | original | ✅DONE |
28_intent | Python | snapshot | original | ✅DONE |
29_notify | Python | snapshot | original | ✅DONE |
30_youtube | Python | snapshot | original | ✅DONE |
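For reference, the "requests-based" connection style used in examples like different_connections_to_openai.py boils down to POSTing a JSON payload to the chat completions endpoint. A minimal sketch that only builds the payload (no API key, no network call; `build_chat_payload` is a helper name of my own, not course code):

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_payload(system: str, user: str, model: str = "gpt-3.5-turbo") -> str:
    # The chat completions endpoint expects a model name and a list of
    # role-tagged messages; parameters like temperature or stream are optional.
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }
    return json.dumps(payload)

body = build_chat_payload("You are a helpful assistant.", "Hello!")
print(body)
# Sending it would look like:
# requests.post(API_URL, data=body, headers={
#     "Authorization": f"Bearer {api_key}",
#     "Content-Type": "application/json",
# })
```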
CHAPTER/LESSON | Task | Tags | Description |
---|---|---|---|
C01L01 | C01L01_helloapi.py | [course_api_only] | Testing the API used to connect to the course platform |
C01L04 | C01L04_blogger.py | [OpenAI] | Basic connection to OpenAI |
C01L04 | C01L04_moderation.py | [OpenAI][Guard] | openai/moderations endpoint (to prevent user input from getting us banned) |
C01L05 | C01L05_liar.py | [Guard] | Very basic example of checking whether the input/API response looks as expected |
C02L02 | C02L02_inprompt.py | [langchain][pandas] | Example of filtering longer data to provide valuable context for the model |
C02L03 | C02L03_embedding.py | [OpenAI] | openai/embeddings endpoint - example of converting text to numbers |
C02L04 | C02L04_whisper.py | [OpenAI][whisper] | Example of generating transcriptions |
C02L05 | C02L05_functions.py | [course_api_only] | Example of writing a function definition for OpenAI |
C03L01 | C03L01_rodo.py | [course_api_only] | Example of writing a prompt that nudges the model to do something |
C03L02 | C03L02_scraper.py | [langchain][webscraping] | Example of gathering data from a server that may have basic scraping protections/random errors |
C03L03 | C03L03_whoami.py | [langchain][ConversationChain] | Task practicing storing chat history using ConversationChain |
C03L04 | C03L04_search.py | [qdrant][embeddings][similarity search][vector database] | Embedding creation, indexing and similarity search for dynamic context generation for an LLM with Qdrant |
C03L05 | C03L05_people.py | [pandas][langchain][filtering data] | Filtering only the desired context for the LLM (in my case using a pandas DataFrame) |
C04L01 | C04L01_knowledge.py | [function calling][langchain][apis] | Connecting to different APIs based on an LLM decision (with function calling) |
C04L02 | C04L02_tools.py | [function_calling][langchain] | Another function-calling example to get correct JSON (letting the LLM determine the incoming action). Also adds an extra system prompt with context so the model understands dates. |
C04L03 | C04L03_gnome.py | [langchain][vision][analyzing images] | Example using a vision model to analyze the content of an image |
C04L04 | C04L04_ownapi.py | [langchain][api][flask][ngrok] | Create an app that catches POST requests, extracts the user question and gets an answer from the LLM |
C04L05 | C04L05_ownapipro.py | [langchain][ConversationChain][api][flask][ngrok] | Create an app that catches POST requests, extracts the user question and gets an answer from the LLM while keeping user messages in memory |
C05L01 | C05L01_meme.py | [RenderForm][documents from templates] | Use RenderForm to prepare a template and generate images based on it |
C05L02 | C05L02_optimaldb.py | [langchain][LLMLingua][context compression][context optimization] | Multiple ways to compress text data so it takes up less space. Tested methods were LLMLingua, asking the LLM to optimize, and asking the LLM to summarize |
C05L03 | C05L03_google.py | [langchain][flask][ngrok][serpapi][google] | Preparing keywords from the user question to run a Google search using the serpapi library |
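The request-handling core of tasks like C04L04_ownapi.py can be sketched framework-free. `handle_request` and the `reply` key are my own illustrative names; in the real task this logic sits behind a Flask route and `get_answer` calls the LLM instead of echoing:

```python
import json

def get_answer(question: str) -> str:
    # Placeholder for the real LLM call (e.g. a LangChain chat model invoke).
    return f"echo: {question}"

def handle_request(raw_body: str) -> dict:
    # Extract the user question from the POST body and wrap the
    # model's answer in a small JSON-serializable response.
    data = json.loads(raw_body)
    question = data.get("question", "")
    return {"reply": get_answer(question)}

print(handle_request('{"question": "What is 2+2?"}'))
```

Keeping the handler a pure function like this makes it easy to test without starting Flask or opening an ngrok tunnel.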
These are saved mostly so I can search for specific prompt examples. I didn't see any reason to keep them in separate files, so they are all available here.
Lesson | Name | Description | status |
---|---|---|---|
C01L01 | getinfo | Forcing ChatGPT to output the word BANANA without using that word in the prompt. Difficulties - some words are disabled in the prompt. | ✅DONE |
C01L02 | maxtokens | Providing the name of a river flowing through the capital of a given country, while staying within the max token limit. | ✅DONE |
C01L03 | category | Making ChatGPT assign an appropriate category (home/work/other) to a task and return the answer in JSON format. | ✅DONE |
C01L03 | books | Preparing a JSON array with book titles and authors using one-shot prompting with GPT-3.5-turbo. | ✅DONE |
C01L05 | injection, injection2 | Using prompt injection to extract a secret word from the prompt, with increasing difficulty levels and models (GPT-3.5 and GPT-4). | ✅DONE |
C02L01 | optimize | Defining the 'system' field in a query to perform a given task while staying within a character limit, which is more challenging than a token limit. | ✅DONE |
C02L01 | fixit | Convincing GPT-4 to fix and optimize provided source code, handle errors properly, and return zero for all incorrect inputs. | ✅DONE |
C02L02 | parsehtml | Extracting readable article text from HTML code (in paragraphs), converting it to Markdown format, and returning only the three paragraphs without any HTML code. | ✅DONE |
C02L03 | structure | Preparing a prompt that works with both GPT-3.5-Turbo and GPT-4 models to generate a JSON object with a specific structure, taking into account the strengths and weaknesses of GPT-3.5-Turbo. | ✅DONE |
C02L05 | cities | Generating a list of 7 interesting facts about a given city without using the city name in the prompt or the generated response, while working with the GPT-3.5-turbo model. | ✅DONE |
C03L01 | tailwind | Writing a system message that returns a <button> element consistent with the user's message, ensuring the model's response contains only the <button> element without additional comments or tags. | ✅DONE |
C03L02 | format | Creating a converter from an old African markup language to HTML code, instructing GPT-3.5-turbo on how to handle and interpret the code. | ✅DONE |
C03L05 | planets | Generating a JSON array consisting of 9 planet names in the solar system (including Pluto), with names in lowercase Polish, without mentioning planets, solar system, JSON, or the Polish language in the prompt. | ✅DONE |
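Several of these tasks (category, books, planets) require the model to return strict JSON, and the common failure mode is extra prose or code fences around the object. A defensive parse like the sketch below (my own helper, not course code) recovers the JSON in those cases:

```python
import json
import re

def extract_json(text: str):
    # Models sometimes wrap JSON in prose or ``` fences; grab the first
    # {...} or [...] span and parse it, returning None on failure.
    match = re.search(r"[\[{].*[\]}]", text, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

print(extract_json('Sure! Here it is: {"category": "home"}'))
```

The greedy match from the first opening bracket to the last closing one is crude but works for single-object replies; a stricter task would validate the parsed structure as well.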