🚧 Under Active Development 🚧

Please expect breaking changes and a bunch of additional features. We're just getting started.

πŸ¦œβ›οΈ LangChain Extract


langchain-extract is a simple web server that allows you to extract information from text and files using LLMs. It is built using FastAPI, LangChain, and PostgreSQL.

The backend closely follows the extraction use-case documentation and provides a reference implementation of an app for extracting information from data using LLMs.

This repository is meant to be a starting point for building your own extraction application which may have slightly different requirements or use cases.

Functionality

  • πŸš€ FastAPI webserver with a REST API
  • πŸ“š OpenAPI Documentation
  • πŸ“ Use JSON Schema to define what to extract
  • πŸ“Š Use examples to improve the quality of extracted results
  • πŸ“¦ Create and save extractors and examples in a database
  • πŸ“‚ Extract information from text and/or binary files
  • πŸ¦œοΈπŸ“ LangServe endpoint to integrate with LangChain RemoteRunnnable

πŸ“š Documentation

See the example notebooks in the documentation to learn how to create examples that improve extraction results, upload files (e.g., HTML, PDF), and more.

Documentation and server code are both under development!

🍯 Example API

Below are two sample curl requests to demonstrate how to use the API.

These are minimal examples. See the documentation for more details about the API, and the extraction use-case documentation for more on extracting information with LangChain.

Create an extractor

curl -X 'POST' \
  'http://localhost:8000/extractors' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "Personal Information",
  "description": "Use to extract personal information",
  "schema": {
      "type": "object",
      "title": "Person",
      "required": [
        "name",
        "age"
      ],
      "properties": {
        "age": {
          "type": "integer",
          "title": "Age"
        },
        "name": {
          "type": "string",
          "title": "Name"
        }
      }
    },
  "instruction": "Use information about the person from the given user input."
}'

Response:

{
  "uuid": "32d5324a-8a48-4073-b57c-0a2ebfb0bf5e"
}
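
The same request can be made from Python. A minimal sketch using the requests library (not part of this project; install it separately), mirroring the curl payload above:

import requests

payload = {
    "name": "Personal Information",
    "description": "Use to extract personal information",
    "schema": {
        "type": "object",
        "title": "Person",
        "required": ["name", "age"],
        "properties": {
            "age": {"type": "integer", "title": "Age"},
            "name": {"type": "string", "title": "Name"},
        },
    },
    "instruction": "Use information about the person from the given user input.",
}

response = requests.post("http://localhost:8000/extractors", json=payload)
extractor_id = response.json()["uuid"]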

Use the extract endpoint to extract information from text (or a file) using an existing extractor.

curl -s -X 'POST' \
'http://localhost:8000/extract' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'extractor_id=32d5324a-8a48-4073-b57c-0a2ebfb0bf5e' \
-F 'text=my name is chester and i am 20 years old. My name is eugene and I am 1 year older than chester.' \
-F 'mode=entire_document' \
-F 'file=' | jq .

Response:

{
  "data": [
    {
      "name": "chester",
      "age": 20
    },
    {
      "name": "eugene",
      "age": 21
    }
  ]
}
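
The same extraction can be driven from Python. A minimal sketch using requests, where extractor_id is the UUID returned when the extractor was created:

import requests

data = {
    "extractor_id": extractor_id,
    "text": (
        "my name is chester and i am 20 years old. "
        "My name is eugene and I am 1 year older than chester."
    ),
    "mode": "entire_document",
}
# curl sent multipart/form-data; URL-encoded form data is assumed to work for
# text-only requests. Switch to files= if the server insists on multipart.
response = requests.post("http://localhost:8000/extract", data=data)
print(response.json())

# To extract from a binary file instead, send it as multipart:
# with open("document.pdf", "rb") as f:  # hypothetical file name
#     requests.post(
#         "http://localhost:8000/extract",
#         data={"extractor_id": extractor_id, "mode": "entire_document"},
#         files={"file": f},
#     )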

βœ… Running locally

The easiest way to get started is to use Docker Compose to run the server.

Configure the environment

Add a .local.env file to the root directory with the following content:

OPENAI_API_KEY=... # Your OpenAI API key

Build the images:

docker compose build

Run the services:

docker compose up

This will launch both the extraction server and the Postgres instance.

Verify that the server is running:

curl -X 'GET' 'http://localhost:8000/ready'

This should return ok.
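
If you script against the stack (e.g., in CI), a small readiness poll avoids racing the containers. A minimal sketch, assuming only the /ready endpoint shown above:

import time

import requests

def wait_until_ready(url="http://localhost:8000/ready", timeout=60):
    """Poll the readiness endpoint until the server answers with ok."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # The server may return the string as JSON ("ok"), so strip quotes.
            if requests.get(url, timeout=2).text.strip('"') == "ok":
                return
        except requests.ConnectionError:
            pass  # The container is not accepting connections yet.
        time.sleep(1)
    raise TimeoutError(f"{url} did not become ready within {timeout}s")

wait_until_ready()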

Contributions

Feel free to develop this project for your own needs! For now, we are not accepting pull requests, but we would love to hear questions, ideas, or issues.

Development

To set up for development, you will need to install Poetry.

The backend code is located in the backend directory.

cd backend

Set up the environment using poetry:

poetry install --with lint,dev,test

Run the following script to create a database and schema:

python -m scripts.run_migrations create 

From /backend:

OPENAI_API_KEY=[YOUR API KEY] python -m server.main

Testing

Create a test database. It is separate from the main database but has the same schema, and is used for running the tests.

python -m scripts.run_migrations create-test-db

Run the tests

make test

Linting and formatting

Linting and formatting are done using a Makefile inside [root]/backend:

make format
