Hello! Welcome to Simon. Simon is a Python library that powers your entire semantic search stack: OCR, ingestion, semantic search, extractive question answering, textual recommendation, and AI chat.
Check out 🌐 this online demo of the tool and browse 📖 the full documentation!
- PostgreSQL 15 with the pgvector extension
- A cloud service like Neon, Supabase, or DigitalOcean is probably easiest
- Or, you can self-host the database by following these instructions
- OpenAI API key
- Python 3.9 or above. We recommend Python 3.11.
- Optional: Java, if you want to use Simon's built-in OCR tooling
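If you'd like to sanity-check your environment first, here is a minimal sketch using only the Python standard library; the OPENAI_API_KEY variable name is just an assumption for illustration.

import os
import shutil
import sys

# Python 3.9+ is required; 3.11 is recommended
assert sys.version_info >= (3, 9), "Simon needs Python 3.9 or above"

# Java is only needed for the built-in OCR tooling (PDFs/images)
if shutil.which("java") is None:
    print("Java not found: OCR for PDFs and images will be unavailable")

# assumption: your key lives in an environment variable named OPENAI_API_KEY
if not os.environ.get("OPENAI_API_KEY"):
    print("No OpenAI key in the environment; you can also pass it directly to create_context")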
You can get the package from PyPI.
pip install simon-search -U
import simon
# connect to your database
context = simon.create_context(
"PROJECT_NAME", # an arbitrary string id to silo your data.
# (store and search are per-project.)
"sk-YOUR_OPENAI_API_KEY", # must support GPT-4
# postgres options. get these from your postgres provider.
{ "host": "your_db_host.com",
"port": 5432,
"user": "your_username",
"password": "password", # or None
"database": "your_database_name"
}
)
# if needed, provision the database
simon.setup(context) # do this *only once per new database*!!
The project_name is an arbitrary string you supply as the "folder"/"index" in the database where your data are stored. That is, data ingested for one project cannot be searched from another.
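To make the siloing concrete, here is a minimal sketch (the project names and credentials are placeholders): two contexts created with different project names on the same database keep their data completely separate.

db_config = {
    "host": "your_db_host.com",
    "port": 5432,
    "user": "your_username",
    "password": "password",
    "database": "your_database_name",
}

# two projects on the same database: data stored in one is invisible to the other
recipes = simon.create_context("recipes", "sk-YOUR_OPENAI_API_KEY", db_config)
papers = simon.create_context("papers", "sk-YOUR_OPENAI_API_KEY", db_config)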
You can optionally store the OpenAI key and database info in a .env file or as shell environment variables by following these instructions.
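For instance, a sketch along these lines reads the credentials from the environment instead of hard-coding them; the variable names (OPENAI_KEY, DB_HOST, and so on) are assumptions for illustration, so check the linked instructions for the names Simon actually expects.

import os

# assumption: these environment variable names are illustrative only
context = simon.create_context(
    "PROJECT_NAME",
    os.environ["OPENAI_KEY"],
    {
        "host": os.environ["DB_HOST"],
        "port": int(os.environ.get("DB_PORT", 5432)),
        "user": os.environ["DB_USER"],
        "password": os.environ.get("DB_PASSWORD"),  # or None
        "database": os.environ["DB_NAME"],
    },
)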
ds = simon.Datastore(context)
# storing a remote webpage (or, if Java is installed, a PDF/PNG)
ds.store_remote("https://en.wikipedia.org/wiki/Chicken", title="Chickens")
# storing a local file (or, if Java is installed, a PDF/PNG)
ds.store_file("/Users/test/file.txt", title="Test File")
# storing some text
ds.store_text("Hello, this is the text I'm storing.", "Title of the Text", "{metadata: can go here}")
To learn more about ingestion, head on over to the ingest overview page!
We all know why you came here: search!
s = simon.Search(context)
# Semantic Search
results = s.search("chicken habits")
# Recommendation (check out the demo: https://wikisearch.shabang.io/)
results = s.brainstorm("chickens are a species that")
# LLM Answer and Extractive Question-Answering ("Quoting")
results = s.query("what are chickens?")
To learn more about search, including how to perform a boring keyword search or to stream your LLM output, head on over to the search overview page!
That's it! Simple as that.
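For reference, here is the whole flow in one place: a minimal end-to-end sketch using only the calls shown above (credentials, content, and metadata are placeholders you'd replace with your own).

import simon

context = simon.create_context(
    "quickstart",
    "sk-YOUR_OPENAI_API_KEY",
    {"host": "your_db_host.com", "port": 5432, "user": "your_username",
     "password": "password", "database": "your_database_name"},
)
simon.setup(context)  # only on a brand-new database

# ingest a bit of text
ds = simon.Datastore(context)
ds.store_text("Chickens are domesticated birds kept for eggs and meat.",
              "Chicken Notes", "{metadata: can go here}")

# then ask a question about it
s = simon.Search(context)
print(s.query("what are chickens kept for?"))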
Check out the full documentation available here: from customizing your LLM to the REST API and streaming your search results, we've got you covered.
We are always looking for more friends to build with. If you are interested in learning more, getting enterprise support, or just want to chat, visit this page.
If you have a question about the package, please feel free to post a discussion.
(C) 2024 Shabang Systems, LLC. Built with ❤️ and 🥗 in the SF Bay Area