
AIknowlEDGE is a desktop application built with Electron.js and Python FastAPI to showcase Disconnected Containers.
Azure AI containers give you the flexibility to run Azure AI services locally in containers. Connected containers run in your environment and send usage information to the cloud for billing. Disconnected containers are intended for scenarios where the containers must run with no connectivity to the cloud.
Follow these steps to run the project locally.
- Python 3.12+
- Docker
- VS Code
- Ollama
- Clone the repository:

  ```sh
  git clone https://github.com/Azure-Samples/AI-knowlEDGE.git
  cd AI-knowlEDGE
  ```
- Install and start Docker, then open a terminal and pull the summarization container:

  ```sh
  docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:cpu
  ```
- Get your keys and endpoints for the two Azure services, Azure Document Intelligence and Azure AI Language (available in the Azure portal):

  ```
  AZURE_DOCUMENT_ANALYSIS_ENDPOINT
  AZURE_DOCUMENT_ANALYSIS_KEY
  LANGUAGE_ENDPOINT
  LANGUAGE_KEY
  ```
- Create a folder on your C:/ drive named `ExtractiveModel`.
- Download the SLMs for the Summarization service. With Docker running, run:

  ```sh
  docker run -v C:\ExtractiveModel:/models mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:cpu downloadModels=ExtractiveSummarization billing=<language-endpoint> apikey=<language-key>
  ```

  Replace `<language-endpoint>` and `<language-key>` with your Azure AI Language endpoint and key.
- Set up the Python environment and install dependencies:

  ```sh
  cd backend
  python -m venv venv
  venv\Scripts\activate  # On Linux use `source venv/bin/activate`
  pip install -r requirements.txt
  ```
- Create a `docker-compose.yml` with the following:

  ```yaml
  version: "3.9"
  services:
    azure-form-recognizer-read:
      container_name: azure-form-recognizer-read
      image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1
      environment:
        - EULA=accept
        - billing=<document-intelligence-endpoint>
        - apiKey=<document-intelligence-key>
      ports:
        - "5000:5000"
      networks:
        - ocrvnet
    textanalytics:
      image: mcr.microsoft.com/azure-cognitive-services/textanalytics/summarization:cpu
      environment:
        - eula=accept
        - rai_terms=accept
        - billing=<language-endpoint>
        - apikey=<language-key>
      volumes:
        - "C:\\ExtractiveModel:/models"
      ports:
        - "5001:5000"
  networks:
    ocrvnet:
      driver: bridge
  ```
- Create and start the containers by running:

  ```sh
  docker-compose up
  ```
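Before starting the backend, you can check that the containers are answering. Azure AI containers expose a `/status` route; the helper below is a minimal sketch of ours, not part of the project:

```python
import urllib.request
from urllib.error import URLError

def container_ready(port: int, timeout: float = 2.0) -> bool:
    """Return True if the container on localhost:<port> answers its /status route."""
    try:
        with urllib.request.urlopen(f"http://localhost:{port}/status", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Connection refused or timed out: the container is not ready yet
        return False

# Example: check both containers started by docker-compose
# container_ready(5000)  # Document Intelligence read container
# container_ready(5001)  # summarization container
```

If a check returns False, give the containers a few more seconds; the summarization container in particular takes a while to load its models.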
- Create a `.env` file:

  ```
  AZURE_DOCUMENT_ANALYSIS_ENDPOINT=http://localhost:5000
  AZURE_DOCUMENT_ANALYSIS_KEY=<document-intelligence-key>
  LANGUAGE_ENDPOINT=http://localhost:5001
  LANGUAGE_KEY=<language-key>
  ```
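The backend reads these values at startup. Projects commonly use `python-dotenv` for this; the dependency-free parser below is our own illustration of the same idea, not the project's actual loader:

```python
import os

def load_env(path: str = ".env") -> dict:
    """Parse KEY=VALUE lines from a .env file and export them to os.environ."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks and comments
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    os.environ.update(values)
    return values
```

With the file above in place, `load_env()` makes `AZURE_DOCUMENT_ANALYSIS_ENDPOINT` and the other three variables available through `os.environ`.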
- Download Ollama and install at least one SLM and one embedding model:

  ```sh
  ollama pull phi3
  ollama pull nomic-embed-text
  ```
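Ollama serves these models over a local REST API (`http://localhost:11434` by default). A sketch of the request body for a one-shot generation with `phi3` (the helper name is ours; `stream=False` asks Ollama for a single JSON response instead of a token stream):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(prompt: str, model: str = "phi3") -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")

# To send it (requires Ollama running):
# import urllib.request
# req = urllib.request.Request(OLLAMA_URL, data=build_generate_request("Summarize ..."),
#                              headers={"Content-Type": "application/json"})
# answer = json.loads(urllib.request.urlopen(req).read())["response"]
```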
- Start the application from VS Code: press `F5` or choose Run > Start Debugging.
- Start the FastAPI backend:

  ```sh
  cd backend
  uvicorn main:app --port 8000
  ```
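When the backend calls the local summarization container, the request follows the Azure AI Language analyze-text jobs format. A sketch of the payload it might POST to `http://localhost:5001` (the helper function and the default sentence count are our illustrative assumptions):

```python
def build_summarization_payload(text: str, sentence_count: int = 3) -> dict:
    """Payload for the Language service's ExtractiveSummarization task."""
    return {
        "analysisInput": {
            "documents": [{"id": "1", "language": "en", "text": text}]
        },
        "tasks": [{
            "kind": "ExtractiveSummarization",
            "parameters": {"sentenceCount": sentence_count},
        }],
    }
```

Because the `.env` file points `LANGUAGE_ENDPOINT` at the container, the same payload works disconnected that would otherwise go to the cloud endpoint.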
- Start the Streamlit app:

  ```sh
  cd frontend
  streamlit run app.py --server.port=8501
  ```
- Other Containers Integration
  - Speech service
  - Translation service
- Other Use Cases Integration
- Packaging
  - Single-click installation
  - Cross-platform installation
See the open issues for proposed features and known issues.
Contributions are what make the open source community great, and any contribution you make is greatly appreciated.
- Fork this repo
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See `LICENSE.txt` for more info.
- Raoui Lassoued LinkedIn
- Serge Retkowsky LinkedIn
- Farid El Attaoui LinkedIn
- Alibek Jakupov @ajakupov1 LinkedIn
Project Link: AIKnowlEDGE
- Microsoft France