
Application Natural Language Interface (ANLI)

ANLI is a Python package designed to enable developers to wrap their applications and tools with a natural language interface. It leverages Large Language Models (LLMs) to process unstructured natural language input into structured commands, allowing both humans and AI agents to interact intuitively with software components.

Installation

Install from source:

git clone https://github.com/Application-Natural-Language-Interface/Python-ANLI.git
cd Python-ANLI
pip install -e .

Post-Installation Scripts

Depending on your operating system and hardware, we provide different post-installation scripts to set up ANLI with the default configuration (using a llama.cpp model). After installing ANLI, run the appropriate script from the install_scripts folder inside your virtual environment:

  • For Linux or macOS: install_scripts/install_linux_macos.sh

    Make sure the script is executable before running it:

    chmod +x ./install_scripts/install_linux_macos.sh
  • For Windows:

  1. Open PowerShell: Right-click on the Start menu and select "Windows PowerShell" or "Windows PowerShell (Admin)" for administrative privileges if required.

  2. Allow Script Execution (if needed): By default, Windows restricts the execution of PowerShell scripts. To allow scripts to run, you may need to change the execution policy by running the following command in PowerShell:

    Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

    This policy allows locally created scripts to run, while scripts downloaded from the Internet must be signed by a trusted publisher.

  3. Run the Script: Navigate to the folder containing the script install_windows.ps1. Execute the script by typing .\install_windows.ps1 and pressing Enter.

Dependencies

  • Redis: start a local instance, e.g. with Docker:
    docker run -d -p 6379:6379 redis/redis-stack:latest

Models

ANLI uses Large Language Models (LLMs) to process natural language input. We provide a default model based on Mistral 7B Instruct v0.1, and use llama.cpp to run inference on the model.

You can load other models by providing a config.yaml file and passing it to LLMInterface(config_file='config.yaml').
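
As a rough sketch, such a config.yaml might look like the following. Note that the key names and model path below are illustrative assumptions, not ANLI's documented schema; check the official documentation for the actual format.

```yaml
# Illustrative only: key names and the model path are assumptions,
# not ANLI's documented configuration schema.
model:
  backend: llama.cpp
  path: ./models/mistral-7b-instruct-v0.1.Q4_K_M.gguf
  context_length: 4096
```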

You can use Hugging Face Transformers instead of llama.cpp, but you will need to run pip install anli[transformer] (or pip install -e .[transformer]) to install the extra dependencies. Note that it can be much slower than llama.cpp.

Features

  • Natural Language Understanding (NLU) to parse and understand user input.
  • Dialogue Management for maintaining context and handling multi-turn interactions.
  • Integration Layer for developers to easily map functions to natural language commands.
  • Command Execution for performing actions based on natural language commands.
  • Responses & Explanations Generator to dynamically create user guidance.
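
To illustrate the Integration Layer idea, here is a minimal, self-contained sketch of mapping a Python function to a natural-language command. The decorator and registry below are hypothetical and for illustration only; they are not ANLI's actual API.

```python
# Hypothetical sketch: a registry mapping natural-language command
# descriptions to Python functions. Not ANLI's actual API.
registry = {}

def nl_command(description):
    """Register a function under a natural-language description."""
    def wrap(fn):
        registry[description] = fn
        return fn
    return wrap

@nl_command("greet a user by name")
def greet(name):
    return f"Hello, {name}!"

# In ANLI, an LLM would parse unstructured input like "say hi to Ada"
# into a structured call such as the one below.
print(registry["greet a user by name"]("Ada"))  # → Hello, Ada!
```

In a real deployment, the NLU component would select the registered function and extract its arguments from the user's free-form input before the Command Execution step invokes it.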

Documentation

For more detailed documentation, please refer to the official documentation.

Roadmap

See the Roadmap for a list of proposed features (and known issues).

Contributing

We welcome contributions! Please see our Contribution Guide for more information on how to get started.

License

This project is licensed under the Apache License 2.0.
