Artificial General Intelligence Experimental Framework
This repository contains code for the development of artificial general intelligence. It includes algorithm code and a framework for running repeatable, fully logged and inspectable experiments. Every piece of data used by the algorithms can be retrospectively analysed with graphical tools, even tools written later, for example after you discover a bug.
The code includes a simple graphical UI, an interprocess layer for distributed coordination and communication, and base classes for the entities that you need for building an AGI experiment. We also include implementations of many algorithms from the AI and ML literature.
For an introduction to the content and purpose of this repository, see the Wiki. Motivation, results, ideas and other background material can be found on our website. Additional technical documentation and tips are in the docs directory.
The remainder of this file contains technical information for setting up and using the code in this repository.
The only requirement for running experiments with AGIEF is Docker. Additional dependencies are needed to set up a development environment for building the source code; these are described in the documentation.
Note: We currently support Linux and macOS, and aim to support Microsoft Windows in the future. Windows support will, however, require a custom build of the database HTTP API.
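As a quick sanity check before going further, you can confirm that Docker, the only hard requirement, is installed and on your PATH (a minimal sketch; the `DOCKER_STATUS` variable is purely illustrative):

```shell
# Check whether the docker CLI is available.
if command -v docker >/dev/null 2>&1; then
  DOCKER_STATUS="present"
else
  DOCKER_STATUS="missing"
fi
echo "Docker is ${DOCKER_STATUS}"
```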
- Clone the repository: `git clone https://github.com/ProjectAGI/agi.git`
- Set variables: start from `/resources/variables-template.sh` and overwrite the values with ones suitable for your environment.
- Copy it to a convenient location and set an environment variable `VARIABLES_FILE` to point to it using the full path.
- Note: We recommend you set that up in `.bashrc` so that it is always defined correctly.
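For example, assuming you copied the template to `~/agi/variables.sh` (the path is illustrative), the `.bashrc` line would look like:

```shell
# Point AGIEF scripts at your variables file (full path required).
export VARIABLES_FILE="$HOME/agi/variables.sh"
```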
Note: The favoured (and our current) approach is to use 'in memory' persistence, specified in `node.properties` in the working folder. However, PostgreSQL is an option; if using it, set up and run the database by executing the relevant setup script first.
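As a sketch, selecting the persistence mode is just a matter of editing `node.properties` in the working folder. The property key below is an assumption for illustration only; copy the real template from /resources rather than this snippet:

```shell
# Illustrative only: write a node.properties choosing in-memory
# persistence. The key name 'persistence-mode' is an assumption,
# not the documented property; check the template in /resources.
cat > node.properties <<'EOF'
persistence-mode=node
EOF
cat node.properties
```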
To start running experiments, the `/bin/node_coordinator/run-in-docker.sh` script lets you build and run compute in a Docker container, which means you won't need to do any environment configuration on your own computer beyond installing Docker.
All scripts utilise environment variables defined in a 'variables' file, and every script begins by sourcing this file. `/resources/variables-template.sh` is an example with explanations of each variable. You can modify that file, or create your own instead.
IMPORTANT: Then set the environment variable `VARIABLES_FILE` to it using the full path. That is necessary even if you are using the unmodified template.
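The sourcing step that every script performs looks roughly like this (a self-contained sketch: the `AGI_HOME` variable and the `/tmp` fallback path are illustrative, not part of the real template):

```shell
# Every AGIEF script begins by sourcing the shared variables file
# named by VARIABLES_FILE.
VARIABLES_FILE="${VARIABLES_FILE:-/tmp/agi-variables.sh}"
# For this self-contained sketch, create a tiny variables file first:
echo 'AGI_HOME="$HOME/agi"' > "$VARIABLES_FILE"
source "$VARIABLES_FILE"
echo "AGI_HOME is $AGI_HOME"
```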
- The folder that you are running from must contain the file `node.properties` and a log4j configuration file. A working template of this run-folder is provided in the /resources directory.
- `node.properties` allows you to set the db mode to 'jdbc' or 'node'. The former is PostgreSQL, the latter is 'in-memory'.
- You can build and run the Compute Node using the scripts in `/bin/node_coordinator`. There is also the option of doing this in a Docker container using `/bin/run-in-docker.sh`; read its help to see how to use it.
- There are scripts for running the system; they take parameters such as the node properties and the initial state of the system (entities, data).
- Once a Compute Node is running as a Demo or Generic Experiment (see below), it acts as a server. You can then load and export experiments via the HTTP API. The GUI utilises the API to make that easy and to visualise all data structures, the entity tree and entity configurations.
Run a Demo
The simplest way to run an experiment is to choose a Demo; a number of examples are included.
- Choose a demo. By convention they follow a consistent naming scheme.
- Launch a Compute Node by running the appropriate `main()` method for that Demo.
- They can be run within the IDE or using the scripts described above.
- Send an `update` signal to the root `Experiment` node to start the experiment. You can do this using the RESTful API, or with the GUI, where you can also see what's going on.
- A Demo is an experiment that has been defined in code; alternatively, you can run an experiment defined in JSON input files.
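From the command line, the update step might look like the following. The port, endpoint path and query parameters here are assumptions for illustration, not the documented API, so check the Wiki for the real endpoint:

```shell
# Hypothetical sketch of sending an 'update' signal to the root
# Experiment entity over the HTTP API. Endpoint, port and parameter
# names are assumptions, for illustration only.
HOST="localhost"
PORT="8000"            # replace with your Compute Node's API port
ENTITY="experiment"    # name of the root Experiment entity
URL="http://${HOST}:${PORT}/update?entity=${ENTITY}&event=update"
echo "$URL"
# Once the Compute Node is running, uncomment:
# curl -sS "$URL"
```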
Run Generic Experiments
This describes how to run an experiment defined in JSON input files. If you don't already have them ready to go, you can run a given demo and export the input files (using the GUI or RESTful API).
- Launch the framework with the generic `main()` entry point.
- You will need to import the input files that define your experiment.
- They can be specified as input parameters when running, or after launch you can import them using the RESTful API or GUI.
- As above, get the experiment started by sending an `update` signal to the root `Experiment` node.
Run Advanced Experiments
We have a tool called `run-framework` which makes it easy to run predefined experiments, locally or remotely on physical or AWS infrastructure, conduct parameter sweeps, export and upload the results, and more.
There is also a set of experiment folders already defined and ready to go at `experiment-definitions`.
Running the GUI
- Run the GUI by starting the web server `/bin/www/python_server.sh` and going to http://localhost:8000
- Alternatively, open any of the web pages in `/bin/www` directly in your browser.
- Start with the main index page.
There are additional useful resources in the /resources directory, including a code-formatting style file, a log4j configuration file template, an empty run-folder with the necessary assets for the working directory, and a template for the variables file.
The purpose of this repository is to continue to improve AGIEF, making it better, faster and easier to use for the research community. We gladly accept contributions via GitHub pull requests that address existing bugs or propose improvements. Please read the Contributing Guide for more details.
The code is licensed under the GNU General Public License v3.0.