OIOIOI

Build status: https://hudson.sio2project.mimuw.edu.pl/job/oioioi-nightly-unittests/badge/icon

SIO2 is a free platform for carrying out algorithmic contests and OIOIOI is its main component — the web interface.

Simple installation

You can easily start development and run OIOIOI out of the box with Vagrant. Just enter the directory where the Vagrantfile and this README are placed, and type:

vagrant up

This will create a virtual machine instance with the web server and judges running.

You can specify the configuration in vagrant.yml. Supported configuration options (with examples):

port: 8001  # run oioioi on port 8001 instead of the default 8000
runserver_cmd: runserver_plus  # use manage.py runserver_plus instead of manage.py runserver

Docker Installation

Alternatively, Docker files are available to create images containing our services.

To start with, create the oioioi-base image with the command:

docker build -t oioioi-base -f Dockerfile.base .

Then run docker-compose up to start the infrastructure.

To start an additional number of workers, use docker-compose scale worker=<number> as described in the Docker Compose documentation.

To develop with Docker, after creating the oioioi-base image, build the oioioi image with:

docker build -t oioioi .

Then run:

OIOIOI_UID=$(id -u) docker-compose -f docker-compose-dev.yml up

to start the infrastructure in development mode. The current directory containing the source code will be bind-mounted at /sio2/oioioi/ inside the running container, and logs from the services will be available outside the container in ./logs/.

In both cases, the OIOIOI web interface will be available at localhost:8000, and a user admin with password admin will be created. If you are using the Docker installation in a production environment, remember to change the password.

Manual Installation

It is easiest to begin with a separate folder:

mkdir sio2
cd sio2

and to install OIOIOI inside a virtualenv:

virtualenv venv
. venv/bin/activate

Then OIOIOI and its dependencies can be installed using the following commands:

git clone git://github.com/sio2project/oioioi.git
cd oioioi
pip install -r requirements.txt

OIOIOI is a set of Django applications; therefore, you need to create a folder with the Django settings and other deployment configuration:

cd ..
oioioi-create-config deployment
cd deployment

The created deployment directory looks like a new Django project, but is already configured to serve the OIOIOI portal. You need to at least set the database configuration in settings.py.

In case of using PostgreSQL, install Psycopg2:

pip install psycopg2
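
A minimal sketch of the database section for PostgreSQL (the database name, user and password below are placeholders for your own values):

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'oioioi',          # placeholder: your database name
        'USER': 'oioioi',          # placeholder: your database user
        'PASSWORD': 'change_me',   # placeholder: your database password
        'HOST': '',                # empty string means local connection
    }
}
```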

Finally, initialize the database (we use PostgreSQL):

./manage.py migrate

Then you need to copy static files, like images and styles, to the deployment directory:

./manage.py collectstatic

Basic configuration

In the simple configuration, OIOIOI will use the system-installed compilers, and will not use the safe execution environment. Users' programs will be run with normal user privileges. This is not a safe configuration, and judging will be quite slow. It is intended only to get OIOIOI up and running easily for testing purposes.

Ensure that required dependencies are installed:

  • gcc/g++ (Ubuntu package: build-essential)
  • fpc (Ubuntu package: fp-compiler)
  • latex with support for Polish (Ubuntu packages: texlive-latex-base, texlive-lang-polish)

and in one terminal run the Django web server:

./manage.py runserver 0.0.0.0:8000

and in the other the evaluation daemons:

./manage.py supervisor

The supervisor process monitors all processes needed by OIOIOI, except the web server, and restarts them if they crash.

You can create an administrator account by running:

./manage.py createsuperuser

If you see a locale error, you may want to circumvent it by providing another locale to the command:

LC_ALL=C ./manage.py createsuperuser

Now you're ready to access the site at http://localhost:8000.

Production configuration

  1. Begin with the simple configuration described above.

  2. Ensure that production-grade dependencies are installed:

    • lighttpd binary (Ubuntu package: lighttpd; the lighttpd service itself should not be run)
    • uwsgi (pip install uwsgi)
  3. Make sure you are in the deployment folder and the virtualenv is activated.

  4. Install RabbitMQ. We tested version 2.8.6 from RabbitMQ Debian/Ubuntu Repos. Anything newer should work as well.

  5. Uncomment and set BROKER_URL in settings.py to point to the configured RabbitMQ vhost. The default setting corresponds to the default RabbitMQ installation.
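
    For example, assuming the stock RabbitMQ installation with the default guest account and vhost on localhost (adjust credentials, host and vhost for your setup):

```python
# Default guest account on a local RabbitMQ, default vhost '/':
BROKER_URL = 'amqp://guest:guest@localhost:5672//'
```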

  6. Download sandboxes:

    ./manage.py download_sandboxes
    
  7. Disable system compilers and unsafe code execution by commenting out USE_UNSAFE_EXEC = True and USE_LOCAL_COMPILERS = True in settings.py.
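
    After this step the relevant lines in settings.py should look roughly like this (a sketch; the surrounding contents of your settings.py may differ):

```python
# Commented out for production: judging uses the safe execution
# environment and the compilers from the downloaded sandboxes.
#USE_UNSAFE_EXEC = True
#USE_LOCAL_COMPILERS = True
```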

  8. (optionally) Disable starting the judging process on the server, especially if you want to configure judging machines (see below), which is strongly recommended. Comment out the RUN_LOCAL_WORKERS = True setting.

  9. (required only for dedicated judging machines) Enable the Filetracker server by uncommenting FILETRACKER_SERVER_ENABLED, FILETRACKER_LISTEN_ADDR, FILETRACKER_LISTEN_PORT and FILETRACKER_URL in settings.py, and restart the daemons.
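
    A sketch of the uncommented settings (the address, port and URL are placeholders; actual values depend on your network setup):

```python
FILETRACKER_SERVER_ENABLED = True
FILETRACKER_LISTEN_ADDR = '0.0.0.0'          # placeholder: interface to listen on
FILETRACKER_LISTEN_PORT = 9999               # the default port used in this README
FILETRACKER_URL = 'http://192.168.0.1:9999'  # placeholder: URL reachable from workers
```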

  10. Install and configure a web server. We recommend using nginx with the uwsgi plugin (included in the nginx-full Ubuntu package). An example configuration is automatically created as nginx-site.conf. Have a look there. What you probably want to do is (as root):

    cp nginx-site.conf /etc/nginx/sites-available/oioioi
    ln -s ../sites-available/oioioi /etc/nginx/sites-enabled/
    service nginx reload
    

    Once this is done, you no longer need to run manage.py runserver.

    If you prefer deploying with Apache, an example configuration is created as apache-site.conf. You would need to install apache2 and libapache2-mod-uwsgi packages.

  11. Comment out DEBUG = True in settings.py. This is crucial for security and efficiency. Also set ALLOWED_HOSTS.
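
    A minimal sketch of these settings for production (the host name is a placeholder for your own domain):

```python
#DEBUG = True  # commented out: DEBUG then defaults to False
ALLOWED_HOSTS = ['oioioi.example.com']  # placeholder: your domain(s)
```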

  12. Set the admin email in settings. Error reports and teacher account requests will be sent there.

  13. Set the SMTP server in settings. Otherwise new user registration (among others) will not work.
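
    A sketch of the admin and SMTP settings from steps 12 and 13 (all addresses and credentials are placeholders; these are standard Django settings):

```python
# Error reports and teacher account requests go to the ADMINS addresses.
ADMINS = (
    ('OIOIOI Admin', 'admin@example.com'),   # placeholder
)
# SMTP settings; required e.g. for new user registration e-mails.
EMAIL_HOST = 'smtp.example.com'              # placeholder
EMAIL_PORT = 587
EMAIL_HOST_USER = 'oioioi'                   # placeholder
EMAIL_HOST_PASSWORD = 'change_me'            # placeholder
EMAIL_USE_TLS = True
```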

  14. You probably want to run manage.py supervisor -d automatically when the system starts. One way is to add the following line to the OIOIOI user's crontab (crontab -e):

    @reboot <deployment_folder>/start_supervisor.sh
    
  15. (optionally) If you have efficiency problems or expect heavy load, you may consider using gevent as the uwsgi event loop. To do so, install gevent and set the UWSGI_USE_GEVENT flag in settings.py.

  16. (optionally) You can also enable content caching. To do so, first you have to install dependencies:

    • memcached (Ubuntu package: memcached)
    • python-memcached (pip install python-memcached)

    Next, you have to uncomment the corresponding lines under "Cache" in settings.py and set the address of your memcached instance. Note that you can run memcached locally or on a remote server. For more information about memcached configuration, see the official documentation.
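
    The uncommented cache section typically follows the standard Django memcached configuration; a sketch (the address is a placeholder for your memcached instance):

```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',  # placeholder: your memcached address
    }
}
```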

  17. (optionally) You can ensure users are automatically notified of certain events in the system, or notify them on your own, by enabling the Notifications Server. For more information, consult the notifications/README.rst file.

Setting up judging machines

On every judging machine do the following:

  1. Create a new user account for the judging processes and switch to it.

  2. Set up virtualenv:

    virtualenv venv
    . venv/bin/activate
    
  3. Download and install the sioworkers package:

    git clone https://github.com/sio2project/sioworkers
    cd sioworkers
    python setup.py install
    
  4. Copy and adjust configuration files:

    cp config/supervisord.conf{.example,}
    cp config/supervisord-conf-vars.conf{.example,}
    

    Modify the SIOWORKERSD_HOST and FILETRACKER_URL variables in config/supervisord-conf-vars.conf. By default, sioworkersd is run by supervisor on the same host as OIOIOI (SIO2). The Filetracker server is also run there, by default on port 9999. Consider changing WORKER_CONCURRENCY to a smaller value if you are judging problems without oitimetool (this depends on the rules of the particular contest and on USE_UNSAFE_EXEC in deployment/settings.py on the OIOIOI host).
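
    For example, if OIOIOI runs on a host named sio2.example.com (a placeholder), and assuming the file uses shell-style variable assignments as its name suggests, the adjusted variables could look like:

```shell
# Placeholders: replace sio2.example.com with your OIOIOI host.
SIOWORKERSD_HOST="sio2.example.com"
FILETRACKER_URL="http://sio2.example.com:9999"
# Lower this when judging without oitimetool.
WORKER_CONCURRENCY=2
```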

  5. Start the supervisor:

    ./supervisor.sh start
    
  6. You probably want to have the worker started automatically when the system starts. To do so, add the following line to the sioworker user's crontab (crontab -e):

    @reboot <deployment_folder>/supervisor.sh start
    

Final notes

It is strongly recommended to install the librabbitmq Python module (on the server). We observed evaluation requests that were not dispatched when running Celery with its default AMQP binding library:

pip install librabbitmq

Celery will pick up the new library automatically, once you restart the daemons using:

./manage.py supervisor restart all

Installing on 64-bit machines

The sandboxes provided by the SIO2 Project contain 32-bit binaries, so it is recommended to install OIOIOI on a 32-bit Linux system. Otherwise, required libraries may be missing. Below we list some that we found necessary when installing OIOIOI on a pristine Ubuntu Server 12.04 LTS (Precise Pangolin):

  • libz (Ubuntu package: zlib1g:i386)

Upgrading

Make sure you are in the deployment folder and the virtualenv is activated. Then run:

pip install -e git://github.com/sio2project/oioioi.git#egg=oioioi
./manage.py migrate
./manage.py collectstatic
./manage.py supervisor restart all

and restart the judging machines.

Upgrading from Django 1.8

Please make sure to reinstall all packages to avoid compatibility issues:

pip install -e git://github.com/sio2project/oioioi.git#egg=oioioi
pip install -I --force-reinstall -r requirements.txt
./manage.py migrate
./manage.py collectstatic
./manage.py supervisor restart all

Upgrading from an old version

If you're getting the "Upgrading from an old version" message when trying to sync the database, it means you had an old version of OIOIOI based on version 1.5 or 1.6 of the Django framework. Django 1.7 introduced a new migration system, which requires a more complicated upgrade process.

IMPORTANT: BACKUP YOUR DATABASE BEFORE DOING THE NEXT STEP.

In the typical situation, where you didn't create any custom migrations, we've automated the process for you: make sure your database settings are valid and run:

./manage.py upgrade_to_17

That's all. If, however, you have custom changes that are incompatible with our script, or you want to understand what happens, the following needs to be done:

  1. Install Django 1.6 and South, and place all of the old migrations in the proper directories. The easiest way is to 'git checkout' the last commit before the 1.7 commit and do 'pip install -r requirements.txt'. If you have custom changes in your OIOIOI directory and they conflict with our changes, you'll have to merge them yourself. Our automatic script uses a temporary virtualenv and a package containing all the necessary files to run the old migrations.

  2. Now enable all applications you have ever used (in the INSTALLED_APPS setting) and run ./manage.py migrate. If you don't know which applications you've used in the past, just enable them all and run ./manage.py syncdb followed by ./manage.py migrate. Our script does that. If you have your own custom migrations, they could conflict with ours; you'll have to resolve these conflicts yourself.

  3. Get the newest OIOIOI, install the needed packages and remove all of the old migrations. Again, the easiest way is to 'git checkout' the last commit and do 'pip install -r requirements.txt'.

  4. Migrate all the new Django 1.7 migrations. The necessary changes are already in the database, and in most cases Django will detect this by faking the migrations - marking them as applied without actually applying them. However, some migrations need to be explicitly faked. The commands that need to be run in the typical case are:

    ./manage.py migrate --fake balloons 0002
    ./manage.py migrate --fake complaints 0002
    ./manage.py migrate --fake contestexcl 0002
    ./manage.py migrate --fake contestlogo 0002
    ./manage.py migrate --fake contests 0002
    ./manage.py migrate
    

    assuming that these applications are in INSTALLED_APPS. If your own custom migrations introduced circular dependency loops on foreign keys in applications other than those mentioned above, you have to run ./manage.py migrate --fake for them as well.

  5. Run ./manage.py collectstatic and start the supervisor, your judging machines and the server.

Changes in the deployment directory

When new features are added, the configuration files in your custom deployment directory may need an update. A valid example configuration can always be found in the oioioi sources (the oioioi/deployment directory, *.template files). One of the simplest ways to learn about the changes is:

diff -u path_to_deployment/changed_file path_to_oioioi/oioioi/deployment/changed_file.template

Once you have made sure that your deployment directory is up-to-date, change CONFIG_VERSION in your custom deployment/settings.py so that it equals INSTALLATION_CONFIG_VERSION in oioioi/default_settings.py.

List of changes since the CONFIG_VERSION numbering was introduced:

    • Added unpackmgr queue entry to deployment/supervisord.conf:

      [program:unpackmgr]
      command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd -E -l info -Q unpackmgr -c {{ settings.UNPACKMGR_CONCURRENCY }}
      startretries=0
      stopwaitsecs=15
      redirect_stderr=true
      stdout_logfile={{ PROJECT_DIR }}/logs/unpackmgr.log
      
    • Added USE_SINOLPACK_MAKEFILES and UNPACKMGR_CONCURRENCY options to deployment/settings.py:

      USE_SINOLPACK_MAKEFILES = False
      #UNPACKMGR_CONCURRENCY = 1
      
    • Added Notifications Server entries to deployment/supervisord.conf:

      [program:notifications-server]
      command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py notifications_server
      redirect_stderr=true
      {% if not settings.NOTIFICATIONS_SERVER_ENABLED %}exclude=true{% endif %}
      
    • Added NOTIFICATIONS_ options to deployment/settings.py:

      # Notifications configuration (client)
      # This one is for JavaScript socket.io client.
      # It should contain actual URL available from remote machines.
      NOTIFICATIONS_SERVER_URL = 'http://localhost:7887/'
      
      # Notifications configuration (server)
      NOTIFICATIONS_SERVER_ENABLED = False
      
      # URL connection string to a Notifications Server instance
      NOTIFICATIONS_OIOIOI_URL = 'http://localhost:8000/'
      
      # URL connection string for RabbitMQ instance used by Notifications Server
      NOTIFICATIONS_RABBITMQ_URL = 'amqp://localhost'
      
      # Port that the Notifications Server listens on
      NOTIFICATIONS_SERVER_PORT = 7887
      
    • Added prizesmgr queue entry to deployment/supervisord.conf:

      [program:prizesmgr]
      command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd -E -l info -Q prizesmgr -c 1
      startretries=0
      stopwaitsecs=15
      redirect_stderr=true
      stdout_logfile={{ PROJECT_DIR }}/logs/prizesmgr.log
      
    • Added ATOMIC_REQUESTS database option to deployment/settings.py:

      DATABASES = {
       'default': {
        'ENGINE': 'django.db.backends.', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': '',                      # Or path to database file if using sqlite3.
        'USER': '',                      # Not used with sqlite3.
        'PASSWORD': '',                  # Not used with sqlite3.
        'HOST': '',                      # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',                      # Set to empty string for default. Not used with sqlite3.
        'ATOMIC_REQUESTS': True,         # Don't touch unless you know what you're doing.
       }
      }
      
    • Added rankingsd, cleanupd, ipauthsyncd, ipauth-dnsserver entries to deployment/supervisord.conf:

      [program:rankingsd]
      command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py rankingsd
      startretries=0
      redirect_stderr=true
      stdout_logfile={{ PROJECT_DIR }}/logs/rankingsd.log
      
      [program:cleanupd]
      command={{ PROJECT_DIR }}/manage.py cleanupd
      redirect_stderr=true
      stdout_logfile={{ PROJECT_DIR }}/logs/cleanupd.log
      
      [program:ipauthsyncd]
      command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py ipauthsyncd
      startretries=0
      redirect_stderr=true
      stdout_logfile={{ PROJECT_DIR }}/logs/ipauthsyncd.log
      {% if not 'oioioi.ipauthsync' in settings.INSTALLED_APPS %}exclude=true{% endif %}
      
      [program:ipauth-dnsserver]
      command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py ipauth-dnsserver
      startretries=0
      redirect_stderr=true
      stdout_logfile={{ PROJECT_DIR }}/logs/ipauth-dnsserver.log
      {% if not settings.IPAUTH_DNSSERVER_DOMAIN %}exclude=true{% endif %}
      
    • Added new condition to sioworkersd in deployment/supervisord.conf and corresponding entry in deployment/settings.py:

      {% if settings.SIOWORKERS_BACKEND != 'oioioi.sioworkers.backends.SioworkersdBackend' or not settings.RUN_SIOWORKERSD %}exclude=true{% endif %}
      
    • Added evalmgr-zeus entry to deployment/supervisord.conf:

      [program:evalmgr-zeus]
      command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd -E -l debug -Q evalmgr-zeus -c 1
      startretries=0
      stopwaitsecs=15
      redirect_stderr=true
      stdout_logfile={{ PROJECT_DIR }}/logs/evalmgr-zeus.log
      {% if not settings.ZEUS_INSTANCES %}exclude=true{% endif %}
      
    • Deleted zeus-fetcher entry from deployment/supervisord.conf.

    • Added ZEUS_PUSH_GRADE_CALLBACK_URL entry to deployment/settings.py:

      ZEUS_PUSH_GRADE_CALLBACK_URL = 'https://sio2.dasie.mimuw.edu.pl'
      
    • Added logging to file for logger oioioi.zeus in deployment/settings.py:

      LOGGING['handlers']['zeus_file'] = {
          'level': 'INFO',
          'class': 'logging.handlers.RotatingFileHandler',
          'filename': '__DIR__/logs/zeus.log',
          'maxBytes': 1024 * 1024 * 5, # 5 MB (supervisord's default is 50 MB)
          'backupCount': 10, # same as in supervisord
          'formatter': 'date_and_level',
      }
      LOGGING['loggers']['oioioi.zeus'] = {
          'handlers': ['zeus_file'],
          'level': 'DEBUG',
      }
      
    • Removed SAFE_EXEC_MODE entry from deployment/settings.py.
    • Removed FILELOCK_BASEDIR entry from deployment/settings.py.
    • Removed ENABLE_SPLITEVAL and SPLITEVAL_EVALMGR entries from deployment/settings.py.
    • Removed evalmgr-lowprio entry from deployment/supervisord.conf.
    • New version of sioworkers with changed database backend. Please update sioworkers with:

      . venv/bin/activate
      pip install -r requirements.txt
      

      and remove old database file (deployment/sioworkersd.sqlite by default).

    • Changed database filename (--database option) in deployment/supervisord.conf:

      [program:sioworkersd]
      command=twistd -n -l- --pidfile={{ PROJECT_DIR }}/pidfiles/sioworkersd.pid sioworkersd --database={{ PROJECT_DIR }}/sioworkersd.db
      # (...)
      
    • Added commented out OIOIOI_INSTANCE_PRIORITY_BONUS and OIOIOI_INSTANCE_WEIGHT_BONUS entries to deployment/settings.py:

      # Bonus to judging priority and judging weight for each contest on this
      # OIOIOI instance.
      #OIOIOI_INSTANCE_PRIORITY_BONUS = 0
      #OIOIOI_INSTANCE_WEIGHT_BONUS = 0
      
    • Modified the comment of the SITE_NAME entry in deployment/settings.py:

      # Site name displayed in the title and used by sioworkersd
      # to distinguish OIOIOI instances.
      SITE_NAME = 'OIOIOI'
      
    • Removed CeleryBackend from the sioworkers backends; SioworkersdBackend is set as the new default backend. Removed the [program:sioworkers] entry from deployment/supervisord.conf.
    • Added PUBLIC_ROOT_URL to deployment/settings.py:

      # The website address as it will be displayed to users in some places,
      # including but not limited to the mail notifications.
      # Defaults to 'http://localhost'.
      #PUBLIC_ROOT_URL = 'http://enter-your-domain-name-here.com'
      
    • Added mailnotifyd, a backend for handling e-mail subscriptions, to deployment/supervisord.conf:

      [program:mailnotifyd]
      command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py mailnotifyd
      startretries=0
      redirect_stderr=true
      stdout_logfile={{ PROJECT_DIR }}/logs/mailnotifyd.log
      
    • Removed SUBMITTABLE_EXTENSIONS from deployment/settings.py.
    • If you want to use Sentry (a crash reporting and aggregation platform), you need to:

      • Correctly set up RAVEN_CONFIG (https://docs.sentry.io/quickstart/ should help you):

        # Error reporting
        import raven
        
        RAVEN_CONFIG = {
            # Won't do anything with no dsn
            # tip: append ?timeout=5 to avoid dropouts during high reporting traffic
            'dsn': 'enter_your_dsn_here',
            # This should be a path to git repo
            'release': raven.fetch_git_sha(
                os.path.join(os.path.dirname(oioioi.__file__), os.pardir)),
        }
        
      • Add a new filter to the logging configuration:

        'filters': {
            ...
            'omit_sentry': {
                '()': 'oioioi.base.utils.log.OmitSentryFilter'
            },
        }
        
      • Add a Sentry handler:

        'handlers': {
            ...
            'sentry': {
                'level': 'ERROR',
                'filters': ['omit_sentry'],
                'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
            }
        }
        
      • Add the Sentry handler to every logger:

        'handlers': ['console', 'sentry'],
        
      • Add new loggers:

        'loggers': {
            ...
            'raven': {
                'handlers': ['console', 'mail_admins'],
                'level': 'DEBUG',
                'propagate': False,
            },
            'sentry.errors': {
                'handlers': ['console', 'mail_admins'],
                'level': 'DEBUG',
                'propagate': False,
            }
        }
        
    • Upgrading to Django 1.9 requires the following changes in the config file:

      • The TEMPLATE_* variables were replaced with the TEMPLATES array; TEMPLATE_CONTEXT_PROCESSORS should be changed to:

        TEMPLATES[0]['OPTIONS']['context_processors'] += [
        #    'oioioi.contestlogo.processors.logo_processor',
        #    'oioioi.contestlogo.processors.icon_processor',
        #    'oioioi.avatar.processors.gravatar',
        #    'oioioi.notifications.processors.notification_processor',
        #    'oioioi.globalmessage.processors.global_message_processor',
        ]
        
    • Settings should now declare an explicit SITE_ID; you can check your site id via the management console:

      $ ./manage.py shell
      >>> from django.contrib.sites.models import Site
      >>> Site.objects.get().id
      1
      

      The returned id should be added to your config file:

      SITE_ID = 1
      
    • Added filetracker-cache-cleaner entry to deployment/supervisord.conf:

      [program:filetracker-cache-cleaner]
      command=filetracker-cache-cleaner -c {{ FILETRACKER_CACHE_ROOT }} -s {{ FILETRACKER_CACHE_SIZE }} -i {{ FILETRACKER_CACHE_CLEANER_SCAN_INTERVAL }} -p {{ FILETRACKER_CACHE_CLEANER_CLEAN_LEVEL }}
      redirect_stderr=true
      stdout_logfile={{ PROJECT_DIR }}/logs/filetracker-cache-cleaner.log
      {% if not settings.FILETRACKER_CACHE_CLEANER_ENABLED %}exclude=true{% endif %}
      
    • Added new options related to remote_storage_factory to deployment/settings.py:

      # When using a remote_storage_factory it's necessary to specify a cache
      # directory in which necessary files will be stored.
      #FILETRACKER_CACHE_ROOT = '__DIR__/cache'
      
      # When using a remote storage it's recommended to enable a cache cleaner daemon
      # which will periodically scan the cache directory and remove files that aren't
      # used. For a detailed description of each option, please read the cache cleaner
      # configuration section in the sioworkersd documentation.
      #FILETRACKER_CACHE_CLEANER_ENABLED = True
      #FILETRACKER_CACHE_CLEANER_SCAN_INTERVAL = '1h'
      #FILETRACKER_CACHE_CLEANER_CLEAN_LEVEL = '50'
      #FILETRACKER_CACHE_SIZE = '8G'
      

Usage

Well, we don't have a full-fledged User's Guide, but feel free to propose what should be added here.

Creating task packages

To run a contest, you obviously need some tasks. To add a task to a contest in OIOIOI, you need to create an archive called a task package. Here are some pointers on how it should look:

Testing

OIOIOI has a large suite of unit tests. All utilities useful for testing can be found in the test/ directory. Currently these are:

  • test.sh - a simple test runner
  • test_parallel.py - runs the same tests as test.sh, but uses multiple processes
  • loadtest.py - load testing script

Backup

Amanda is recommended for OIOIOI backups. A sample configuration with a README is available in the extra/amanda directory.

Contact us

Additional information can be found on our:

If you have any further questions regarding installation, configuration or usage of OIOIOI, there are some places you can reach us through: