# Travis Logs


Travis Logs processes log updates that are streamed from Travis Worker instances via RabbitMQ. The log parts are forwarded to the web client (Travis Web) via Pusher and written to the database.

Once all log parts have been received and a timeout has passed (10 seconds by default), the parts are aggregated into a single, final log.

Travis Logs archives logs to S3, and the database records are purged once the archived content is verified to match the database content.

## Process types

Some of the process types listed in `./Procfile` depend on other process types, while others are independent:

### logs

The logs process is responsible for consuming log part messages via AMQP, writing each log part to the logs database, and sending each part to Pusher.
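
A minimal sketch of that loop, assuming Bunny for AMQP, Sequel for the database, and the pusher gem; the queue name, table layout, and Pusher channel/event names here are illustrative, not the project's actual ones:

```ruby
require 'bunny'
require 'json'
require 'pusher'
require 'sequel'

DB = Sequel.connect(ENV.fetch('DATABASE_URL'))
Pusher.url = ENV.fetch('PUSHER_URL')

conn = Bunny.new(ENV.fetch('AMQP_URL')).tap(&:start)
queue = conn.create_channel.queue('reporting.jobs.logs', durable: true)

queue.subscribe(block: true) do |_delivery_info, _properties, payload|
  part = JSON.parse(payload)

  # Persist the part so the aggregate process can assemble the full log later.
  DB[:log_parts].insert(
    log_id:  part['id'],
    number:  part['number'],
    content: part['log'],
    final:   part['final']
  )

  # Stream the same part to Travis Web in real time.
  Pusher.trigger("job-#{part['id']}", 'job:log', part)
end
```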

### web

The web process runs a Sinatra web app that exposes APIs to handle Pusher webhook events and to set log contents.
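
In outline, the app might look like the following Sinatra sketch; the route paths and table names are assumptions, and a real handler would also verify Pusher's webhook signature before acting:

```ruby
require 'sinatra'
require 'sequel'
require 'json'

DB = Sequel.connect(ENV.fetch('DATABASE_URL'))

# Hypothetical route: Pusher posts channel existence events here.
post '/pusher/existence' do
  payload = JSON.parse(request.body.read)
  # ... check the X-Pusher-Signature header, then record the channel
  # occupancy change described by payload ...
  status 204
end

# Hypothetical route: replace the stored content of a log wholesale.
put '/logs/:id' do
  DB[:logs].where(id: params[:id].to_i).update(content: request.body.read)
  status 204
end
```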

### aggregate

The aggregate process is responsible for finding all log parts that are eligible for "aggregation" into single log records. The aggregation itself may either be done within the aggregate process or offloaded to the aggregator process via Sidekiq. Once aggregation is complete, a job is sent for consumption by the archive process via Sidekiq.
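
A hedged sketch of both halves, assuming Sequel and Sidekiq; the eligibility rule, column names, and worker/queue names are assumptions for illustration:

```ruby
require 'sequel'
require 'sidekiq'

DB = Sequel.connect(ENV.fetch('DATABASE_URL'))

# Treat a log as aggregatable once no new parts have arrived within the timeout.
def aggregatable_log_ids(timeout = 10)
  cutoff = Time.now - timeout
  DB[:log_parts].group(:log_id).having { max(:created_at) < cutoff }.select_map(:log_id)
end

def aggregate(log_id)
  # Stitch the parts together in order to form the final log body.
  content = DB[:log_parts].where(log_id: log_id).order(:number).select_map(:content).join

  DB.transaction do
    DB[:logs].where(id: log_id).update(content: content, aggregated_at: Time.now)
    DB[:log_parts].where(log_id: log_id).delete
  end

  # Hand the finished log to the archive process.
  Sidekiq::Client.push('queue' => 'archive', 'class' => 'ArchiveWorker', 'args' => [log_id])
end

aggregatable_log_ids.each { |id| aggregate(id) }
```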

### aggregator

The aggregator process is an optional complement to the aggregate process, handling the heavy lifting via Sidekiq so that aggregation may be performed in parallel.
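
The worker shape is standard Sidekiq; the class and queue names below are illustrative, and `aggregate` refers to the routine sketched in the previous section:

```ruby
require 'sidekiq'

class AggregateWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'aggregate'

  def perform(log_id)
    # The same work the aggregate process can do inline, now spread
    # across Sidekiq's worker threads.
    aggregate(log_id)
  end
end
```

With this in place, the aggregate process can enqueue `AggregateWorker.perform_async(log_id)` rather than aggregating inline.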

### archive

The archive process is responsible for moving the content of each fully aggregated log record from the database to S3. Once archiving is complete, a job is sent for consumption by the purge process via Sidekiq.
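
A sketch of that step using the aws-sdk-s3 gem; the bucket, key layout, and worker names are assumptions:

```ruby
require 'aws-sdk-s3'
require 'sequel'
require 'sidekiq'

DB = Sequel.connect(ENV.fetch('DATABASE_URL'))
S3 = Aws::S3::Client.new # region and credentials come from the environment

def archive(log_id)
  content = DB[:logs].where(id: log_id).get(:content)

  S3.put_object(
    bucket: ENV.fetch('ARCHIVE_BUCKET'),
    key: "jobs/#{log_id}/log.txt",
    body: content,
    content_type: 'text/plain'
  )

  DB[:logs].where(id: log_id).update(archived_at: Time.now)

  # Ask the purge process to verify the upload and drop the database copy.
  Sidekiq::Client.push('queue' => 'purge', 'class' => 'PurgeWorker', 'args' => [log_id])
end
```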

### purge

The purge process is responsible for setting log record content to NULL after verifying that the archived (S3) content fully matches the log record content. If there is a mismatch, the log id is sent to the archive process for re-archiving via Sidekiq.
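
A sketch of the verify-then-purge step, reusing the setup and naming assumptions from the archive sketch above:

```ruby
require 'aws-sdk-s3'
require 'sequel'
require 'sidekiq'

DB = Sequel.connect(ENV.fetch('DATABASE_URL'))
S3 = Aws::S3::Client.new

def purge(log_id)
  local  = DB[:logs].where(id: log_id).get(:content).to_s
  remote = S3.get_object(bucket: ENV.fetch('ARCHIVE_BUCKET'),
                         key: "jobs/#{log_id}/log.txt").body.read

  if remote == local
    # Verified: null out the database copy to reclaim space.
    DB[:logs].where(id: log_id).update(content: nil, purged_at: Time.now)
  else
    # Mismatch: send the log back through the archive process.
    Sidekiq::Client.push('queue' => 'archive', 'class' => 'ArchiveWorker', 'args' => [log_id])
  end
end
```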

## License & copyright information

See the `LICENSE` file.

Copyright (c) 2011-2016 Travis CI GmbH
