This demo shows how Optimizely Fullstack can be used to control the batch size of one (or multiple) logging services.
- Clone this repo
- Navigate to the root of the repo
- Create a virtual environment called `venv` and activate it:

  ```
  virtualenv venv
  . venv/bin/activate
  ```

- Install the Optimizely Python SDK: `pip install optimizely-sdk`
- To run a single logging service: `python run.py`
- To run multiple logging services: `python run_multiple.py`
- Sign up for a free trial of Optimizely Fullstack here
- Follow all of the prompts
- Create your first Optimizely Fullstack Project and name it whatever you like
- Find your production SDK key under 'Settings'
- Update the SDK key on line 7 of `logging_service.py` (aside: move this to an environment variable if you plan on committing your example app; see the sketch below)
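  A minimal sketch of the environment-variable approach, assuming a recent SDK version that accepts an `sdk_key` argument (the `OPTIMIZELY_SDK_KEY` variable name is hypothetical):

  ```python
  import os

  from optimizely import optimizely

  # Read the SDK key from the environment instead of hard-coding it.
  optimizely_client = optimizely.Optimizely(sdk_key=os.environ["OPTIMIZELY_SDK_KEY"])
  ```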
- Create a Feature Flag:
  - 'Features' > 'Create New Feature'
  - Enter `log_batching` as the feature key
  - Add a feature variable called `batch_size` (Variable Key: `batch_size`, Type: Integer, Default Value: 20)
  - Select 'Environment' as `Production`
  - Toggle the feature `on`
  - Roll it out to 100% of traffic
- See what happens :)
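Behind the scenes, the logging service asks Optimizely whether `log_batching` is enabled and, if so, reads `batch_size`. A minimal sketch of those calls using the SDK's standard `is_feature_enabled` and `get_feature_variable_integer` methods (an illustration, not the exact code in `logging_service.py`):

```python
def get_batch_size(optimizely_client, service_name):
    """Return the batch size Optimizely assigns to this service."""
    # The service name is used as the user ID, so each named service
    # is bucketed independently.
    if optimizely_client.is_feature_enabled("log_batching", service_name):
        return optimizely_client.get_feature_variable_integer(
            "log_batching", "batch_size", service_name
        )
    return 1  # flag off: flush each log line immediately (assumed fallback)
```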
- Roll out your feature to some percentage (try 50%) of traffic
  - Maybe change your `batch_size` to 10 to speed up the demo
- Run `python run_multiple.py` to see what happens
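With a partial rollout, Optimizely buckets each user ID deterministically, so when the service name is used as the user ID, some named services get batching and others don't, and the assignment is sticky across runs. A self-contained sketch (the service names other than `app_server` are hypothetical):

```python
import os

from optimizely import optimizely

optimizely_client = optimizely.Optimizely(sdk_key=os.environ["OPTIMIZELY_SDK_KEY"])

# At a 50% rollout, roughly half of these IDs hash into the enabled bucket.
for service_name in ["app_server", "auth_server", "web_server"]:
    enabled = optimizely_client.is_feature_enabled("log_batching", service_name)
    print(f"{service_name}: log batching {'on' if enabled else 'off'}")
```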
- `logging_service.py`: A service which logs to the console, with Optimizely configured so that feature flags and variables can control the logging service's behavior.
- Mock services which send fake logs to the logging service.
- `run.py`: Entrypoint to a simple 'one service' demo. Bootstraps one mock service called `app_server`.
- `run_multiple.py`: Entrypoint for running multiple services at once. This is useful for showing off targeting of individual named services.
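For reference, a minimal sketch of the kind of buffer-and-flush logic the `batch_size` variable controls (an assumed illustration, not code copied from `logging_service.py`):

```python
class BatchingLogger:
    """Buffers log lines and prints them to the console in batches."""

    def __init__(self, batch_size=20):
        self.batch_size = batch_size  # e.g. the batch_size feature variable
        self.buffer = []

    def log(self, message):
        self.buffer.append(message)
        # Flush once the buffer reaches the configured batch size.
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        print("\n".join(self.buffer))
        self.buffer.clear()
```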