This repository ports the latest Featurevisor JavaScript SDK to Python.
The package name is featurevisor, and it targets Python 3.10+.
This SDK is compatible with Featurevisor v2.0 projects and above.
```
pip install featurevisor
```

Initialize the SDK with Featurevisor datafile content:

```python
import json
from urllib.request import urlopen

from featurevisor import create_instance

datafile_url = "https://cdn.yoursite.com/datafile.json"

with urlopen(datafile_url) as response:
    datafile_content = json.load(response)

f = create_instance({
    "datafile": datafile_content,
})
```

We can evaluate three types of values against a particular feature:

- Flag (`bool`): whether the feature is enabled or not
- Variation (`string`): the variation of the feature, if any
- Variables: variable values of the feature, if any
Context is a plain dictionary of attribute values used during evaluation:
```python
context = {
    "userId": "123",
    "country": "nl",
}
```

You can provide context at initialization:
```python
f = create_instance({
    "context": {
        "deviceId": "123",
        "country": "nl",
    },
})
```

You can merge more context later:
```python
f.set_context({
    "userId": "234",
})
```

Or replace the existing context:
```python
f.set_context(
    {
        "deviceId": "123",
        "userId": "234",
        "country": "nl",
        "browser": "chrome",
    },
    True,  # replace the existing context instead of merging
)
```

You can also pass additional per-evaluation context:
```python
is_enabled = f.is_enabled("my_feature", {"country": "nl"})
variation = f.get_variation("my_feature", {"country": "nl"})
variable_value = f.get_variable("my_feature", "my_variable", {"country": "nl"})
```

Check whether a feature is enabled:

```python
if f.is_enabled("my_feature"):
    # the feature is enabled
    pass
```

Get a feature's variation:

```python
variation = f.get_variation("my_feature")

if variation == "treatment":
    # the "treatment" variation is active
    pass
```

Get a variable value:

```python
bg_color = f.get_variable("my_feature", "bgColor")
```

Typed convenience methods are also available:
```python
f.get_variable_boolean(feature_key, variable_key, context={})
f.get_variable_string(feature_key, variable_key, context={})
f.get_variable_integer(feature_key, variable_key, context={})
f.get_variable_double(feature_key, variable_key, context={})
f.get_variable_array(feature_key, variable_key, context={})
f.get_variable_object(feature_key, variable_key, context={})
f.get_variable_json(feature_key, variable_key, context={})
```

You can also fetch all evaluations at once:

```python
all_evaluations = f.get_all_evaluations()
```

You can pin feature evaluations with sticky values:
```python
f = create_instance({
    "sticky": {
        "myFeatureKey": {
            "enabled": True,
            "variation": "treatment",
            "variables": {
                "myVariableKey": "myVariableValue",
            },
        }
    }
})
```

Or update them later:
```python
f.set_sticky({
    "myFeatureKey": {
        "enabled": False,
    }
})
```

The SDK accepts either parsed JSON content or a JSON string:

```python
f.set_datafile(datafile_content)
f.set_datafile(json.dumps(datafile_content))
```

Supported log levels:

- `fatal`
- `error`
- `warn`
- `info`
- `debug`
```python
from featurevisor import create_instance

f = create_instance({
    "datafile": datafile_content,
    "logLevel": "debug",
})
```

You can also provide a custom logger handler via `create_logger`.
The SDK emits the following events:

- `datafile_set`
- `context_set`
- `sticky_set`

Listen with `on`, which returns an unsubscribe function:

```python
unsubscribe = f.on("datafile_set", lambda details: print(details))

unsubscribe()
```

Hooks support the following types:
- `before`
- `bucketKey`
- `bucketValue`
- `after`

They can be passed during initialization or added later with `add_hook`.
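Conceptually, hooks of a given type run in sequence, each receiving and returning the evaluation's intermediate state. The standalone sketch below illustrates that pipeline idea with a `before`-style hook that fills in a default context attribute; the function names and options shape here are illustrative only, not the SDK's actual API:

```python
from typing import Any, Callable

# A hook takes the evaluation options and returns them, possibly modified.
Hook = Callable[[dict[str, Any]], dict[str, Any]]


def run_before_hooks(hooks: list[Hook], options: dict[str, Any]) -> dict[str, Any]:
    """Apply each hook in registration order, threading the options through."""
    for hook in hooks:
        options = hook(options)
    return options


def add_default_country(options: dict[str, Any]) -> dict[str, Any]:
    # fill in a default attribute when the caller did not provide one
    options.setdefault("context", {}).setdefault("country", "nl")
    return options


options = run_before_hooks(
    [add_default_country],
    {"featureKey": "my_feature", "context": {"userId": "123"}},
)
print(options["context"])  # {'userId': '123', 'country': 'nl'}
```

Because each hook returns the options it received, multiple hooks compose naturally in registration order.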
You can spawn a child instance with additional context:

```python
child = f.spawn({"country": "de"})

child.is_enabled("my_feature")
```

Close the instance when it is no longer needed:

```python
f.close()
```

The Python package also exposes a CLI:

```
python -m featurevisor test
python -m featurevisor benchmark
python -m featurevisor assess-distribution
```

These commands are intended for use from inside a Featurevisor project and rely on `npx featurevisor` being available locally.
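If you want to drive the test command from a Python script (for example, as CI glue), a minimal subprocess wrapper might look like the sketch below. The helper name and project path are illustrative; only the `python -m featurevisor test` invocation and its `--projectDirectoryPath` flag come from this document:

```python
import subprocess
import sys


def run_featurevisor_tests(project_dir: str) -> bool:
    """Invoke `python -m featurevisor test` and report whether all specs passed."""
    result = subprocess.run(
        [
            sys.executable,
            "-m",
            "featurevisor",
            "test",
            f"--projectDirectoryPath={project_dir}",
        ],
        capture_output=True,
        text=True,
    )
    # the CLI exits with a non-zero status when specs fail or the project is missing
    return result.returncode == 0
```

The same pattern works for the `benchmark` and `assess-distribution` subcommands by swapping the subcommand and flags.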
Run Featurevisor test specs using the Python SDK:

```
python -m featurevisor test \
  --projectDirectoryPath=/path/to/featurevisor-project
```

Useful options:

```
python -m featurevisor test --keyPattern=foo
python -m featurevisor test --assertionPattern=variation
python -m featurevisor test --onlyFailures
python -m featurevisor test --showDatafile
python -m featurevisor test --verbose
python -m featurevisor test --with-tags
python -m featurevisor test --with-scopes
```

Benchmark repeated Python SDK evaluations against a built datafile:
```
python -m featurevisor benchmark \
  --projectDirectoryPath=/path/to/featurevisor-project \
  --environment=production \
  --feature=my_feature \
  --context='{"userId":"123"}' \
  --n=1000
```

For variation benchmarks:
```
python -m featurevisor benchmark \
  --projectDirectoryPath=/path/to/featurevisor-project \
  --environment=production \
  --feature=my_feature \
  --variation \
  --context='{"userId":"123"}'
```

For variable benchmarks:
```
python -m featurevisor benchmark \
  --projectDirectoryPath=/path/to/featurevisor-project \
  --environment=production \
  --feature=my_feature \
  --variable=my_variable_key \
  --context='{"userId":"123"}'
```

Inspect enabled/disabled and variation distribution over repeated evaluations:
```
python -m featurevisor assess-distribution \
  --projectDirectoryPath=/path/to/featurevisor-project \
  --environment=production \
  --feature=my_feature \
  --context='{"country":"nl"}' \
  --n=1000
```

You can also populate UUID-based context keys per iteration:
```
python -m featurevisor assess-distribution \
  --projectDirectoryPath=/path/to/featurevisor-project \
  --environment=production \
  --feature=my_feature \
  --populateUuid=userId \
  --populateUuid=deviceId
```

This repository assumes:
- Python 3.10+
- Node.js with `npx`
- Access to a Featurevisor project for CLI and tester integration

Run the local test suite:
```
make test
```

Run the example project integration directly:

```
PYTHONPATH=src python3 -m featurevisor test \
  --projectDirectoryPath=/path/to/featurevisor-project
```

To cut a release:

- Update the version in `pyproject.toml`
- Push the commit to the `main` branch
- Wait for CI to complete
- Tag the release with the version number (`vX.X.X`); this triggers a new release to PyPI
MIT © Fahad Heylaal