Transpiling CEL to Python is a matter of emitting Python code for each node of the CEL syntax tree to create a function body. This body is wrapped in a def statement to create a callable object that can be compiled to Python byte-code.
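The wrap-and-compile step can be sketched as follows. This is a minimal illustration, not cel-python's actual API: the `build_callable` helper and the emitted source string are hypothetical stand-ins for the code the transpiler emits per syntax-tree node.

```python
def build_callable(expression_source: str, arg_names: list):
    """Wrap emitted Python source in a def and compile it to byte-code."""
    function_source = (
        f"def cel_expr({', '.join(arg_names)}):\n"
        f"    return {expression_source}\n"
    )
    namespace = {}
    # compile() produces a code object; exec() binds the def into namespace.
    code = compile(function_source, "<cel>", "exec")
    exec(code, namespace)
    return namespace["cel_expr"]

# Hypothetical example: the CEL expression ``a + b * 2`` emitted as Python.
func = build_callable("a + b * 2", ["a", "b"])
print(func(3, 4))  # → 11
```

Once built, the callable can be invoked repeatedly without re-parsing or re-compiling, which is the point of transpiling rather than interpreting.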
Approach
See https://github.com/cloud-custodian/cel-python/wiki/Evaluation-Design in the Wiki pages.
Unit Testing
Proper unit testing would separate the Transpiler from the parser. The test_evaluation module synthesizes parse trees as fixtures.
We could create a module of reusable fixtures and use this to test the transpiler.
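A reusable-fixture test might look like the sketch below. The `Node` class and `transpile` function are hypothetical stand-ins (cel-python's real trees come from its parser); the point is that the tree is built by hand, so the transpiler is exercised without the parser.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Hypothetical stand-in for a parse-tree node."""
    data: str                      # node type, e.g. "addition", "literal"
    children: list = field(default_factory=list)
    value: object = None

def transpile(node: Node) -> str:
    """Emit a Python expression string for one synthesized tree node."""
    if node.data == "literal":
        return repr(node.value)
    if node.data == "addition":
        left, right = node.children
        return f"({transpile(left)} + {transpile(right)})"
    raise ValueError(f"unsupported node {node.data!r}")

# Fixture: a hand-built tree for ``2 + 3`` -- no parser involved.
tree = Node("addition", [Node("literal", value=2), Node("literal", value=3)])
assert transpile(tree) == "(2 + 3)"
assert eval(transpile(tree)) == 5
```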
Instead, the test_transpiler module uses an integration test that parses CEL source, then transpiles and executes it, coupling parser and transpiler testing. Since the parser and interpreter are properly tested in isolation, it seems low-risk to trust the parser when testing the transpiler.
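The integration-test round trip can be sketched like this. Python's own ast module stands in for the CEL parser purely to illustrate the parse → transpile → execute shape; the function names are hypothetical, not cel-python's.

```python
import ast

def parse(source: str):
    """Stand-in parser: produce a syntax tree from expression source."""
    return ast.parse(source, mode="eval")

def transpile(tree):
    """Stand-in transpiler: lower the tree to a reusable code object."""
    return compile(tree, "<cel>", "eval")

def run(source: str, activation: dict):
    """Full round trip, as an integration test would exercise it."""
    return eval(transpile(parse(source)), {}, activation)

assert run("size * 2", {"size": 21}) == 42
```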
Acceptance Testing
The Acceptance Test Suite (all of it!) must be run twice: once with the InterpretedRunner and once with the CompiledRunner.
The InterpretedRunner uses the evaluation.Evaluator; the CompiledRunner uses evaluation.Transpiler to create reusable code blocks.
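The difference between the two runners can be sketched as below. The class names come from this note, but the method names and internals are assumptions: `eval` over a source string stands in for tree-walking evaluation, while the compiled runner pays the compile cost once and reuses the byte-code on every activation.

```python
class InterpretedRunner:
    """Re-evaluates from source on every call (stand-in for tree-walking)."""
    def __init__(self, source: str):
        self.source = source

    def evaluate(self, activation: dict):
        return eval(self.source, {}, activation)

class CompiledRunner:
    """Compiles once; reuses the resulting code block on every call."""
    def __init__(self, source: str):
        self.code = compile(source, "<cel>", "eval")

    def evaluate(self, activation: dict):
        return eval(self.code, {}, activation)

interp = InterpretedRunner("x * 2 + 1")
comp = CompiledRunner("x * 2 + 1")
assert interp.evaluate({"x": 10}) == comp.evaluate({"x": 10}) == 21
```

Running the full acceptance suite under both runners checks that the two strategies agree on every expression.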
Environment

environment.py must grab a -D runner option to set the CEL_RUNNER that will be used for the acceptance test suite. (All -D options could be refactored here to slightly simplify the step definitions.)

tox needs to run the test suite twice: behave --tags=~@wip -D env='{envname}' -D cel_runner=interactive and behave --tags=~@wip -D env='{envname}' -D runner=compiled.

At some point, the compiled version may become the default, if the performance gain is adequate.

We may rename the interpreted evaluation as "debugging" mode.
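The two-pass acceptance run could be wired into tox roughly as follows. This is a sketch: the testenv name is an assumption, and the -D flags simply mirror the commands in this note.

```ini
# tox.ini fragment (sketch): run the behave acceptance suite once per runner.
[testenv:acceptance]
commands =
    behave --tags=~@wip -D env='{envname}' -D cel_runner=interactive
    behave --tags=~@wip -D env='{envname}' -D runner=compiled
```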