We need to ensure that examples execute, but this is tricky because:

- if we wrote examples in an executable way, so that their output can be verified, they would become cluttered with setup code, and we don't want that
- so, we need a way of converting examples into an executable form, which we can run & verify
- moreover, we need to display code examples without the setup code, with an additional tab saying something like "Complete example" or "Executable script" where the full version would be displayed
FWIW, it should be fun writing this type of infrastructure, but it's also worth exploring whether there are existing solutions that we could use.
Off the top of my head, there are a couple of prerequisites that will make doing this a lot easier:

- Configure and use a "blessed" set of relations to be used in examples, i.e. users+tasks
- Separate the executable code from the templating system by using structured data of some sort, like a YAML definition, so it's easy to parse (see the sketch after this list)
- Add a helper which "embeds" an executable definition in a template
- Write a spec which spawns a new process with the "blessed" environment set up, and evaluates the code
- TBD: Figure out how far we want to take this, i.e. do we want to simply assume that if it runs it's good enough, or do we want to write actual assertions on output
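To make the shape of this more concrete, one hypothetical form of such a definition could look like the snippet below. The path, keys, helper, and expected output are placeholders to show the structure, not an agreed format or verified rom-rb behaviour:

```yaml
# docs/examples/users/create_user.yml (hypothetical path and keys)
setup: |
  # boilerplate the rendered page would hide behind the "Complete example" tab
  require_relative "../support/blessed_setup" # hypothetical helper wiring up the users + tasks relations
body: |
  # the snippet that actually appears on the docs page
  users.changeset(:create, name: "Jane").commit
output: |
  {:id=>1, :name=>"Jane"}
```

The templating helper would then render only `body` on the page, while `setup` + `body` together would make up the "Complete example" / "Executable script" tab described in the issue.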
@ianks @waiting-for-dev I started working on some improvements in #297 - I consider this to be the first step, because we need a nicer setup to be able to do more advanced things, testable code examples included. Once this PR is done and merged, I'll experiment with how we could have testable code examples and, in general, how embedding code examples could be improved. What Ian wrote in his comment is 💯% what I've been thinking about too. Also re:
> TBD: Figure out how far we want to take this, i.e. do we want to simply assume that if it runs it's good enough, or do we want to write actual assertions on output
I think asserting on output is something we should have. The fact that something runs without crashing is not enough.
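For illustration, a spec along these lines could tie the pieces together. It follows the hypothetical YAML layout sketched above (the paths, keys, and the top-level `users` helper are assumptions), spawns a fresh `ruby` process per example, and compares stdout against the declared output; it's a rough sketch, not the actual setup:

```ruby
# spec/doc_examples_spec.rb (a rough sketch, assuming the YAML layout above)
require "yaml"
require "open3"

RSpec.describe "documentation examples" do
  Dir.glob("docs/examples/**/*.yml").each do |path|
    example_def = YAML.safe_load(File.read(path))

    it "runs #{path} and produces the declared output" do
      # run setup + body in a fresh process so examples cannot leak state into each other,
      # and print the inspected value of the last expression so there is something to assert on
      script = [
        example_def["setup"],
        "__result = begin",
        example_def["body"],
        "end",
        "puts __result.inspect"
      ].join("\n")

      stdout, stderr, status = Open3.capture3("ruby", stdin_data: script)

      expect(status).to be_success, stderr
      expect(stdout.strip).to eq(example_def["output"].strip) if example_def["output"]
    end
  end
end
```

Spawning a separate process keeps the "blessed" environment isolated per example, and printing the inspected value of the last expression gives the spec a concrete string to compare, which is what asserting on output (rather than just "it didn't crash") would need.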