Aloe - Capturing domain knowledge using test scenarios #24
I think that way we'd just be swapping an apple for a pear; it's pretty much the same as writing my own tests, so I don't see much value in using it. Buuuuuut the idea/proposal is awesome, and I think we could go beyond it. Once we have the entity's validation and can use it as metadata, we could auto-generate the basic use case tests based on the entity's validation. E.g.:

```javascript
entity('Product', {
  id: id(Number),
  name: field(String, {
    validation: {
      length: { minimum: 6 },
      contains: { notAllowed: "hello world" }
    }
  })
})
```

```javascript
const { CreateProduct } = require('./usecases/createProduct')

spec(CreateProduct, {
  'Successful create a simple product': scenario({
    description: 'Try to create a product with just a valid name.',
    request: {
      name: 'A simple product'
    },
    user: () =>
      ({ canCreateProduct: true }),
    injection: () =>
      ({ ProductRepo: { save: () => ({ id: 1 }) } }),
    'my super specific scenario': check((ret) => {
      assert.ok(ret.product.name.length / 2 * 8 !== 10000)
    }),
  }),
})
```

Terminal output:

```
Ran the tests:

⚙️ Happy Path: true
- Successful create a simple product
  🟢 must have an id
  🟢 must have a valid name
  🟢 name must not contain "hello world"

⚙️ Happy Path: false
- Unsuccessful create a simple product
  🟢 must not return an id for an invalid entity
  🟢 must return Err XXX for an invalid name
  🟢 must return Err YYY if the name contains "hello world"

⚙️ Specific scenarios:
  🟢 my super specific scenario
```
Look, we have six tests auto-generated by Herbs; I think this way we bring in the Herbs idea of providing things based on metadata. Another interesting scenario is when the use case handles multiple entities: I could generate tests for every entity and combine all the possible tests between them. For now, I don't know how we could find out which entities a use case handles, but I think this is the way.
@dalssoft First of all, I'd like to say that I think the idea is quite interesting; tests are really one of the areas Herbs hasn't reached yet. I have some doubts and questions about the points mentioned:

1 - A glue instead of a lib

Have you considered using a test library instead of creating your own? It seems to me that we could build this on top of something consolidated, as a connector (for example, mocha). That would give us several benefits, such as robustness, code coverage, reporting, a shorter learning curve, and not having to work on a part that Herbs doesn't master.

2 - About Given / When / Then

How about we encapsulate the "3 magic words" in variables, just like we do with steps? I think it would be more readable, more reusable (for multiple inputs), and also more declarative. Example:

```javascript
spec(ChangeItemPosition, {
  'Successful change item position': scenario({
    description: 'Try to change the item position in a list with a valid position.',
    given: given(async (ctx) => ({
      request: {
        itemId: 1,
        position: 10
      },
      user: () =>
        ({ canChangePosition: true }),
      injection: () =>
        ({
          ListRepo: { find: (id) => ({ id, items: [] }) },
          ItemRepo: { save: () => ({ position: 10 }) }
        })
    })),
    when: when(async (ctx) => {
      const uc = usecase(ctx.given.injection)
      const hasAccess = await uc.authorize(ctx.given.user)
      const response = await uc.run(ctx.given.request)
      return response
    }),
    then: then({
      'Item position must have been changed and saved in the repository': check((ret) => {
        assert.ok(ret.item.position === 10)
      }),
      'Item position must be in a valid range': check((ret) => {
        assert.ok(ret.item.position >= 0)
        assert.ok(ret.item.position <= 20)
      })
    })
  })
})
```

Here I wouldn't make the when transparent, because that way you constrain how the developer uses the code. For example, not every use case is authorized. Another point is that if you encapsulate the given, when, and then at the metadata level, this can be very rich.

3 - About spies

I think what would convince me to create a library of our own is if we could actually look deeply here.

4 - About self-generated tests from Entities

I really like the @italojs idea, but I believe that Entities need to have their own test files, disconnected from the use cases.
I really think this idea is great! I can see the value of an Aloe, especially for the business specialist, if we add this to the analysis of the project structure in the Shelf... Thinking about this documentation, when we have different contexts in the same use case, we can define different contexts with mocha. What do you think of maintaining some of these labels to improve the organization of the tests? For example, a use case with an IfElseStep that branches into complex rules could use this to improve the view when we bring it into the Shelf and the terminal...
@italojs, but the point is exactly that: the tests we write today (ex: with mocha or cucumber) are not part of the domain and can't export metadata
@jhomarolo Maybe. I don't see much value in the runner. I may not be giving the problem its due importance, but I don't see the runner as the complex part of the software. The complex part is building the DSL, like we did with gotu and buchu.
@jhomarolo it is something to be explored
@jhomarolo agree
@eacvortx could you elaborate on that?
New proposed syntax.

For use cases:

```javascript
const { CreateProduct } = require('./usecases/createProduct')

const productSpec =
  spec(CreateProduct, {
    'Successful create a simple product': scenario({
      description: 'Try to create a product with just a valid name.',
      happyPath: true,
      'Given a simple Product': given({
        request: {
          name: 'A simple product'
        },
        user: () => ({ canCreateProduct: true }),
        injection: () => ({
          ProductRepo: { save: () => ({ id: 1 }) }
        }),
      }),
      'Product must be saved in the repository': check((ret) => {
        assert.ok(ret.product.id)
      }),
      'Product must have a new Id': check((ret) => {
        assert.ok(ret.product.id === 1)
      }),
    }),
    'Do not save a product with invalid name': scenario({
      description: 'Reject products with invalid names in the repository',
      'Given an invalid Product': given({
        request: { name: 'A simple product @' },
        user: () => ({ canCreateProduct: true }),
        injection: () => ({ }),
      }),
      'Product should not be saved in the repository': check((ret) => {
        assert.ok(ret.product === null)
        assert.ok(ret.isInvalidEntityError)
      }),
    })
  })
```

For entities:

```javascript
const { Project } = require('./entities/project')

const projectSpec =
  spec(Project, {
    'Successful create a valid Project': scenario({
      description: 'Try to create a project with just a valid name.',
      happyPath: true,
      'Given a simple Project': given(() => {
        return Project.fromJSON({
          name: 'New project'
        })
      }),
      'When check if is valid': when((project) => {
        project.validate()
        return project
      }),
      'Must be valid': check((project) => {
        assert.deepStrictEqual(project.errors, {})
      }),
    }),
    // *** Custom methods ***
    'Successful generate a valid Project ID': scenario({
      description: 'Try to create a project ID based on its name.',
      happyPath: true,
      'Given a simple Project': given(() => {
        return Project.fromJSON({
          name: 'New project'
        })
      }),
      'When generate project ID': when((project) => {
        return project.generateID()
      }),
      'Must be valid': check((ret) => {
        assert.ok(ret === 'new-project')
      }),
    }),
  })
```

Please comment.
I liked the changes, especially the inclusion of the reserved words.

I'm still finding it difficult to see how we'd easily enable integrations with the other tools we normally use for CI (test coverage, reports, etc.). Do you believe we will implement these functions ourselves, or at some point build integrations with third-party tools?
Regarding test coverage, I'm not sure, but it seems the coverage tools [1] are agnostic to the test runner. For other tools, what problems/issues do you expect to find?
This feature could be an alternative for implementing Aloe without creating a test runner: nodejs/node#42325
To provide a great DX and a flat learning curve, we could work with a context, like use cases do:

```javascript
'Given a simple Project': given((ctx) => {
  ctx.project = Project.fromJSON({
    name: 'New project'
  })
  return ctx
}),
'When check if is valid': when((ctx) => {
  ctx.project.validate()
  return ctx
}),
'Must be valid': check((ctx) => {
  assert.deepStrictEqual(ctx.project.errors, {})
}),
```

...because sometimes I could put many instances inside the ctx instead of only my principal entity.
@italojs on the current implementation it ends up exactly like that:
Intro
As I have already discussed with some members of the Herbs community, I believe that in addition to the use cases and the entities, the test scenarios are also part of the domain. This domain knowledge is usually translated from user stories, requirements, etc. into use cases as well as test cases. However, Herbs currently has no support for absorbing this knowledge of test scenarios in a structured way.
When we think about how test scenarios are part of the domain, just remember: use cases are the HOW, test scenarios are the WHAT. Ex:
WHAT: Allow creating a client in the repository only if it is valid
HOW: Create Client (use case)
Or
WHAT: Do not allow changing a product in the repository if it is expired
HOW: Update Product (use case)
This knowledge is a fundamental part of the domain and should be discussed in a fluid way between developers, product managers, testers, etc. This knowledge should also have greater representation in the code in a structured way, as we have today with entities and use cases.
Given that, I propose a solution to capture this knowledge and expose it via metadata to other glues, especially Shelf.
Aloe - More than a test runner
Ex:
The first thing to note is that the `spec` is connected to a use case. This is important because current test runners do not have this kind of explicit link to the objects being tested. Here we want to capture the scenarios of a specific use case and have that as metadata. I see that the natural expansion would be to have `spec` not only for use cases but also for entities, though I still haven't thought about how that would look.
Another important point: unlike solutions such as Cucumber, the code stays close to the intent description.
Given / When / Then
When I started thinking about this lib, I was trying to emulate BDD (behavior-driven development) in the `spec` structure, but it became clear that some things are already given in the current structure of Herbs and would not need to be rewritten.
The Given is basically the input and is formed by the set of:
- `request` and its values;
- `user` for use case authorization;
- `injection` for dependency injection.
The When is the execution of the use case, going through authorization first. But this is transparent and happens automatically.
What we're left with is Then. Here the idea is not only to verify the output of the use case, but also to capture domain knowledge, explaining what is valid and what is not valid for each scenario.
One more example:
Shelf
Having a structured form of the scenarios is important for developers to have clarity about the scenarios that exercise a given use case.
But perhaps just as important is that we can extract this knowledge from the code and bring it to the conversation with stakeholders. For that the Shelf would be the ideal tool.
Using the use cases above as an example, it would be possible to build documentation something like:
Create Product
Change Item Position
Herbarium
Every `spec` should be registered in Herbarium as a new kind of object. It would then be possible to find the specs related to a use case or entity through Herbarium.
Examples / Samples
One functionality that still needs to be discussed is multiple inputs to validate the same scenario. I think of something like this:
But it needs to be discussed how each `check` gets context about which `request` item is being executed. Maybe use `ctx` instead of just `ret` (`check((ctx) => ...)`) and use `ctx.ret` and `ctx.req`, as in the use case, with `ctx.req` containing info about the request of that execution.
Spy
Advanced scenarios with spies should be allowed because in some cases they are the only way to validate the output. However, the use of mocks should be discouraged, as they validate and couple to the behavior of the use case rather than to its output.
We need to dig deeper to see how these scenarios would look.
Conclusion
This is the beginning of a discussion. Conceptual and implementation insights are welcome. It would be great if someone can bring examples so that we can exercise this model as well.