
Aloe - Capturing domain knowledge using test scenarios #24

Closed
dalssoft opened this issue Jan 25, 2022 · 11 comments
Labels
enhancement New feature or request released Already in production

Comments

@dalssoft
Member

Intro

As I have already discussed with some members of the Herbs community, I believe that, in addition to the use cases and the entities, the test scenarios are also part of the domain. This domain knowledge is usually translated from user stories, requirements, etc. into use cases as well as test cases. However, Herbs currently has no structured way to absorb the knowledge contained in test scenarios.

When we think about how test scenarios are part of the domain, think of it this way: use cases are the HOW, test scenarios are the WHAT. Ex:

WHAT: Allow creating a client in the repository only if it is valid

HOW: Create Client (use case)

Or

WHAT: Do not allow changing a product in the repository if it is expired

HOW: Update Product (use case)

This knowledge is a fundamental part of the domain and should be discussed in a fluid way between developers, product managers, testers, etc. This knowledge should also have greater representation in the code in a structured way, as we have today with entities and use cases.

Given that, I propose a solution to capture this knowledge and expose it via metadata to other glues, especially Shelf.

Aloe - More than a test runner

Ex:

const assert = require('assert')
const { CreateProduct } = require('./usecases/createProduct')

spec(CreateProduct, {

    'Successful create a simple product': scenario({
        description: 'Try to create a product with just a valid name.',

        happyPath: true,

        request: {
            name: 'A simple product'
        },

        user: () =>
            ({ canCreateProduct: true }),

        injection: () =>
            ({ ProductRepo: { save: () => ({ id: 1 }) } }),

        'Product must be saved in the repository': check((ret) => {
            assert.ok(ret.product.id)
        }),

        'Product must have a new Id': check((ret) => {
            assert.ok(ret.product.id === 1)
        }),

    }),

    'Do not save a product with invalid name': scenario({
        description: 'Reject products with invalid names in the repository',

        request: {
            name: 'A simple product @'
        },

        user: () =>
            ({ canCreateProduct: true }),

        injection: () => ({ }),

        'Product should not be saved in the repository': check((ret) => {
            assert.ok(ret.product === null)
            assert.ok(ret.isInvalidEntityError)
        }),

    })

})

The first thing to note is that the spec is connected to a use case. This is important because current test runners do not have this kind of explicit link with the objects being tested. Here we want to capture the scenarios of a specific use case and expose them as metadata. The natural expansion would be to have specs not only for use cases but also for entities, but I still haven't thought through how that would work.

Another important point: unlike solutions such as Cucumber, the code stays close to the intent description.

Given / When / Then

When I started thinking about this lib, I was trying to emulate BDD (behavior-driven development) in the spec structure, but it became clear that some things are already given by the current structure of Herbs and would not need to be rewritten.

The Given is basically the input and is formed by the set of:

  • injection for dependency injection;
  • user for use case authorization;
  • the use case request and its values.

The When is the execution of the use case, going through the authorization first. But this is transparent and happens automatically.

const uc = usecase(injection)
const hasAccess = await uc.authorize(user)
const response = await uc.run(request)

What we're left with is Then. Here the idea is not only to verify the output of the use case, but also to capture domain knowledge, explaining what is valid and what is not valid for each scenario.

One more example:

spec(ChangeItemPosition, {

    'Successful change item position': scenario({
        description: 'Try to change the item position in a list with a valid position.',

        happyPath: true,

        request: {
            itemId: 1,
            position: 10
        },

        user: () =>
            ({ canChangePosition: true }),

        injection: () =>
        ({
            ListRepo: { find: (id) => ({ id, items: [] }) },
            ItemRepo: { save: () => ({ position: 10 }) }
        }),

        'Item position must have been changed and saved in the repository': check((ret) => {
            assert.ok(ret.item.position === 10)
        }),

        'Item position must be in a valid range': check((ret) => {
            assert.ok(ret.item.position >= 0)
            assert.ok(ret.item.position <= 20)
        })

    }),

    'Do not change item position when a position is invalid': scenario({
        description: 'A position for an item is invalid when it is out of range',

        request: {
            itemId: 1,
            position: 100
        },

        user: () =>
            ({ canChangePosition: true }),

        injection: () =>
        ({
            ListRepo: { find: (id) => ({ id, items: [] }) },
            ItemRepo: { save: () => ({ position: 10 }) }
        }),

        'Item position should not be changed in the repository': check((ret) => {
            assert.ok(ret.isInvalidPositionError)
        }),

    })

})

Shelf

Having a structured form of the scenarios is important for developers to have clarity about the scenarios in which a use case is exercised.
But perhaps just as important is that we can extract this knowledge from the code and bring it into the conversation with stakeholders. For that, Shelf would be the ideal tool.

Using the use cases above as an example, it would be possible to build documentation along these lines:

  • Create Product

    • Successful create a simple product
      • Try to create a product with just a valid name
      • Product must be saved in the repository
      • Product must have a new Id
    • Do not save a product with invalid name
      • Reject products with invalid names in the repository
      • Product should not be saved in the repository
  • Change Item Position

    • Successful change item position
      • Try to change the item position in a list with a valid position.
      • Item position must have been changed and saved in the repository
      • Item position must be in a valid range
    • Do not change item position when a position is invalid
      • A position for a item is invalid when it is out of range
      • Item position should not be changed in the repository

Herbarium

Every spec should be registered with Herbarium as a new kind of object.

module.exports =
    herbarium.specs
        .add(CreateProductSpec)
        .spec

It would be possible to find specs related to a use case or entity using Herbarium.

Examples / Samples

One piece of functionality that needs to be discussed is supporting multiple inputs to validate the same scenario. I'm thinking of something like this:

spec(CreateProduct, {

    'Successful create a simple product': scenario({
        ...

        request: [
            { name: 'A simple product' }, 
            { name: 'ProdName' }, 
            { name: 'A simple product' }
        ],

But we need to discuss how each check gets context about which request item is being executed. Maybe pass a ctx instead of just ret (check((ctx) => ...)) and use ctx.ret and ctx.req, as in a use case, with ctx.req containing info about the request of that execution.

Spy

Advanced scenarios with spies should be allowed because in some cases they are the only way to validate the output. However, the use of mocks should be discouraged, as they validate and couple to the behavior of the use case rather than to its output.

We need to dig deeper to see how these scenarios would look.

Conclusion

This is the beginning of a discussion. Conceptual and implementation insights are welcome. It would be great if someone can bring examples so that we can exercise this model as well.

@dalssoft dalssoft added the enhancement New feature or request label Jan 25, 2022
@italojs
Member

italojs commented Jan 26, 2022

I think in that way we're just trading an apple for a pear: it's pretty much the same thing as writing my own tests, so I don't see much value in using it. Buuuut the idea/proposal is awesome and I think we could go beyond it.

Once we have the entity's validations and can use them as metadata, we could auto-generate the basic use case tests from the entity's validations. E.g.:

entity('Product', {
    id: id(Number),
    name: field(String, {
        validation: {
            length: { minimum: 6 },
            contains: { notAllowed: "hello world" }
        }
    })
})
const { CreateProduct } = require('./usecases/createProduct')

spec(CreateProduct, {

    'Successful create a simple product': scenario({
        description: 'Try to create a product with just a valid name.',

        request: {
            name: 'A simple product'
        },

        user: () =>
            ({ canCreateProduct: true }),

        injection: () =>
            ({ ProductRepo: { save: () => ({ id: 1 }) } }),

        'my super specific scenario': check((ret) => {
            assert.ok(ret.product.name.length / 2 * 8 !== 10000)
        }),
    })
})

terminal output:

Ran the tests:

⚙️ Happy Path: true
- Successful create a simple product
🟢  must have an id
🟢  must have a valid name
🟢  name does not contain "hello world"

⚙️  Happy Path: false
- Unsuccessful create a simple product
🟢  must not return an id for an invalid entity
🟢  must return Err XXX if the name is invalid
🟢  must return Err YYY if the name contains "hello world"

⚙️  Specific scenarios:
🟢  my super specific scenario

Look, we have 6 tests auto-generated by Herbs. I think this brings the Herbs idea of providing things based on metadata.

Another interesting scenario is when the use case handles multiple entities: we could generate tests for all the entities and combine all the possible tests between them.

For now, I don't know how we could discover all the entities a use case handles, but I think this is the way.

@jhomarolo
Contributor

@dalssoft First of all, I'd like to say that I think the idea is quite interesting; tests are really one of the areas Herbs hasn't reached yet.

I have some doubts and questions about the points mentioned:

1- A glue instead of a lib

Have you considered using an existing test library instead of creating your own? It seems to me that we could build this on top of something consolidated, like a connector (for example, mocha). That would give us several benefits, such as robustness, code coverage, reporting, a shorter learning curve, and not having to take on a part that Herbs doesn't master.

2 - About Given / When / Then

How about we encapsulate the "3 magic words" in variables, just like we do with steps? I think it would be more readable, more reusable (for multiple inputs), and also more declarative.

Example:

spec(ChangeItemPosition, {

    'Successful change item position': scenario({
        description: 'Try to change the item position in a list with a valid position.',

        given: given(async (ctx) => ({
            request: {
                itemId: 1,
                position: 10
            },
            user: () => ({ canChangePosition: true }),
            injection: () => ({
                ListRepo: { find: (id) => ({ id, items: [] }) },
                ItemRepo: { save: () => ({ position: 10 }) }
            })
        })),

        when: when(async (ctx) => {
            const uc = usecase(ctx.given.injection)
            const hasAccess = await uc.authorize(ctx.given.user)
            const response = await uc.run(ctx.given.request)
            return response
        }),

        then: then({
            'Item position must have been changed and saved in the repository': check((ret) => {
                assert.ok(ret.item.position === 10)
            }),

            'Item position must be in a valid range': check((ret) => {
                assert.ok(ret.item.position >= 0)
                assert.ok(ret.item.position <= 20)
            })
        })
    })
})

Here I wouldn't make the when transparent, because that constrains how the developer uses the code. For example, not every use case has authorization. Another point is that if you encapsulate the given, when and then at the metadata level, this can be very rich for Shelf and for Herbarium.

3 - About spies

I think what convinces me to create a library of our own is if we could actually look deeply here.
Example: connecting the audit with the test runner or knowing exactly which step of the use case broke the test or even performing specific validations in steps. But I think we can do it with glue still (I need to study more on the subject)

4 - About self-generated tests from Entities

I really like the @italojs idea, but I believe that Entities need to have their own test files, disconnected from use cases.
In my opinion, your idea does not conflict with @dalssoft idea, they are two different initiatives that can be matured in different topics because in my view both are valid.

@eacvortx

I really think these ideas are great! I can see the value of Aloe, especially for the business specialist, if we add this analysis of the project structure to Shelf...

Thinking about this documentation, when we have different contexts in the same use case we could define them the way mocha does, with context or describe...

What do you think of keeping some of these labels to improve the organization of the tests?

For example, a use case with an IfElseStep that branches into complex rules could use this to improve the view in Shelf and in the terminal...

@dalssoft
Member Author

dalssoft commented Feb 4, 2022

I think in that way we're just trading an apple for a pear: it's pretty much the same thing as writing my own tests, so I don't see much value in using it. Buuuut the idea/proposal is awesome and I think we could go beyond it.

@italojs, but that's exactly the point: the tests we write today (e.g. with mocha or cucumber) are not part of the domain and can't export metadata.

1- A glue instead of a lib

@jhomarolo maybe. I don't see much value in the runner itself. I may not be giving the problem its due importance, but I don't see the runner as the complex part of the software. The complex part is building the DSL, as we did with gotu and buchu.

2 - About Given / When / Then

@jhomarolo it is something to be explored

4 - About self-generated tests from Entities

@jhomarolo agree

Example, an UseCase with IfelseStep with bifurcation of the complex rules can use this to improve the view when we use this into the shelf and the terminal...

@eacvortx could you elaborate on that?

@dalssoft
Member Author

dalssoft commented Feb 24, 2022

New proposed syntax.

For Use Cases:

const { CreateProduct } = require('./usecases/createProduct')

const productSpec =
    spec(CreateProduct, {

        'Successful create a simple product': scenario({
            description: 'Try to create a product with just a valid name.',
            happyPath: true,

            'Given a simple Product': given({
                request: {
                    name: 'A simple product'
                },
                user: () => ({ canCreateProduct: true }),
                injection: () => ({
                    ProductRepo: { save: () => ({ id: 1 }) }
                }),
            }),

            'Product must be saved in the repository': check((ret) => {
                assert.ok(ret.product.id)
            }),

            'Product must have a new Id': check((ret) => {
                assert.ok(ret.product.id === 1)
            }),
        }),

        'Do not save a product with invalid name': scenario({
            description: 'Reject products with invalid names in the repository',

            'Given an invalid Product': given({
                request: { name: 'A simple product @' },
                user: () => ({ canCreateProduct: true }),
                injection: () => ({ }),
            }),

            'Product should not be saved in the repository': check((ret) => {
                assert.ok(ret.product === null)
                assert.ok(ret.isInvalidEntityError)
            }),
        })
    })

For entities:

const { Project } = require('./entities/project')

const projectSpec =
    spec(Project, {

        'Successful create a valid Project': scenario({
            description: 'Try to create a project with just a valid name.',
            happyPath: true,

            'Given a simple Project': given(() => {
                return Project.fromJSON({
                    name: 'New project'
                })
            }),

            'When check if is valid': when((project) => {
                project.validate()
                return project
            }),

            'Must be valid': check((project) => {
                assert.deepStrictEqual(project.errors, {})
            }),
        }),

        // *** Custom methods ***
        'Successful generate a valid Project ID': scenario({
            description: 'Try to create a project ID based on its name.',
            happyPath: true,

            'Given a simple Project': given(() => {
                return Project.fromJSON({
                    name: 'New project'
                })
            }),

            'When generate project ID': when((project) => {
                return project.generateID()
            }),

            'Must be valid': check((ret) => {
                assert.ok(ret === 'new-project')
            }),
        }),
    })

Please, comment.

@jhomarolo
Contributor

I liked the changes, especially the inclusion of reserved words. Perhaps explicitly including then would also make sense.

I'm still finding it hard to see how we would easily enable integrations with the other tools we normally use for CI (test coverage, reports, etc.). Do you believe we will implement these functions ourselves, or will we integrate with third-party tools at some point?

@dalssoft
Member Author

dalssoft commented Mar 2, 2022

I'm still finding it hard to see how we would easily enable integrations with the other tools we normally use for CI (test coverage, reports, etc.).

Regarding test coverage, I'm not sure, but it seems the coverage tools [1] are agnostic to the test runner.

For the other tools, what problems or issues do you expect to find?

[1] https://github.com/jaydenseric/coverage-node

@jhomarolo
Contributor

This feature could be an alternative for implementing Aloe without creating a test runner: nodejs/node#42325

@italojs
Member

italojs commented Jun 13, 2022

To provide a great DX and a fast learning curve, we could work with a context, like use cases do:

            'Given a simple Project': given((ctx) => {
                ctx.project = Project.fromJSON({
                    name: 'New project'
                })
                return ctx
            }),

            'When check if is valid': when((ctx) => {
                ctx.project.validate()
                return ctx
            }),

            'Must be valid': check((ctx) => {
                assert.deepStrictEqual(ctx.project.errors, {})
            }),

because sometimes I might put many instances inside the ctx instead of only my principal entity.

@dalssoft
Member Author

we could work with context like usecases

@italojs on the current implementation it ends up exactly like that:

https://github.com/herbsjs/herbs-cli/blob/8862645afca06433ccd605b0e06ff1cb56cee026/src/templates/domain/useCases/tests/update.spec.ejs#L22

@dalssoft
Member Author

Since Aloe has already landed on herbs and CLI as beta, I'm closing this issue. Any suggestions or improvements, please use Aloe repo.

@jhomarolo jhomarolo added the released Already in production label Jun 13, 2022