Add testing #1

Closed
17 tasks done

tejashah88 opened this issue Jun 7, 2018 · 27 comments · Fixed by #8

Comments

@tejashah88
Owner

tejashah88 commented Jun 7, 2018

  • Add unit and integration tests
    • Vendor operations
    • Skill operations
    • Interaction model operations
    • Account linking operations
    • Skill enablement operations
    • Skill testing operations
      • Validation
      • Invocation
      • Simulation
    • Intent request history
    • Skill certification operations
    • node-alexa-smapi Custom operations
    • node-alexa-smapi helper methods
    • node-alexa-smapi error scenarios
  • Add Travis CI integration
  • Add coveralls.io integration
tejashah88 self-assigned this Jun 7, 2018
@marcelobern
Collaborator

@tejashah88 I am just now wrapping up a new suite of tests.

At this point all SMAPI v0 & v1 operations are being tested, with the exception of skill testing (invocation, simulation) and intent request history.

I plan to build a "live" Alexa skill matching the test suite's interaction model so we can run unit tests for these last operations.

@marcelobern
Collaborator

marcelobern commented Sep 3, 2018

@tejashah88 thanks for the help! The promisified version of setTimeout did the trick, and the Travis CI build for Node.js versions 6 through 10 is now passing!
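
For reference, a minimal sketch of such a promisified setTimeout (the actual helper in the test suite may differ):

```js
// Wrap setTimeout in a Promise so it can be awaited between SMAPI polls.
// setTimeout's callback is not error-first, so util.promisify is not a
// direct fit; a manual wrapper like this is the usual approach.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Usage inside an async mocha test case:
// await sleep(60000); // wait one minute before the next poll
```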

Checking the Node.js support matrix, I see that currently only Node.js 6, 8, and 10 are supported; all others are end of life. So, is there any reason why we should support Node.js 7 and 9 in the Travis CI build?

@tejashah88
Owner Author

@marcelobern Technically it's not required, but better safe than sorry in case someone is using one of those versions.

tejashah88 removed their assignment Sep 4, 2018
@marcelobern
Collaborator

@tejashah88 I see the Travis CI builds are failing, and it seems like no environment variables have been set up.

Could you please set up the following variables:

  • REFRESH_TOKEN
  • CLIENT_ID
  • CLIENT_SECRET

Once this is done, both the Travis CI build and the coveralls report should start working when we bring in PR #6.

You may want to create an Amazon Developer test account instead of using your own. Please let me know if you would prefer me to do it, but I would still need you to configure Travis CI, as I do not have access to the project settings.
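
For illustration, here is one way the test suite could consume those variables (a sketch only; the guard and usage are assumptions, not the committed code):

```js
// Read the SMAPI credentials that Travis CI injects as environment
// variables and fail fast if any of them are missing.
const { REFRESH_TOKEN, CLIENT_ID, CLIENT_SECRET } = process.env;

if (!REFRESH_TOKEN || !CLIENT_ID || !CLIENT_SECRET) {
  throw new Error(
    'Missing SMAPI credentials: set REFRESH_TOKEN, CLIENT_ID and CLIENT_SECRET.'
  );
}
```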

@tejashah88
Owner Author

@marcelobern Just added the environment variables. I'll merge #6 and hopefully we can get this squared away.

@tejashah88
Owner Author

@marcelobern One question: since the refresh token expires in 1 hour, how do you propose we handle that? Also, I noticed that while testing the withdrawal function, the tests appear to go into an infinite loop, showing the message: "No skill submission record found for SkillId: (Insert Skill ID here)".

On a side note, I realize that we are expanding the scope of the tests too much, as we now have mostly integration testing with the API and very few unit tests. Since this is a client library for an API, we shouldn't really be testing the functionality of the API, since that's supposedly Amazon's job. Additionally, this makes the testing process quite slow and non-GET requests make it tricky to maintain the state of test accounts. I'd say that we should capture the JSON responses of all the requests and mock the API calls for unit testing.

@tejashah88
Owner Author

Additionally, to still make use of what we have so far, we could run the integration tests once a month to refresh the captured JSON requests and responses in case of an API change.

@marcelobern
Collaborator

@tejashah88 only the access token expires in 1 hour; the refresh token is valid until the authorization is removed from the account.

As for the withdrawal function, it is not an infinite loop; it retries up to 10 times, 1 minute apart (until it passes or exhausts all retries). In the case you are describing, I noticed that the first response indicates the certification submission is still being processed, and later on the response indicates no certification request exists. I suspect SMAPI is "getting lost" and processing a withdrawal after sending a response back. I would need to check further to confirm this hypothesis and open an issue with the SMAPI team.
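
For clarity, the retry behavior looks roughly like this (an illustrative sketch; the actual test helper may be structured differently):

```js
// Retry an async operation up to `retries` times, waiting `delayMs`
// between attempts, and rethrow the last error if every attempt fails.
async function withRetries(operation, retries = 10, delayMs = 60000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```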

As for unit testing, you are correct. I went straight into writing test code that actually validates URLs and checks that the documented request/response formats are correct, so I could write and verify the library code. And I ended up using that code in lieu of mocking SMAPI and writing unit tests.

Considering the speed at which Amazon seems to be changing SMAPI (I even saw changes in SMAPI behavior during the few days I was testing the code), my gut feeling is that we might be better off keeping the current approach for a while, but I will defer to your decision on this, as you seem to have more experience than I do with stored JSON responses in Alexa-related projects ;-)

For now, in order to reduce the test time, I added a certification flag in the test file (without certification the test cases complete in about 2-3 min). We could make that an environment variable so it can be easily configured in the Travis CI build. This could tide us over until the mocked unit tests are in place.

In the end, any approach that makes these tests fully automated (i.e. regenerating the JSON responses regularly, cleaning up the test account, etc.) should be fine.

Finally, it would be interesting to find out the level of (unit and integration) testing other client projects do. Thoughts?

@tejashah88
Owner Author

@marcelobern I agree that we should keep the current approach until SMAPI stabilizes. Having a certification flag would help quite a bit for the time being.

From my experience with making similar client libraries, I follow a general guideline: only write tests for what you build, and prefer unit tests over anything else. Since client libraries are meant to be thin wrapper interfaces to the API they interface with, they typically don't add much functionality of their own. This means that testing the functionality of external APIs is generally out of scope for the client library's tests, since the developers of the API have presumably tested it thoroughly themselves.

So in this case, the only things we would be testing are how this library responds to the mocked API and some of its transformations, like destructuring the parameters or constructing URLs from the string templates.

What do you think?

@marcelobern
Collaborator

@tejashah88 that works for me. I did a quick search and came across nock. Is that what you have used to mock HTTP requests?

@tejashah88
Owner Author

@marcelobern Actually, you may want to try out axios-mock-adapter, since it hooks directly into axios.
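
A minimal sketch of the idea (the endpoint pattern and canned payload here are placeholders, not the real SMAPI contract):

```js
const axios = require('axios');
const MockAdapter = require('axios-mock-adapter');

// Attach the mock adapter to the axios instance the library uses; any
// matching request now resolves with canned JSON instead of hitting SMAPI.
const mock = new MockAdapter(axios);
mock.onGet(/\/v1\/skills/).reply(200, { skills: [] });
```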

@marcelobern
Collaborator

@tejashah88 thanks for the suggestion!

Turns out that using axios-mock-adapter I could just create a mock adapter and use it to run the existing test cases!

I added an environment variable so we can decide whether the mocha test suite runs in "unit test mode" (default) or "integration test mode" ;-)

Now I will look into automating the capture of SMAPI responses to create the JSON response templates for the unit tests.
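
One possible shape for that capture step, building on the env-variable switch (the variable name, file paths, and interceptor approach are all assumptions, not the committed implementation):

```js
const fs = require('fs');
const axios = require('axios');
const MockAdapter = require('axios-mock-adapter');

if (process.env.INTEGRATION_TESTS === 'true') {
  // Integration test mode: hit live SMAPI and record each response body
  // so it can later be turned into a JSON template for the unit tests.
  axios.interceptors.response.use((response) => {
    fs.appendFileSync(
      'test/captured-responses.jsonl',
      JSON.stringify(response.data) + '\n'
    );
    return response;
  });
} else {
  // Unit test mode (default): replay a previously captured template.
  const mock = new MockAdapter(axios);
  mock.onAny().reply(
    200,
    JSON.parse(fs.readFileSync('test/response-template.json', 'utf8'))
  );
}
```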

@tejashah88
Owner Author

@marcelobern I'm glad it will help. I wish I could do more than just cheer from the sidelines (I've been getting quite a lot of school work recently), but good luck! Let me know if you have any issues or questions.

@marcelobern
Collaborator

@tejashah88 no worries.

What classes are you taking?

@tejashah88
Owner Author

@marcelobern I'm taking intro to engineering, calculus 2, "advanced" C/C++, and classical mechanics. I'm also self studying from one of the UC Berkeley CS courses called "CS61A: Structure and Interpretation of Computer Programs".

@marcelobern
Collaborator

@tejashah88 nice! I miss my advanced math classes; too bad I do not get to use them more often ;-)

BTW, I figured out what is going on with the "No skill submission record found for SkillId: (Insert Skill ID here)" message. The skill certification fails (and a certification feedback email is sent) between calls to withdraw. I am updating the test case to deal with that.

FYI, I will be creating a PR once I finish updating the test cases. At that point only the invocation & simulation test cases should be missing.

@marcelobern
Collaborator

@tejashah88 just finished all the test cases and will now open a PR to close this issue. Code coverage is at 97.89%!

The missing cases (invalid version number) make no sense to implement, as by those points the code could never be running with an invalid version number (the constructor prevents it anyway...).
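
For context, a hypothetical sketch of the kind of constructor guard being referred to (the class and version names here are illustrative; the actual node-alexa-smapi validation may differ):

```js
// Reject unsupported SMAPI version strings up front so no other method
// can ever run with an invalid version.
const SUPPORTED_VERSIONS = ['v0', 'v1'];

class SmapiClient {
  constructor({ version = 'v1' } = {}) {
    if (!SUPPORTED_VERSIONS.includes(version)) {
      throw new Error(`Unsupported SMAPI version: ${version}`);
    }
    this.version = version;
  }
}
```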

@tejashah88
Owner Author

@marcelobern That's fine. I doubt a serious bug causing unexpected results would arise from an invalid version anyway.

I'll merge that PR so that we can finally resolve this issue. Thank you for all your contributions to this library!

@marcelobern
Collaborator

@tejashah88 you are very welcome. Thank you for the support!

FYI, here is a short article I wrote on this experience. Hopefully you will enjoy it ;-)

@tejashah88
Owner Author

@marcelobern Great article (love the title BTW)! It'll help a lot with some of the other client libraries that I'm maintaining, as I've also learned a lot from the strategies we used to tackle the testing problems we encountered.

P.S. Right below the "Magicians Never Reveal Their Secrets" section, I believe you spelled "good thing" as "good think" ;)

@marcelobern
Collaborator

@tejashah88 thanks for the feedback. Would love to hear your take on the learnings in case you decide to write them up ;-)

As for the other client libraries you are maintaining, I was considering wrapping the mocking and response-capture code up into a small util library and adding some examples so others can leverage the experience. Thoughts?

@tejashah88
Owner Author

@marcelobern I was thinking the same thing, since it would save a lot of time in testing client libraries in general. It would be interesting to find out whether anyone else has had similar ideas and/or has tried doing this before. As of now, I haven't found anything like it, so it would be worth experimenting with the idea.

What I'm planning to do is add unit/integration tests to some of the other client libraries that I maintain to better generalize how to make tests for them. Since they have a very similar structure to each other, it shouldn't be hard to put together a prototype for this experiment.

@tejashah88
Owner Author

I was also thinking of making a tool or library to generate a client library from an API spec, based on how I've made these client libraries before.

@marcelobern
Collaborator

@tejashah88 I believe I have a good feel for how to generalize the testing side.

On the automated generation of client APIs, I was actually scratching my head over why we write clients manually rather than taking a swagger-based approach.

I just did a quick search and I came across:

  • swagger, which is meant for the server side and seems to no longer be maintained...
  • swagger-client, which we would need to investigate to check whether it might be a good starting place. Thoughts?

@tejashah88
Owner Author

tejashah88 commented Sep 13, 2018

@marcelobern Interesting findings. I think the issue is that since API developers aren't providing client libraries out of the box, it's tricky to keep them up to date with the specs. swagger-client seems interesting and probably does the job already. (No need to reinvent the wheel ;) )

I'd say that you should explore automating the testing side for client libraries, since you've already done 90% of the work. The only question is the scope of compatibility: will it support only the libraries I've made, all axios-based libraries, or something even broader?

@marcelobern
Collaborator

@tejashah88 as I continued the research, this seems like the most complete approach so far: https://github.com/swagger-api/swagger-codegen, as it claims to generate both client SDKs and server stubs.

I will see if I have time to play with it a little bit. The first step would be to either find or write a swagger definition of SMAPI.

@tejashah88
Owner Author

tejashah88 commented Sep 14, 2018

@marcelobern Actually, I found a JavaScript-specific version of the generator: https://github.com/wcandillon/swagger-js-codegen. You might want to check that out.

As for rewriting SMAPI in swagger format, I'd hold off on that until we can get a prototype of these ideas working, so that we'll have more confidence in rewriting this entire library without breaking anything.
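
Going by the swagger-js-codegen README, usage would look roughly like this (the spec filename is a placeholder, since no swagger definition of SMAPI exists yet):

```js
const fs = require('fs');
const { CodeGen } = require('swagger-js-codegen');

// Generate a Node.js client from a swagger spec and write it to disk.
const swagger = JSON.parse(fs.readFileSync('smapi-swagger.json', 'utf8'));
const source = CodeGen.getNodeCode({ className: 'Smapi', swagger });
fs.writeFileSync('smapi-client.js', source);
```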
