This repository has been archived by the owner on Aug 26, 2021. It is now read-only.

Implement CI/CD with the bot on itself #10

Closed
Cobular opened this issue May 24, 2019 · 7 comments
@Cobular
Owner

Cobular commented May 24, 2019

This is a huge ask, but it needs to be done at some point so it gets an issue! When I start working on this (eventually...), I will make a new feature branch (or a new repo - IDK how this will end up working)

@Cobular Cobular created this issue from a note in Add CI/CD (To do) May 24, 2019
@Cobular Cobular added enhancement New feature or request help wanted Extra attention is needed labels May 24, 2019
@Cobular
Owner Author

Cobular commented May 24, 2019

I have a few ideas, thought I would get them down.

  1. We should probably take a well-tested, known-good version of the bot and either give it a dedicated branch that never changes or make a whole new repo for that frozen bot. Then we use the frozen bot to run tests on this bot in interactive mode. We won't be able to test the code that only runs in CLI mode, but I don't anticipate that becoming a big issue.
  2. We could also try to refactor the whole bot to pull as much logic as possible out into separately testable pieces, but since the tests are pretty slim to start with, this probably isn't worth pursuing. Let me know if you think it is, though.

Nothing is being done now (other than the automatic code review I set up; see #8).

@JosephFKnight
Collaborator

JosephFKnight commented May 26, 2019

Unfortunately, I think the only real way to test this framework is application testing: a test suite designed to exercise every function in the interface. Our example modules can serve that purpose nicely, I think.

@Cobular
Owner Author

Cobular commented May 27, 2019

That's true. Are you suggesting running the two example bots as a test? I can set up something on a CI/CD service that does just that on every commit.
I ran a code coverage check, and in CLI mode we hit 71% (IIRC) of the code. If we also launch in interactive mode, then even without running any tests, just letting startup happen should exercise almost all of the code and catch just about anything wrong.
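A minimal sketch of that "just let startup happen" smoke test, assuming the bot exposes an ordinary CLI entrypoint (the invocation names below are stand-ins, not the real ones): launch the process, give it a startup window, and treat surviving the window (or exiting cleanly before it closes) as a pass.

```python
# Hypothetical smoke test sketched from the idea above: launch the bot,
# let startup happen, and call it a pass if nothing crashes within a
# grace window. The real call would be something like
# [sys.executable, "bot.py", "--interactive"]; the demo command below
# is a stand-in.
import subprocess
import sys

def smoke_test(cmd, startup_window=5.0):
    """Return True if cmd starts up cleanly: either it is still running
    when the window closes, or it exited with status 0 before then."""
    proc = subprocess.Popen(cmd)
    try:
        rc = proc.wait(timeout=startup_window)
        return rc == 0  # exited early: only a pass if it exited cleanly
    except subprocess.TimeoutExpired:
        proc.terminate()
        proc.wait()
        return True  # survived the whole window: startup succeeded

if __name__ == "__main__":
    # Stand-in for the real bot entrypoint
    print(smoke_test([sys.executable, "-c", "import time; time.sleep(10)"],
                     startup_window=1.0))  # prints True
```

Wiring this into a CI service then amounts to running it (plus the CLI-mode example bots, under coverage) on every commit.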

@JosephFKnight
Collaborator

JosephFKnight commented May 27, 2019 via email

@Cobular
Owner Author

Cobular commented May 29, 2019

OK, this works now? I'm going to leave this open for a little while, though, in case things aren't going as planned.

@Cobular
Owner Author

Cobular commented May 29, 2019

Actually, I might want to try running the tests with some code coverage on them, which I will do soonly (tm).

@Cobular Cobular added test and removed enhancement New feature or request help wanted Extra attention is needed labels May 29, 2019
@Cobular
Owner Author

Cobular commented May 29, 2019

It's all working right now!

@Cobular Cobular removed the test label Jun 6, 2019
@Cobular Cobular pinned this issue Jun 6, 2019
@Cobular Cobular unpinned this issue Jun 22, 2019
@Cobular Cobular closed this as completed Feb 1, 2021