
Last updated Feb 21, 2021

Welcome!

The purpose of this wiki is to provide more detailed documentation on the program, along with common use cases and examples of how to use and design your own combat scripts suited to your needs.

What is the program doing exactly during runtime?

At the start of the program, it will spawn a separate process via the multiprocessing library in Python and run the main bot logic in that process. It will also keep track of a flag in shared memory called isBotRunning, shared between the backend and frontend, such that when the backend signals that the bot has ended, the frontend can inform the user as such, and vice versa. A Queue is also maintained by the backend so that it can share informational and/or debugging messages for the frontend to display in a scrollable text log.
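Here is a minimal sketch of that setup, assuming a shared integer flag and a message queue; the backend_main function and the message strings are illustrative stand-ins for the real bot logic, while Process, Value, and Queue are the actual multiprocessing primitives involved.

```python
import multiprocessing
import time

def backend_main(is_bot_running, queue):
    # Illustrative stand-in for the bot's main loop.
    queue.put("Initializing bot...")
    time.sleep(1)  # placeholder for the real farming work
    queue.put("Bot has finished.")
    is_bot_running.value = 1  # signal the frontend that the bot ended

if __name__ == "__main__":
    # Shared flag and message queue, visible to both processes.
    is_bot_running = multiprocessing.Value("i", 0)
    queue = multiprocessing.Queue()

    process = multiprocessing.Process(target=backend_main,
                                      args=(is_bot_running, queue))
    process.start()

    # Frontend side: drain messages until the backend signals completion.
    while is_bot_running.value == 0 or not queue.empty():
        while not queue.empty():
            print(queue.get())  # the real frontend appends to a scrollable log
        time.sleep(0.1)

    process.join()
```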

After these initial preparations are complete, it will attempt to confirm that it is currently at the Home screen and, if not, navigate to it using PyAutoGUI's mouse operations. At the same time, it will calibrate the dimensions of the browser window and save them to prevent the mouse from going out of bounds and throwing an Exception. This also ensures that image matching operations performed by PyAutoGUI are constrained to that region only; it does not apply to the fallback method using GuiBot, however, as GuiBot template matches across the whole screen. If the program cannot detect the entirety of the game screen, it means some part of the browser window is visibly obscured by another window; the user will need to move the obstruction out of the way and restart the bot's initialization so that later image processing operations work correctly.
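As a rough illustration, calibrating once and then passing the saved region to later calls keeps every subsequent match inside the game window. The calibrate_window helper and the image names below are hypothetical; region and confidence are real PyAutoGUI parameters (confidence requires OpenCV to be installed).

```python
import pyautogui

def calibrate_window():
    # Hypothetical calibration: locate a known anchor image on screen and
    # derive the browser window's bounding box from it.
    anchor = pyautogui.locateOnScreen("images/home_anchor.png", confidence=0.9)
    if anchor is None:  # older PyAutoGUI returns None when no match is found
        raise RuntimeError("Game screen is obscured; move the window and restart.")
    # Illustrative offsets; real values depend on the game's layout.
    return (anchor.left - 50, anchor.top - 50, 500, 800)

window_region = calibrate_window()

# Every later match is constrained to the calibrated region.
button = pyautogui.locateOnScreen("images/quest_button.png",
                                  region=window_region, confidence=0.8)
if button is not None:
    center = pyautogui.center(button)
    pyautogui.click(center.x, center.y)
```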

After that, it will execute whichever of the following modes the user selected:

  • If the user selected to farm a Quest mission, the program will go to the Quest screen and will navigate itself to the correct island that the mission takes place in. After that, it will click the correct chapter node and then start the specified mission.

  • If the user selected to farm a Special mission, the program will go to the Special screen and will navigate itself to the correct mission and difficulty and then start the specified mission.

  • If the user selected to farm a Coop mission, the program will go to the Coop screen and host that mission's room.

  • If the user selected to farm a Raid mission, the program will navigate to the Backup Requests screen and start querying the Twitter API for what the user wants to search for, grabbing the most recent tweets that match the query. It will then copy and paste the parsed room codes from each tweet into the room code textbox until it successfully enters the Raid. If all of the room codes fail, the program will enter a 60 second cooldown period and then repeat the process until it successfully enters a Raid. Throughout this, it keeps the ID of every tweet it grabs to ensure that it does not re-read duplicate or old tweets, and it tracks seen room codes the same way, since players can post multiple unique tweets for the same room (see the sketch after this list).

  • If the user selected to farm an Event mission, program navigation will ultimately depend on whether the user selected Event or Event (Token Drawboxes), as these two types of Events have completely different UI layouts.

  • If the user selected to farm a Dread Barrage mission, the program will go to the Dread Barrage screen and will start the specified difficulty mission.
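The tweet and room code deduplication described in the Raid bullet boils down to a pair of sets. The sketch below is hedged: fetch_recent_tweets stands in for the actual Twitter API call (here it returns hard-coded sample data so the snippet runs standalone), and the 8-character room code pattern is assumed for illustration.

```python
import re

ROOM_CODE_PATTERN = re.compile(r"[A-F0-9]{8}")  # assumed room code format

def fetch_recent_tweets(query):
    # Placeholder for the real Twitter API call; returns (tweet_id, text) pairs.
    return [
        (1001, "Lvl 120 Grimnir AB12CD34 join now"),
        (1002, "Lvl 120 Grimnir AB12CD34"),  # same room, different tweet
    ]

def find_new_room_codes(query, seen_tweet_ids, seen_room_codes):
    # Collect room codes only from tweets and rooms not processed before.
    new_codes = []
    for tweet_id, text in fetch_recent_tweets(query):
        if tweet_id in seen_tweet_ids:
            continue  # skip duplicate/old tweets
        seen_tweet_ids.add(tweet_id)
        for code in ROOM_CODE_PATTERN.findall(text):
            if code not in seen_room_codes:
                seen_room_codes.add(code)
                new_codes.append(code)
    return new_codes

if __name__ == "__main__":
    seen_ids, seen_codes = set(), set()
    print(find_new_room_codes("Lvl 120 Grimnir", seen_ids, seen_codes))  # ['AB12CD34']
    print(find_new_room_codes("Lvl 120 Grimnir", seen_ids, seen_codes))  # [] -- all seen already
```

If every code fails to get the bot into a room, the real program sleeps for 60 seconds and calls this kind of routine again with the same two sets, so old tweets and already-tried rooms are never revisited.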

The backbone of the entire program is image template matching, as provided by PyAutoGUI and GuiBot (note that this is image recognition rather than true OCR; text reading is handled separately by EasyOCR, described below). PyAutoGUI is used first to template match on the screen using the specified image file in the /images/ folder. I found that PyAutoGUI's default image recognition pales in comparison to GuiBot's, so GuiBot serves as the fallback. However, PyAutoGUI is superior at specifying exactly what similarity to match for, and it is especially helpful in operations where I need to find multiple instances of a template in the browser window.
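Here is a hedged sketch of that two-tier matching. pyautogui.locateAllOnScreen and the confidence parameter are real PyAutoGUI features (confidence requires OpenCV), and the GuiBot calls follow its documented GuiBot/add_path/exists interface; the template names and helper functions are illustrative.

```python
import pyautogui
from guibot.guibot import GuiBot

guibot = GuiBot()
guibot.add_path("images")  # tell GuiBot where the template files live

def find_all(template, region):
    # Find every instance of a template inside the calibrated window region.
    return list(pyautogui.locateAllOnScreen(f"images/{template}.png",
                                            region=region, confidence=0.8))

def find_one(template, region):
    # Try PyAutoGUI first; fall back to GuiBot, which searches the whole screen.
    match = pyautogui.locateOnScreen(f"images/{template}.png",
                                     region=region, confidence=0.9)
    if match is not None:
        return match
    return guibot.exists(template)  # GuiBot fallback ignores the region
```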

How does it detect how many of an item has dropped after a battle?

Whenever the program needs to detect the amount of an item on the Quest Results screen at the end of a successful mission, it template matches an item image that was pre-processed in Photoshop, then shifts the region of detection to the bottom-right corner of the matched image and crops that area so the numerical amount is fully visible with as much background noise cut out as possible. It then runs EasyOCR text detection on the cropped region and parses the number from the result. When item detection is complete, it takes a screenshot of the game window, whose dimensions were provided by the calibration method at program start, and saves it to the /results/ folder at the project root, creating the folder if it does not exist.
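A rough sketch of that flow, reusing the calibrated window region from earlier: the crop offsets below are illustrative guesses, while easyocr.Reader, readtext, and its allowlist parameter are real EasyOCR APIs.

```python
import os
import numpy as np
import pyautogui
import easyocr

reader = easyocr.Reader(["en"])  # loads the English OCR model once

def read_item_amount(item_template, window_region):
    # Template match the item, then OCR the amount near its bottom-right corner.
    item = pyautogui.locateOnScreen(item_template, region=window_region,
                                    confidence=0.9)
    if item is None:
        return 0
    # Illustrative crop: a small box over the bottom-right corner of the item.
    crop = (item.left + item.width - 30, item.top + item.height - 25, 35, 25)
    screenshot = pyautogui.screenshot(region=crop)
    # Restrict OCR to digits to cut down misreads from background noise.
    results = reader.readtext(np.array(screenshot), allowlist="0123456789")
    if not results:
        return 0
    text = results[0][1]
    return int(text) if text.isdigit() else 0

def save_results_screenshot(window_region):
    # Save a screenshot of the calibrated game window to /results/,
    # creating the folder if it does not exist.
    os.makedirs("results", exist_ok=True)
    pyautogui.screenshot("results/quest_results.png", region=window_region)
```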