This repository collects the code produced by the team "Ctrl + Alt + Elite" at the SICK Solution Hackathon 2024.
The team members were:
- Andi, aka anp369
- Carl, aka Leesment
- Franz, aka flemk
- Jonatan, aka Lophtix8
This README aims to give a brief overview of the algorithm for corner detection and rectangle fitting.
You can find the slides of the pitch in doc/slides.pdf.
Also, since the code was created during a hackathon, the code quality is understandably messy - sorry.
This repository contains the cri repository by jloyd237 in /cri, which was used to establish communication with the ABB robot.
The algorithm works as follows:
- Find the intersections (overlapping areas) of the bounding boxes given by the neural network on the SICK Visionary-S camera
- Find the corners of the (rotated) box:
  - The X coordinate of the top / bottom corner is the average (center) of the pixels on the minimal / maximal Y line
  - The left / right corners are the left-most / right-most pixels found
  - Areas where overlapping occurs are not used. If corners fall into such areas, they are calculated / reconstructed from the remaining "valid" / good corner coordinates using trigonometry
- Since the four detected corner points do not form a perfect rectangle (the angles do not equal 90 deg), the algorithm creates a bounding rectangle which is perfect:
  - This is the rectangle with minimal area that contains all calculated corner points
  - An extension of this idea could be: the minimal rectangle that contains all pixel points
  - Affine transforms are used to draw the rotated rectangle on the image
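The minimal-area rectangle step can be sketched as a brute-force search over rotation angles: rotate the corner points, take the axis-aligned bounds in each rotated frame, and keep the smallest area. This is an illustrative sketch of the idea, not the hackathon code itself; the function name and return format are made up here.

```python
import math

def min_area_rect(points, steps=180):
    """Find the (approximately) minimal-area rotated rectangle
    containing all 2D points. Illustrative sketch, not the actual
    hackathon implementation."""
    best = None
    for i in range(steps):
        angle = (math.pi / 2) * i / steps
        c, s = math.cos(angle), math.sin(angle)
        # Rotate every point into the candidate frame and take the
        # axis-aligned bounding box there
        xs = [c * x + s * y for x, y in points]
        ys = [-s * x + c * y for x, y in points]
        w = max(xs) - min(xs)
        h = max(ys) - min(ys)
        if best is None or w * h < best[0]:
            best = (w * h, angle)
    return best  # (area, rotation angle of the fitted rectangle)
```

With only four corner points this is cheap; a production version would use a rotating-calipers method or `cv2.minAreaRect` instead.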
- Another idea we discussed was a gradient descent procedure:
  - Start with the original (neural network) bounding box
  - Rotate to find the smallest possible rectangle that contains all pixel points (colors)
  - "Smallest" meaning here: minimizing the area which is not colored / not detected as box
  - Tackling overlapping:
    - Either: omit overlapped areas entirely (gradient descent should still find the optimal solution)
    - Or: introduce weights, so that pixels inside overlapping areas are not weighted / considered as much as "clear cases"
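The weighting idea can be expressed as a loss function over pixels: the fraction of the candidate rectangle that is not detected as box, with pixels inside overlapping areas down-weighted. This is a hypothetical sketch of the objective; the argument names and the weight value are assumptions, not part of the hackathon code.

```python
def coverage_loss(inside_rect, is_box, in_overlap, overlap_weight=0.25):
    """Weighted 'uncolored area' loss for a candidate rectangle.
    inside_rect / is_box / in_overlap are equal-length boolean
    sequences, one entry per image pixel (illustrative names).
    Pixels in overlapping regions count with reduced weight, so
    ambiguous evidence matters less than 'clear cases'."""
    num = 0.0
    den = 0.0
    for r, b, o in zip(inside_rect, is_box, in_overlap):
        if not r:
            continue  # pixel outside the candidate rectangle is ignored
        w = overlap_weight if o else 1.0
        den += w
        if not b:
            num += w  # inside the rectangle but not detected as box
    return num / den if den else 1.0
```

A gradient descent (or simple local search) over the rectangle's rotation and extent would then minimize this loss; setting `overlap_weight=0` recovers the "omit overlapped areas entirely" variant.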
- We added a checkpoint functionality to save images / data from the camera locally, so we do not have to connect to the camera and fetch new data every time. This decreases testing time.
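The checkpoint idea boils down to a small cache-or-fetch helper; a minimal sketch using `pickle` is shown below (the function name and file path are illustrative, not the actual API of the repository).

```python
import os
import pickle

def fetch_with_checkpoint(fetch_fn, path="checkpoint.pkl"):
    """Return data from a local checkpoint file if it exists;
    otherwise call fetch_fn() (e.g. a camera grab) and save the
    result for the next run. Illustrative sketch only."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)  # reuse previously fetched data
    data = fetch_fn()
    with open(path, "wb") as f:
        pickle.dump(data, f)  # persist for subsequent test runs
    return data
```

Deleting the checkpoint file forces a fresh fetch from the camera.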
- box.py: Interface for defining a box and sorting a list of boxes by size
- robot_interaction.py: Code for interacting with the robot
- run_palloc.py: Code for interacting with the camera and doing box detection