DavidePerticone/os-ev3-project
Report project "Eurecom kart"

Authors: Davide Perticone, Cosma Alex Vergari, Lorenzo Pisanò
Name of the robot: Big Chungus

Description of the architecture of the robot:

The robot developed for this project consists of 3 motors and 3 sensors:

  • 2 "Large Servo Motors"
  • 1 "Medium Servo Motor"
  • 1 Gyro Sensor
  • 1 Touch Sensor
  • 1 Ultrasonic Sensor

The two Large Motors drive the robot, while the Medium Motor releases the obstacles. The layout is of type FWD (front-wheel drive). Two rubber wheels are attached to the motors and allow the robot to move forward as well as steer. A Technic Steel Ball mounted on the rear acts as a third wheel: it is omnidirectional and provides a third point of contact to balance the robot (not possible with just two rubber wheels).

The Ultrasonic Sensor is susceptible to vibration, and for this reason it is mounted on the front of the robot, between the two wheels, in the lowest possible position. The rubber wheels dampen the vibrations generated by the movement well enough to allow the sensor to function properly, so the wall, fence, and obstacles installed in the stadium are detected correctly. The Gyro Sensor is placed on the right side: vibrations do not impact its accuracy, so any other position would be equivalent. Lastly, the Touch Sensor is mounted between the two Large Motors, a strategic position that is vital for the correct calibration of the robot.

The robot body is minimal and has been designed to have the lowest possible center of gravity. On the front, two arms are used to calibrate the gyroscope at each lap (their exact purpose is explained in the gyro calibration section below). A basket positioned on top is attached to the Medium Motor through an axle; it contains the obstacles that must be dropped while racing.

Some pictures to better illustrate the architecture of the robot.

In the video below you can see the beast in action.

Video.2022-01-25.19-14-12-1.mp4

Algorithms

The developed program follows a two-layer architecture. The bottom layer is responsible for actuating the motors and reading the sensors. The top layer implements the logic of the program and is the brain of the robot. The two layers communicate through global variables that are set at each iteration of the core loop of the program.

Coroutines have been used extensively throughout the code. They provide concurrent execution flows, as well as a way to synchronize them, without the overhead of threads and kernel synchronization primitives (semaphores, barriers, etc.). This is even more important in this context, as the EV3 has a single-core, low-power processor.

Following is a brief explanation of the two layers composing the program. For an extensive explanation, it is better to read the code.

Bottom Layer

The bottom layer contains the initialization function, ev3_init(). It is called once at the start of the program to properly detect all motors and sensors and initialize all needed variables. An infinite while loop in the main function is used to call all the coroutines that compose the program. They can be divided into two main categories depending on which layer they belong to:

  • get_proximity, get_distance, get_angle, get_touch, drive: they belong to the bottom layer and are in charge of reading the values from the sensor (get_*) and of actuating the motors based on what the top layer has decided (drive).
  • DFA: the only coroutine belonging to the top layer (explained in the proper section).

The logical flow at each iteration of the loop is the following. All current sensor values are read by the respective coroutines, which set the corresponding global variables. Then the coroutine DFA (belonging to the top layer) evaluates the values read and decides which action the robot must perform, setting the global variables that tell the next coroutine, drive, what to do. The latter actuates the motors based on what DFA has decided.

Top Layer

The top layer implements the logic of the program; the only coroutine here is called DFA. The name is not random: it is the acronym of "Deterministic Finite Automaton", and the program tries to behave like one.

Indeed, the coroutine implements a sort of DFA that:

  • given the state in which it is now, sets global variables so that the lower layer executes the actions corresponding to that state

  • given the values read by the lower layer, moves to the next state accordingly.

    There are mainly three categories of states:

  1. States identifying major parts of the circuit (first left turn, first straight corridor, etc.):
  • STATE_START
  • STATE_FORWARD1
  • STATE_FORWARD2
  • STATE_DROP_OBS
  • STATE_FORWARD5
  • STATE_FORWARD6
  2. States identifying corrective actions (when the angle needs to be corrected, when the robot needs to go backward, etc.):
  • STATE_PROXIMITY_CORRECTION
  • STATE_ANGLE_CORRECTION
  • STATE_PROXIMITY_CORRECTION_BEFORE_ANGLE
  • STATE_PROXIMITY_OBSTACLE
  3. States identifying calibration actions:
  • STATE_GYRO_CAL_BUTTON
  • STATE_REG_LAP

Each state monitors certain sensor values and sets global variables according to the action it wants to perform. In this way, while the DFA is in a certain state, the corresponding action is performed and the next state is evaluated based on the values of (some) sensors. The next state can be the same state the DFA is currently in, or a different one.

From a high-level point of view, the first category describes the topology of the circuit. To finish a lap, the DFA must pass at least once through each of those states. For example, when starting, the DFA is in STATE_START and tells the lower layer to move the robot forward for a certain distance. After the operation is performed, the DFA moves to the next state, STATE_FORWARD1, and so on.

Unfortunately, this is not enough to have a fully functioning robot. In most situations it is necessary to perform corrective actions, sometimes due to the inaccuracy of the motors or sensors and sometimes due to external interference (other robots competing). To account for these, corrections such as turning the robot by x degrees from its current position are needed; the corrective states are used to perform them.

Let's suppose that the DFA is in STATE_START and the robot is moving forward towards the wall. If for some reason the robot abruptly finds itself not perpendicular to the wall it is approaching, it is necessary to correct the angle. In STATE_START the DFA checks the variable storing the robot's angle and, if it is above or below a certain threshold, decides to take corrective action. This translates into moving to a corrective state, in this case STATE_ANGLE_CORRECTION, which repositions the robot at the right angle. When the objective is achieved, the DFA moves back to the state it was in before the corrective one. This is the logic behind the corrective states.

Sometimes two or more levels of depth are reached: a corrective state may itself decide to move into another corrective state.

Finally, the calibration states are used to calibrate sensors or perform counting actions. The state STATE_GYRO_CAL_BUTTON accounts for the error accumulated by the gyroscope sensor after one 360° turn, while STATE_REG_LAP counts the number of laps completed by the robot. These states do not command the lower layer to perform a specific action; they only perform calculations.

Gyro calibration routine

The calibration of the gyroscope is of paramount importance for the correct functioning of the robot. We arrived at this conclusion because the compass sensor is unreliable in the setting of the race. Although the compass sensor is precise, it relies on the Earth's magnetic field to work properly. However, the race takes place in the EURECOM environment, on top of a thick sheet of metal. The effect of this sheet is to shield the Earth's magnetic field and probably to generate small magnetic fields through interaction with the electric fields in the surrounding environment.

Since we have no stable point of reference now that the compass sensor is out of the question, and since the gyroscope on its own is not precise enough to last two laps (especially when hitting or being hit by something), we adopted a new strategy: calibrating the gyroscope at every lap. The only certainty we have in the circuit is the straightness of the walls, so after each completed lap we perform a calculated *bump* into the wall after the finish line. The little arms on either side of the front of the robot (see the pictures in the section above) ensure that the robot faces the wall as straight as possible, and the protruding button detects that the wall has been hit only when the robot is completely against it.

With both these conditions satisfied, we virtually reset the gyro, setting the current angle as the new zero, and continue with the next lap. This calibration is done in the state STATE_GYRO_CAL_BUTTON.

Source code and instructions

The source code is available in drive.c in the repository. To download it, cross-compile it, and run it on the robot, carefully follow these steps:

$ cd ev3dev-c/eg/
$ git clone git@github.com:DavidePerticone/os-ev3-project.git
$ cd ../../..
$ docker run --rm -it -h ev3 -v $(pwd)/:/src -w /src ev3cc /bin/bash
$ cd ev3dev-c/eg/os-ev3-project
$ make
$ exit
$ cd ev3dev-c/eg/os-ev3-project/Debug/
$ scp drive robot@192.168.43.183:   

Now ssh to your robot and execute the file drive (./drive).

Workload division

From the organizational point of view, working on this project as a group of three was not easy as there is a high level of coupling among all the components. This means that the workload cannot be split and carried out independently by each member. We worked in the following way.

Before developing each component of the program, the team met to decide how to proceed. After that, the various pieces to develop were split among the members, always trying to assign each member a component related to the ones he had already developed.

Obviously, after writing the code, the team met and reviewed the other members' solutions, in order to understand thoroughly how the code works and be able to debug it (a lot). As a result, all members know how the program works in all its parts. Of course, the member who developed a specific piece has a deeper knowledge of it, but the others can understand the code and its functionality without difficulty.

After a component was finished and was supposed to deliver new functionality, it was tested. For most of the development, testing was carried out in a custom arena built on the spot. Most of the time there were bugs to solve, and debugging was carried out as a team.

Due to the way the code was developed, it is not possible to clearly define who wrote what. Usually, the pieces to develop were very small and were heavily modified concurrently by all members of the team during debugging. All members of the team were always present at each meeting.

The GitHub repository has been used mostly as a versioning system. The code was developed in meetings where all members worked on their own parts, so most merges were done manually. This avoided conflicts in the code and fostered a deeper understanding of it by all members.
