You can follow our team on social media to stay updated with our latest projects and achievements. We share behind-the-scenes moments, progress updates, and highlights from our journey in robotics and technology. Follow us on:
- Instagram click here
- Facebook click here
3DModels: This folder has the robot's 3D design and the final rendered version.
Car Pictures: This folder contains images of the robot car captured from six different angles, providing a complete visual overview of its design and structure.
Document (PDF form): Includes a short document with project info from GitHub, saved as a PDF for A4 printing.
FlowChart: This folder includes flow diagrams that represent the logic and structure of the robot's control system, showing how it interprets sensor data and executes movement decisions during tasks.
Mechanical parts: Contains all mechanical parts used in our robot.
Problems: Documents the problems encountered during the project and their corresponding solutions.
Tables: This folder includes all the tables we used in our project, such as time plans, task schedules, and data organization charts.
Program: Contains all the code we wrote for the WRO 2025 Future Engineers and some charts that explain it.
Robot-Photos: Includes multiple photos of the robot from various angles (top, side, and bottom) to provide a full visual overview of the design.
Schemes: Drawings that show how the robot's parts and wires are connected.
How it works: Explains how the robot works.
Videos: Contains links to videos demonstrating how the robot performs and completes both missions successfully.
Readme: This README file is divided into four main sections for better organization and understanding of the project's structure.
This video was made with great care and dedication; it compiles all the information we included in the document. You can click here to view the video we created.
We are three students from Hebron, Palestine, and we belong to the Programming and Artificial Intelligence clubs under the guidance of our coach, Abeer Al-Jabari. Our team was established with our participation in the Future Engineer program on September 1st. We have also previously participated in programming and STEM clubs.
The name “NOVA” represents a new star that suddenly shines with renewed energy — symbolizing innovation, brilliance, and growth. It reflects our team’s spirit of creativity and ambition, as we strive to illuminate the field of engineering and technology with fresh ideas that drive progress and inspire excellence.
We are a group of passionate students participating in the Future Engineer competition
Our goal is to learn, build, and showcase our creativity through robotics and programming
Our team works to distribute tasks in an organized manner that ensures the integration of efforts and the effective achievement of project objectives
Raghad Wissam Al-Jaabari
- From Hebron
- 16 years old
- My hobbies are horse riding and badminton
- Responsible for assembling the robot and executing programming tasks, integrating mechanical components with software to achieve the required performance and ensure the system operates efficiently
- ragadaljabari1@gmail.com
Rasha Islam Al-Fakhouri
- From Hebron
- 15 years old
- I love reading literature and philosophy, and I enjoy playing badminton
- Responsible for technical and organizational aspects; participated in presenting changes to the project structure, in addition to continuous follow-up between stages to clarify the overall vision
- rashfakh9@gmail.com
Layan Yousri Amro
- From Hebron
- 14 years old
- My hobbies are sports and handicrafts
- Responsible for documenting and writing reports on the project; organizes the content and accurately documents the work stages and results to present them in a professional and clear manner
- amrolayan95@gmail.com
Despite our different ages and personal interests, we share a common passion: a love for technology and robotics, and a continuous desire to learn and improve. This passion drives us to participate in the Future Engineer competition and work together as a team
Our vision is to develop an intelligent robot capable of performing smart commands independently. We aim to build a system that can analyze its surroundings, make decisions, and complete tasks efficiently without direct human control. This project represents our first step toward creating innovative solutions that support our community and inspire future technological advancements. Through this competition, we have gained valuable experience in robotics, programming, and teamwork. We also developed essential skills such as critical thinking, creativity, and planning. In the future, we plan to use what we’ve learned to improve our projects, support younger students, and continue developing technologies that make a positive impact.
We developed a work plan to organize our steps and manage tasks within the team. It outlines what needs to be done each day, helping us set priorities, track our progress, and work with focus and teamwork to achieve our goals efficiently and accurately:

- This car is a self-assembled (DIY) model built on a metallic chassis that uses an Ackermann steering system to simulate the motion of real cars. The components of the car were connected and operated using a Raspberry Pi, programmed in Python to control movement and steering
This is the chassis
Chassis |
- Watch this video from the manufacturing source: click here
Car size
Car Size |
- This car is not a remote-controlled vehicle; rather, it is an experimental platform developed to apply the principles of mechanics, control, and programming in the field of intelligent vehicles. Thanks to the use of the Ackermann Steering system, the car replicates the actual steering mechanism of real vehicles, unlike the differential steering systems used in many traditional robots
- The car was entirely assembled and programmed manually as part of participation in the Future Engineer competition, using the Raspberry Pi platform
- This is the servo that controls the wheels using the Ackermann Steering system:
Ackermann Steering |
Ackermann Steering Geometry: Precision Path Management
The chassis design of our autonomous vehicle is fundamentally based on the Ackermann Steering Geometry. We implemented a custom-modified version of this linkage system to ensure optimal cornering performance.
Principle of Operation
Unlike simpler steering methods (like skid steering), the Ackermann principle ensures that during a turn, the steering axes of all four wheels intersect at a single, momentary center point. This is achieved by ensuring the inner wheel (the wheel closer to the turn’s center) rotates at a sharper angle than the outer wheel. This difference in rotation angle is critical because it forces the wheels to follow four distinct radii, allowing the inner wheel to travel a shorter path and the outer wheel a longer path, preventing any lateral slippage. In our RWD system, the front wheels execute the steering motion around their respective pivots, while the single DC motor drives the non-steering rear wheels.
Mathematical Description (The Ideal Geometry)
Ackermann |
This diagram illustrates the ideal Ackermann steering geometry. This geometry ensures that all four wheels, when steered, trace concentric circles around a single, common point, the Ackermann center (C_a). This fundamental relationship is governed by the vehicle's fixed dimensions, the Track Width (T) and the Wheelbase (W), and the required steering angles:

cot(a_o) - cot(a_i) = T/W

where a_i is the inner wheel steering angle and a_o is the outer wheel steering angle. In this ideal setup, the inner wheel angle is always greater than the outer wheel angle (a_i > a_o), and the lines drawn perpendicular to the plane of each wheel intersect precisely at C_a, ensuring all wheels roll without excessive side-slip.

Note: Real-world steering systems use a modified Ackermann principle to optimize performance across the entire steering range, deviating slightly from this ideal formula.
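The ideal relation can be checked numerically in a few lines of Python. This is an illustrative sketch only: the track width, wheelbase, and turn radius below are placeholder values, not our car's measured dimensions.

```python
import math

def ackermann_angles(turn_radius, track_width, wheelbase):
    """Ideal Ackermann steering angles (radians) for a turn whose
    center lies turn_radius away from the vehicle's centerline."""
    # The inner wheel turns about a tighter radius, the outer about a wider one.
    a_i = math.atan(wheelbase / (turn_radius - track_width / 2))
    a_o = math.atan(wheelbase / (turn_radius + track_width / 2))
    return a_i, a_o

# Hypothetical dimensions in meters -- not our robot's actual measurements.
T, W, R = 0.13, 0.16, 0.50
a_i, a_o = ackermann_angles(R, T, W)

# The ideal-geometry identity: cot(a_o) - cot(a_i) == T / W
assert abs((1 / math.tan(a_o) - 1 / math.tan(a_i)) - T / W) < 1e-9
assert a_i > a_o  # the inner wheel always steers more sharply
```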
- Advantages for Our Robotic System: Implementing the Ackermann geometry was a strategic choice that provided several specific benefits essential for the competition requirements:
- Elimination of Tire Slippage: The main advantage is the prevention of tire scrubbing and slippage, which is crucial for accurate path following and maintaining maximum traction on the carpet surface.
- Precise Path Management: The system offers high control over the front wheels' angles, allowing for the precise steering control necessary to navigate between colored obstacles and adhere to strict course boundaries.
- Single-Motor Compliance: This configuration naturally supports the requirement of using only a single DC motor for propulsion, simplifying the mechanical drivetrain while maintaining high maneuverability.
- Watch this video to understand the system: click here
- Wheels
The wheels of this smart robot car have specific material and performance characteristics:
- Wheel Rims (Hubs): The rim material is ABS (Acrylonitrile Butadiene Styrene). They are purely aesthetic.
- Tires (Rubber): The tire material is rubber.
- Performance: The rubber provides a large coefficient of friction and strong grip (traction). This design is crucial for stable and controlled movement across different surfaces.
- Internal Structure: All tires are internally fitted with a foam lining (insert). This foam insert provides the necessary support and structure to the soft rubber tire, which is essential for consistent performance and shock absorption in RC and robot cars.
Wheels |
- Robot Car Gears
In our project, we replaced the large gears with smaller gears in the robot car to control its movement speed. Gears are mechanical parts that transfer motion and rotation from the motor to the wheels, affecting both speed and torque. By replacing the large gears with smaller ones, we were able to reduce the car's speed and increase control accuracy, especially when performing precise movements or working with the distance sensor and relay.
- This modification helped make the car's movement more stable and safe, while maintaining the motor's power to drive the wheels smoothly
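The speed/torque trade-off of a gear swap can be estimated with the basic gear-ratio formula: the driven shaft turns at the input speed scaled by driver teeth over driven teeth. The tooth counts below are hypothetical, purely to illustrate the effect; they are not our gears' actual values.

```python
def output_rpm(input_rpm, driver_teeth, driven_teeth):
    """Simple gear pair: the driven shaft turns at
    input_rpm * driver_teeth / driven_teeth, while torque scales by
    the inverse ratio (friction losses ignored)."""
    return input_rpm * driver_teeth / driven_teeth

# Hypothetical tooth counts: a 12-tooth driver into a 36-tooth driven
# gear is a 3:1 reduction, so the wheel turns slower than the motor.
assert output_rpm(210, 12, 36) == 70.0
```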
Robot Car Gears |
BOM |
| Part Name | Quantity | Description |
|---|---|---|
| Front metal chassis plate | 1 pcs | Front base for Ackermann steering system |
| Rear metal chassis plate | 1 pcs | Rear base for motor mounting |
| Electronics mounting plate | 1 pcs | Plate for Arduino / controller installation |
| Rubber wheels | 4 pcs | High-traction rubber wheels |
| Suspension spring | 1 pcs | Simple shock absorption system |
| Motor/servo mounting bracket | 1 pcs | Holder for servo or steering mechanism |
| Pulley / gear wheels | 2 pcs | Used in steering / drive linkage |
| Metal rods | Several | Used for steering linkage and frame connection |
| Bearings | Several | To reduce friction and improve smooth steering |
| Rod ends / ball joints | 4 pcs | Steering linkage endpoints |
| Servo mount brackets | 2 pcs | For attaching the steering servo |
| Metal steering arm | 1 pcs | Transfers servo motion to the wheels |
| Servo motor | 1 pcs | Controls front wheel steering |
| Spacer washers | 4 pcs | To maintain spacing between plates |
| Hex standoffs | Multiple | Structural support and spacing |
| L-shaped brackets | 2 pcs | Support brackets for structure |
| Screws (M2 / M3 assorted) | Set | Used for assembly |
| Nuts (M2 / M3 assorted) | Set | Used with screws |
| Metal shafts / pins | Multiple | For wheel/steering mechanism |
| Bearings housings | 2 pcs | Mounting for wheel axles |
| Plastic components | Several | Mechanical support parts |
Brass Hex Standoffs |
- Brass Hex Standoffs
These brass hex standoffs are used to elevate and support the Raspberry Pi and other electronic components. They provide extra stability, improve airflow, and help organize the structure of the robot. We used them to raise the robot’s layers securely, and they come in different sizes depending on the design requirements.
- 3D machine
Creality Ender 5S1
Creality Ender 5 S1 |
The Ender-5 S1 is an FDM 3D printer with a stable cube-frame structure. It was designed to deliver better performance than previous models in the Ender series, offering faster and more responsive printing capabilities. It is considered a great option for hobbyists and medium-level prototyping, thanks to its advanced features such as motion acceleration, direct-drive extrusion, and automatic bed-leveling support
| Specification | Value |
|---|---|
| Build Volume | 220 × 220 × 280 mm |
| Nozzle Diameter | 0.4 mm (standard) |
| Hotend Type | All-Metal |
| Max Hot-End Temperature | 300 °C |
| Filament Diameter | 1.75 mm |
| Supported Filaments | PLA, PETG, TPU, ABS, ASA |
| Extruder Type | Sprite Direct-Drive Dual Gear Extruder |
| Max Printing Speed | 250 mm/s |
| Max Acceleration | 2000 mm/s² |
| Frame Type | Cube-frame Cartesian |
| Auto Bed Leveling | CR-Touch |
| Specification | Value |
|---|---|
| Build Plate Surface | Spring steel sheet with PEI coating (magnetic flexible plate) |
| Build Plate Size | 235 × 235 mm |
| Max Build Plate Temperature | Up to 110 °C |
| Heating Method | AC/Heatbed (magnetic heated bed) |
| Compatibility | Compatible with PC/PEI/Glass plates (optional upgrades available) |
| Specification | Value |
|---|---|
| Max Speed | 500 mm/s |
| Max Acceleration | 20 m/s² |
| Specification | Value |
|---|---|
| Printer Dimensions | 425 × 460 × 570 mm |
| Package Size | 578 × 474 × 340 mm |
| Net Weight | 12.2 kg |
| Gross Weight | 14.7 kg |
| Specification | Value |
|---|---|
| Voltage | 100–240 VAC |
| Frequency | 50/60 Hz |
| Rated Power | ~350W |
| Power Supply Output | 24V DC |
PLA Filament |
PLA (Polylactic Acid) is one of the most widely used filaments in 3D printing. It is a biodegradable thermoplastic derived from renewable resources such as cornstarch or sugarcane. PLA is favored for its ease of printing, low warping, and minimal odor, making it ideal for FDM (fused deposition modeling) 3D printers. This filament is best suited for prototyping, decorative items, and educational projects, offering good strength and surface finish.
Diameter: 1.75 mm
Printer Compatibility: Most FDM 3D printers, including Creality Ender-5 S1
Ultrasonic holder
- We encountered an issue with the ultrasonic sensor readings due to instability during operation. To address this, we designed a custom 3D-printed mount to securely hold the sensor in place
After implementing this design, the ultrasonic readings became significantly more stable and accurate
The final design is shown below:
Camera holder
- Similarly, we faced a challenge in determining the optimal placement for the camera. To solve this, we designed a special 3D-printed mount that allowed us to position the camera securely and achieve the desired angle for accurate image capture
The final designs are shown below:
- Robot mind
The Raspberry Pi 4 Model B serves as the Main Processing Unit (MPU), acting as the robot’s central brain.
It executes the complex Python code and coordinates all real-time sensory data and actuator commands.
We selected the Raspberry Pi 4 for its quad-core Arm Cortex-A72 processor (1.8 GHz) and 64-bit architecture, providing a reliable balance between performance and efficiency for real-time robotic applications.
Its computational power enables the system to simultaneously handle:
- Computer Vision (CV): Processing real-time video frames from the USB camera using the OpenCV library to detect and track colored obstacles.
- Sensor Fusion: Integrating data from the IMU (gyroscope) and the ultrasonic sensors for path correction and obstacle avoidance.
- Peripheral Control: Managing multiple modules such as relays, motors, and sensors through the GPIO interface with stable performance.
In compliance with competition rules, the code is preloaded onto the Raspberry Pi 4 before the start.
The autonomous routine is initiated by a physical push button connected to a GPIO pin, which activates the main Python script.
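As a rough sketch of this start trigger, the polling logic can be written so it is testable without hardware. On the robot, the `read_pin` callback would wrap an actual GPIO read of the button pin; the polling interval and function names here are our illustration, not our exact code.

```python
import time

def wait_for_press(read_pin, poll_s=0.01, timeout_s=None):
    """Block until read_pin() reports a press (True), then return True.
    On the robot, read_pin would wrap a real GPIO read; here it is
    injected as a callback so the logic can run without hardware.
    Returns False if timeout_s elapses with no press."""
    start = time.monotonic()
    while not read_pin():
        if timeout_s is not None and time.monotonic() - start > timeout_s:
            return False
        time.sleep(poll_s)
    return True

# Simulated button: not pressed for the first two polls, then pressed.
presses = iter([False, False, True])
assert wait_for_press(lambda: next(presses)) is True
```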
References:
Raspberry Pi 4 |
The Raspberry Pi 5 serves as the Main Processing Unit (MPU), acting as the robot’s central brain.
It executes the complex Python code and coordinates all real-time sensory data and actuator commands.
We selected the Raspberry Pi 5 for its high-performance quad-core Arm Cortex-A76 processor (2.4 GHz) and 64-bit architecture, delivering approximately 2×–3× the CPU performance of the previous generation.
Its computational power is essential, providing the necessary capacity to simultaneously manage:
- Computer Vision (CV): Processing real-time video frames from the USB camera using the OpenCV library to detect and track colored obstacles.
- Sensor Fusion: Rapidly integrating data from the IMU (gyroscope) and the four ultrasonic sensors for path correction and obstacle avoidance.
- High-Speed I/O: Equipped with the new RP1 I/O controller (south-bridge), offering improved USB and storage throughput to ensure minimal latency in peripheral responses.
In compliance with competition rules, the code is preloaded onto the Raspberry Pi 5 before the start.
The entire autonomous routine is initiated by a physical push button connected to a GPIO pin, which activates the main Python script.
References:
- Raspberry Pi 5 Product Brief (Official Datasheet)
- RaspberryTips — Raspberry Pi 5 Performance Overview
Raspberry Pi 5 |
| Feature | Raspberry Pi 4 Model B | Raspberry Pi 5 |
|---|---|---|
| Processor | Quad-core ARM Cortex-A72 @ 1.8 GHz | Quad-core ARM Cortex-A76 @ 2.4 GHz |
| Architecture | 64-bit | 64-bit |
| CPU Performance | Baseline | ~2×–3× faster than Pi 4 |
| GPU | VideoCore VI | VideoCore VII (improved graphics performance) |
| RAM Options | 2GB, 4GB, 8GB LPDDR4 | 4GB or 8GB LPDDR4X |
| Storage | microSD card | microSD card + PCIe 2.0 (via FPC connector) |
| USB Ports | 2 × USB 3.0, 2 × USB 2.0 | 2 × USB 3.0, 2 × USB 2.0 (faster bus) |
| GPIO | 40-pin header | 40-pin header (same pinout) |
| Networking | Gigabit Ethernet, Wi-Fi 5, Bluetooth 5.0 | Gigabit Ethernet, Wi-Fi 5, Bluetooth 5.0 |
| Video Output | 2 × micro HDMI (4K @ 60 fps) | 2 × micro HDMI (dual 4K @ 60 fps) |
| I/O Controller | USB controller integrated on main SoC | Dedicated RP1 I/O controller (higher bandwidth) |
| Power Connector | USB-C (5 V / 3 A) | USB-C (5 V / 5 A, higher current support) |
| Operating Temperature | 0 – 50 °C | 0 – 50 °C (improved thermal management) |
| Launch Year | 2019 | 2023 |
Summary:
Raspberry Pi 5 delivers significantly higher CPU and GPU performance, improved I/O speed, and better responsiveness — making it ideal for robotics, computer vision, and real-time control applications.
- Motor Driver L298N
L298N Dual H-Bridge Motor Driver: The Power Bridge

The L298N module serves as an essential protective and intermediary power stage between the robot's battery and its drive motor.

The Necessity of a Driver: The Raspberry Pi (our main controller) operates using low-power signals and cannot safely source or sink the high current required by the DC propulsion motor. Attempting to connect the high-power battery circuit directly to the Pi's low-voltage GPIO pins would result in immediate and irreversible damage to the controller. Therefore, the primary role of the L298N is to act as a current amplifier and isolator, safely handling the large electrical load demanded by the motor.

How the System Works (The Dam Analogy): The L298N can be visualized as a large, reinforced concrete dam built to manage the powerful current flowing from the 9V battery (the reservoir) to the motor (the turbine). The Raspberry Pi acts as the control mechanism that precisely regulates the dam's sluice gates (via PWM signals). The Pi merely sends low-voltage commands to the L298N, instructing when and how much of the high-power battery current should be directed to the motor, without ever having to handle the power itself.

Component Description: This module is based on the widely used L298 Dual H-Bridge integrated circuit. While the board is capable of controlling two DC motors independently (up to 2A peak current each), our application utilizes only one motor port to drive our single-motor RWD system. The unit is optimized for microcontroller interfacing, requiring only a few digital lines for full control. The board also features essential integrated components, including LED power indicators, internal protection diodes, and an onboard +5V voltage regulator which can supply power to the low-voltage controller (like the Raspberry Pi) for convenience.

Key Technical Data:
- Driver: L298N Dual H-Bridge DC Motor Driver
- Operating Voltage Range: DC 5 V - 35 V (Motor Power)
- Peak Current: 2 Amp per motor
- Input Logic Voltage (Control Signal):
- Low: -0.3 ≤ Vin ≤ 1.5V (control signal is invalid).
- High: 2.3V ≤ Vin ≤ Vss (control signal active).
- Maximum Power Dissipation: 20 W (at T = 75 °C)
- On-board Feature: Integrated +5V regulated output supply.
- Approximate Dimensions: 4.3cm × 4.3cm × 2.7cm
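A minimal sketch of how a controller might map a signed speed command onto the L298N's direction inputs (IN1/IN2) and the PWM duty cycle for the enable pin. The function name and the -100..100 convention are our illustration, not the board's API; on the Pi, the returned values would be written to GPIO pins and a PWM channel.

```python
def l298n_command(speed_percent):
    """Map a signed speed (-100..100) to L298N control values:
    (IN1, IN2) select the H-bridge direction, and the returned duty
    (0..100) drives the ENA pin via PWM to set the motor speed."""
    speed_percent = max(-100, min(100, speed_percent))
    if speed_percent > 0:
        in1, in2 = 1, 0          # forward
    elif speed_percent < 0:
        in1, in2 = 0, 1          # reverse
    else:
        in1, in2 = 0, 0          # coast / stop
    return in1, in2, abs(speed_percent)

assert l298n_command(60) == (1, 0, 60)
assert l298n_command(-30) == (0, 1, 30)
assert l298n_command(0) == (0, 0, 0)
```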
Motor Driver L298N |
- Motor DC
This motor belongs to the class of DC Geared Motors, specifically the JGA25-370 series, known for providing a necessary trade-off between speed and torque for robotics and automation applications. It is designed to be highly compatible with popular microcontroller platforms such as Arduino, Raspberry Pi, and STM32, utilizing common DC motor driver modules.
- Model Name and Type: JGA25-370 DC Gear Motor. The name implies a 25mm gearbox diameter attached to a 370 series motor.
- Operating Voltage: The nominal operating voltage is 12V DC.
- Output Speed (No-load): It provides an output speed of 210RPM (Revolutions Per Minute) under no-load conditions.
- Torque Capability: The motor delivers up to 12 kg·cm of torque. This makes it suitable for medium-load mechanical motion control projects.
- Gearbox Material: It features a durable all-metal gearbox for enhanced strength and long service life, ensuring stable and reliable continuous operation.
- Output Shaft: The typical output shaft diameter is 6mm.
- Typical Applications: This motor is ideal for building robots, conveyor systems, smart vehicles, and other DIY projects requiring controlled motion.
- Stall Current: The internal motor (before the gearbox) can draw a Stall Current (maximum current draw when the shaft is blocked) of approximately 2.2 Amps at 12V.
- Motor Driver Requirement: This means the chosen motor driver (the module linking the motor to the Raspberry Pi) must have a continuous current rating exceeding 2.2 Amps per channel to safely operate the motor, especially when the car is starting or pushing against an obstacle.
- Encoder: The motor comes integrated with an encoder to provide closed-loop control of speed and position, which is essential for precise robotic applications.
- Encoder Type and Output Signal:
- The integrated encoder uses Hall Sensor technology.
- It provides two square wave outputs, designated as Channel A and Channel B.
- The signals are approximately 90° out of phase. This phase difference is crucial for determining the direction of rotation (quadrature encoding).
- The voltage output of the Hall sensor signals ranges from 0V to Vdc (the encoder's supply voltage).
- Wiring and Physical Characteristics:
- Leads: The encoder assembly is terminated by 6 color-coded leads.
- Connector: These leads are typically terminated in a 6-pin female header with 0.1″ (standard) pitch.
- Length: The lead length is approximately 15 cm.
- Mounting: The motor faceplate includes 2 mounting holes for M3 screws, spaced 18 mm apart.
- Wire Function (Based on common JGA25-370 Encoders):
- The 6 wires generally correspond to:
- Two Wires (Motor): For the 12V motor power (e.g., Red/White or Red/Black).
- Four Wires (Encoder): VCC (Encoder Power, usually 5V or 3.3V), GND, and the two signal lines (Channel A and Channel B).
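The two 90°-offset channels can be decoded with a small state table. The sketch below illustrates quadrature decoding in Python; it is a simplified model (invalid or skipped transitions are treated as reverse steps) rather than our production code.

```python
def quadrature_step(prev_a, prev_b, a, b):
    """Return +1, -1, or 0 for one transition of the encoder's A/B pair.
    Because the channels are 90 degrees out of phase, the order in which
    they change reveals the direction of rotation."""
    # Gray-code sequence for one direction: 00 -> 01 -> 11 -> 10 -> 00
    forward = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}
    if (a, b) == (prev_a, prev_b):
        return 0                 # no change
    # Anything that is not the forward successor is counted as reverse
    # (a real decoder would also flag skipped/invalid transitions).
    return 1 if forward[(prev_a, prev_b)] == (a, b) else -1

assert quadrature_step(0, 0, 0, 1) == 1   # forward step
assert quadrature_step(0, 1, 0, 0) == -1  # reverse step
assert quadrature_step(1, 1, 1, 1) == 0   # shaft did not move
```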
Motor DC |
Encoder |
- Servo Motor
Precise Steering Actuator
The Servo Motor is the critical actuator responsible for translating the path-planning commands from the Raspberry Pi into physical steering motion.
- Integration with Ackermann Geometry
Function: The servo is connected via a custom linkage to the front wheels, enabling the precise, differing steering angles required by the Ackermann Steering Geometry.
Model: Based on its characteristics (likely an MG996R), the model used is a digital, high-torque servo with metal gears. The high torque rating (≈10 kg·cm) ensures sufficient force to overcome the mechanical friction and maintain the steering angle under load.
- Control and Power
Control Protocol: The servo angle is precisely controlled by sending Pulse Width Modulation (PWM) signals from the Raspberry Pi's GPIO header. The duration of the pulse dictates the angular position of the wheel.
Power: The servo requires a stable operating voltage, typically in the 5V - 6V range. This power is reliably supplied by the step-down module (voltage regulator), ensuring steady operation independent of the main 9V battery fluctuations
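The pulse-width-to-angle relationship can be sketched as a duty-cycle calculation. This assumes the common 1 ms to 2 ms pulse convention at 50 Hz; real servos (including MG996R clones) often need slightly different endpoints, so the numbers are illustrative rather than our calibrated values.

```python
def servo_duty_cycle(angle_deg, freq_hz=50):
    """Convert a steering angle (0..180 deg) to a PWM duty cycle (%).
    Assumes the common 1 ms (0 deg) .. 2 ms (180 deg) pulse convention;
    actual servo endpoints usually need calibration."""
    angle_deg = max(0, min(180, angle_deg))
    pulse_ms = 1.0 + angle_deg / 180.0   # 1 ms at 0 deg, 2 ms at 180 deg
    period_ms = 1000.0 / freq_hz         # 20 ms period at 50 Hz
    return pulse_ms / period_ms * 100.0

assert abs(servo_duty_cycle(0) - 5.0) < 1e-9    # 1 ms of a 20 ms period
assert abs(servo_duty_cycle(180) - 10.0) < 1e-9  # 2 ms of a 20 ms period
assert abs(servo_duty_cycle(90) - 7.5) < 1e-9    # centered position
```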
Servo Motor |
- voltage regulator XL4015
The DC-DC Step-down Module is a non-isolated buck converter based on the widely used LM2596 integrated circuit. It serves as a crucial power conditioning component, managing the transition from the high-power drive system to the sensitive control electronics.
- Power Regulation Necessity
The robot's propulsion system operates on a 9V battery supply, necessary for the DC drive motor. However, the sensitive low-voltage components, namely the Raspberry Pi 4 (5V/3A minimum), the Servo Motor, and all digital sensors, require a stable and lower operating voltage. The primary function of this module is to safely and efficiently step down the 9V input voltage to a regulated 5V output. This prevents potential damage to the control board and ensures power stability, which is vital for the reliable operation of the Raspberry Pi's CPU and GPIO communications.
- Key Operational Data
The module accepts a wide input range (4.5V to 40V), easily accommodating the 9V battery output. It is manually tuned to provide a precise 5V output voltage and is rated for a continuous output current of approximately 3A, providing a sufficient power budget for all low-power electronics in the system. The high conversion efficiency (>80%) minimizes energy loss as heat, preserving battery life
Voltage Regulator XL4015 |
- Relay
A relay is an electronic component used to control the switching ON or OFF of electrical circuits using a small electrical signal from a microcontroller such as the Raspberry Pi. The relay acts as an interface between low-voltage control circuits and high-voltage electrical loads. It operates on the principle of electromagnetism: a copper coil creates a magnetic field when current passes through it, pulling a metal armature and changing the connection state between its internal terminals:
- COM (Common): The common contact
- NO (Normally Open): Open by default; closes when the relay is activated
- NC (Normally Closed): Closed by default; opens when the relay is activated
When a signal is sent from the microcontroller to the relay, current flows through the coil, generating a magnetic field that moves the metal armature. This action switches the connected circuit ON or OFF
Relays are used to control devices that require higher current or voltage than the microcontroller can supply, such as:
- Running motors and fans.
- Turning lights or pumps on and off.
- Cutting power in safety or smart control systems. Thus, the relay allows microcontrollers to safely and efficiently control high-power electrical systems without damaging sensitive electronic components
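One practical detail worth noting: many hobby relay boards are active-low, meaning the GPIO level is inverted relative to the desired relay state. The tiny sketch below captures that mapping; the active-low behavior is an assumption about the specific board, not a statement about ours.

```python
def relay_gpio_level(turn_on, active_low=True):
    """GPIO level to write for a desired relay state.
    Many hobby relay modules are active-low: driving the input pin LOW
    energizes the coil and closes the COM-NO contact. The active_low
    flag is an assumption about the particular board in use."""
    if active_low:
        return 0 if turn_on else 1
    return 1 if turn_on else 0

assert relay_gpio_level(True) == 0   # energize coil on an active-low board
assert relay_gpio_level(False) == 1
assert relay_gpio_level(True, active_low=False) == 1
```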
Relay Module |
- gyroscope MPU6050
Inertial Measurement Unit (IMU): MPU-6050
The MPU-6050 Inertial Measurement Unit (IMU) serves as the primary sensor for motion and orientation tracking, correcting for mechanical and environmental inaccuracies that cannot be managed by vision or distance sensors.
- Function and Integrated Components The MPU-6050 is a sophisticated MotionTracking device that integrates a 3-axis gyroscope and a 3-axis accelerometer. It utilizes 16-bit Analog-to-Digital Converters (ADCs) for high-precision data capture.
- Gyroscope (Angular Rate): Measures the rate of rotation around the X, Y, and Z axes. This is essential for detecting unwanted yaw (heading deviation) and precisely correcting the Ackermann steering angle. The range is user-programmable up to ±2000°/sec.
- Accelerometer (Linear Acceleration): Measures linear acceleration and gravity, used to determine the robot's tilt angle (roll and pitch) and estimate linear distance traveled. The range is programmable up to ±16g.
- Digital Motion Processor (DMP): This integrated hardware unit processes complex 6-axis MotionFusion algorithms. Crucially, the DMP offloads computation from the Raspberry Pi by processing raw sensor data into clean, ready-to-use orientation information directly on the chip.
- System Integration and Power The MPU-6050 communicates with the Raspberry Pi 4 using the efficient I2C serial communication protocol. This protocol is highly efficient, requiring a minimum number of dedicated GPIO pins.
- Power: The module operates between 3V and 5V and is safely powered by the 5V regulated output from the LM2596 module.
- Role in Navigation The IMU provides the core data for internal navigation (dead reckoning). Regardless of whether the system implements explicit sensor fusion with the ultrasonic sensors, the highly accurate angular rate and acceleration data are crucial for stabilizing the robot's movement and ensuring that the programmed path is maintained against physical disturbances
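Raw MPU-6050 registers return signed 16-bit values that must be scaled by the configured full-scale range before use. The sketch below shows that conversion only; the I2C register access itself is omitted, and the function name is our own.

```python
def mpu6050_scale(raw, full_scale):
    """Convert a signed 16-bit MPU-6050 reading to physical units.
    full_scale is the configured range, e.g. 2000 (deg/s) for the
    gyroscope or 16 (g) for the accelerometer."""
    if raw >= 32768:          # two's-complement correction for 16-bit data
        raw -= 65536
    return raw * full_scale / 32768.0

# At the +/-2000 deg/s setting, full positive scale (32767) reads
# just under 2000 deg/s, and a negative raw value maps symmetrically.
assert abs(mpu6050_scale(32767, 2000) - 2000) < 0.1
assert mpu6050_scale(0, 2000) == 0.0
assert abs(mpu6050_scale(65536 - 16384, 2000) + 1000) < 1e-9
```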
Gyroscope MPU6050 |
- Ultrasonic
Ultrasonic Sensors (HC-SR04): Comprehensive ToF Ranging System

The robot is equipped with a total of four HC-SR04 ultrasonic sensors that form the primary short-range ranging system. These sensors provide the vehicle with echolocation capabilities, on par with how bats and dolphins locate objects in complete darkness and beneath the water surface. This array is fundamental for robust, real-time obstacle avoidance and precise navigation along the course boundaries.

Strategic Implementation and System Integration
The four HC-SR04 sensors are strategically positioned to maximize situational awareness: one mounted in the front, one in the rear, and one on each of the right and left sides of the chassis. This arrangement minimizes blind spots and provides the angular data necessary for advanced sensor fusion.

Power and Interface: The sensors require a stable +5V regulated supply, drawing less than 15mA each. Each sensor utilizes two dedicated GPIO pins (Trigger and Echo) on the Raspberry Pi for control and data acquisition.

Sensor Fusion Role: The simultaneous input from four points allows the control system to accurately determine both the distance and the angular position of an obstacle. This redundancy is vital for high-reliability autonomy, allowing the algorithm to differentiate between a critical frontal obstacle and a simple boundary wall alongside the vehicle.
Principle of Operation and Technical Specification
The HC-SR04 module operates on the principle of measuring the Time-of-Flight (ToF) of an ultrasonic sound wave.
1. Triggering: Measurement begins when the Raspberry Pi sends a high pulse of at least 10 µs to the Trigger pin, prompting the transmitter to emit an 8-cycle burst of ultrasonic sound at 40kHz.
2. Echo Reception: The sound wave reflects off an object, and the duration it takes to return is measured. The sensor outputs a high pulse on the Echo pin, the width of which is directly proportional to the distance.
3. Calculation: The Raspberry Pi measures this pulse width (time), and the distance is calculated using the formula Distance = Speed × Time, divided by two since the time measured is for the signal's round trip.
The sensor boasts a practical measuring range of 2cm to 400cm and an accuracy that can reach 3mm. This combination of range and precision makes it perfectly suited for the short-range, dynamic obstacle avoidance challenges presented in the competition course
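The round-trip calculation described above can be written directly. The 343 m/s figure assumes roughly 20 °C air, and the function name is our own illustration.

```python
def hcsr04_distance_cm(echo_high_seconds, speed_of_sound_m_s=343.0):
    """Distance implied by an HC-SR04 echo pulse width.
    The pulse covers the round trip, so the one-way distance is
    time * speed / 2, converted here to centimeters. 343 m/s assumes
    roughly 20 C air."""
    return echo_high_seconds * speed_of_sound_m_s * 100.0 / 2.0

# A 1 ms echo pulse corresponds to about 17.15 cm.
assert abs(hcsr04_distance_cm(0.001) - 17.15) < 0.01
```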
Ultrasonic Sensor HC-SR04 |
- switch
This switch controls the connection and disconnection of power between the battery and the robot. Since the rules require the power to be cut off before operation, we added this switch to comply with that requirement. We soldered the red (positive) wire to the input side of the switch and another red wire to the output side. When the switch is turned on, power flows from the battery to the step-down module and then to the rest of the robot's components.
Switch |
This picture shows our robot in real life, displaying how it looks with all its components assembled.
Final Robot |
- It shows how the various components are connected to the Raspberry Pi
Power Distribution Diagram |
- It shows the ports of various components connected to the Raspberry Pi, as well as the voltage levels supplied to the Raspberry Pi and its connected components
Wiring Diagram |
To navigate the initial stage intelligently, we developed a dedicated algorithm that empowered the robot to make autonomous directional decisions.
Before encountering the first turn, the robot was uncertain whether to proceed clockwise or counterclockwise.
We incorporated ultrasonic sensors into the robot’s design to provide continuous spatial awareness. These sensors measured the distance to nearby obstacles on both the left and right sides.
- Left Sensor:
If the left sensor measured a distance greater than 160 cm (indicating open space on that side), the robot decided to move counterclockwise.
This is the counterclockwise rotation of the robot.
- Right Sensor:
If the right ultrasonic sensor detected a distance greater than 160 cm, the robot chose to move clockwise.
This is the clockwise rotation of the robot.
While traveling in a straight path, the robot maintained a safety buffer of 20 cm from the inner wall.
If this distance was violated, the system automatically adjusted its trajectory to stay stable and balanced. This algorithm allowed the robot to adapt dynamically to its surroundings, making intelligent path choices and real-time corrections that simulate the spatial awareness of advanced autonomous systems.
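The correction itself is not shown in the code excerpts below, but the rule can be sketched as a simple proportional controller. The gain and clamp values here are illustrative assumptions, not the team's tuned numbers:

```python
SAFETY_BUFFER_CM = 20   # target clearance from the inner wall
KP = 1.5                # hypothetical proportional gain (degrees per cm of error)
MAX_CORRECTION = 25     # clamp so the servo is never driven to an extreme

def steering_correction(wall_distance_cm: float) -> float:
    """Return a steering offset in degrees; positive steers away from the wall."""
    error = SAFETY_BUFFER_CM - wall_distance_cm  # positive when too close
    correction = KP * error
    return max(-MAX_CORRECTION, min(MAX_CORRECTION, correction))

# At exactly 20 cm no correction is applied; closer than that, steer away.
print(steering_correction(20), steering_correction(12))
```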
Each time the robot detected a corner together with a large distance reading (~160 cm), it incremented its corner counter.
After counting 12 corners (3 laps, where 1 lap = 4 corners), the robot automatically stopped at a specific location, marking the completion of the task. This video shows our robot completing the first round (Open Challenge); you can click here to view the video we created.
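The stop condition described above reduces to a simple counter. A minimal sketch of that rule (the competition code tracks progress with its own sections_completed variable):

```python
CORNERS_PER_LAP = 4
TOTAL_LAPS = 3
TOTAL_CORNERS = CORNERS_PER_LAP * TOTAL_LAPS  # 12 corners in total

def run_complete(corners_counted: int) -> bool:
    """True once every corner of all three laps has been counted."""
    return corners_counted >= TOTAL_CORNERS

# The robot keeps driving through corners 1-11 and stops after the 12th.
print(run_complete(11), run_complete(12))
```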
- Section 1 [Open Challenge round]
Basic Import and Initialization
```python
import RPi.GPIO as GPIO
import time
import smbus

# Motor Pins
MOTOR_IN3 = 38
MOTOR_IN4 = 32

# Servo Pin
SERVO_PIN = 37

# Ultrasonic Sensor Pins
FRONT_TRIG = 13
FRONT_ECHO = 11
RIGHT_TRIG = 31
RIGHT_ECHO = 29
LEFT_TRIG = 10
LEFT_ECHO = 8

# Servo Angles
SERVO_STRAIGHT = 60
SERVO_RIGHT = 95
SERVO_LEFT = 10

# MPU6050 Address
MPU6050_ADDR = 0x68
bus = smbus.SMBus(1)

# Distance thresholds (cm)
DETECTION_DISTANCE = 80
DIRECTION_CHECK_DISTANCE = 360

# Movement Timing (seconds)
TURN_DURATION = 1.1
STRAIGHT_AFTER_TURN = 1.5
```
Explanation: This section imports the essential libraries to control the robot, defines the GPIO pins connected to the Raspberry Pi for motors, servo, and ultrasonic sensors, sets the three servo angles, and establishes the distance thresholds. These thresholds indicate when an obstacle is close enough to require stopping or detecting the path direction (clockwise or counterclockwise). It also initializes communication with the MPU6050 for orientation tracking and sets basic timing values for movement and turns.
- Section 2 [Open Challenge round]
GPIO, MPU6050, and Ultrasonic Initialization
```python
def setup_gpio():
    GPIO.setmode(GPIO.BOARD)
    GPIO.setwarnings(False)

    # Setup motor and servo pins
    GPIO.setup(MOTOR_IN3, GPIO.OUT)
    GPIO.setup(MOTOR_IN4, GPIO.OUT)
    GPIO.setup(SERVO_PIN, GPIO.OUT)

    # Setup ultrasonic sensor trigger pins
    for trig in [FRONT_TRIG, RIGHT_TRIG, LEFT_TRIG]:
        GPIO.setup(trig, GPIO.OUT)
        GPIO.output(trig, False)

    # Setup ultrasonic sensor echo pins
    for echo in [FRONT_ECHO, RIGHT_ECHO, LEFT_ECHO]:
        GPIO.setup(echo, GPIO.IN)

def mpu6050_init():
    try:
        bus.write_byte_data(MPU6050_ADDR, 0x6B, 0)  # wake the MPU6050 out of sleep
        time.sleep(0.1)
        return True
    except OSError:
        return False
```
Explanation: setup_gpio() configures the Raspberry Pi GPIO pins for motors, servo, and ultrasonic sensors. Trigger pins are outputs and echo pins are inputs. mpu6050_init() initializes the MPU6050 sensor to enable orientation tracking. These steps prepare the robot’s hardware for autonomous navigation and obstacle detection.
- Section 3 [Open Challenge round]
Motor Control
```python
def motor_forward():
    GPIO.output(MOTOR_IN3, GPIO.LOW)
    GPIO.output(MOTOR_IN4, GPIO.HIGH)

def motor_stop():
    GPIO.output(MOTOR_IN3, GPIO.LOW)
    GPIO.output(MOTOR_IN4, GPIO.LOW)
```
Explanation: motor_forward() makes the robot move forward by setting the motor pins appropriately. motor_stop() stops the robot by turning off both motor pins. These functions provide basic motor control needed for autonomous navigation.
- Section 4 [Open Challenge round]
Servo Control
```python
def setup_servo():
    global servo_pwm
    servo_pwm = GPIO.PWM(SERVO_PIN, 50)  # 50 Hz PWM for a standard hobby servo
    servo_pwm.start(0)
    time.sleep(0.1)

def set_servo_angle(angle):
    duty = 2.5 + (angle / 180.0) * 10  # map 0-180 degrees to a 2.5-12.5 % duty cycle
    servo_pwm.ChangeDutyCycle(duty)
    time.sleep(0.3)
    servo_pwm.ChangeDutyCycle(0)  # stop sending pulses to reduce servo jitter

def steer_straight():
    set_servo_angle(SERVO_STRAIGHT)

def steer_left():
    set_servo_angle(SERVO_LEFT)

def steer_right():
    set_servo_angle(SERVO_RIGHT)
```
Explanation: setup_servo() initializes the servo motor for steering. set_servo_angle(angle) sets the servo to a specific angle. steer_straight(), steer_left(), and steer_right() position the servo to go straight, left, or right. These functions allow precise steering control for autonomous navigation.
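The duty-cycle mapping in set_servo_angle() can be checked by hand: at 50 Hz the PWM period is 20 ms, so the formula maps 0° to a 2.5 % duty cycle (a 0.5 ms pulse) and 180° to 12.5 % (a 2.5 ms pulse), the standard hobby-servo range:

```python
def angle_to_duty(angle: float) -> float:
    """Map a servo angle (0-180 degrees) to a duty cycle for 50 Hz PWM."""
    return 2.5 + (angle / 180.0) * 10

# 0° → 2.5 % (0.5 ms pulse), 90° → 7.5 % (1.5 ms), 180° → 12.5 % (2.5 ms)
for a in (0, 90, 180):
    print(a, angle_to_duty(a))
```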
- Section 5 [Open Challenge round]
Ultrasonic Sensor Distance Reading
```python
def get_distance(trig_pin, echo_pin):
    # Send a 10 µs trigger pulse
    GPIO.output(trig_pin, True)
    time.sleep(0.00001)
    GPIO.output(trig_pin, False)

    # Wait for the echo pulse, with timeouts so a missed echo cannot hang the loop
    timeout = time.time()
    pulse_start = pulse_end = timeout  # initialize in case a loop never runs
    while GPIO.input(echo_pin) == 0:
        pulse_start = time.time()
        if pulse_start - timeout > 0.1:
            return -1
    while GPIO.input(echo_pin) == 1:
        pulse_end = time.time()
        if pulse_end - pulse_start > 0.1:
            return -1

    # Convert round-trip time to one-way distance in cm (34300 cm/s ÷ 2 = 17150)
    pulse_duration = pulse_end - pulse_start
    distance = round(pulse_duration * 17150, 1)
    return distance if distance < 400 else -1

def get_front_distance():
    return get_distance(FRONT_TRIG, FRONT_ECHO)

def get_right_distance():
    return get_distance(RIGHT_TRIG, RIGHT_ECHO)

def get_left_distance():
    return get_distance(LEFT_TRIG, LEFT_ECHO)
```
Explanation: These functions read distances using ultrasonic sensors. get_distance(trig_pin, echo_pin) sends a pulse and calculates the time it takes to receive the echo, converting it into distance in centimeters. get_front_distance(), get_right_distance(), and get_left_distance() return the distances from the respective sensors. This allows the robot to detect obstacles and measure the space around it for autonomous navigation.
- Section 6 [Open Challenge round]
Direction Detection Logic
```python
def detect_direction():
    left_readings = []
    right_readings = []
    for i in range(10):
        left = get_left_distance()
        right = get_right_distance()
        if left > 0:
            left_readings.append(left)
        if right > 0:
            right_readings.append(right)
        time.sleep(0.1)

    avg_left = sum(left_readings) / len(left_readings) if left_readings else 0
    avg_right = sum(right_readings) / len(right_readings) if right_readings else 0

    if avg_left > DIRECTION_CHECK_DISTANCE:
        return 'CCW'  # Counter-clockwise
    elif avg_right > DIRECTION_CHECK_DISTANCE:
        return 'CW'   # Clockwise
    else:
        return 'CCW' if avg_left > avg_right else 'CW'
```
Explanation: detect_direction() uses multiple ultrasonic readings from the left and right sensors to determine the robot's optimal turning direction. If left side > 360 cm → move counter-clockwise (CCW). If right side > 360 cm → move clockwise (CW). If both < 360 cm → choose the side with more space. This ensures the robot selects the safest and widest path during autonomous navigation.
- Section 7 [Open Challenge round]
90-Degree Turn Function
```python
def turn_90(turn_right):
    if turn_right:
        steer_right()
    else:
        steer_left()
    motor_forward()
    time.sleep(TURN_DURATION)
    steer_straight()
```
Explanation: turn_90(turn_right) performs a 90-degree turn. If turn_right is True, the robot turns right; otherwise, it turns left. The function moves the robot forward while turning, waits for the specified TURN_DURATION, and then resets the steering to straight. This allows precise cornering during autonomous navigation.
- Section 8 [Open Challenge round]
Navigation Logic
```python
def navigate():
    global sections_completed, direction, running
    motor_forward()
    steer_straight()
    while running and sections_completed < TOTAL_SECTIONS:
        front_dist = get_front_distance()
        if front_dist <= DETECTION_DISTANCE and front_dist > 0:
            motor_stop()
            if direction is None:
                # First time detecting obstacle: wait 2 seconds and detect direction
                time.sleep(2)
                direction = detect_direction()
            else:
                # Subsequent times: short wait
                time.sleep(0.1)
```
Explanation:
- navigate() handles the robot's forward movement and obstacle detection.
- The robot moves forward and keeps steering straight.
- If an obstacle is detected within 80 cm (DETECTION_DISTANCE), the robot stops.
- On the first detection, it waits 2 seconds and determines the turning direction.
- On subsequent detections, it only pauses briefly (0.1 seconds).
- This logic allows safe and intelligent navigation around obstacles during the Open Challenge.
- Section 9 [Open Challenge round]
Post-Obstacle Navigation and Completion
```python
# Continuation of navigate() from Section 8, inside the main while loop:
            if direction == 'CW':
                turn_90(True)
            else:
                turn_90(False)
            motor_forward()
            steer_straight()
            time.sleep(STRAIGHT_AFTER_TURN)
            sections_completed += 1
        else:
            motor_forward()
            steer_straight()
            time.sleep(0.05)

    # After the loop: stop the robot and end the run
    motor_stop()
    steer_straight()
    time.sleep(0.5)
    running = False
```
Explanation: After detecting a direction, the robot executes a 90-degree turn (turn_90) depending on whether it should go clockwise or counter-clockwise. It then moves forward for 1.5 seconds (STRAIGHT_AFTER_TURN) to continue on the path.
- If no obstacles are detected, the robot simply continues straight with small adjustments.
- The sections_completed counter increases with each segment. Once 24 sections are completed, the robot stops, resets steering, and ends the run. This logic ensures the robot navigates through all track sections safely and autonomously.
- Section 10 [Open Challenge round]
Main Function — Setup and Start
```python
def main():
    global running
    try:
        # Initialize hardware
        setup_gpio()
        setup_servo()
        mpu6050_init()
        time.sleep(0.5)

        # Reset steering and stop motors
        steer_straight()
        motor_stop()

        # Start navigation
        running = True
        navigate()
    finally:
        # Always stop the motors and release the GPIO pins on exit
        motor_stop()
        GPIO.cleanup()
```
Explanation: The main() function prepares all hardware components and starts the Open Challenge.
- GPIO, servo, and MPU6050 are initialized.
- Steering is set straight and motors are stopped before starting.
- The navigate() function is called to begin autonomous movement and obstacle handling. This ensures the robot is fully ready before starting the challenge.
In this challenge, the robot is required to complete three full laps around a track containing randomly placed red and green traffic signs.
Each traffic sign acts as a visual cue guiding the robot's movement along the track.
- **Red Sign:** The robot must stay on the right side of the lane.
Red Sign Line
- **Green Sign:** The robot must stay on the left side of the lane.
Green Sign
- The robot must not move or displace any traffic signs during the run.
- After completing the three laps, the robot must locate a parking space and perform parallel parking correctly.
- Demonstrate the robot’s ability to detect signs and make accurate decisions in real time.
- Maintain stable and precise movement while following all track rules.
- Section 1 [Obstacle Challenge round]
Color Detection with Camera (LAB-based)
```python
import cv2
import numpy as np

def normalize_color(frame):
    """Gray-world color normalization to reduce the effect of lighting."""
    result = frame.astype(np.float32)
    avg_b = np.mean(result[:, :, 0])
    avg_g = np.mean(result[:, :, 1])
    avg_r = np.mean(result[:, :, 2])
    avg_gray = (avg_b + avg_g + avg_r) / 3
    result[:, :, 0] = np.minimum(result[:, :, 0] * (avg_gray / avg_b), 255)
    result[:, :, 1] = np.minimum(result[:, :, 1] * (avg_gray / avg_g), 255)
    result[:, :, 2] = np.minimum(result[:, :, 2] * (avg_gray / avg_r), 255)
    return result.astype(np.uint8)

# Initialize camera (USB or Pi camera)
camera = cv2.VideoCapture(0)
camera.set(3, 640)   # frame width
camera.set(4, 480)   # frame height

while True:
    ret, frame = camera.read()
    if not ret:
        break

    # Optional: normalize to reduce color cast from lighting
    frame = normalize_color(frame)

    # Convert BGR to LAB; the A channel runs from green (low) to red (high)
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    L, A, B = cv2.split(lab)

    red_mask = cv2.inRange(A, 150, 255)   # high A → red
    green_mask = cv2.inRange(A, 0, 110)   # low A → green

    # Clean up noise
    kernel = np.ones((5, 5), np.uint8)
    red_mask = cv2.morphologyEx(red_mask, cv2.MORPH_OPEN, kernel)
    green_mask = cv2.morphologyEx(green_mask, cv2.MORPH_OPEN, kernel)

    # Find contours for red
    contours_red, _ = cv2.findContours(red_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours_red:
        if cv2.contourArea(c) > 400:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.putText(frame, "RED", (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)

    # Find contours for green
    contours_green, _ = cv2.findContours(green_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours_green:
        if cv2.contourArea(c) > 400:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, "GREEN", (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    # Display the output
    cv2.imshow("Color Detection (LAB-based)", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # ESC key to exit
        break

camera.release()
cv2.destroyAllWindows()
```
Explanation:
- The camera captures frames in real-time and normalizes the colors to reduce lighting effects.
- Conversion to LAB color space helps separate luminance from color, improving red and green detection.
- Red and green masks are created using the A-channel of LAB.
- Morphological operations remove small noise.
- Contours are detected for each color, and bounding rectangles with labels (“RED” or “GREEN”) are drawn.
- The live frame shows the detected objects, updating in real-time.
- Press ESC to exit the program.
We chose Python as the main programming language for our WRO robot because it is simple, powerful, and widely used in robotics and computer vision. Python allows us to write clear and organized code, making it easier to test, debug, and improve our program.
One of the main reasons we selected Python is its strong support for the OpenCV library, which we used to process images and recognize objects or patterns. OpenCV gave us the ability to detect lines, colors, and QR codes — features that were essential for our robot’s mission.
Python also works perfectly with Raspberry Pi, which we used as the main controller. Its flexibility allowed us to easily connect sensors and motors, and integrate the results from OpenCV to make real-time decisions.
Using Python helped us focus on problem-solving and teamwork, since the language is easy to understand and modify. Overall, Python and OpenCV together provided a powerful combination that made our robot more intelligent and efficient during the WRO challenge.
Python is used in our project because it is easy to learn and helps us control the robot accurately. With Python, we can program the robot’s movements, read sensor data, and analyze images from the camera. It also supports powerful libraries like OpenCV, NumPy, and more, which help us complete tasks quickly and efficiently during the competition challenges.
The RPi.GPIO library is specifically designed to interact directly with the physical world through the general-purpose input/output pins on the Raspberry Pi. Imported here as GPIO, its primary function is to manage the flow of digital signals.
- You can set a pin as an output to send a High (1) or Low (0) signal, controlling devices like LEDs, motors, and actuators (turning them ON or OFF).
- Conversely, you can set a pin as an input to read the state of physical components, such as checking whether a button has been pressed or a sensor has detected an object.
This library forms the foundational layer for low-level hardware control.
OpenCV is a powerful and widely used library for computer vision tasks. In our WRO project, we use OpenCV to help the robot understand its surroundings through camera input.
- It allows the robot to detect colors, follow lines, and recognize objects or obstacles.
- OpenCV works well with Python and provides many ready-to-use functions that make image processing faster and easier.
This helps improve the robot’s accuracy and responsiveness during competition challenges.
The time library is a standard module built into Python that provides essential functions for managing time within a program. It is crucial for hardware projects, as physical devices often require specific timing to operate correctly.
- Its most common function is time.sleep(X), which pauses the execution of the program for a specified number of seconds (X).
- This delay is vital for tasks like creating rhythmic flashing patterns, allowing mechanical components time to move into position, or stabilizing sensor readings before processing them.
- The library also offers functions for measuring time intervals and retrieving the current time.
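For example, the same module can both pause the program and measure how long an operation took:

```python
import time

start = time.perf_counter()   # high-resolution clock for measuring intervals
time.sleep(0.05)              # pause ~50 ms, e.g. to let a servo settle
elapsed = time.perf_counter() - start
print(f"paused for {elapsed:.3f} s")  # slightly more than 0.05 s
```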
The smbus library serves as a software interface for communicating with external devices using the I²C (Inter-Integrated Circuit) protocol, a very common serial communication method in embedded systems.
- I²C allows multiple devices, such as sophisticated sensors, memory chips, and display drivers, to communicate with the Raspberry Pi using only two wires (SDA and SCL).
- By importing smbus, Python code gains the ability to address specific external chips and send commands (write operations) or receive complex data (read operations) from them.
This is essential for integrating advanced components that require a dedicated communication bus rather than simple ON/OFF signals.
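As an example of the kind of data the bus carries: the MPU6050 used in this project delivers each 16-bit reading over I²C as two bytes, high byte first, in two's-complement form. Recombining them is pure arithmetic and can be shown without the hardware:

```python
def to_signed_word(high: int, low: int) -> int:
    """Combine two I²C bytes (high byte first) into a signed 16-bit value."""
    value = (high << 8) | low
    # Values of 0x8000 and above are negative in two's complement.
    return value - 65536 if value >= 32768 else value

# 0x0F 0xA0 → 4000; 0xF0 0x60 → -4000
print(to_signed_word(0x0F, 0xA0), to_signed_word(0xF0, 0x60))
```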
During the development of our WRO robot project, we used several applications to design, plan, and present our work efficiently:
- Flowchart Design – BoardMix
We used BoardMix to create clear and organized flowcharts for our robot's logic and program structure. This helped us plan the sequence of operations before writing the actual code.
- Circuit Design – Fritzing
Fritzing was used to design and visualize the electronic circuits of the robot. It allowed us to connect sensors, motors, and other components on a virtual breadboard before physical assembly.
- Presentation and Background – Canva
Canva was used to create professional backgrounds, diagrams, and visual materials for presentations and documentation, making our project visually appealing and easy to understand.
| Component | Estimated Cost (USD) |
| --- | --- |
| ASRC-CM-DIY Version | $142 |
| Raspberry Pi 5 Model B | $35 |
| Motor Driver L298N | $5 |
| DC Motor | $20 |
| Servo Motor | $6 |
| Voltage Regulator XL4015 | $7 |
| Gyroscope MPU6050 | $15 |
| Four Ultrasonic Sensors | $40 |
| PC i7 | $1000 |
| Smart Car Motors GA25 370 | $100 |
| **Total** | **$1370** |