Things used in this project
Hardware components
| Component | Quantity |
| --- | --- |
| MentorPi M1 Chassis | ×1 |
| Bracket Set | ×1 |
| Raspberry Pi 5 | ×1 |
| 64 GB SD Card | ×1 |
| Cooling Fan | ×1 |
| Raspberry Pi Power Supply C | ×1 |
| RPC Lite Controller + RPC Data Cable | ×1 |
| Battery Cable | ×1 |
| Lidar + 4PIN Wire | ×1 |
| Lidar Data Cable | ×1 |
| 8.4V 2A Charger (DC5.5*2.5 Male Connector) | ×1 |
| 3D Depth Camera | ×1 |
| Depth Data Cable | ×1 |
| Wireless Controller | ×1 |
| Controller Receiver | ×1 |
| EVA Ball (40mm) | ×1 |
| Card Reader | ×1 |
| 3PIN Wire (100mm) | ×1 |
| WonderEcho Pro AI Voice Interaction Box + Type C Cable | ×1 |
| Accessory Bag | ×1 |
| User Manual | ×1 |
Story
Want to build a robot car that can see, navigate, and drive itself? While most are still struggling to source hardware and configure complex ROS systems, you can get a head start. Hiwonder MentorPi M1 is your key — an "out-of-the-box" robotics platform that integrates a Raspberry Pi 5, LiDAR, depth camera, and a complete ROS2 software stack. It removes all the barriers of starting from scratch, allowing you to focus directly on what matters most: giving a machine intelligence. This guide is your quick-start manual, showing you how to build an autonomous car prototype with environmental mapping, self-navigation, and visual recognition in no time. Ready? Let’s start the engine and hit the fast lane of robotics development.
Part 1: Deep Dive – What’s Inside Your "Autonomy Kit"?
Before starting the project, let’s think like engineers and understand our toolkit. Every piece of hardware in the MentorPi M1 plays a critical role in the autonomous system.
**1. The "Brain" – Raspberry Pi 5:** This is the main controller, running the entire ROS2 system and all AI algorithms. Its powerful processing capability is sufficient for real-time sensor data fusion and decision-making, making it the true intelligence hub.
**2. The "Eyes" – Multi-Sensor Fusion:**
**TOF Lidar:** This is the core sensor for Simultaneous Localization and Mapping (SLAM). By emitting laser beams and measuring reflection times, it creates a precise 2D point-cloud map of distances around the car. Simply put, it lets the car "see" where walls and passages are, allowing it to map its environment and locate itself.
**3D Depth Camera:** Like our own eyes, it provides both color images and depth perception. In this project, we’ll primarily use it for visual recognition—identifying objects like traffic signs—giving the car a semantic understanding of its surroundings.
**3. The "Limbs" – Precise Motion & Control:**
**Mecanum Wheel Chassis + Encoder Motors:** This combo grants the car omnidirectional movement, allowing it to strafe sideways like a crab for incredibly flexible positioning in tight spaces. The built-in motor encoders act like an "odometer," constantly feeding back precise rotation data, providing the crucial odometry information that is foundational for accurate navigation.
**4. The "Interface" – Voice & Intelligence:** The kit also includes a voice interaction module powered by a large language model (like ChatGPT). While this tutorial focuses on autonomy, it opens the door for future projects where you could command the car using natural language.
👉 Get the MentorPi tutorials, or follow the Hiwonder GitHub to browse the example repositories.
Part 2: Getting Started – Unboxing and Setup
First, assemble the hardware by following the well-illustrated manual. The process is like building a high-grade model kit—the key is ensuring all cables are securely plugged into their correct ports.
Next is software setup, which is simpler than you might think:
- Get the System Image: Download the custom system image for the MentorPi M1 from the link HiWonder provides. This image comes pre-installed with Ubuntu, ROS2 Humble, drivers for all sensors, and foundational example code.
- Flash and Boot: Use a tool like Raspberry Pi Imager to flash this image onto a MicroSD card. Insert it into the Pi 5, power on, and the car will boot into this ready-made robotics development environment.
- Remote Connection: We typically don’t attach a screen and keyboard to the car itself. By configuring Wi-Fi, you can use your everyday computer to SSH remotely into the car’s Raspberry Pi system. From then on, your computer acts as the "command terminal," sending all instructions via the command line.
To understand the next steps, let’s build an intuitive sense of three core ROS2 concepts (a minimal code sketch follows this list):
- Node: An independent executable program. For example, the "Lidar Driver" is one node, and the "Camera Driver" is another.
- Topic: A "broadcast channel" for passing data between nodes. The lidar node publishes scan data to a topic called /scan, which the navigation node can subscribe to.
- Service: A "request-response" style of communication. For instance, you can call a "save map" service, and the car will execute it and reply with success or failure.
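To make these concepts concrete, here is a minimal sketch of a ROS2 node written with rclpy. It subscribes to the /scan topic mentioned above and logs the nearest obstacle distance; it assumes the lidar driver node is already running and publishing standard LaserScan messages.

```python
# Minimal rclpy node: subscribes to the lidar's /scan topic and logs the
# distance to the closest obstacle. Illustrates the node/topic concepts above.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanListener(Node):
    def __init__(self):
        super().__init__('scan_listener')  # node name
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg: LaserScan):
        # msg.ranges holds one distance reading per laser beam
        valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        if valid:
            self.get_logger().info(f'Closest obstacle: {min(valid):.2f} m')


def main():
    rclpy.init()
    rclpy.spin(ScanListener())


if __name__ == '__main__':
    main()
```

The same pattern (create a node, declare a subscription or publisher, spin) underlies every node used in the rest of this guide.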
Part 3: Core Practice – Step-by-Step Implementation of Autonomous Features
Now, let’s move to the most exciting part: giving the car its intelligence.
Step 1: Let the Car "Perceive the World" – Sensor Drivers and Data Visualization
First, we start the driver nodes for the lidar and depth camera from the remote terminal. The sensors are now active, and data is flowing internally. To actually "see" this data, we launch ROS2’s powerful 3D visualization tool, RVIZ2, on our computer. With simple configuration, you can watch a real-time point cloud from the lidar and the color feed from the camera appear in the RVIZ2 window. This step verifies the "eyes" are working—the foundation for all that follows.
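If you prefer a scripted check before opening RVIZ2, the small node below waits until it has heard from both sensors. The topic names are assumptions for this sketch; list the real ones on your image with `ros2 topic list`.

```python
# Sanity check: confirm that both sensor streams are publishing.
# Topic names are assumptions for this sketch (verify with `ros2 topic list`):
#   /scan                    - LaserScan from the lidar driver
#   /camera/color/image_raw  - color Image from the depth camera driver
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan, Image


class SensorCheck(Node):
    def __init__(self):
        super().__init__('sensor_check')
        self.seen = set()
        self.create_subscription(LaserScan, '/scan',
                                 lambda m: self._mark('lidar'), 10)
        self.create_subscription(Image, '/camera/color/image_raw',
                                 lambda m: self._mark('camera'), 10)

    def _mark(self, name):
        if name not in self.seen:
            self.seen.add(name)
            self.get_logger().info(f'{name} is publishing')


def main():
    rclpy.init()
    node = SensorCheck()
    while rclpy.ok() and len(node.seen) < 2:
        rclpy.spin_once(node, timeout_sec=1.0)
    if len(node.seen) == 2:
        node.get_logger().info('Both sensors verified - ready for RVIZ2.')
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```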
Step 2: Draw a Map – Lidar-Based SLAM Mapping
This is the process of teaching the car its environment. We launch the SLAM mapping package (e.g., slam_toolbox), which begins subscribing to the /scan lidar data and odometry. Now, you use a remote control to slowly drive the car around the area you want it to navigate autonomously (like a living room or hallway). In RVIZ2, you’ll see a clear 2D grid map being drawn in real-time—white for free space, black for walls. Once mapping is complete, we call a "save map" service to store this map as the car’s "memory blueprint" for future navigation.
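For reference, saving the map can also be done programmatically. The sketch below assumes the mapping node is slam_toolbox and that it exposes its standard save_map service; if your image uses a different package, adapt the service name and type (or use the map-saver tool that ships with Nav2 instead).

```python
# Calling a "save map" service from Python. Assumption: slam_toolbox is the
# mapping node and offers /slam_toolbox/save_map (type slam_toolbox/srv/SaveMap).
import rclpy
from rclpy.node import Node
from slam_toolbox.srv import SaveMap


def main():
    rclpy.init()
    node = Node('map_saver_client')
    client = node.create_client(SaveMap, '/slam_toolbox/save_map')
    if not client.wait_for_service(timeout_sec=5.0):
        node.get_logger().error('save_map service not available')
        return
    request = SaveMap.Request()
    request.name.data = 'my_living_room'  # map files are written under this name
    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info('Map saved.' if future.result() else 'Save failed.')
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```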
Step 3: Load the Map into the Car’s "Brain" – Autonomous Navigation
Now, we enter the navigation phase. We start the Navigation2 stack and load the saved map as the "global map." Navigation2 is complex, but MentorPi handles most configuration. In RVIZ2, you simply click on the map to set a goal point for the car. Watch the magic: the car immediately calculates an optimal path (a green curve) from its current location to the goal and starts moving. It uses real-time lidar data to create a "local costmap" around the global path, allowing it to avoid unmarked, temporary obstacles. Watching the car cruise smoothly to its destination is your first real taste of autonomous technology in action.
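Goals don’t have to come from RVIZ2 clicks. The sketch below uses the nav2_simple_commander helper that ships with Navigation2 (ROS2 Humble) to send a goal pose from Python; the coordinates are placeholders you would read off your own saved map.

```python
# Sending a navigation goal programmatically via nav2_simple_commander.
# The goal coordinates are placeholders - read real ones off your saved map.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult


def make_pose(navigator, x, y):
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = navigator.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = 1.0  # facing along +x
    return pose


def main():
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()            # wait for the Nav2 stack to come up
    navigator.goToPose(make_pose(navigator, 2.0, 1.0))
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()     # distance remaining, ETA, etc.
    if navigator.getResult() == TaskResult.SUCCEEDED:
        print('Goal reached!')
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```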
Step 4: Add a "Wise Eye" – Vision-Based Traffic Sign Recognition
Just mapping and obstacle avoidance isn’t "smart" enough. We start the pre-configured visual recognition node, which uses a YOLOv5 model to analyze the camera feed. The model has been pre-trained on images of signs like "STOP." When the car sees such a sign during autonomous navigation, the vision node immediately publishes a message to a ROS topic, something like detected_traffic_sign: stop. We then create a simple decision node that subscribes to this topic. Upon receiving "stop," it sends a command to the navigation system to cancel the current goal or set speed to zero. Thus, a basic traffic rule—"stop at a stop sign"—is bestowed upon the car. You can extend this logic for "speed limit," "turn," and other signs.
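A minimal version of such a decision node might look like the sketch below. The detection topic name and message format are assumptions here; match them to whatever the pre-configured vision node on your image actually publishes.

```python
# Minimal "decision node" sketch for the stop-sign rule.
# Assumption: the vision node publishes sign labels as std_msgs/String
# on /detected_traffic_sign, and the chassis listens on /cmd_vel.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String
from geometry_msgs.msg import Twist


class StopSignDecision(Node):
    def __init__(self):
        super().__init__('stop_sign_decision')
        self.create_subscription(String, '/detected_traffic_sign',
                                 self.on_sign, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def on_sign(self, msg: String):
        if msg.data == 'stop':
            self.get_logger().warn('STOP sign detected - halting the car')
            self.cmd_pub.publish(Twist())  # all-zero twist = stop
            # A fuller version would also cancel the active Nav2 goal here.


def main():
    rclpy.init()
    rclpy.spin(StopSignDecision())


if __name__ == '__main__':
    main()
```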
Part 4: Going Further – Creating a More Intelligent Autonomous Experience
Completing the above steps gives you a powerful autonomous prototype. How to make it smarter? The key is system integration.
Imagine a complex mission: "Navigate autonomously from the living room (Point A) to the kitchen (Point B), but if a ‘Do Not Enter’ sign is detected in the hallway, abandon the original route and return to the living room."
This requires deep integration between the navigation and vision modules. Upon detecting the sign, the vision node must trigger more complex decision logic. A decision node, aware of the car’s current mission state, would then request the navigation system to set a new goal point back to the living room. The entire system collaborates through ROS topics and services, forming a simple state machine— an "intelligent decision system."
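Sketched in code, that state machine can stay surprisingly small. The example below reuses nav2_simple_commander and assumes the vision node publishes sign labels on a /detected_traffic_sign topic; the room coordinates are placeholders for points on your own map.

```python
# Sketch of the mission state machine described above, built on
# nav2_simple_commander. Coordinates and the detection topic are placeholders.
import rclpy
from std_msgs.msg import String
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator


def pose(nav, x, y):
    p = PoseStamped()
    p.header.frame_id = 'map'
    p.header.stamp = nav.get_clock().now().to_msg()
    p.pose.position.x, p.pose.position.y = x, y
    p.pose.orientation.w = 1.0
    return p


def main():
    rclpy.init()
    nav = BasicNavigator()
    nav.waitUntilNav2Active()

    state = {'sign': None}
    nav.create_subscription(String, '/detected_traffic_sign',
                            lambda m: state.update(sign=m.data), 10)

    living_room, kitchen = (0.0, 0.0), (3.5, 2.0)  # placeholder map coordinates
    nav.goToPose(pose(nav, *kitchen))              # state: GOING_TO_KITCHEN
    while not nav.isTaskComplete():
        if state['sign'] == 'do_not_enter':        # transition: ABORT_AND_RETURN
            nav.cancelTask()
            nav.goToPose(pose(nav, *living_room))
            state['sign'] = None
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```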
You can go further by using the built-in ChatGPT voice module for "voice-command navigation." Saying "Go to the study" could trigger the module to look up the mapped coordinates for "study" and call the navigation service to go there. At this point, you have a robot project integrating environmental perception, intelligent decision-making, and natural interaction.
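A hypothetical sketch of that idea: a lookup table from room names to saved map coordinates, fed by whatever topic the voice module publishes recognized commands on (the /voice_command topic name and the coordinates below are assumptions for illustration).

```python
# Voice-command navigation sketch: map a recognized room name to saved map
# coordinates and hand the goal to Nav2. Topic name and coordinates are
# assumptions - adapt them to your voice module and your map.
import rclpy
from std_msgs.msg import String
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

ROOMS = {'living room': (0.0, 0.0), 'kitchen': (3.5, 2.0), 'study': (5.0, -1.0)}


def main():
    rclpy.init()
    nav = BasicNavigator()
    nav.waitUntilNav2Active()

    pending = {'room': None}
    nav.create_subscription(
        String, '/voice_command',
        lambda m: pending.update(room=m.data.lower().strip()), 10)

    while rclpy.ok():
        rclpy.spin_once(nav, timeout_sec=0.2)      # process incoming commands
        room = pending['room']
        if room in ROOMS:
            pending['room'] = None
            goal = PoseStamped()
            goal.header.frame_id = 'map'
            goal.header.stamp = nav.get_clock().now().to_msg()
            goal.pose.position.x, goal.pose.position.y = ROOMS[room]
            goal.pose.orientation.w = 1.0
            nav.goToPose(goal)                     # "Go to the study" -> drive there


if __name__ == '__main__':
    main()
```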
Conclusion
Through this project, you’ve completed a practical chain of core robotics technologies: from basic ROS2 operations and sensor data acquisition to SLAM mapping, Navigation2 autonomous navigation, and computer vision integration. The value of the MentorPi M1 is that it packages these cutting-edge, industry-standard technologies into an accessible, ready-to-use experimentation platform.
More importantly, this is just the beginning. The open-source nature and modular design of the MentorPi M1 are your blank canvas. You can experiment with more advanced vision models (like YOLOv8), explore 3D SLAM, try multi-robot coordination, or add a robotic arm for mobile manipulation… the only limit is your imagination. We hope this guide serves as a solid stepping stone for your exploration. Now, power on your MentorPi M1 and start building your own world of intelligent mobility.