Sunday, 22 February 2026

Robbie 2.0: The Reinvention of a Home-Built Robot

A few months on, the project has evolved significantly.

The original wheel configuration ultimately proved unreliable. The system functioned, and integration with Home Assistant showed promise, but performance was inconsistent. ChatGPT was extremely helpful during this phase and accelerated development considerably. Over time, however, weaknesses became apparent: the wheel drive assemblies were prone to slipping, and I²C communication with the motor drivers was intermittently unstable. After several weeks of operation, the limitations of the older hardware and software were clear; much of it had not been designed for this application.

The introduction of Claude Code marked a turning point in the project. Development speed increased dramatically. We replaced the original wheels with Waveshare DDSM115 hub motors, which feature integrated rubber tyres and RS485 communication. This change required a complete rewrite of the motor driver software.
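Moving from I²C motor drivers to the DDSM115's RS485 bus is what forced the driver rewrite. The sketch below shows the general shape of the new approach in Python: build a fixed-length command frame, append a CRC-8/Maxim checksum, and write it through a USB-RS485 adapter with pyserial. The 10-byte frame layout, the 0x64 command byte, and the port name are illustrative assumptions, not the vendor protocol; the real byte layout comes from the Waveshare DDSM115 datasheet.

```python
# Illustrative RS485 velocity command for a hub motor.
# The frame layout and command byte are ASSUMED for illustration;
# consult the DDSM115 datasheet for the actual protocol.

try:
    import serial  # pyserial; only needed to talk to real hardware
except ImportError:
    serial = None


def crc8_maxim(data: bytes) -> int:
    """CRC-8/MAXIM (Dallas/1-Wire): poly 0x31 reflected, init 0x00."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8C if crc & 1 else crc >> 1
    return crc


def build_velocity_frame(motor_id: int, rpm: int) -> bytes:
    """Build a hypothetical 10-byte drive frame: ID, command, payload, CRC."""
    rpm = max(-330, min(330, rpm))  # clamp to the motor's rated speed range
    payload = rpm.to_bytes(2, "big", signed=True)
    body = bytes([motor_id, 0x64]) + payload + bytes(5)  # 0x64: assumed "drive" command
    return body + bytes([crc8_maxim(body)])


def send_frame(port: str, frame: bytes) -> None:
    """Write one frame to the RS485 adapter (e.g. a USB-RS485 dongle)."""
    if serial is None:
        raise RuntimeError("pyserial not installed")
    with serial.Serial(port, 115200, timeout=0.05) as bus:
        bus.write(frame)
```

A full driver would also read back each motor's status frame and verify its CRC before trusting the reported wheel velocity.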

The rewrite addressed several architectural weaknesses. The legacy serial data gateway was slow and cumbersome, particularly when handling dual serial ports. Using Claude Code, we restructured the system to:

  • Implement proper closed-loop control for the stepper systems

  • Read the IMU directly over I²C instead of through a secondary MCU

  • Use native serial interfaces rather than custom device abstractions

  • Eliminate unnecessary communication layers
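Reading the IMU directly over I²C is straightforward once the secondary MCU is out of the path. The post does not name the IMU, so the sketch below assumes an MPU-6050-class part; the address and register map are the MPU-6050's and would differ for other chips. It uses the smbus2 library on the robot, while the byte-decoding helpers are pure Python and hardware-free.

```python
# Direct I2C IMU read, with no secondary MCU in between.
# ASSUMPTION: an MPU-6050-class IMU; the register addresses below are
# the MPU-6050's and will differ for other parts.

try:
    from smbus2 import SMBus  # pip install smbus2; only needed on the robot
except ImportError:
    SMBus = None

IMU_ADDR = 0x68        # MPU-6050 default I2C address (AD0 pin low)
PWR_MGMT_1 = 0x6B      # power management register; write 0 to wake the chip
ACCEL_XOUT_H = 0x3B    # start of the accelerometer output registers


def to_int16(hi: int, lo: int) -> int:
    """Combine two register bytes into a signed 16-bit value."""
    value = (hi << 8) | lo
    return value - 0x10000 if value & 0x8000 else value


def decode_accel(raw: bytes):
    """Convert 6 raw bytes to g (at the +/-2 g range: 16384 LSB per g)."""
    return tuple(to_int16(raw[i], raw[i + 1]) / 16384.0 for i in (0, 2, 4))


def read_accel(bus_num: int = 1):
    """Burst-read the accelerometer from the IMU on /dev/i2c-<bus_num>."""
    if SMBus is None:
        raise RuntimeError("smbus2 not installed")
    with SMBus(bus_num) as bus:
        bus.write_byte_data(IMU_ADDR, PWR_MGMT_1, 0)  # wake from sleep
        raw = bus.read_i2c_block_data(IMU_ADDR, ACCEL_XOUT_H, 6)
        return decode_accel(bytes(raw))
```

A single burst read like this replaces an entire request/response round trip through the old gateway, which is where much of the latency came from.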

One of the most powerful aspects of using Claude Code was its ability to generate complete, production-ready programs from detailed specification files, while referencing similar implementations from related projects. This dramatically reduced iteration time.

As development progressed, driving and navigation testing exposed timing issues and logic flaws. Leveraging AI to generate targeted test code and analyze system behavior proved invaluable. The ability to propose architectural changes, even major ones, and receive complete, coherent code in real time reduced what would traditionally take months to a matter of days. Debugging navigation behavior in particular became far more efficient.

We also completely rewrote the voice recognition and text-to-speech subsystems. They are now hosted behind a web server, so commands can be issued either by spoken input or by typing into a browser.
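The post does not say which web framework hosts the command interface, so the sketch below uses only Python's standard library. The `/command` route, the dispatch table, and the command words are all hypothetical; the point is the shape of the design, where the speech recognizer and the browser both POST plain text to the same endpoint.

```python
# Minimal text-command endpoint: the browser (or the speech recognizer)
# POSTs JSON like {"text": "dock"} and gets a reply back.
# Route name and commands are ASSUMED; the real interface may differ.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical dispatch table mapping command words to actions.
COMMANDS = {
    "dock": lambda: "driving to charger",
    "stop": lambda: "motors halted",
}


def handle_command(text: str) -> str:
    """Route a command string to its action; pure, so it is easy to test."""
    action = COMMANDS.get(text.strip().lower())
    return action() if action else f"unknown command: {text!r}"


class CommandHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/command":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        text = json.loads(self.rfile.read(length))["text"]
        reply = json.dumps({"reply": handle_command(text)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CommandHandler).serve_forever()
```

Keeping the dispatch logic in a plain function means the same code path serves voice and text, and it can be unit-tested without starting a server.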

Current Status

The system is now significantly more robust and capable:

  • Reliable autonomous navigation

  • Automatic docking and charging

  • Arm control via MoveIt

  • Voice command integration

  • Local LLM support for general interaction

  • Facial recognition with the ability to enroll new users dynamically

  • Arm and base pose recording with sequence playback synchronized to voice
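The last capability, pose recording with playback synchronized to voice, can be sketched as a small data structure: each recorded step holds a joint pose, an optional phrase, and a dwell time. The joint names, speech hook, and timings below are hypothetical; on the real robot the `move` callback would go through MoveIt for the arm and the wheel drivers for the base.

```python
# Sketch of pose recording and voice-synchronized playback.
# Joint names and callbacks are HYPOTHETICAL illustrations; the real
# system drives the arm via MoveIt and the base via the hub motors.

import time
from dataclasses import dataclass, field


@dataclass
class Step:
    pose: dict            # joint name -> target position (radians)
    say: str = ""         # optional phrase to speak at this step
    hold: float = 0.0     # seconds to dwell after reaching the pose


@dataclass
class PoseSequence:
    steps: list = field(default_factory=list)

    def record(self, pose, say="", hold=0.0):
        """Snapshot the current pose (copied, so later edits don't leak in)."""
        self.steps.append(Step(dict(pose), say, hold))

    def play(self, move, speak, sleep=time.sleep):
        """Replay the sequence: move, speak the phrase if any, then dwell."""
        for step in self.steps:
            move(step.pose)
            if step.say:
                speak(step.say)
            sleep(step.hold)
```

Injecting `move`, `speak`, and `sleep` as callbacks keeps the sequencer testable off-robot: playback can be verified by logging the calls instead of moving hardware.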

Overall, the project has transitioned from a proof-of-concept system built on repurposed components into a far more stable, purpose-designed robotic platform. The acceleration in software development and fault diagnosis has been transformative, enabling rapid iteration and architectural refinement that would previously have been impractical.