Sunday, 22 February 2026

Robbie 2.0: The Reinvention of a Home-Built Robot

A few months on, the project has evolved significantly.

The original wheel configuration ultimately proved unreliable. While the system functioned and integration with Home Assistant showed promise, performance was inconsistent. The use of ChatGPT was extremely helpful during this phase and accelerated development considerably. However, over time, weaknesses became apparent. The wheel drive assemblies were prone to slipping, and I²C communication with the motor drivers was intermittently unstable. After several weeks of operation, the limitations of the older hardware and software became clear — much of it had not been designed specifically for this application.

The introduction of Claude Code marked a turning point in the project. Development speed increased dramatically. We replaced the original wheels with Waveshare DDSM115 hub motors, which feature integrated rubber tyres and RS485 communication. This change required a complete rewrite of the motor driver software.
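
Talking to the DDSM115s over RS485 is mostly a matter of getting the framing right. Below is a minimal Python sketch of a velocity command using pyserial; the 10-byte frame layout, the 0x64 drive command, and the CRC-8/MAXIM checksum follow my reading of Waveshare's protocol notes, so treat the byte offsets as assumptions and verify them against the official documentation before use.

```python
import struct
import serial  # pyserial

def crc8_maxim(data: bytes) -> int:
    """CRC-8/MAXIM (reflected poly 0x31), assumed to match the DDSM115 frames."""
    crc = 0
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8C if crc & 1 else crc >> 1
    return crc

def velocity_frame(motor_id: int, rpm: int) -> bytes:
    """Build a 10-byte drive frame: ID, command, int16 velocity, padding, CRC.
    Byte layout is an assumption based on Waveshare's published protocol."""
    body = bytes([motor_id, 0x64]) + struct.pack(">h", rpm) + bytes(5)
    return body + bytes([crc8_maxim(body)])

bus = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.05)
bus.write(velocity_frame(1, 50))   # motor 1, 50 rpm
status = bus.read(10)              # the motor replies with a 10-byte status frame
```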

The rewrite addressed several architectural weaknesses. The legacy serial data gateway was slow and cumbersome, particularly when handling dual serial ports. Using Claude Code, we restructured the system to:

  • Implement proper closed-loop control for the stepper systems

  • Read the IMU directly over I²C instead of through a secondary MCU (see the sketch after this list)

  • Use native serial interfaces rather than custom device abstractions

  • Eliminate unnecessary communication layers
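
Reading the IMU directly over I²C removed an entire serial hop. A minimal sketch of what that looks like with smbus2 is below; the bus number, device address (0x68), and register offsets are common MPU-6050-style defaults and are assumptions here, since they depend on the actual IMU fitted.

```python
from smbus2 import SMBus

IMU_ADDR = 0x68        # assumption: MPU-6050-style default address
ACCEL_XOUT_H = 0x3B    # assumption: first accel register, big-endian pairs
PWR_MGMT_1 = 0x6B

def read_word(bus: SMBus, reg: int) -> int:
    """Read a signed big-endian 16-bit value from two consecutive registers."""
    hi, lo = bus.read_i2c_block_data(IMU_ADDR, reg, 2)
    val = (hi << 8) | lo
    return val - 0x10000 if val & 0x8000 else val

with SMBus(1) as bus:                              # bus 1 on most SBCs
    bus.write_byte_data(IMU_ADDR, PWR_MGMT_1, 0)   # wake the device
    ax, ay, az = (read_word(bus, ACCEL_XOUT_H + 2 * i) for i in range(3))
    print(f"accel raw: {ax} {ay} {az}")
```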

One of the most powerful aspects of using Claude Code was its ability to generate complete, production-ready programs from detailed specification files, while referencing similar implementations from related projects. This dramatically reduced iteration time.

As development progressed, driving and navigation testing exposed timing issues and logic flaws. Leveraging AI to generate targeted test code and analyze system behavior proved invaluable. The ability to propose architectural changes — even major ones — and receive complete, coherent code in real time reduced what would traditionally take months into a matter of days. Debugging navigation behavior in particular became far more efficient.

We also completely rewrote the voice recognition and text-to-speech subsystems. These are now hosted via a web server interface, allowing commands to be issued either by spoken input or via text through a browser interface.
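
The browser side of this is just a small HTTP endpoint sitting in front of the same command pipeline the speech path uses. A minimal sketch with Flask is below; the route name and the publish_command() hook are hypothetical stand-ins for however commands actually reach the robot.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def publish_command(text: str) -> str:
    """Hypothetical hook: forward the command to the robot (e.g. via MQTT/ROS)."""
    print(f"dispatching: {text}")
    return "ok"

@app.route("/command", methods=["POST"])
def command():
    # Accept {"text": "go to the kitchen"} from the browser UI
    text = (request.get_json(silent=True) or {}).get("text", "").strip()
    if not text:
        return jsonify(error="empty command"), 400
    return jsonify(status=publish_command(text))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```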

Current Status

The system is now significantly more robust and capable:

  • Reliable autonomous navigation

  • Automatic docking and charging

  • Arm control via MoveIt

  • Voice command integration

  • Local LLM support for general interaction

  • Facial recognition with the ability to enroll new users dynamically

  • Arm and base pose recording with sequence playback synchronized to voice

Overall, the project has transitioned from a proof-of-concept system built on repurposed components into a far more stable, purpose-designed robotic platform. The acceleration in software development and fault diagnosis has been transformative, enabling rapid iteration and architectural refinement that would previously have been impractical.

Saturday, 19 July 2025

Phase 2 Update: Integrating Home Assistant and Wyoming Satellite


As we kick off Phase 2, we’re moving away from our custom speech‑to‑text solution and leaning on Home Assistant as Robbie’s primary voice interface. By piping audio through the Wyoming satellite link and feeding commands into MQTT, we not only gain rock‑solid STT accuracy but also unlock powerful automation hooks:

  • Battery Alerts: Whenever Robbie’s battery dips below a threshold, Home Assistant will flash LEDs and trigger a voice warning over the satellite channel.

  • Natural‑Language Commands: Spoken instructions matching our custom “command list” get forwarded to ROS as action goals. Anything else is treated as a query and sent to our LLM, using conversational context for sharper, more reliable responses (routing sketched after this list).

  • Seamless Feedback Loop: All robot responses—whether status updates or LLM-generated replies—are spoken back via Home Assistant’s text‑to‑speech and broadcast over Wyoming.
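
The command/query split is a simple string match on the MQTT side. Here is a rough Python sketch with paho-mqtt; the topic names and the command list are placeholders for illustration, not the actual Home Assistant topics.

```python
import paho.mqtt.client as mqtt

COMMANDS = {"go to the kitchen", "dock", "follow me"}   # sample command list

def on_message(client, userdata, msg):
    text = msg.payload.decode().strip().lower()
    if text in COMMANDS:
        client.publish("robbie/ros/goal", text)    # placeholder topic: ROS action goal
    else:
        client.publish("robbie/llm/query", text)   # placeholder topic: hand off to the LLM

client = mqtt.Client()            # paho-mqtt 1.x style; 2.x needs a CallbackAPIVersion arg
client.on_message = on_message
client.connect("homeassistant.local", 1883)
client.subscribe("robbie/stt/text")  # placeholder: transcribed speech from Wyoming/HA
client.loop_forever()
```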

With Home Assistant automations handling the heavy lifting, we can rapidly script new behaviors and integrate sensors, without reinventing the wheel on voice processing. Phase 2 promises a smarter, more conversational Robbie—stay tuned!

 

Next up: adding vision.


Sunday, 20 April 2025

Robbie Phase 1 complete

 



Robbie has successfully completed Phase 1, and we now have a fully operational robot capable of performing a wide range of core functions. Key features include:

  • Autonomous navigation to any given location

  • SLAM (Simultaneous Localization and Mapping) for real-time environment mapping

  • Automated docking with the charging station

  • Executing navigation sequences, moving between multiple waypoints

  • Full MoveIt integration for both arms

  • Arm and head coordination, enabling the arms to reach a target while the head tracks it

  • YOLOv5-based object detection

  • Howie chatbot serving as the onboard AI for conversational interaction and control

  • Custom GUI for arm manipulation, wheel status monitoring, sending navigation commands, and triggering preset arm poses

The robot's arms are based on the Aster robot design, which itself is derived from the Poppy platform. They are well-proportioned for the chassis and just clear the deck during motion. The first two joints use 40kg servos, which remain surprisingly cool during operation. We've implemented the ServoEasing Arduino library to smooth out the movements—greatly reducing the initial jerkiness and overall shakiness of the robot. As confidence in the arm kinematics grows, we can safely increase movement speed to produce a more natural and lifelike appearance.
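
The core of what ServoEasing buys us is interpolating servo targets along a smooth curve rather than jumping straight to the goal. A small Python illustration of cubic ease-in-out between two joint angles is below; the real firmware uses the ServoEasing C++ library on the Arduino, so this is just the shape of the idea.

```python
def ease_cubic_in_out(t: float) -> float:
    """Map linear progress t in [0, 1] onto a cubic ease-in-out curve."""
    return 4 * t ** 3 if t < 0.5 else 1 - ((-2 * t + 2) ** 3) / 2

def interpolate(start_deg: float, end_deg: float, steps: int):
    """Yield smoothed servo angles from start to end."""
    for i in range(steps + 1):
        t = i / steps
        yield start_deg + (end_deg - start_deg) * ease_cubic_in_out(t)

# Sweep a joint from 30 to 120 degrees in 20 smoothed steps
for angle in interpolate(30, 120, 20):
    print(f"{angle:6.2f}")
```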

The arms are resin printed, which gives them a clean aesthetic, though long-term durability is still under evaluation. The grippers are currently more placeholders than practical, serving primarily for visual and alignment purposes. Proper pick-and-place performance will likely require a redesign of the grippers—and possibly the entire arm assembly. Once the outer covers are added, we may encounter thermal challenges, but with winter approaching, overheating shouldn't be a major concern. On the plus side, the covers will help keep out dust and foreign object debris (FOD).


Phase 2 Plans

Looking ahead, Phase 2 will focus on increasing autonomy, interactivity, and dexterity. Planned milestones include:

  • Routine autonomous behaviors – Robbie will regularly roam the house, perform idle animations like fidgeting or muttering, and initiate casual conversation using Markov chains for randomized speech patterns (a toy sketch follows this list)

  • Enhanced pick-and-place capabilities – introducing true object grasping, lifting, and positioning using updated grippers and motion logic

  • Face detection, tracking, and recognition – allowing Robbie to greet known individuals by name, follow faces during conversation, and log new encounters

  • Photo capture missions – patrolling the house while using YOLO to detect interesting objects and take photos automatically
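
For the muttering behaviour, a word-level Markov chain is enough: sample the next word from whatever followed the current pair of words in a small corpus. A toy sketch of the technique (not Robbie's actual code) is below.

```python
import random
from collections import defaultdict

def build_chain(corpus: str, order: int = 2) -> dict:
    """Map each tuple of `order` consecutive words to the words that followed it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def mutter(chain: dict, max_words: int = 12) -> str:
    """Start from a random state and walk the chain."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(max_words):
        followers = chain.get(tuple(out[-2:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain("the robot rolls down the hall and the robot hums a tune")
print(mutter(chain))
```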



Tuesday, 25 March 2025

March updates

Not a good couple of weeks for the robot. The torso was too unstable and the robot was in danger of tipping over. Then the main computer failed; long story short, I replaced it with a new i7 main computer, which works much better (the Wi-Fi is only WiFi 5, but it still works). The battery charger started to fail, resulting in a shorter run time; the new charger gives a 2-hour run time now. Navigation is starting to head in the right direction. We still get stuck in tight spots, but we have achieved a few waypoint navs.

A few thoughts

  • Charging system needs beefing up; we are losing power in the wiring
  • Add in the 3rd battery for an extra 18 Ah
  • Start navigation tuning with a new map
  • See how a boost charge affects run time

Using an LLM with VS Code has made tuning the robot easier.

Tuesday, 21 January 2025

networks and bugs

 

For the last couple of months I have been chasing bugs. First, the network was slow, which made the robot uncontrollable. I switched out the i5 for an RPi 5, but had to add a UPS to give the RPi enough power. That was better, but not enough, so I switched the middleware to Zenoh; that required an upgrade to ROS 2 Jazzy and Ubuntu 24.04. It soon became apparent that the RPi 5 didn't have the speed to run the complete stack, not even enough to run the robot and cameras. I replaced the Wi-Fi in the i5, swapping the USB dongle for a PCI Express Wi-Fi card, so we ran the network at the same speed as the RPi, but I was still getting packet drop. I added an extra Wi-Fi access point tied to the wired network, and the i5 went from 120 kbit/s to 300 Mbit/s, a big improvement. We can now run the robot with everything required running, with very little lag.

The next task was getting auto dock working to dock in reverse. In the end it was simpler just to write my own node (a rough sketch of the idea is below).

With all the systems running on the robot I started having battery fails. We replaced the battery with the one from the other robot, but that didn't last very long. A bit more digging showed the robot draws 4 amps, so for the moment we changed the way we operate by docking the robot between tasks.

That only left navigation to sort out. After a few days of tuning and reading, we adjusted the global costmap inflation to give smooth pathways and the velocity smoother to keep velocities that work with the robot. We can navigate around the house and it will work most of the time; more tuning is still required, but that is for another day. We also started to add the covers on the robot. Now that we can navigate through doorways, it's time to put on the torso.
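
The custom reverse-dock node doesn't need much: drive backwards slowly until some dock-detected signal fires. A stripped-down rclpy sketch is below; the topic names and the charger-contact check are placeholders, since the real node keys off Robbie's own sensors.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from std_msgs.msg import Bool

class ReverseDock(Node):
    """Back up slowly until the dock-contact signal goes high (placeholder topic)."""

    def __init__(self):
        super().__init__("reverse_dock")
        self.cmd_pub = self.create_publisher(Twist, "cmd_vel", 10)
        self.create_subscription(Bool, "dock_contact", self.on_contact, 10)
        self.timer = self.create_timer(0.1, self.step)  # 10 Hz control loop
        self.docked = False

    def on_contact(self, msg: Bool):
        self.docked = msg.data

    def step(self):
        cmd = Twist()
        if not self.docked:
            cmd.linear.x = -0.05   # creep backwards at 5 cm/s
        self.cmd_pub.publish(cmd)  # a zero Twist holds the robot once docked

def main():
    rclpy.init()
    rclpy.spin(ReverseDock())

if __name__ == "__main__":
    main()
```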


Friday, 1 November 2024

Robbie now with torso


I wanted to test the navigation system with the whole robot. I'm looking at the acceleration profiles, how the robot handles doorways and corridors, and just driving in general.





Wednesday, 30 October 2024

Docking Server

Here is a clip showing the Nav2 docking server. Robbie is docking forward for now so we can work on the other systems. I have mapped the un/dock sequence to the joystick; this seems to make a good, natural workflow. The undock should be called from the BT server before the start of navigation.
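
For reference, triggering the docking server from a standalone script is just another action call. A rough rclpy sketch is below; the DockRobot action and its field names are taken from recent Nav2 releases, so check the message definition for your version before relying on them, and the dock name is a placeholder.

```python
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import DockRobot  # present in recent Nav2; verify for your release

def main():
    rclpy.init()
    node = Node("dock_client")
    client = ActionClient(node, DockRobot, "dock_robot")
    client.wait_for_server()

    goal = DockRobot.Goal()
    goal.use_dock_id = True      # field names assumed from the Jazzy-era action definition
    goal.dock_id = "home_dock"   # placeholder dock name from the dock database
    future = client.send_goal_async(goal)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info("dock goal sent")

if __name__ == "__main__":
    main()
```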