Sunday, 22 March 2026

Update, 22 March: new auto updates

In an effort to keep track of changes in Robbie's code, we will post a summary of activity and modifications here.

 

 

March 2026: Swerve Drive Gets Smarter

The swerve drive looked right on paper but had an annoying judder in practice: wheels would oscillate back and forth when trying to hold a near-zero velocity. The root cause was a missing hysteresis band around the wheel flip decision. When a wheel needs to go, say, 91° from its current position, it can either rotate 91° or flip 180° and go 89° the other way — both are mechanically equivalent. Without a proper dead-band, the controller would keep flip-flopping between the two options near the boundary. Adding a 45°/90° hysteresis band eliminated the oscillation.
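As a sketch of the idea (Python, with hypothetical names; the real logic lives in the swerve firmware), the flip decision with a 45°/90° hysteresis band looks roughly like this:

```python
import math

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def choose_flip(error, was_flipped,
                flip_on=math.radians(90), flip_off=math.radians(45)):
    """Hysteresis band for the wheel-flip decision.

    error: |steering angle| the module would need without flipping.
    The flip engages only above flip_on (90 deg) and releases only below
    flip_off (45 deg); in between, the previous choice is kept, which is
    what kills the chatter near the boundary.
    """
    if error > flip_on:
        return True
    if error < flip_off:
        return False
    return was_flipped  # dead-band: hold the last decision

def steering_command(target, current, was_flipped):
    """Return (steering delta, flipped) for one swerve module."""
    err = wrap(target - current)
    flipped = choose_flip(abs(err), was_flipped)
    if flipped:
        err = wrap(err + math.pi)  # steer to opposite heading, reverse drive
    return err, flipped
```

For the 91° case from above, the module flips and steers 89° the other way; between 45° and 90° of error it simply keeps whatever it chose last, so nothing oscillates.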

Two other quality-of-life improvements went in at the same time: a hardware steering limit (MAX_STEERING_RAD) that makes the wheel flip rather than clamp when it hits the physical range, and first-order velocity smoothing (MOTOR_RAMP_TAU) to take the edge off acceleration. The robot moves noticeably more smoothly as a result.
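First-order smoothing of this kind is just an exponential lag toward the target. A minimal Python sketch of what MOTOR_RAMP_TAU does (the class name and discrete form are assumptions, not the firmware's actual code):

```python
import math

class FirstOrderSmoother:
    """First-order (exponential) velocity smoothing: the commanded
    velocity chases the target with time constant tau, so a step
    change in the setpoint becomes a gentle ramp."""

    def __init__(self, tau, initial=0.0):
        self.tau = tau
        self.value = initial

    def update(self, target, dt):
        # Exact discretisation of dv/dt = (target - v) / tau over step dt.
        alpha = 1.0 - math.exp(-dt / self.tau)
        self.value += alpha * (target - self.value)
        return self.value
```

A larger tau gives a softer ramp at the cost of slower response; after one time constant the output has covered about 63% of the step.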

Tunable Firmware Over Serial

One thing worth highlighting: we added a maxrate parameter to the ESP32 steering firmware. Previously, adjusting the maximum stepper speed meant changing a constant, recompiling, and reflashing — a five-minute interruption that breaks your testing flow. Now you send a single serial command:

echo -e "S 800 200 1000\n" > /dev/esp

That sets max rate to 800 steps/s, stall detection timeout to 200ms, and watchdog to 1000ms. The setting is saved to NVS flash and survives a power cycle. To query the current config:

echo -e "q\n" > /dev/esp

This sounds like a small thing but makes a real difference when tuning — try a value, drive the robot, adjust, repeat without stopping anything.
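The command grammar is tiny. Here is a Python stand-in for the firmware-side parser (the field names are assumptions based on the description above; the real parser is C++ on the ESP32):

```python
def parse_command(line):
    """Parse one line of the serial protocol:

        S <maxrate> <stall_ms> <watchdog_ms>  -- set and persist config
        q                                      -- query current config
    """
    parts = line.strip().split()
    if not parts:
        return None
    if parts[0] == "S" and len(parts) == 4:
        maxrate, stall_ms, watchdog_ms = (int(p) for p in parts[1:])
        return {"cmd": "set", "maxrate": maxrate,
                "stall_ms": stall_ms, "watchdog_ms": watchdog_ms}
    if parts[0] == "q":
        return {"cmd": "query"}
    return {"cmd": "error", "raw": line.strip()}
```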

Splitting the Arm's Hardware Interface

The SCARA arm had a different kind of conflict. The arm's vertical axis (joint 0) is a stepper motor driven by a separate ESP32-based controller, while all the other joints use Feetech/ST serial servos. Originally both were lumped into a single ros2_control hardware block, which isn't how ros2_control is designed to work — different hardware interfaces need separate blocks. Splitting them into _j0 (stepper, via stepper_ros2_driver) and _servos (Feetech, via feetech_ros2_driver) cleaned this up properly.

We also added a use_mock_hardware:=true xacro argument so the arm can be launched in simulation mode without any physical hardware attached — useful for testing MoveIt2 motion planning on the desk before trusting the real arm.


March 2026: Teaching the Robot to See Low Obstacles

This is the big one from this week, and the one that required the most debugging.

The Problem: The Lidar Is Blind Below 390mm

Robbie's RPLidar is mounted at 390mm above the floor. Its beam is horizontal, so it simply cannot detect anything shorter than that — a 305mm drinks can sitting on the floor is completely invisible to it from a navigation standpoint. This isn't a configuration issue; it's physics.

The RealSense D435 at the front of the robot is mounted at 300mm and has a 57° vertical field of view, so it does see low obstacles clearly. The question was getting that data into the Nav2 costmap properly.
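A quick back-of-envelope check of that geometry (Python; this assumes the camera's optical axis is level, which is a simplification):

```python
import math

h = 0.300                                # D435 mount height above floor, m
vfov = math.radians(57)                  # vertical field of view from the text
nearest_floor = h / math.tan(vfov / 2)   # floor first visible ~0.55 m out

# A 305 mm can reaches roughly the camera's own height, so its body sits
# well inside the vertical FOV at any useful approach distance.
```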

Ghost Obstacles: A 3D Raycasting Problem

We switched the local costmap from a 2D ObstacleLayer to a VoxelLayer, which can ingest 3D pointcloud data and mark obstacles at the correct height. The obstacle marking worked fine — slide a box in front of the camera and it appeared in the costmap immediately. The problem was clearing: when you removed the box, the mark stayed. Move someone's foot through the space and the footprint remained permanently. The robot would start navigating around nothing.

The cause was subtle. The VoxelLayer stores a 3D grid of voxels (cubes). We had it configured with 16 height slices of 5cm each, covering 0–80cm. The lidar clears free space by raycasting — sweeping a beam horizontally at 390mm. That beam only intersects voxel slice index 7 (0.35–0.40m). Obstacle points from the RealSense were marking slices 2 through 6 (0.10–0.30m, the height of a typical low obstacle). The lidar's clearing rays passed right over those slices and never touched them. Once marked, they stayed forever.
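The mismatch is easy to see with a little index arithmetic (Python sketch; the binning mirrors how the voxel grid slices height, not Nav2's exact code):

```python
def slice_index(z_mm, slice_mm=50):
    """Which 5 cm voxel slice (0-15, covering 0-800 mm) a point lands in."""
    return z_mm // slice_mm

lidar_slice = slice_index(390)                   # the only slice the beam clears
obstacle_slices = {slice_index(z) for z in range(100, 301, 50)}  # low-obstacle band
```

The lidar's clearing ray only ever visits slice 7; the camera's obstacle marks land in slices 2 through 6, which no clearing ray ever touches.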

The fix is almost embarrassingly simple in hindsight: collapse all 16 slices into one single 80cm-tall voxel.

z_voxels: 1
z_resolution: 0.8   # one voxel covering 0–0.8m

Now the lidar ray at any height within 0–80cm intersects the single voxel and clears the whole column. The 3D height discrimination is gone, but for a local costmap whose only job is to answer "is there something in the way", that trade-off is fine.

The Wrong Topic Name

While digging through the config we found that the pointcloud topic in nav2_params.yaml was set to /front_depth_camera/depth/color/points but the RealSense launch file sets camera_name: front_camera, making the actual published topic /front_camera/depth/color/points. The costmap had been silently failing to subscribe to any pointcloud data at all — which explains why obstacles weren't appearing in the first place. A frustrating but instructive bug: always verify what topics are actually being published before assuming your configuration is working.


Fixing the Crab-Walking

With the costmap now working, a different problem became obvious: the robot was navigating like a crab.

On a route that required turning — go forward, turn left, continue — the robot was cutting diagonally across the corner rather than following the path and rotating to face the next leg. Because the RealSense only covers the forward 86°, a robot strafing sideways is effectively navigation-blind in its actual direction of travel. The voxel layer it had just been given was immediately undermined by the robot pointing the camera the wrong way.

This came down to two settings in the MPPI (Model Predictive Path Integral) controller:

  1. PathAngleCritic.max_angle_to_furthest: 1.0 — this critic only fires when the robot's heading is more than 1.0 radian (57°) off the path direction. So on a 45° diagonal approach the critic was completely silent, and the robot was free to strafe without penalty. Dropping this to 0.4 radians (23°) means the critic activates much earlier and pushes the robot to align with the path.
  2. PathAlignCritic.use_path_orientations: false — the path planner knows what direction the robot should be facing at every point along the path. With this set to false, MPPI was ignoring that information entirely — just trying to stay near the path line without caring about heading. Flipping it to true gives the controller the directional information it needs.

We also raised PathAngleCritic.cost_weight from 5.0 to 12.0 to make it the dominant force when the robot is heading the wrong way.
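The gating behaviour of the first critic can be sketched in a few lines (illustrative Python, not Nav2's actual MPPI implementation): below the activation threshold the critic contributes nothing, above it the cost grows with heading error.

```python
import math

def path_angle_cost(heading_error, max_angle_to_furthest, cost_weight):
    """Gated angle critic: silent below the threshold, then cost
    proportional to heading error, pushing trajectories to align
    with the path direction."""
    err = abs(heading_error)
    if err <= max_angle_to_furthest:
        return 0.0
    return cost_weight * err

diagonal = math.radians(45)  # the crab-walk approach angle described above
old_cost = path_angle_cost(diagonal, max_angle_to_furthest=1.0, cost_weight=5.0)
new_cost = path_angle_cost(diagonal, max_angle_to_furthest=0.4, cost_weight=12.0)
```

With the old settings a 45° diagonal produces zero penalty; with the new ones the critic fires and the controller pays for misalignment.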

Sunday, 22 February 2026

Robbie 2.0: The Reinvention of a Home-Built Robot

A few months on, the project has evolved significantly.

The original wheel configuration ultimately proved unreliable. While the system functioned and integration with Home Assistant showed promise, performance was inconsistent. The use of ChatGPT was extremely helpful during this phase and accelerated development considerably. However, over time, weaknesses became apparent. The wheel drive assemblies were prone to slipping, and I²C communication with the motor drivers was intermittently unstable. After several weeks of operation, the limitations of the older hardware and software became clear — much of it had not been designed specifically for this application.

The introduction of Claude Code marked a turning point in the project. Development speed increased dramatically. We replaced the original wheels with Waveshare DDSM115 hub motors, which feature integrated rubber tyres and RS485 communication. This change required a complete rewrite of the motor driver software.

The rewrite addressed several architectural weaknesses. The legacy serial data gateway was slow and cumbersome, particularly when handling dual serial ports. Using Claude Code, we restructured the system to:

  • Implement proper closed-loop control for the stepper systems

  • Read the IMU directly over I²C instead of through a secondary MCU

  • Use native serial interfaces rather than custom device abstractions

  • Eliminate unnecessary communication layers

One of the most powerful aspects of using Claude Code was its ability to generate complete, production-ready programs from detailed specification files, while referencing similar implementations from related projects. This dramatically reduced iteration time.

As development progressed, driving and navigation testing exposed timing issues and logic flaws. Leveraging AI to generate targeted test code and analyze system behavior proved invaluable. The ability to propose architectural changes — even major ones — and receive complete, coherent code in real time reduced what would traditionally take months into a matter of days. Debugging navigation behavior in particular became far more efficient.

We also completely rewrote the voice recognition and text-to-speech subsystems. These are now hosted via a web server interface, allowing commands to be issued either by spoken input or via text through a browser interface.

Current Status

The system is now significantly more robust and capable:

  • Reliable autonomous navigation

  • Automatic docking and charging

  • Arm control via MoveIt

  • Voice command integration

  • Local LLM support for general interaction

  • Facial recognition with the ability to enroll new users dynamically

  • Arm and base pose recording with sequence playback synchronized to voice

Overall, the project has transitioned from a proof-of-concept system built on repurposed components into a far more stable, purpose-designed robotic platform. The acceleration in software development and fault diagnosis has been transformative, enabling rapid iteration and architectural refinement that would previously have been impractical.

Saturday, 19 July 2025

Phase 2 Update: Integrating Home Assistant and Wyoming Satellite


As we kick off Phase 2, we’re moving away from our custom speech‑to‑text solution and leaning on Home Assistant as Robbie’s primary voice interface. By piping audio through the Wyoming satellite link and feeding commands into MQTT, we not only gain rock‑solid STT accuracy but also unlock powerful automation hooks:

  • Battery Alerts: Whenever Robbie’s battery dips below a threshold, Home Assistant will flash LEDs and trigger a voice warning over the satellite channel.

  • Natural‑Language Commands: Spoken instructions matching our custom “command list” get forwarded to ROS as action goals. Anything else is treated as a query and sent to our LLM, using conversational context for sharper, more reliable responses.

  • Seamless Feedback Loop: All robot responses—whether status updates or LLM-generated replies—are spoken back via Home Assistant’s text‑to‑speech and broadcast over Wyoming.

With Home Assistant automations handling the heavy lifting, we can rapidly script new behaviors and integrate sensors, without reinventing the wheel on voice processing. Phase 2 promises a smarter, more conversational Robbie—stay tuned!
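The routing rule in the second bullet boils down to a tiny pure function. A Python sketch (the command names here are made up for illustration; the real list lives in the Home Assistant configuration):

```python
# Hypothetical command list for illustration only.
COMMAND_LIST = {"go to the kitchen", "dock", "stop"}

def route_utterance(text):
    """Known commands become ROS action goals; anything else is an LLM query."""
    utterance = text.strip().lower()
    if utterance in COMMAND_LIST:
        return ("ros_action_goal", utterance)
    return ("llm_query", utterance)
```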

 

Next up: adding vision.


Sunday, 20 April 2025

Robbie Phase 1 complete

 



Robbie has successfully completed Phase 1, and we now have a fully operational robot capable of performing a wide range of core functions. Key features include:

  • Autonomous navigation to any given location

  • SLAM (Simultaneous Localization and Mapping) for real-time environment mapping

  • Automated docking with the charging station

  • Executing navigation sequences, moving between multiple waypoints

  • Full MoveIt integration for both arms

  • Arm and head coordination, enabling the arms to reach a target while the head tracks it

  • YOLOv5-based object detection

  • Howie chatbot serving as the onboard AI for conversational interaction and control

  • Custom GUI for arm manipulation, wheel status monitoring, sending navigation commands, and triggering preset arm poses

The robot's arms are based on the Aster robot design, which itself is derived from the Poppy platform. They are well-proportioned for the chassis and just clear the deck during motion. The first two joints use 40kg servos, which remain surprisingly cool during operation. We've implemented the ServoEasing Arduino library to smooth out the movements—greatly reducing the initial jerkiness and overall shakiness of the robot. As confidence in the arm kinematics grows, we can safely increase movement speed to produce a more natural and lifelike appearance.

The arms are resin printed, which gives them a clean aesthetic, though long-term durability is still under evaluation. The grippers are currently more placeholders than practical, serving primarily for visual and alignment purposes. Proper pick-and-place performance will likely require a redesign of the grippers—and possibly the entire arm assembly. Once the outer covers are added, we may encounter thermal challenges, but with winter approaching, overheating shouldn't be a major concern. On the plus side, the covers will help keep out dust and foreign object debris (FOD).


Phase 2 Plans

Looking ahead, Phase 2 will focus on increasing autonomy, interactivity, and dexterity. Planned milestones include:

  • Routine autonomous behaviors – Robbie will regularly roam the house, perform idle animations like fidgeting or muttering, and initiate casual conversation using Markov chains for randomized speech patterns

  • Enhanced pick-and-place capabilities – introducing true object grasping, lifting, and positioning using updated grippers and motion logic

  • Face detection, tracking, and recognition – allowing Robbie to greet known individuals by name, follow faces during conversation, and log new encounters

  • Photo capture missions – patrolling the house while using YOLO to detect interesting objects and take photos automatically

     


     


Tuesday, 25 March 2025

March updates

Not a good couple of weeks for the robot. The torso was too unstable and the robot was in danger of tipping over. Then the main computer failed; long story short, I replaced it with a new i7 main computer, which works much better (the Wi-Fi is only Wi-Fi 5, but it still works). The battery charger started to fail, resulting in a shorter run time; the new charger gives a two-hour run time now. Navigation is starting to head in the right direction: we still get stuck in tight spots, but we have achieved a few waypoint navs.

A few thoughts

  • The charging system needs beefing up; we are losing power in the wiring
  • Add the 3rd battery for an extra 18 Ah
  • Start navigation tuning with a new map
  • See how a boost charge affects run time

Using an LLM with VS Code has made tuning the robot easier.

Tuesday, 21 January 2025

networks and bugs

 

For the last couple of months I have been chasing bugs. First, the network was slow, which made the robot uncontrollable. I switched out the i5 for an RPi 5, but had to add a UPS to give the RPi enough power. That was better, but not enough. I switched the middleware to Zenoh, which required an upgrade to ROS2 Jazzy and Ubuntu 24.04, and it soon became apparent that the RPi 5 didn't have the speed to run the complete stack, not even enough to run the robot and cameras. I replaced the Wi-Fi in the i5, swapping the USB dongle for a PCI Express Wi-Fi card so the network ran at the same speed as the RPi, but I was still getting packet drops. Adding an extra Wi-Fi access point tied to the wired network took the i5 from 120 kbit/s to 300 Mbit/s, a big improvement; we can now run the robot with everything required and very little lag.

The next task was getting auto-dock working in reverse. In the end it was simpler just to write my own node. With all the systems running on the robot I started having battery failures. We replaced the battery from the other robot, but that didn't last very long. A bit more digging showed the robot draws 4 amps, so for the moment we changed the way we operate, docking the robot between tasks.

That left only navigation to sort out. After a few days of tuning and reading, we adjusted the global costmap inflation to give smooth pathways and the velocity smoother to keep velocities that work with the robot. We can navigate around the house and it works most of the time; more tuning is required, but that is for another day. We also started to add the covers to the robot. Now that we can navigate through doorways, it's time to put on the torso.

 




 

Friday, 1 November 2024

 Robbie now with torso


I wanted to test the navigation system with the whole robot. I'm looking at the acceleration profiles and how the robot handles doorways, corridors, and driving in general.