
Planet ROS

Planet ROS - http://planet.ros.org



ROS Discourse General: LSEP: Open protocol for standardized robot-to-human state communication (light + sound + motion)

Hello ROS community,

I’d like to introduce LSEP (Light Signal Expression Protocol) — an open standard I’ve been developing for how robots communicate their internal state to nearby humans using coordinated light signals, sound, and motion cues.

The problem LSEP solves:

Every robot manufacturer currently invents their own LED patterns and sound cues. There’s no shared vocabulary. A blinking blue light could mean “charging” on one platform and “human detected” on another. With the EU AI Act (Art. 50) now requiring transparency for human-facing AI systems, the industry needs a standardized approach.

What LSEP defines:

- 6 core states: IDLE, AWARENESS, INTENT, CARE, CRITICAL, THREAT

- 3 extended states: MED_CONF, LOW_CONF, INTEGRITY (for sensor uncertainty and self-diagnostics)

- Each state maps to specific light color + pulse pattern, optional sound, and motion modifier

- State transitions driven by Time-to-Contact (TTC) physics, not heuristics

- 1.5m proximity floor: any human within 1.5m triggers minimum AWARENESS
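The state list and the proximity floor above compose naturally into a small selection function. The sketch below is my own illustration: the state names come from the post, but the TTC thresholds are invented for demonstration and are not from the LSEP spec.

```python
from enum import IntEnum


class LsepState(IntEnum):
    # Core states, ordered by escalation so max() can enforce a floor.
    # (Extended states MED_CONF / LOW_CONF / INTEGRITY omitted here.)
    IDLE = 0
    AWARENESS = 1
    INTENT = 2
    CARE = 3
    CRITICAL = 4
    THREAT = 5


def select_state(distance_m: float, ttc_s: float) -> LsepState:
    """Pick a state from distance and Time-to-Contact.

    The TTC thresholds are illustrative placeholders, not spec values.
    """
    if ttc_s < 1.0:
        state = LsepState.CRITICAL
    elif ttc_s < 3.0:
        state = LsepState.INTENT
    else:
        state = LsepState.IDLE
    # 1.5 m proximity floor: any human within 1.5 m triggers at
    # least AWARENESS, regardless of TTC (per the post's summary).
    if distance_m < 1.5:
        state = max(state, LsepState.AWARENESS)
    return state
```

A controller would call `select_state()` each perception tick and drive the light, sound, and motion outputs from the result.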

Technical details:

- RFC-style specification (v2.0)

- Machine-readable JSON signal definitions

- Unity prototype (HDRP) with 74 tests, including sensor noise simulation and tracking dropouts

- MIT licensed — use it however you want

Why I’m posting here:

ROS is where robot software gets built. If LSEP is going to be useful, it needs to work in your stacks — as a ROS node, a topic publisher, or a behavior tree integration. I’m looking for:

1. Feedback on the state model — Do 9 states cover the scenarios you encounter? What’s missing?

2. Integration ideas — How would you want to consume LSEP in a ROS 2 pipeline? As a `/lsep_state` topic? A lifecycle node?

3. Real-world edge cases — What breaks first when you imagine deploying this on your robot?
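For the topic idea in (2), a message definition could carry the state plus the quantities that drove it. The field names and types below are purely my own hypothetical sketch, not anything defined by LSEP:

```
# lsep_msgs/msg/LsepState.msg — hypothetical sketch, not part of the spec
uint8 IDLE=0
uint8 AWARENESS=1
uint8 INTENT=2
uint8 CARE=3
uint8 CRITICAL=4
uint8 THREAT=5

uint8 state              # current LSEP state
float32 ttc_s            # Time-to-Contact that drove the transition
float32 nearest_human_m  # distance used for the 1.5 m proximity floor
```

Published on `/lsep_state`, a transient-local (latched) QoS would let late-joining visualizers see the current state immediately.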

Links:

- Specification + demo: [lsep.org](https://lsep.org)

- GitHub: [NemanjaGalic/LSEP — Open protocol for standardized human-robot communication: 9 states, 3 modalities, 1 grammar. Physics-based. EU AI Act ready.](https://github.com/NemanjaGalic/LSEP)

Happy to answer questions and discuss. The goal is to make this the “USB-C of robot communication” — one standard, every platform.

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/lsep-open-protocol-for-standardized-robot-to-human-state-communication-light-sound-motion/52997

ROS Discourse General: Rover + LiDAR perception inside a Forest3D-generated world (Gazebo Harmonic)

Rover + LiDAR inside a Forest3D-generated world (Gazebo Harmonic)

A quick demonstration of spawning a robot and running LiDAR perception inside a Forest3D-generated environment with realistic visuals, making it a solid base for mapping and navigation tasks.

:play_button: Watch on YouTube

Performance can be improved by tuning the mesh decimation level depending on your use case.

Current work: integrating terramechanics for more realistic rover-terrain interaction.

Forest3D supports a variety of environments beyond forests, including lunar and other unstructured terrains. Feel free to reach out!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/rover-lidar-perception-inside-a-forest3d-generated-world-gazebo-harmonic/52977

ROS Discourse General: Part 2: Canonical Observability Stack Tryout | Cloud Robotics WG Meeting 2026-03-09

Please come and join us for this coming meeting on Mon, Mar 9, 2026, 4:00–5:00 PM UTC, where we plan to continue deploying an example Canonical Observability Stack (COS) instance based on information from the tutorials and documentation. This session will pick up where the last session left off: an AWS instance hosting the COS server side, and a VirtualBox VM hosting the robot side.

Last session, we started working through the documentation for setting up both a COS server instance and a robot instance. Unfortunately, the recording cut out shortly into the meeting due to a lack of disk space. After this point, we switched to hosting in AWS and were able to host a COS instance, although it was misconfigured and the robot was unable to connect. If you’re interested in watching the recorded part of the meeting, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/part-2-canonical-observability-stack-tryout-cloud-robotics-wg-meeting-2026-03-09/52967

ROS Discourse General: Pixhawk - ArduSub setup (Roll hold)

Hi all,

We’re building an ROV using Pixhawk (ArduSub) with a Raspberry Pi companion computer (ROS2 + MAVROS). The vehicle needs to attach and operate along vertical surfaces, so maintaining controlled roll while maneuvering is a core requirement.

Stack

Goal

We want joystick-based control similar to POSHOLD stability, but still allow roll control so the vehicle can move along the surface while attached.

Thanks in advance — happy to share more details about the vehicle config if helpful.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/pixhawk-ardusub-setup-roll-hold/52954

ROS Discourse General: Is there a working group for maintaining ROS 2-based robots in industry? 🤖

Hi everyone,

We’re curious — does a dedicated working group (or similar community) already exist for maintaining and operating ROS 2-based robots in industrial environments? If not, maybe it’s time to build one.

At Siemens, our ROS 2 efforts are focused on four key challenges:

We’d love to connect with the community and learn what’s already out there! :globe_showing_europe_africa:

We’re actively looking to engage with others working in this space — whether you’re building solutions, facing the same challenges, or have already found answers we haven’t discovered yet.

Here are some data points we’ve gathered so far:

Exciting tools that just dropped :hammer_and_wrench:

The community has been busy! A few noteworthy new tools:

Big shoutout to @doisyg for sharing impressive insights on how they manage upgrades across a large fleet of robots in the field! :clap:
And I am sure there is a vast array of further open-source tools out there that can help all of us.

What Siemens has shared so far (all talks in English)

We’ve been open about our own challenges and learnings:

:speech_balloon: Our concrete question to you:

Would you be interested in joining a regular working group to discuss these topics and align our open-source efforts?

Vote below — even a single click tells us a lot! :backhand_index_pointing_down:

Click to view the poll.

Let’s build in the open — together! :handshake:

We’re strong believers in open collaboration. Whether you’re a researcher, developer, or industry practitioner — let’s align our efforts and avoid reinventing the wheel.

A few things we’d especially love to hear about:

Cheers from Germany :clinking_beer_mugs:
Florian


Update as of Tue, Mar 3, 2026 11:00 PM UTC

Let’s try to pin down when a potential virtual meeting could happen:
(Please also vote if the day does not fit, right now I am more interested in finding the right time of day)

Click to view the poll.

16 posts - 7 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/is-there-a-working-group-for-maintaining-ros-2-based-robots-in-industry/52899

ROS Discourse General: ROS Meetup Medellín Colombia - 29-30 Apr 2026

We are pleased to officially announce ROS Meetup Medellín 2026, a space designed to bring together the robotics, ROS, and autonomous systems community in Colombia.

:round_pushpin: April 29 – Universidad EIA (Poster Session)
:round_pushpin: April 30 – Parque Explora (Talk Session)

Medellín, recognized for its strong innovation and technology ecosystem, will be the perfect setting to connect academia, industry, and the open-source community around ROS and robotics.

:microphone: Call for Speakers open
:framed_picture: Call for Posters open
:busts_in_silhouette: Attendee registration available

If you are developing ROS-based projects, conducting robotics research, or building AI-driven and autonomous systems solutions, we invite you to share your work and actively participate in the event.

Find all the information and registration links here:
:link: https://linktr.ee/IEEE_RAS_Colombia

We look forward to having you join us in this edition and to continue strengthening the ROS community in Colombia.

See you in Medellín :robot:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-meetup-medellin-colombia-29-30-apr-2026/52892

ROS Discourse General: MAHE Mobility Challenge 2026 (MIT Bengaluru)

Hello ROS Community,

MAHE Mobility Challenge 2026 is a national-level hybrid hackathon hosted by CEAM and the Department of ECE at Manipal Institute of Technology (MIT), Bengaluru.

This challenge is designed for B.Tech students passionate about autonomous and connected mobility systems, offering an opportunity to ideate, design, and build real working prototypes addressing next-generation mobility challenges.

Total Prize Pool: ₹3 Lakhs


Challenge Tracks

• AI in Mobility
Intelligent perception systems, predictive modeling, adaptive routing, autonomy stacks

• Robotics & Control
Embedded systems, actuator integration, simulation workflows, control architecture design

• Cybersecurity for Mobility
Secure V2X communication, threat modeling, safety-focused system hardening for connected vehicles


Format & Timeline

Shortlisted teams will build and demonstrate working prototypes during the final round.

Participants are encouraged to leverage open-source robotics frameworks (ROS), simulation environments, and modular autonomy architectures where relevant.

We welcome engagement from students and robotics enthusiasts interested in contributing to secure and intelligent mobility systems.

Further details and registration:
https://mahemobility.mitblr.org/

Looking forward to participation and discussion from the ROS community.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/mahe-mobility-challenge-2026-mit-bengaluru/52886

ROS Discourse General: SIPA: Quantifying Physical Integrity and the Sim-to-Real Gap in 7-DoF Trajectories

Introduction:

SIPA (Spatial Intelligence Physical Audit) is a trajectory-level physical consistency diagnostic. It does not require source code access or internal simulator states and directly audits 7-DoF CSV trajectories. By design, SIPA is compatible with any system that produces spatial motion data. Its principle is based on the Non-Associative Residual Hypothesis (NARH).

1. What SIPA Can Audit

SIPA operates on the final motion output, enabling post-hoc physical forensics for:

Supported Data Pathways:

2. The Logic: Non-Associative Residual Hypothesis (NARH)

NARH posits that physical inconsistency stems from discrete solver ordering rather than just algebraic error.

(1) Setting

Consider a rigid-body simulation system defined by:

s_{t+1} = \Psi_{\sigma(k)} \circ \cdots \circ \Psi_{\sigma(1)} (s_t)

where � is an execution order induced by:

Each \Psi_i is individually well-defined, but their composition order may vary.

(2) Order Sensitivity

Although each operator \Psi_i belongs to an associative algebra (e.g., matrix multiplication, quaternion composition), the composition of numerically approximated operators may satisfy:

(\Psi_a \circ \Psi_b) \circ \Psi_c \neq \Psi_a \circ (\Psi_b \circ \Psi_c)

due to:

Define the discrete associator:

A(a,b,c;s) = \bigl( (\Psi_a \circ \Psi_b) \circ \Psi_c \bigr)(s) - \bigl( \Psi_a \circ (\Psi_b \circ \Psi_c) \bigr)(s)

(3) Definition: Non-Associative Residual

We define the Non-Associative Residual (NAR) at state s_t as:

R_t = \lVert A(a,b,c; s_t) \rVert

for a chosen triple of sub-operators representative of contact or constraint updates.

This residual measures path-dependence induced by discrete solver ordering, not algebraic non-associativity of the state representation.
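The discrete associator is easy to observe numerically even for plain rotation operators. The following self-contained sketch (my own illustration, not SIPA code) composes rotation matrices in the two groupings and measures the residual as a Frobenius norm:

```python
import math


def rot_z(t):
    """3x3 rotation about z, row-major nested lists."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]


def rot_x(t):
    """3x3 rotation about x, row-major nested lists."""
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]


def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]


def associator_norm(Pa, Pb, Pc):
    """Frobenius norm of ((Pa Pb) Pc) - (Pa (Pb Pc)) in float64."""
    left = matmul(matmul(Pa, Pb), Pc)
    right = matmul(Pa, matmul(Pb, Pc))
    return math.sqrt(sum((left[i][j] - right[i][j]) ** 2
                         for i in range(3) for j in range(3)))


# Exact algebra gives 0 for every triple; floating point usually does not.
residuals = [
    associator_norm(rot_z(0.1 * i), rot_x(0.07 * i + 0.2), rot_z(0.3 - 0.05 * i))
    for i in range(1, 40)
]
max_r = max(residuals)
```

The NARH claim is then that in contact-rich regimes these tiny per-step residuals accumulate into structured drift rather than averaging out.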

(4) Hypothesis (NARH)

In high-interaction-density regimes (e.g., contact-rich robotics, high-speed manipulation), the Non-Associative Residual R_t becomes non-negligible relative to scalar stability metrics, and accumulates over time as a structured drift term.

Formally, there exists a regime such that:

\sum_{t=0}^{T} R_t \not\approx 0

even when:

\lVert s_{t+1} - s_t \rVert remains bounded.

(5) Interpretation

This hypothesis does not claim:

Instead, it asserts:

Discrete parallel constraint resolution introduces a measurable order-dependent residual that is not explicitly encoded in the state space.

This residual may contribute to:

(6) Falsifiability

NARH is falsified if:

  1. s_t remains within numerical noise across interaction densities.

  2. Reordering constraint application yields statistically indistinguishable trajectories.

  3. Scalar metrics (e.g., kinetic energy norm, velocity norm) detect instability earlier or equally compared to any associator-derived signal.

(7) Research Implication

If validated, NARH suggests that:

If invalidated, the experiment establishes an empirically order-invariant regime — a valuable boundary characterization of solver behavior.

3. Physical Integrity Rating (PIR)

SIPA introduces the Physical Integrity Rating (PIR), a heuristic composite indicator designed to quantify the causal reliability of motion trajectories. PIR evaluates whether a world model is “physically solvent” or accumulating “kinetic debt.”

The Metric

PIR = Q_{\text{data}} \times (1 - D_{\text{phys}})

:bar_chart: Credit Rating Scale

| PIR Score | Rating | Label | Operational Meaning |
| --- | --- | --- | --- |
| ≥ 0.85 | A | High Integrity | Reliable for industrial simulation and safety-critical AI. |
| ≥ 0.70 | B | Acceptable | Generally consistent; minor numerical drift detected. |
| ≥ 0.50 | C | Speculative | Visual plausibility maintained, but causal logic is shaky. |
| ≥ 0.30 | D | High Risk | Elevated physical debt; prone to “hallucinations” under stress. |
| < 0.30 | F | Critical | Physical bankruptcy; trajectory violates fundamental causality. |
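The metric and the rating scale reduce to a few lines; this is a direct transcription of the formula and table above, not SIPA's implementation:

```python
def pir(q_data: float, d_phys: float) -> float:
    """Physical Integrity Rating: PIR = Q_data * (1 - D_phys)."""
    return q_data * (1.0 - d_phys)


def rating(score: float) -> str:
    """Map a PIR score to its letter grade per the scale above."""
    for threshold, letter in [(0.85, "A"), (0.70, "B"),
                              (0.50, "C"), (0.30, "D")]:
        if score >= threshold:
            return letter
    return "F"
```

For example, a trajectory with high data quality but moderate physical drift (`q_data = 0.8`, `d_phys = 0.3`) lands in the "Speculative" band.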

Note on Early Adoption: Since its initialization, we’ve observed a unique anomaly: 120 institutional entities cloned the repo via CLI with near-zero web UI traffic. This suggests that the industry (Sim-to-Real teams and Tech DD leads) is already utilizing NARH for internal audits. View Traffic Evidence

Call to Action

We invite the ROS community to stress-test their simulators and world models using SIPA. Any questions can be discussed under this topic!

GitHub Repository: https://github.com/ZC502/SIPA.git

2 posts - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/sipa-quantifying-physical-integrity-and-the-sim-to-real-gap-in-7-dof-trajectories/52884

ROS Discourse General: NVIDIA Isaac ROS 4.2 for DGX Spark has arrived

NVIDIA Isaac ROS 4.2 for DGX Spark is now live.

Here’s what’s new in 4.2:

Check out the full Isaac ROS 4.2 details and share what you build with this release. :rocket:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/nvidia-isaac-ros-4-2-for-dgx-spark-has-arrived/52858

ROS Discourse General: Controlling the Nero Robotic Arm with OpenClaw

Controlling the Nero Robotic Arm with OpenClaw

As a popular open-source project, OpenClaw has become a highlight in the robotic arm control field with its intuitive operation and strong adaptability. It enables full end-to-end linkage between AI commands and device execution, greatly lowering the barrier for robotic arm control. This article focuses on practical implementation. Combined with the pyAgxArm SDK, we will guide you through the download, installation, and configuration of OpenClaw to achieve efficient control of the NERO 7-axis robotic arm.

Seamless AI Control: AgileX NERO 7-DoF Arm with OpenClaw

Download and Install OpenClaw

Start Configuring OpenClaw

Teach OpenClaw the Skill and Rules for Controlling the Robotic Arm

---
name: agx-arm-codegen
description: Guide OpenClaw to generate pyAgxArm-based robotic arm control code from user natural language. When users describe robotic arm movements with prompts and existing scripts cannot directly meet the requirements, automatically organize and generate executable Python scripts based on the APIs and examples provided by this skill.
metadata:
  {
    "openclaw":
      {
        "emoji": "烙",
        "requires": { "bins": ["python3", "pip3"] },
      },
  }
---

## Function Overview

- This skill is used to **guide OpenClaw to generate** executable pyAgxArm control code (Python scripts) based on user natural language descriptions, rather than just calling existing CLIs.
- Reference SDK: pyAgxArm ([GitHub](https://github.com/agilexrobotics/pyAgxArm)); Reference example: `pyAgxArm/demos/nero/test1.py`.

## When to Use This Skill

- Users say "Write code to control the robotic arm", "Generate a control script based on my description", "Make the robotic arm perform multiple actions in sequence", etc.
- Users explicitly request to "generate Python code" or "provide a runnable script" to control AgileX robotic arms such as Nero/Piper.

## Generate Code Using This Skill
- Based on user prompts, combine the APIs and templates in `references/pyagxarm-api.md` of this skill to generate a complete, runnable Python script.
- After generation, explain: the script needs to run in an environment with pyAgxArm and python-can installed, with CAN activated and the robotic arm powered on; remind users to pay attention to safety (no one in the workspace; small-scale testing first is recommended).

## Rules for Generating Code

1. **Connection and Configuration**
   - Use `create_agx_arm_config(robot="nero", comm="can", channel="can0", interface="socketcan")` to create a configuration (Nero example; Piper can use `robot="piper"`).
   - Use `AgxArmFactory.create_arm(robot_cfg)` to create a robotic arm instance, then `robot.connect()` to establish a connection.
2. **Enabling and Pre-Motion**
   - CRITICAL: The robot MUST BE ENABLED before switching modes. If the robot is in a disabled state, you cannot switch modes.
   - Switch to normal mode before movement, then enable: `robot.set_normal_mode()`, then poll `robot.enable()` until successful; you can set `robot.set_speed_percent(100)`.
   - Motion modes: before any move_* call, explicitly set the matching mode with `robot.set_motion_mode(robot.MOTION_MODE.J)` (Joint), `P` (Point-to-Point), `L` (Linear), `C` (Circular), or `JS` (Joint Quick Response; use with caution).
3. **Motion Interfaces and Units**
   - Joint motion: `robot.move_j([j1, j2, ..., j7])`, unit is **radians**, Nero has 7 joints.
   - Cartesian: `robot.move_p(pose)` / `robot.move_l(pose)`, pose is `[x, y, z, roll, pitch, yaw]`, position unit is **meters**, attitude is **radians**.
   - Circular: `robot.move_c(start_pose, mid_pose, end_pose)`, each pose is 6 floating-point numbers.
   - CRITICAL: All movement commands (move_j, move_js, move_mit, move_c, move_l, move_p) must be used in normal mode.
   - After motion completion, poll `robot.get_arm_status().msg.motion_status == 0` or encapsulate `wait_motion_done(robot, timeout=...)` before executing the next step.
4. **Mode Switching**
   - Switching modes (master, slave, normal) requires 1s delay before and after the mode switch
   - Use `robot.set_normal_mode()` to set normal mode
   - Use `robot.set_master_mode()` to set master mode
   - Use `robot.set_slave_mode()` to set slave mode
   - CRITICAL: Enable the robot FIRST with `robot.enable()` BEFORE switching modes
5. **Safety and Conclusion**
   - In the generated script, note: confirm workspace safety before execution; small-scale movement is recommended for the first time; use physical emergency stop or `robot.electronic_emergency_stop()` / `robot.disable()` in case of emergency.
   - If the user requests "disable after completion", call `robot.disable()` at the end of the script.
6. **Implementation Details**
   - When waiting for motion to complete, use shorter timeout (2-3 seconds)
   - After each mechanical arm operation, add a small sleep (0.01 seconds)
   - Motion completion detection: `robot.get_arm_status().msg.motion_status == 0` (not == 1)

## Reference Files

- **API and Minimal Runnable Template**: `references/pyagxarm-api.md`  
  When generating code, refer to the interfaces and code snippets in this file to ensure consistency with pyAgxArm and test1.py usage.

## Safety Notes

- The generated code will drive a physical robotic arm. Users must be reminded: confirm no personnel or obstacles in the workspace before execution; it is recommended to test with small movements and low speeds first.
- High-risk modes (such as `move_js`, `move_mit`) should be marked with risks in code comments or user explanations, and it is recommended to use them only after understanding the consequences.
- This skill is only responsible for "guiding code generation" and does not directly execute movements; users need to prepare the actual running environment, CAN activation, and pyAgxArm installation by themselves (refer to environment preparation in the agx-arm skill).
# pyAgxArm API Quick Reference & Minimal Runnable Template

For reference when OpenClaw generates robotic arm control code from user natural language. SDK source: pyAgxArm ([GitHub](https://github.com/agilexrobotics/pyAgxArm)); Example reference: `pyAgxArm/demos/nero/test1.py`.

## 1. Connection and Configuration

```python
from pyAgxArm import create_agx_arm_config, AgxArmFactory

# Configuration: robot options - nero / piper / piper_h / piper_l / piper_x; channel e.g. can0
robot_cfg = create_agx_arm_config(
    robot="nero",
    comm="can",
    channel="can0",
    interface="socketcan",
)
robot = AgxArmFactory.create_arm(robot_cfg)
robot.connect()
```

## 2. Enabling and Modes

```python
import time

robot.set_normal_mode()        # Normal mode (single-arm control)

# Enable: poll until successful
while not robot.enable():
    time.sleep(0.01)

robot.set_speed_percent(100)   # Motion speed percentage 0-100

# Disable
while not robot.disable():
    time.sleep(0.01)
```

## 3. Motion Modes and Interfaces

| Mode | Constant | Interface | Description |
| --- | --- | --- | --- |
| Joint Position Speed | `robot.MOTION_MODE.J` | `robot.move_j([j1..j7])` | 7 joint angles (radians), with smoothing |
| Joint Quick Response | `robot.MOTION_MODE.JS` | `robot.move_js([j1..j7])` | No smoothing, use with caution |
| Point-to-Point | `robot.MOTION_MODE.P` | `robot.move_p([x,y,z,roll,pitch,yaw])` | Cartesian pose, meters/radians |
| Linear | `robot.MOTION_MODE.L` | `robot.move_l([x,y,z,roll,pitch,yaw])` | Linear trajectory |
| Circular | `robot.MOTION_MODE.C` | `robot.move_c(start_pose, mid_pose, end_pose)` | Each pose is 6 floating-point numbers |

Example (Joint Motion + Wait for Completion):

```python
import time

def wait_motion_done(robot, timeout: float = 3.0, poll_interval: float = 0.1) -> bool:
    # Shorter timeout (2-3 s) recommended
    time.sleep(0.5)
    start_t = time.monotonic()
    while True:
        status = robot.get_arm_status()
        if status is not None and getattr(status.msg, "motion_status", None) == 0:
            return True
        if time.monotonic() - start_t > timeout:
            return False
        time.sleep(poll_interval)

robot.set_motion_mode(robot.MOTION_MODE.J)
robot.move_j([0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
wait_motion_done(robot, timeout=3.0)
```

## 4. Read Status

## 5. Others

## 6. Minimal Runnable Template (Extend based on this when generating code)

```python
#!/usr/bin/env python3
import time
from pyAgxArm import create_agx_arm_config, AgxArmFactory


def wait_motion_done(robot, timeout: float = 3.0, poll_interval: float = 0.1) -> bool:
    # Shorter timeout (2-3 s) recommended
    time.sleep(0.5)
    start_t = time.monotonic()
    while True:
        status = robot.get_arm_status()
        if status is not None and getattr(status.msg, "motion_status", None) == 0:
            return True
        if time.monotonic() - start_t > timeout:
            return False
        time.sleep(poll_interval)


def main():
    robot_cfg = create_agx_arm_config(
        robot="nero",
        comm="can",
        channel="can0",
        interface="socketcan",
    )
    robot = AgxArmFactory.create_arm(robot_cfg)
    robot.connect()

    # Mode switching requires a 1 s delay before and after the switch
    time.sleep(1)
    robot.set_normal_mode()
    time.sleep(1)

    # CRITICAL: the robot must be enabled before motion commands are issued
    while not robot.enable():
        time.sleep(0.01)
    robot.set_speed_percent(80)

    # CRITICAL: all movement commands must be used in normal mode
    robot.set_motion_mode(robot.MOTION_MODE.J)
    robot.move_j([0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
    time.sleep(0.01)  # small sleep after each arm operation
    wait_motion_done(robot, timeout=3.0)

    # Optional: disable before exit
    # while not robot.disable():
    #     time.sleep(0.01)


if __name__ == "__main__":
    main()
```

When generating code, replace or add motion steps (move_j / move_p / move_l / move_c, etc.) according to user descriptions, and keep the connection, enabling, wait_motion_done, and units (radians/meters) consistent.

After configuring the robotic arm’s CAN communication and the Python environment, OpenClaw can automatically call the SDK driver to generate control code and control the robotic arm.


2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/controlling-the-nero-robotic-arm-with-openclaw/52825

ROS Discourse General: LinkForge v1.3.0 — The Linter & Bridge for Robotics

Hi everyone! :waving_hand:

LinkForge v1.3.0 was just released! But more than a release announcement, I want to take a moment to share the bigger vision of where this project is going — because it’s grown far beyond a Blender plugin.


:telescope: The Vision

LinkForge is not just a URDF exporter for Blender. The architecture is intentionally built as a Hexagonal Core, fully decoupled from any single 3D host or output format.

The mission is simple:

Bridge the gap between creative 3D design and high-fidelity robotics engineering.

Design Systems (Blender, FreeCAD, Fusion 360) ➜ LinkForge Core ➜ Simulation & Production (ROS 2, MuJoCo, Gazebo, Isaac Sim)

Because in robotics, Physics is Truth. Every inertia tensor, every joint limit, every sensor placement should be mathematically correct before it ever reaches a simulator.
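As an example of the kind of check this implies: the principal moments of inertia of any rigid body must be positive and satisfy the triangle inequalities. A minimal sketch of such a lint follows; it is my own illustration of the idea, not LinkForge's actual validation code.

```python
def inertia_is_physical(ixx: float, iyy: float, izz: float,
                        tol: float = 1e-12) -> bool:
    """Check the two conditions every rigid body's principal inertia
    moments must satisfy: positivity, and the triangle inequalities
    (each moment is at most the sum of the other two)."""
    moments = (ixx, iyy, izz)
    if any(m <= 0.0 for m in moments):
        return False
    return (ixx + iyy + tol >= izz and
            iyy + izz + tol >= ixx and
            izz + ixx + tol >= iyy)
```

A URDF exporter can run this on each link's diagonalized inertia before writing the `<inertial>` element, catching hand-edited tensors that no real body can have.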


:rocket: What’s New in v1.3.0?

Full release notes on GitHub

:high_voltage: Performance

:control_knobs: ros2_control Intelligence

:bug: Key Bug Fixes

:clipboard: Component Search


:handshake: Looking for Contributors

The upcoming roadmap includes SRDF Support, the linkforge_ros package, and a Composer API for modular robot assemblies.

If you work in ROS 2, enjoy Python or Rust, or have ideas on how to improve the URDF/XACRO workflow, come say hi:


What is your biggest URDF/XACRO pain point today? I’d love to know what the community needs most as we plan the next milestone! :robot::sparkles:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/linkforge-v1-3-0-the-linter-bridge-for-robotics/52822

ROS Discourse General: Tom Ryden talks robotics trends at Boston Robot Hackers on March 5!

We are excited to share news about the next monthly meeting of the Boston Robot Hackers! Check it out if you are in the Boston area, and please register for the event.

Where: Artisans Asylum, 96 Holton Street, Boston, MA 02135
When: Thursday March 5, 7:00-9:00pm
Speaker: Tom Ryden of Mass Robotics

If you are into Robotics (and, by definition, you are, given you are reading this!) this promises to be a very interesting talk! Tom will start with an overview of MassRobotics and then get into what the current trends are in the robotics market: what problems are start-ups addressing, how the fundraising market is today, and where the investment dollars are going.

And if you cannot make it, still consider joining our organization!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/tom-ryden-talks-robotics-trends-at-boston-robot-hackers-on-march-5/52810

ROS Discourse General: Intrinsic joining Google as a distinct robotics and AI unit

Hi Everyone,

As you may have seen in the recent blog post, Intrinsic is joining Google as a distinct robotics and AI unit. Specifically Intrinsic’s platform will bring a new “infrastructural bridge” between Google’s frontier AI research (such as the AI coming from teams at Gemini and DeepMind) and the practical, high-stakes requirements of industrial manufacturing, which is Intrinsic’s focus. This decision will allow our team to continue building the Intrinsic platform, and operate in a very similar way to before. Our commercial mandate remains the same, as does our focus on delivering intelligent solutions for our customers.

Intrinsic remains dedicated to the commitments we’ve made to the open source community, to ROS, Gazebo and Open-RMF (including Lyrical and Kura release roadmaps) and deepening our platform integrations with ROS over time. We’re also very excited about the AI for Industry Challenge this year, which is organized with the team at Open Robotics and has thousands of registrants so far.

From the community’s perspective we are expecting minimal disruption, if any, and we look forward to showing and sharing more news at ROSCon in Toronto later this year.

4 posts - 4 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/intrinsic-joining-google-as-a-a-distinct-robotics-and-ai-unit/52712

ROS Discourse General: ROS Lyrical Release Working Group

The ROS Lyrical Release Working Group will have its first meeting on Fri, Feb 27, 2026, 7:00–8:00 PM UTC.

Want to come? Give feedback on the time here: Meeting time: ROS Lyrical Release WG

Meeting link: https://openrobotics-org.zoom.us/meetings/81224698184/invitations?signature=SrUjwX951phQQqx25bNfAA-MFEpABNsKY1vAAWfp91s

Notes and Agenda: https://docs.google.com/document/d/1lkilmVulAUF1qVRsmimMa1cJtO2jOLoGy75itOCwc78/edit?usp=sharing

Agenda:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-lyrical-release-working-group/52691

ROS Discourse General: Meeting Summary for Accelerated Transport Working Group 02/18/2026

In this meeting we discussed

Remember that the meeting happens every week to push this feature into Lyrical Luth. Please check the Open Source Robotics Foundation official events to join the next meeting.

Join the #Accelerated Memory Transport Working Group to discuss more details on Zulip

Meeting notes

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/meeting-summary-for-accelerated-transport-working-group-02-18-2026/52672

ROS Discourse General: Meeting Summary for Accelerated Transport Working Group 02/11/2026

We had the first meeting in the Accelerated Memory Transport WG. The meeting focused on discussing a new prototype presented by Karsten and CY from NVIDIA.

We discussed some topics:

Meeting notes

Please check the Open Source Robotics Foundation official events to join the next meeting.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/meeting-summary-for-accelerated-transport-working-group-02-11-2026/52671

ROS Discourse General: Transitive Core Concepts — #1: Full-stack Packages: robot + cloud + web

Working with nine different robotics companies over the course of 10 years has taught us a thing or two about designing robotic full-stack architectures. All of this experience went into the design of Transitive, the open-source framework for full-stack robotics. We’ve started a new mini-series of blog posts in which I dive into the three core concepts of the framework. Too often we see robotics startups fall into the same pitfalls when designing their full-stack architecture (robot + cloud + web), so it is important to us to share our experience and explain why we built Transitive the way we did.

In this first post you’ll learn about the need for cross-device code encapsulation, how we addressed this need in Transitive via full-stack packages, and what benefits result from this approach for growing your fleet and functionality without increasing complexity.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/transitive-core-concepts-1-full-stack-packages-robot-cloud-web/52651

ROS Discourse General: Ros2_canopen shortcomings and improvements

During the integration of our hardware we (inmach.de) encountered some shortcomings in the ros2_canopen package which we worked around or fixed in our fork of ros2_canopen. We’d like to get these changes into the upstream repo so that everyone can profit from them.

The major shortcomings we found and think should and could be improved are:

With this post I’d like to start a discussion with the ROS community and the maintainers (@c_h_s, @ipa-vsp) of ros2_canopen about other possible shortcomings, and about what can and needs to be done to improve the ros2_canopen stack, so that together we can make it even better in the years to come.

5 posts - 3 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros2-canopen-shortcomings-and-improvements/52647

ROS Discourse General: Predictability of zero-copy message transport

Hey! I’m looking to improve my ROS 2 code performance with the usage of zero-copy transfer for large messages.

I’ve been under the impression that simply composing any composable node into a container and setting “use_intra_process_comms” to True would lead to zero-copy transfer. But after experimenting and going through multiple tutorials, design docs, and discussions, that doesn’t seem to be the case.

I wanted to create this thread to write down some of my questions, in the hopes of them being helpful for improving the documentation, and to get a better understanding of the zero-copy edge cases. I’m also curious to hear if there are already ways to easily verify that the zero-copy transfer is happening.


To my understanding, there are a number of different factors that can influence whether zero-copy happens or not:

I’m looking to understand what are the cases when the zero-copy transfer really happens, and in which cases ROS just quietly falls back to copying the messages.

Many of these questions also boil down to a bigger question: How can I verify whether zero-copy happens, and what kind of performance benefits am I getting from using it? All the demos I’ve seen so far simply print the memory address of the message to confirm that zero-copy happens. I think it would be highly beneficial to have a better way directly in ROS 2 to see whether zero-copy pub-sub is actually happening. Is there already a way to do that, or do you see how this could be implemented? Maybe through the ros2 topic CLI?
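The address-printing check the demos use can be reduced to a plain-Python sketch (no rclpy involved; `deliver_zero_copy` and `deliver_with_copy` are made-up stand-ins for the transport, not real ROS 2 APIs): a true zero-copy hand-off passes the same object through, so the object identity on the subscriber side matches the publisher side, while a quiet fallback to copying yields a different address.

```python
import copy

def deliver_zero_copy(msg):
    """Hand the same object through (models intra-process zero-copy)."""
    return msg

def deliver_with_copy(msg):
    """Allocate a fresh copy (models the quiet fallback to copying)."""
    return copy.deepcopy(msg)

msg = {"data": bytes(1024)}

# Same identity on both sides -> no copy was made.
assert deliver_zero_copy(msg) is msg
# Different identity -> the transport copied the message.
assert deliver_with_copy(msg) is not msg
```

In a real rclcpp demo the equivalent check is printing the message pointer in the publisher and subscriber callbacks and comparing them.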

In addition to the above questions, the tutorials and other resources still left me wondering about these ones:

[1] ROS Jazzy Tutorial - Intra-Process-Communication
[2] Discourse Thread - Performance Characteristics: subscription callback signatures, RMW implementation, Intra-process communication (IPC)
[3] ROS 2 Design Article - Intraprocess communications
[4] ROS Jazzy Tutorial - Configure Zero Copy Loaned Messages

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/predictability-of-zero-copy-message-transport/52646

ROS Discourse General: New kid on the block: meet Ajime, robotics CI/CD next-gen platform

Hello Roboticists!

We are building Ajime (https://ajime.io) to provide zero-config pipeline building: a drag-and-drop CI/CD experience for edge computing and robotics. Just link your GitHub repository and we handle the build and deployment of CUDA-ready containers, manage your cloud/on-prem databases and compute resources (we also provide fast hosting), and provide secure fleet connectivity over the cloud. Easy, like building Lego.

Whether you’re deploying to an NVIDIA Jetson, a Raspberry Pi, or any other Linux-based SOM, Ajime automates the entire pipeline—from LLM-generated Dockerfiles with sensor drivers to NVIDIA Isaac Sim validation. We’re in private beta and looking for engineers to help us kill the “dependency hell” of robotics DevOps. Check out the demo and join the waitlist at ajime.io.

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/new-kid-on-the-block-meet-ajime-robotics-ci-cd-next-gen-platform/52617

ROS Discourse General: Canonical Observability Stack Tryout | Cloud Robotics WG Meeting 2026-02-25

Please come and join us for this coming meeting on Wed, Feb 25, 2026, 4:00 PM to 5:00 PM UTC, where we plan to deploy an example Canonical Observability Stack instance based on information from the tutorials and documentation.

We did originally plan to host this session on 2026-02-11, but unfortunately had to cancel, so the session has been moved back.

In the previous meeting, the CRWG invited Guillaume Beuzeboc from Canonical to present on the Canonical Observability Stack (COS). COS is a general observability stack for devices such as drones, robots, and IoT devices. It operates on telemetry data, and the COS team has extended it to support robot-specific use cases. If you’re interested in watching the talk, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there (and we won’t need to cancel again)!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/canonical-observability-stack-tryout-cloud-robotics-wg-meeting-2026-02-25/52606

ROS Discourse General: Agent ROS Bridge — Universal LLM-to-ROS bridge with auto-generated types

Hi ROS devs!

I built something to bridge the gap between AI agents and ROS robots. Instead of writing custom interfaces for every LLM integration, this gives you a universal bridge with zero boilerplate.

**Key features:**

- Auto-generates Python classes from .msg/.srv files

- One decorator = ROS action/service/actionlib

- gRPC + WebSocket APIs for remote agents

- Works with ROS1 and ROS2 (humble/jazzy tested)

- Full Docker playground with 4 examples

**The examples:**

1. **Talking Garden** — LLM monitors/controls IoT plants via ROS topics

2. **Mars Colony** — Multi-robot coordination (excavator, builder, solar, scout)

3. **Theater Bots** — AI director + robot actors with scripted behaviors

4. **Art Studio** — Human and robot painters collaborating on canvas

**Quick start:**

```bash
pip install agent-ros-bridge
```

In your Python code:

```python
from agent_ros_bridge import ROSBridge

bridge = ROSBridge(ros_version=2)

@bridge.action("move_to")
def move_to(x: float, y: float):
    # Your robot movement code
    pass
```

GitHub: webthree549-bot/agent-ros-bridge

PyPI: agent-ros-bridge

Open to issues, PRs, and feedback!

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/agent-ros-bridge-universal-llm-to-ros-bridge-with-auto-generated-types/52604

ROS Discourse General: 2025 ROS Metrics Report

2025 ROS Metrics Report

2025 ROS Metrics Report.pdf (3.7 MB)
For comparison, here’s the 2024 Metrics Report.

Once a year, we take a moment to evaluate the health, growth, and general well-being of the ROS community. Our goal with this annual report is to provide a relative estimate of the community’s evolution and composition to better help us plan for the future and allocate resources.

As an open-source project, we prioritize user privacy above all else. We do not track our users, and as such, this report relies on aggregate statistics from services like GitHub, Google Analytics, and download data from our various servers. While this makes data collection difficult, and the results don’t always capture the information we would like, we are happy to report that the data we have captured clearly show a thriving and rapidly growing ROS ecosystem! :rocket:

2025 Report Highlights


The full report is available for download here (3.7 MB). If you would like just the highlights, we’ve summarized the top-line results below.

A Landmark Year for Community Growth

The 2025 metrics highlight a massive surge in users across almost all of our websites and servers. In the month of October 2025, ROS 2 package downloads saw a staggering 284% increase over the previous year. ROS 2 package downloads now make up the overwhelming majority of ROS package downloads (91.2% of all downloads in October 2025). This growth isn’t just from users transitioning from ROS 1 to ROS 2; most of it appears to be explosive growth in the number of ROS 2 users overall. The number of unique users / IPs downloading ROS packages grew from 843,959 in October 2024 to 1,315,867 in October 2025, an increase of just shy of 56%!
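The headline growth figure follows directly from the reported unique-user counts; a quick sanity check:

```python
# Unique users / IPs downloading ROS packages, from the report above.
users_oct_2024 = 843_959
users_oct_2025 = 1_315_867

# Year-over-year percentage growth.
growth_pct = (users_oct_2025 - users_oct_2024) / users_oct_2024 * 100
print(f"{growth_pct:.1f}%")  # 55.9% -- "just shy of 56%"
```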

Meanwhile, ROS 1 downloads declined slightly, from 12,206,979 packages in October 2024 to 11,590,884 in October 2025, a decrease of slightly over 5%. The ROS Wiki, which is now at End-of-Life, saw an 8.5% decrease in users, a trend we view positively as the community migrates to modern documentation platforms and away from ROS 1. Similarly, there were only 5 questions tagged “ROS1” on Robotics Stack Exchange in 2025, in contrast to the 1,449 questions tagged “ROS2.” On every platform, and by every metric, ROS 2 is now the dominant platform for ROS development.

Our discussion platforms are also busier than ever. Annual topics on ROS Discourse rose by 40% (to 1,472), and annual posts increased by 24% (to 4,901). Overall viewership of Discourse grew by nearly 30%. Similarly, our community on LinkedIn has grown by 23.9% and hovers at just shy of 200,000 followers. The only notable decrease in any ROS metric was on Robotics Stack Exchange, which saw a 42.49% decrease in the number of questions asked. This decrease mirrors larger industry-wide trends as developers turn to large language models to answer their technical questions.

ROS 2 Adoption and Industry Momentum

The shift to ROS 2 has reached a definitive milestone, with package downloads now overwhelmingly centered on ROS 2 and likely surpassing one billion per year. This massive download volume is a testament to ROS’s utility and widespread adoption. We are especially encouraged by the growing health of the ecosystem, which now features 34,614 unique ROS packages available via Apt (an increase of 9.15% over the previous year). This growth in package availability directly translates into greater functionality and choice for our users.

The dedication of the developer community is evident in the flourishing number of public repositories on GitHub: 3,848 repositories are tagged “#ROS2” (a 39% increase in 2025), alongside 8,744 public repositories tagged “#ROS” (up 4.73% since January 2025), demonstrating increasing development activity. Furthermore, the relevance of ROS in industry is undeniable: our private list of ROS companies grew 26% this year to 1,579 companies, showing strong commercial validation. In the academic sphere, our canonical ROS 2 paper continues to demonstrate explosive growth with 1,929 citations (up 89.9% in 2025), confirming the platform’s role in cutting-edge research. Collectively, these metrics confirm ROS 2’s status as the established platform for the next generation of robotics development, driving significant growth across both commercial and research sectors.

Conclusion and Feedback

The data from 2025 depicts a thriving, maturing ecosystem that is increasingly centered on modern ROS 2 and modern Gazebo tools. We are immensely proud of this community’s growth and its successful shift toward next-generation robotics software! :rocket:

We encourage you to dive into the full report for a more detailed breakdown of these metrics. We also encourage you to take a look at the ROS project contributor metrics published by our colleagues at the Linux Foundation for a detailed breakdown of project contribution statistics. As always, we would love to hear your thoughts on what metrics you would like to see included in future reports.

A Note on 2025 Data

Our goal with the ROS metrics report is to develop an understanding of the magnitude and direction of changes in the ROS open source community so we can make better decisions about where we allocate our time and resources. As such, we’re looking for ballpark estimates to help guide decision making, not necessarily exacting figures. This year, due to circumstances beyond our control, we’ve had to fill in some gaps in our data as explained below. We believe the numbers reported here paint a reasonable lower bound on various phenomena in the ROS community.

Our ROS package download statistics are culled from an AWStats instance running on our OSU OSL servers. In July of 2025 we moved our AWStats host at OSU OSL and upgraded AWStats ahead of its imminent deprecation. Unfortunately, this migration had two negative side effects that impacted our results for 2025. First, it caused us to lose most of our AWStats data for the month of July 2025. Second, the upgrade did not provide a migration utility for existing log data, and our AWStats summary page for 2025 only presents data for the six months after the migration. Thankfully, we still have the raw log data for the preceding six months (with the exception of July), and we were able to manually re-calculate the results for most metrics, albeit missing some data from the month of July.

For our Gazebo download metrics we rely upon the Apache logs available on an OSRF AWS instance and AWStats download data from the OSU OSL servers. For privacy reasons we do not retain the Apache log data in perpetuity; instead we rely on a logging buffer that periodically rolls over. In prior years this buffer was sufficient to capture well over a month’s worth of Gazebo download data. Gazebo downloads have grown significantly over the past year, however, and when we evaluated our logs, we found that only a little over two weeks’ worth of data was available. As such, we decided to evaluate the download data over a two-week period, from January 13th until January 27th, and extrapolate those results out to the entire month.
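The extrapolation described above is a simple scale-up from the 14-day sample window to the 31-day month. A sketch with a placeholder count (the sample figure below is made up, not the report's actual data):

```python
# Scale a 14-day sample (January 13th through January 27th)
# up to the full 31-day month of January.
SAMPLE_DAYS = 14
MONTH_DAYS = 31

sample_downloads = 100_000  # placeholder; not the report's actual count
monthly_estimate = round(sample_downloads * MONTH_DAYS / SAMPLE_DAYS)
print(monthly_estimate)  # 221429
```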

2 posts - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/2025-ros-metrics-report/52575

ROS Discourse General: Proposal: Add ADOPTERS to showcase ROS 2 production users

Hi :waving_hand:

I’ve opened an issue proposing to add an ADOPTERS list to the ROS documentation — a centralized, community-maintained list of organizations using ROS in production.

Please have a look at the issue and give me your feedback :person_bowing:

If there’s interest, I’m happy to submit an initial PR to get things started. Please share your thoughts here or on the GitHub issue.

thanks,
tomoya

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/proposal-add-adopters-to-showcase-ros-2-production-users/52571

ROS Discourse General: Research grade robot recommendations in 2026

I want to test analytics software I’m developing on a wide variety of movements and ROS2 frameworks (e.g. MoveIt, Nav2, etc.) and sensor types, and I’m looking for recommendations on robots that are a good balance between low cost and a broad range of functionality. For example, I’m thinking of a combination of a Turtlebot 4 for a mobile robot and a Waveshare RoArm M3 for a robot arm with some Gen AI capabilities. I’m sure a lot of people have experience with the Turtlebot here but I’m curious what your recommendations would be in general.

By the way, I’m new here and wasn’t sure what category to post this in. Please let me know if there’s a better place for this discussion. Thanks in advance.

5 posts - 4 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/research-grade-robot-recommendations-in-2026/52569


2026-03-07 12:17