
Planet ROS

Planet ROS - http://planet.ros.org

ROS Discourse General: Native ROS 2 Jazzy Debian packages for Raspberry Pi OS / Debian Trixie (arm64)

After spending some time trying to get ROS 2 Jazzy working reliably on Raspberry Pi via Docker and Conda (and losing several rounds to OpenGL, Gazebo plugins, and cross-arch issues), I eventually concluded:

On Raspberry Pi, ROS really only behaves when it’s installed natively.

So I built the full ROS 2 Jazzy stack as native Debian packages for Raspberry Pi OS / Debian Trixie (arm64), using a reproducible build pipeline.

The result:

Public APT repository

:backhand_index_pointing_right: GitHub - rospian/rospian-repo: ROS2 Jazzy on Raspberry OS Trixie debian repo

Build farm (if you want to reproduce or extend it)

:backhand_index_pointing_right: GitHub - rospian/rospian-buildfarm: ROS 2 Jazzy debs for Raspberry Pi OS Trixie with full Debian tooling

Includes the full mini build-farm pipeline.

This was motivated mainly by reliability on embedded systems and multi-machine setups (Gazebo on desktop, control on Pi).

Feedback, testing, or suggestions very welcome.

2 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/native-ros-2-jazzy-debian-packages-for-raspberry-pi-os-debian-trixie-arm64/51965

ROS Discourse General: Ros2_yolos_cpp High-Performance ROS 2 Wrappers for YOLOs-CPP [All models + All tasks]

Hi everyone! :waving_hand:

I’m the author of ros2_yolos_cpp and YOLOs-CPP. I’m excited to share the first public release of this ROS 2 package!

:link: Repository: ros2_yolos_cpp


:brain: What Is ros2_yolos_cpp?

ros2_yolos_cpp is a production-ready ROS 2 interface for the YOLOs-CPP inference engine — a high-performance, unified C++ library for YOLO models (v5 through v12 and YOLO26) built on ONNX Runtime and OpenCV.

This package provides composable and lifecycle-managed ROS 2 nodes for real-time inference across the supported YOLO tasks.

All powered through ONNX models and optimized for both CPU and GPU inference.
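
Since the nodes are lifecycle-managed, they can be driven with the standard ROS 2 lifecycle CLI once they are running. A minimal sketch (the node name /yolo_detector is a placeholder for illustration, not necessarily the package's actual node name):

ros2 lifecycle set /yolo_detector configure
ros2 lifecycle set /yolo_detector activate
# ...and later, release resources without tearing the process down:
ros2 lifecycle set /yolo_detector deactivate
ros2 lifecycle set /yolo_detector cleanup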


:gear: Key Features

Regards,

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/ros2-yolos-cpp-high-performance-ros-2-wrappers-for-yolos-cpp-all-models-all-tasks/51964

ROS Discourse General: The Canonical Observability Stack with Guillaume Beuzeboc | Cloud Robotics WG Meeting 2026-01-28

For this coming session on Wed, Jan 28, 2026, from 4:00 PM to 5:00 PM UTC, the CRWG has invited Guillaume Beuzeboc from Canonical to present on the Canonical Observability Stack (COS). COS is a general observability stack for devices such as drones, robots, and IoT devices. It operates on telemetry data, and the COS team has extended it to support robot-specific use cases. Guillaume, a software engineer at Canonical, previously presented COS at ROSCon 2025 and has kindly agreed to join this meeting to discuss additional technical details with the CRWG.

At the previous meeting, the CRWG continued its review of the ROSCon 2025 talks, focusing on identifying the sessions most relevant to Logging and Observability. A blog post summarizing our findings will be published in the coming weeks. If you would like to watch the latest review meeting, it is available on YouTube.

The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/the-canonical-observability-stack-with-guillaume-beuzeboc-cloud-robotics-wg-meeting-2026-01-28/51963

ROS Discourse General: Deployment and Implementation of RDA_planner

Deployment and Implementation of RDA_planner

We reproduce the RDA Planner project from the IEEE paper RDA: An Accelerated Collision-Free Motion Planner for Autonomous Navigation in Cluttered Environments. We provide a step-by-step guide to help you quickly reproduce the RDA path planning algorithm in this paper, enabling efficient obstacle avoidance for autonomous navigation in complex environments.

Abstract

RDA Planner is a high-performance, optimization-based Model Predictive Control (MPC) motion planner designed for autonomous navigation in complex and cluttered environments. By leveraging the Alternating Direction Method of Multipliers (ADMM), RDA decomposes complex optimization problems into several simple subproblems.

This project is the open-source development of the RDA_ROS autonomous navigation project, proposed by researchers from the University of Hong Kong, Southern University of Science and Technology, University of Macau, Shenzhen Institutes of Advanced Technology of the Chinese Academy of Sciences, and Hong Kong University of Science and Technology (Guangzhou). It is developed based on the AgileX Limo simulator. Relevant papers have been published in IEEE Robotics and Automation Letters and IEEE Transactions on Mechatronics.

RDA planner: GitHub - hanruihua/RDA-planner: [RA-Letter 2023] RDA: An Accelerated Collision Free Motion Planner for Autonomous Navigation in Cluttered Environments
RDA_ROS: GitHub - hanruihua/rda_ros: ROS Wrapper of RDA planner

Tags

limo, RDA_planner, path planning

Repositories

Environment Requirements

System: Ubuntu 20.04

ROS Version: Noetic

Python Version: 3.9

Deployment Process

1. Download and Install Conda

Download Link

Choose Anaconda or Miniconda based on your system storage capacity.

After downloading, run the following commands to install:
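
For Miniconda, a typical install sequence looks like this (the installer filename varies by release, so match it to the file you downloaded):

cd ~/Downloads
bash Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc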

2. Create and Activate Conda Environment

conda create -n rda python=3.9
conda activate rda

3. Download RDA_planner

mkdir -p ~/rda_ws/src
cd ~/rda_ws/src
git clone https://github.com/hanruihua/RDA_planner
cd RDA_planner
pip install -e .  

4. Download Simulator

pip install ir-sim

5. Run Examples in RDA_planner

cd ~/rda_ws/src/RDA_planner/example/lidar_nav
python lidar_path_track_diff.py

The result matches what is shown in the official README.


Deployment Process of rda_ros

1. Install Dependencies in the Conda Environment

conda activate rda
sudo apt install python3-empy
sudo apt install ros-noetic-costmap-converter
pip install empy==3.3.4
pip install rospkg
pip install catkin_pkg

2. Download Code

cd ~/rda_ws/src
git clone https://github.com/hanruihua/rda_ros
cd ~/rda_ws && catkin_make
cd ~/rda_ws/src/rda_ros 
sh source_setup.sh && source ~/rda_ws/devel/setup.sh && rosdep install rda_ros 

3. Download Simulation Components

This step will download two repositories: limo_ros and rvo_ros

limo_ros: robot model for simulation

rvo_ros: cylindrical obstacles used in the simulation environment

cd ~/rda_ws/src/rda_ros/example/dynamic_collision_avoidance
sh gazebo_example_setup.sh

4. Run Gazebo Simulation

Run via Script

cd ~/rda_ws/src/rda_ros/example/dynamic_collision_avoidance
sh run_rda_gazebo_scan.sh

Run via Individual Commands

Launch the simulation environment:

roslaunch rda_ros gazebo_limo_env10.launch

Launch RDA_planner:

roslaunch rda_ros rda_gazebo_limo_scan.launch


1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/deployment-and-implementation-of-rda-planner/51956

ROS Discourse General: iRobot's ROS benchmarking suite now available!

We’ve just open-sourced our ROS benchmarking suite! Built on top of iRobot’s ros2-performance framework, this is a containerized environment for simulating arbitrary ROS 2 systems and graph configurations, both simple and complex, comparing the performance of various RMW implementations, and identifying performance issues and bottlenecks.

Are you building a custom RMW or ROS executor not included in this tooling, and want to compare against the existing implementations? We provide instructions and examples for how to add them to this suite.

Huge shoutout to Leonardo Neumarkt Fernandez for owning and driving the development of this benchmarking suite!

Check it out here: GitHub - irobot-ros/ros2-benchmark-container: A Dockerized performance benchmarking suite for ROS 2 that automates testing, comparative analysis, and report generation across multiple RMW implementations and system topologies.

2 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/irobots-ros-benchmarking-suite-now-available/51921

ROS Discourse General: Can anyone recommend a C++ GUI framework where I can embed or integrate a 3D engine?

I know that Qt offers a way to do this natively with Qt3D, but I didn’t find any examples demonstrating that I can rotate and view models in Qt3D with the mouse. There are also a lot of 3D engines that provide integration with Qt. They are listed here. But I don’t want to try each of them; maybe someone already knows which one is suitable for me.

I am using C++ for everything, so it is better to use C++ for easier integration, but Rust and Python are also acceptable.

I am a big fan of Open3D, so if somebody knows how to integrate it with some GUI framework, I will be glad.

3 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/can-anyone-recommend-a-c-gui-framework-where-i-can-embed-or-integrate-a-3d-engine/51906

ROS Discourse General: Announcement: rclrs 0.7.0 Release

We’re happy to announce the release of rclrs v0.7.0!

Just like v0.6.0 landed right before ROSCon in Singapore, this release is arriving just in time for FOSDEM at the end of the month. Welcome to Conference-Driven Development (CDD)!

If you’re attending FOSDEM, come check out my talk on ros2-rust in the Robotics & Simulation devroom.

What’s New

Dynamic Messages

This release adds support for dynamic message publishers and subscribers. You can now work with ROS 2 topics without compile-time knowledge of message types, enabling tools like rosbag recorders, topic inspection utilities, and message bridges to be written entirely in Rust.

Best Available QoS

Added support for best available QoS profiles. Applications can now automatically negotiate quality of service settings when connecting to existing publishers or subscribers.

Other Changes

Breaking Changes

For the next release, we are planning to switch to Rust 2024, but wanted to give enough notice.

Contributors

A huge thank you to everyone who contributed to this release! Your contributions make ros2-rust better for the entire community.

Links

As always, we welcome feedback and contributions!

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/announcement-rclrs-0-7-0-release/51896

ROS Discourse General: LinkForge: Robot modeling does not have to be complicated

I recorded a short video to show how easy it is to build a simple mobile robot with LinkForge, a Blender extension designed to bridge the gap between 3D modeling and robotics simulation.

All in a few straightforward steps.

LinkForge: Robot modeling does not have to be complicated.

The goal is simple: remove friction from robot modeling so engineers can focus on simulation, control, and behavior, not file formats and repetitive setup.

If you are working with ROS or robot simulation and want a faster, cleaner workflow, this is worth a look.

Blender Extensions: https://extensions.blender.org/add-ons/linkforge/

GitHub: https://github.com/arounamounchili/linkforge

Documentation: https://linkforge.readthedocs.io/

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/linkforge-robot-modeling-does-not-have-to-be-complicated/51883

ROS Industrial: First of 2026 ROS-I Developers' Meeting Looks at Upcoming Releases and Collaboration

The ROS-Industrial Developers’ Meeting provided updates on open-source robotics tools, with a focus on advancements in Tesseract, help for developers still using MoveIt 2, and Trajopt. These updates underscore the global push to innovate motion planning, perception, and tooling systems for industrial automation. Key developments revolved around stabilizing existing frameworks, improving performance, and leveraging modern technologies like GPUs for acceleration.

The Tesseract project, designed to address traditional motion planning tools' limitations, is moving steadily toward a 1.0 release. With about half of the work complete, remaining tasks include API polishing, unit test enhancements, and transitioning the motion planning pipeline to a plugin-based architecture. Tesseract is also integrating improved collision checkers and tools like the Task Composer, which supports modular backends, making it more adaptable for high-complexity manufacturing tasks.

On the MoveIt 2 front, ongoing community support will be critical as the prior support team shifts to supporting the commercial MoveIt Pro. To ensure Tesseract maintainability, updates include the migration of documentation directly into repositories via GitHub. This step simplifies synchronization between code and documentation, helping developers maintain robust, open-source solutions. There are plans to provide migration tutorials for those who want to investigate Tesseract if MoveIt 2 is not meeting their development needs and they are not ready to move to MoveIt Pro. The ability to utilize MoveIt 2 components within Tesseract is also being investigated.

Trajopt, another critical component of the Tesseract ecosystem, is undergoing a rewrite to better handle complex trajectories and cost constraints. The new version, expected within weeks, will enable better time parameterization and overall performance improvements. Discussions also explored GPU acceleration, focusing on opportunities to optimize constraint and cost calculations using emerging GPU libraries, though some modifications will be needed to fully realize this potential.

Toolpath optimization also gained attention, with updates on the noether repository, which supports industrial toolpath generation and reconstruction. While still a work in progress, noether is set to play a pivotal role in enabling advanced workflows once the planned updates are implemented.

As the meeting concluded, contributors emphasized the importance of community engagement to further modernize and refine these tools. Upcoming events across Europe and Asia will foster collaboration and showcase advancements in the ROS-Industrial ecosystem. This collective effort promises to drive a smarter, more adaptable industrial automation landscape, ensuring open-source solutions stay at the forefront of global manufacturing innovation.

The next Developers' Meeting is slated to be hosted by the ROS-I Consortium EU. You can find all the info for Developers' Meetings over at the Developer Meeting page.

https://rosindustrial.org/news/2026/1/16/first-ros-i-developers-meeting-looks-at-upcoming-releases-and-collaboration

ROS Discourse General: Simple status webpage for a robot in localhost?

Hi, I’m just collecting info on how you’re building simple status pages that run locally on robots and show basic information like battery status, driver status, sensor health, etc. Nothing fancy like camera streaming or teleoperation. No cloud, everything local!

The use-case is just being able to quickly connect to a robot AP and see the status of important things. This can of course be done via rqt or remote desktop, but a status webpage is much more accessible from phones, tablets etc.

I’ve seen statically generated pages with autoreload (easiest to implement, but very custom).

I guess some people have something on top of rosbridge/RobotWebTools, right? But I haven’t found much info about this.

Introducing Robotics UI: A Web Interface Solution for ROS 2 Robots - sciota robotics seemed interesting, but it never got off the ground, stopping after 8 commits…

So what do you use?

Is there some automatic /diagnostics_agg → HTML+JS+WS framework? :slight_smile: And no, I don’t count Foxglove, because self-hosted costs… who knows what :slight_smile:

12 posts - 6 participants

Read full topic

https://discourse.openrobotics.org/t/simple-status-webpage-for-a-robot-in-localhost/51864

ROS Discourse General: Tbai - towards better athletic intelligence

Introducing tbai, a framework designed to democratize robotics and embodied AI and to help us move towards better athletic intelligence.


Drawing inspiration from Hugging Face (more specifically lerobot :hugs:), tbai implements and makes fully open-source countless state-of-the-art methods for controlling various sorts of robots, including quadrupeds, humanoids, and industrial robotic arms.

With its well-established API and levels of abstraction, users can easily add new controllers while reusing the rest of the infrastructure, including utilities for time synchronization, visualization, config interaction, and state estimation, to name a few.

Everything is built out of lego-like components that can be seamlessly combined into a single, high-performing robot controller pipeline. Its wide pool of already implemented state-of-the-art controllers (many from Robotic Systems Lab), state estimators, and robot interfaces, together with simulation or real-robot deployment abstractions, allows anyone using tbai to easily start playing around and working on novel methods, using the existing framework as a baseline, or to change one component while keeping the rest, thus accelerating the iteration cycle.

No more starting from scratch, no more boilerplate code. Tbai takes care of all of that.

Tbai seeks to support as many robotic platforms as possible. Currently, there are nine robots that have at least one demo prepared, with many more to come. Specifically, we have controllers readily available for ANYmal B, ANYmal C, and ANYmal D from ANYbotics; Go2, Go2W, and G1 from Unitree Robotics; Franka Emika from Franka Robotics; and finally, Spot and Spot with arm from Boston Dynamics.

Tbai is an ongoing project that will continue making strides towards democratizing robotics and embodied AI. If you are a researcher or a tinkerer who is building cool controllers for a robot, be it an already supported robot or a completely new one, please do consider contributing to tbai so that as many people can benefit from your work as possible.

Finally, a huge thanks goes to all researchers and tinkerers who do robotics and publish papers together with their code for other people to learn from. Tbai would not be where it is now if it weren’t for the countless open-source projects it has drawn inspiration from. I hope tbai becomes an inspiration for other projects too.

Thank you all!

Link: https://github.com/tbai-lab/tbai

Link: https://github.com/tbai-lab/tbai_ros

3 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/tbai-towards-better-athletic-intelligence/51848

ROS Discourse General: [Humble] Upcoming behavior change: Improved log file flushing in rcl_logging_spdlog

Summary

The ROS PMC has approved backporting an improved log file flushing behavior to ROS 2 Humble. This change will be included in an upcoming Humble sync and affects how rcl_logging_spdlog flushes log data to the filesystem.

What’s Changing?

Previously, rcl_logging_spdlog did not explicitly configure flushing behavior, which could result in buffered log data being lost, for example when a process crashed before its buffers were flushed.

After this update, log data will be flushed to the filesystem much more promptly.

This provides a much better debugging experience, especially when investigating crashes or unexpected application terminations.

Compatibility

How to Revert to the Old Behavior

If you need to restore the previous flushing behavior (no explicit flushing), you can set the following environment variable:

export RCL_LOGGING_SPDLOG_EXPERIMENTAL_OLD_FLUSHING_BEHAVIOR=1

Note: This environment variable is marked as EXPERIMENTAL and is intended as a temporary measure. It may be removed in future ROS 2 releases when full logging configuration file support is implemented. Please do not rely on this variable being available in future versions.

Related Links

Questions or Concerns?

If you experience any issues with this change or have feedback, please:

Thanks,
Tomoya

2 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/humble-upcoming-behavior-change-improved-log-file-flushing-in-rcl-logging-spdlog/51825

ROS Discourse General: Guidance on next steps after ROS 2 Jazzy fundamentals for a hospitality robot project

I’m keenly working on a hospitality robot project driven by personal interest and a genuine enthusiasm for robotics, and I’m seeking guidance on what to focus on next.

I currently have a solid grasp of ROS 2 Jazzy fundamentals, including nodes, topics, services, actions, lifecycle nodes, URDF/Xacro, launch files, and executors. I’m comfortable bringing up a robot model and understanding how the ROS 2 system fits together.

My aim is to build a simulation-first MVP for a lobby scenario (greeter, wayfinding, and escort use cases). I’m deliberately keeping the scope practical and do not plan to add arms initially unless they become necessary.

At this stage, I would really value direction from more experienced practitioners on how to progress from foundational ROS knowledge toward a real, working robot.

In particular, I’d appreciate insights on:

Any advice, lessons learned, or references that could help shape the next phase of development would be greatly appreciated.

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/guidance-on-next-steps-after-ros-2-jazzy-fundamentals-for-a-hospitality-robot-project/51809

ROS Discourse General: [Announcing] LinkForge: A Native Blender Extension for Visual URDF/Xacro Editing (ROS 2 Support)

Hi everyone,

I’d like to share a tool I’ve been working on: LinkForge. It was just approved on the Blender Extensions Platform (v1.1.1).

The Problem

We all know the workflow: export meshes from CAD, write URDFs by hand, guess inertia tensors, launch Gazebo, realize a link is rotated 90 degrees, kill Gazebo, edit XML, repeat. It separates the “design” from the “engineering.”

The Solution

LinkForge allows you to rig, configure, and export simulation-ready robots directly inside Blender. It is not just a mesh exporter; it manages the entire URDF/Xacro structure.

Key Features for Roboticists:

Workflow

  1. Import your existing .urdf or .xacro.
  2. Edit joints and limits visually in the viewport.
  3. Add collision geometry (convex hulls/primitives).
  4. Export valid XML.

Links

This is an open-source project. I’m actively looking for feedback on the “Round-trip” capability and Xacro support.

Happy forging!

4 posts - 3 participants

Read full topic

https://discourse.openrobotics.org/t/announcing-linkforge-a-native-blender-extension-for-visual-urdf-xacro-editing-ros-2-support/51808

ROS Discourse General: Update on ROS native buffers

Hello ROS community,

As you may have heard, NVIDIA has been working on proposing and prototyping a mechanism to add support for native buffer types to ROS2, to allow ROS2 to natively support APIs that use accelerated buffers like CUDA or Torch tensors efficiently. We had briefly touched on this in a previous discourse post. Since then, a lot of design discussion in the SIG PAI, as well as prototyping on our side, has happened to turn that outline into a full-fledged proposal and prototype.

Below is a rundown of our current status, as well as an outlook of where the work is heading. We are looking forward to discussions and feedback on the proposal.

Native Buffers in ROS 2

Problem statement

Modern robots use advanced, high-resolution sensors to perceive their environment. Whether it’s cameras, LIDARs, time-of-flight sensors or tactile sensor arrays, data rates to be processed are ever-increasing.

Processing of those data streams has for the most part moved onto accelerated hardware that can exploit the parallel nature of the data. Whether that is GPUs, DSPs, NPUs/TPUs, ASICs or other approaches, those hardware engines have some common properties:

The second property of dedicated memory regions is problematic in ROS2, as the framework currently does not have a way to handle non-CPU memory.

Consider for example the sensor_msgs/PointCloud2 message, which stores data like this:

uint8[] data         # Actual point data, size is (row_step*height)

A similar approach is used by sensor_msgs/Image. In rclcpp, this will map to a member like

std::vector<uint8_t> data;

This is problematic for large pieces of data that are never going to be touched by the CPU. It forces the data to be present in CPU memory whenever the framework handles it, in particular for message transport, and every time it crosses a node boundary.

For truly efficient, fully accelerated pipelines, this is undesirable. In cases where there are one or more hardware engines handling the data, it is preferable for the data to stay resident in the accelerator, and never be copied into CPU memory unless a node specifically requests to do so.

We are therefore proposing to add the notion of pluggable memory backends to ROS2 by introducing a concept of buffers that share a common API, but are implemented with vendor-specific plugins to allow efficient storage and transport with vendor-native, optimized facilities.

Specifically, we are proposing to map uint8[] in rosidl to a custom buffer type in rclcpp that behaves like a std::vector<uint8_t> if used for CPU code, but will automatically keep the data resident to the vendor’s accelerator memory otherwise. This buffer type is also integrated with rmw to allow the backend to move the buffer between nodes using vendor-specific side channels, allowing for transparent zero-copy transport of the data if implemented by the vendor.

Architecture overview

Message encoding

The following diagram shows an overview of a message containing a uint8[] array, how it is mapped to C++, and how it is then serialized:

It shows the following parts, which we will discuss in more detail later:

The message being encoded into a vendor-specific buffer descriptor message, which is serialized in place of the raw byte array in the message

Choice of uint8[] as trigger

It is worth noting the choice to utilize uint8[] as a trigger to generate Buffer<T> instances. An alternative approach would have been to add a new Buffer type to the IDL, and to translate that into Buffer<T>. However, this would not only introduce a break in compatibility of the IDL, but also force the introduction of a sensor_msgs/PointCloud3 and similar data types, fracturing the message ecosystem further.

We believe the cost of maintaining a std::vector compatible interface and the slight loss of semantics is outweighed by the benefit of being drop-in compatible with both existing messages and existing code bases.

Integration with rclcpp (and rclpy and rclrs)

rclcpp exposes all uint8[] fields as rosidl_runtime_cpp::Buffer<T> members in their respective generated C++ structs.

rosidl_runtime_cpp::Buffer<T> has a fully compatible interface to std::vector<T>, like size(), operator[](size_type pos), etc. If any of the std::vector<T> APIs are used, the vector is copied onto the CPU as necessary, and all members work as expected. This maintains full compatibility with existing code: any code that expects a std::vector<T> in the message will be able to use the corresponding fields as such without any code changes.
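
For illustration, existing CPU-oriented code like the following would keep compiling and working unchanged against a Buffer<uint8_t>-typed field; under the described semantics, the first element access is where the data would be copied into CPU memory (a sketch of the compatibility guarantee, not final API):

#include <cstddef>
#include <cstdint>
#include <sensor_msgs/msg/point_cloud2.hpp>

// Legacy code that treats msg.data exactly like a std::vector<uint8_t>.
// With Buffer<uint8_t> behind the same interface, the payload is paged
// into CPU memory transparently on first access.
uint64_t checksum_points(const sensor_msgs::msg::PointCloud2 & msg)
{
  uint64_t sum = 0;
  for (std::size_t i = 0; i < msg.data.size(); ++i) {
    sum += msg.data[i];
  }
  return sum;
}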

In order to access the underlying hardware buffers, the vendor-specific APIs are being used. Suppose a vendor backend named vendor_buffer_backend exists, then the backend would usually contain a static method to convert a buffer to the native type. Our hypothetical vendor backend may then be used as follows:

void topic_callback(const msg::MessageWithTensor & input_msg) {
  vendor_native_handle input_h = vendor_buffer_backend::from_buffer(input_msg.data);

  msg::MessageWithTensor output_msg =
    vendor_buffer_backend::allocate<msg::MessageWithTensor>();

  vendor_native_handle output_h =
    vendor_buffer_backend::from_buffer(output_msg.data);

  output_h = input_h.some_operation();

  publisher_.publish(output_msg);
}

This code snippet does the following:

First, it extracts the native buffer handle from the message using a static method provided by the vendor backend. Vendors are free to provide any interface they choose for providing this interface, but would be encouraged to provide a static method interface for ease of use.

It then allocates the output message to be published using another vendor-specific interface. Note that this allocation creates an empty buffer; it only sets up the relationship between output_msg.data and the vendor_buffer_backend by creating an instance of the backend buffer and registering it in the impl field of the rosidl_runtime_cpp::Buffer<T> class.

The native handle from the output message is also extracted, so it can be used with the native interfaces provided.

Afterwards, it performs some native operations on the input data, and assigns the result of that operation to the output data. Note that this is happening on the vendor native data types, but since the handles are linked to the buffers, the results show up in the output message without additional code.

Finally, the output message is published the same as any other ROS2 message. rmw then takes care of vendor-specific serialization, see the following sections on details of that process.

This design keeps any vendor-specific code completely out of rclcpp. All that rclcpp sees and links against is the generic rosidl_runtime_cpp::Buffer<T> class, which has no direct ties to any specific vendor. Hence there is no need for rclcpp to even know about all vendor backends that exist.

It also allows vendors to provide specific interfaces for their respective platforms, allowing them to implement allocation and handling schemes particular to their underlying systems.

A similar type would exist for rclpy and rclrs. We anticipate both of those to be easier to implement thanks to the duck-typing facilities in rclpy and the traits-based object system in rclrs, respectively, which make it much easier to build drop-in compatible systems.

Backends as plugins

Backends are implemented as plugins using ROS’s pluginlib. On startup, each rmw instance scans for available backend-compatible plugins on the system, and registers them through pluginlib.

A standard implementation of a backend using CPU memory to offer std::vector<T> compatibility is provided by default through the ROS2 distribution, to ensure that there is always a CPU implementation available.

Additional vendor-specific plugins are implemented by the respective hardware vendors. For example, NVIDIA would implement and provide a CUDA backend, while AMD might implement and provide a ROCm backend.

Backends can either be distributed as individual packages, or be pre-installed on the target hardware. As an example, the NVIDIA Jetson systems would likely have a CUDA backend pre-installed as part of their system image.

Instances of rosidl_runtime_cpp::Buffer<T> are tied to a particular backend at allocation time, as illustrated in the section above.
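
Concretely, registering a backend would follow the usual pluginlib pattern. A minimal sketch, where the BufferBackend base class, its header path, and its method names are placeholders for whatever the final design specifies:

#include <cstddef>
#include <cstdlib>

#include <pluginlib/class_list_macros.hpp>
// Hypothetical header for the generic backend base class.
#include <rosidl_runtime_cpp/buffer_backend.hpp>

namespace cpu_backend
{
// A reference backend that keeps data in ordinary CPU memory.
class CpuBufferBackend : public rosidl_runtime_cpp::BufferBackend
{
public:
  void * allocate(std::size_t size) override {return std::malloc(size);}
  void deallocate(void * ptr) override {std::free(ptr);}
};
}  // namespace cpu_backend

// Export the class so rmw can discover and load it through pluginlib at startup.
PLUGINLIB_EXPORT_CLASS(cpu_backend::CpuBufferBackend, rosidl_runtime_cpp::BufferBackend)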

Integration with rmw

rmw implementations can choose to integrate with vendor backends to provide accelerated transports through the backends. rmw implementations that do not choose to integrate with backends, or any existing legacy backends, automatically fall back onto converting all data to CPU data, and will continue working without any changes.

A rmw implementation that chooses to integrate with vendor backends does the following. At graph startup when publishers and subscribers are being created, each endpoint shares a list of installed backends, alongside vendor-specific data to establish any required side channels, and establishes dedicated channels for passing backend-enabled messages based on 4 different data points:

rmw can choose any mechanism it wants to perform this task, since this step is happening entirely internal to the currently loaded rmw implementation. Side channel creation is entirely hidden inside the vendor plugins, and not visible to rmw.

For publishing a message type that contains buffer-typed fields, if the publisher and the subscriber(s) share the same supported backend list, and there is a matching serialization method implemented in the backend for the distance to the subscriber(s), then instead of serializing the payload of the buffer bytewise, the backend can choose to use a custom serialization method instead.

The backend is then free to serialize into a ROS message type of its choice. This backend-custom message type is called a descriptor. It should contain all information the backend needs to deserialize the message at the subscriber side, and reconstruct the buffer. This descriptor message may contain pointer values, virtual memory handles, IPC handles or even the raw payload if the backend chooses to send that data through rmw.

The descriptor message can be inspected as usual if desired since it is just a normal ROS2 message, but deserializing requires the matching backend. However, since the publisher knows the backends available to the subscriber(s), it is guaranteed that a subscriber only receives a descriptor message if it is able to deserialize it.
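
As an illustration, such a descriptor could be an ordinary message definition along these lines (all field names here are invented; each backend defines its own):

# VendorBufferDescriptor.msg (hypothetical)
uint32 device_id      # which accelerator currently holds the payload
uint64 ipc_handle     # vendor IPC / virtual-memory handle to the device buffer
uint64 size_bytes     # payload size in bytes
uint8[] raw_payload   # optionally, the raw bytes if no side channel is available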

Integration with rosidl

While the above sections show the implications visible in rclcpp, the bulk of the changes necessary to make that happen go into rosidl. It is rosidl that is generating the C++ message structures, and hence rosidl that would map to the Buffer type instead of std::vector. Hence the bulk of the work done in order to get this scheme to work is done in rosidl, not in rclcpp.

Layering semantics on top

Having only a buffer is not very useful, as most robotics data has higher level semantics, like images, tensors, point clouds, etc.

However, all of those data types ultimately map to one or more large, contiguous regions of memory, in CPU or accelerator memory.

We also observe that a healthy ecosystem of higher level abstractions already exists. There is PCL for point clouds, Torch for tensor handling, etc. Hence, we propose to not try to replicate those ecosystems in ROS, but instead allow those ecosystems to bridge into ROS, and use the buffer abstraction as their backend for storage and transport.

As a demonstration of this, we are providing a Torch backend that allows linking (Py)Torch tensors to the ROS buffers. This allows users to use the rich ecosystem of Torch to perform tensor operations, while relying on the ROS buffers to provide accelerator-native storage and zero-copy transport between nodes, even across processes and chips if supported by the backend.

The Torch backend does not provide a raw buffer type itself, but relies on vendors implementing backends for their platforms (CUDA, ROCm, TPUs etc.). The Torch backend then depends on the vendor-specific backends, and provides the binding of the low-level buffers to the Torch tensors. The coupling between the Torch backend and the hardware vendor buffer types is loose, it is not visible from the node’s code, but is established after the fact.

From a developer’s perspective, all of this is hidden. All a developer writing a Node does is to interact with a Torch buffer, and it maps to the correct backend available on the current hardware automatically. An example of such a code could look like this:

void topic_callback(const msg::MessageWithTensor & input_msg) {
  // extract tensor from input message
  torch::Tensor input_tensor =
    torch_backend::from_buffer(input_msg.tensor);

  // allocate output message
  msg::MessageWithTensor output_msg =
    torch_backend::allocate<msg::MessageWithTensor>();

  // get handle to allocated output tensor
  torch::Tensor & output_tensor =
    torch_backend::from_buffer(output_msg.tensor);

  // perform some torch operations
  output_tensor = torch::abs(input_tensor);

  // publish message as usual
  publisher_.publish(output_msg);
}

Note how this code segment is using Torch-native datatypes (torch::Tensor), and is performing Torch-native operations on the tensors (in this case, torch::abs). There is no mention of any hardware backend in the code.

By keeping the coupling loose, this node can run unmodified on NVIDIA, AMD, TPU or even CPU hardware, with the framework, in this case Torch, being mapped to the correct hardware, and receiving locally available accelerations for free.

Prior work

NITROS

https://docs.nvidia.com/learning/physical-ai/getting-started-with-isaac-ros/latest/an-introduction-to-ai-based-robot-development-with-isaac-ros/05-what-is-nitros.html

NITROS is NVIDIA’s implementation of a similar design based on type negotiation. It is specific to NVIDIA and not broadly compatible, nor is it currently possible to layer hardware-agnostic frameworks like Torch on top.

AgnoCast

https://github.com/tier4/agnocast

AgnoCast creates a zero-copy regime for CPU data. However, it is limited to CPU data, and does not have a plugin architecture for accelerator memory regions. It also requires kernel modifications, which some may find intrusive.

Future work

NVIDIA has been working on this proposal, alongside a prototype implementation that implements support for the mechanisms described above. We are working on CPU, CUDA and Torch backends, as well as integration with the Zenoh rmw implementation.

The prototype will move into a branch on the respective ROS repositories in the next two weeks, and continue development into a full-fledged implementation in public.

In parallel, a dedicated working group tasked with formalizing this effort is being formed, with the goal of reaching consensus on the design, and getting the required changes into ROS2 Lyrical.

5 posts - 4 participants

Read full topic

https://discourse.openrobotics.org/t/update-on-ros-native-buffers/51771

ROS Discourse General: Pixi as a co-official way of installing ROS on Linux

It’s that time of the year when someone with too much spare time on their hands proposes a radical change to the way ROS is distributed and built. This time, it’s my turn.

So let me start by acknowledging that without all the tooling the ROS community has developed over the years (rosdep, bloom, the buildfarm - donate if you can, I did! -, colcon, etc.) we wouldn’t be here. Twenty, even ten years ago, it was almost impossible to run a multilanguage, federated, distributed project without these tools; nothing like that existed! So I’m really grateful for all that.

However, the landscape is different now. We now have projects like Pixi, conda-forge and so on.

As per the title of my post, I’m proposing that Pixi should not only be the recommended way of installing ROS 2 on Windows, but also on Linux, or at least co-recommended, for ROS 2 Lyrical Luth and onwards.

One of the first challenges that new users of ROS face is learning a new build tool and a development workflow that is ROS-specific. Although historically we really needed to develop all the tools I’ve mentioned, the optics of having our own build tool and package management system don’t help, given the perception that some users still have of ROS as a silo that doesn’t play nice with the outside world.

The main two tools that a user can replace with Pixi are colcon and rosdep, and to some extent bloom.
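
To make this concrete, a RoboStack-based ROS 2 workspace under Pixi looks roughly like this today (channel and package names follow RoboStack's Humble packaging and may differ for other distros; a sketch, not official guidance):

pixi init my_ros_ws && cd my_ros_ws
pixi project channel add robostack-staging
pixi add ros-humble-desktop colcon-common-extensions
pixi shell   # enters an environment with ROS 2 on the PATH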

I’ve been using Pixi for over a year for my own projects (some use ROS, some don’t), and the experience couldn’t have been better:

Also, from the ROS side, this would reduce the burden of maintaining the buildfarm, the infrastructure, all the tools, etc., but that’s probably too far in the future, and realistically it’d take a while, if there’s consensus, to replace it with something else.

Over the years, like the good open-source citizens we are, we have collaborated with other projects outside the ROS realm. For example, instead of rolling our own transport like we had in ROS 1, we’ve worked with FastDDS, OpenSplice, CycloneDDS, and now Zenoh. I’d say this has been quite symbiotic and we’ve helped each other. I believe collaborating with the Pixi and RoboStack projects would be extremely beneficial for everyone involved.

@ruben-arts can surely say more about the benefits of using Pixi for ROS

21 posts - 9 participants

Read full topic

https://discourse.openrobotics.org/t/pixi-as-a-co-official-way-of-installing-ros-on-linux/51764

ROS Discourse General: Ferronyx – Real-Time ROS2 Observability & Automated RCA

We’ve been building robots with ROS2 for years, and we hit the same wall every time a robot fails in production:

The debugging process:

This takes 3-4 hours. Every time.

The problem: ROS gives you raw telemetry, but zero intelligence connecting infrastructure metrics + ROS topology + deployment history. You’re manually stitching pieces together.

So we built Ferronyx to be that intelligence layer.

What we did:

Real results from our beta customers:

We’re looking for 8-12 more teams to beta test and help us refine this. We want teams that:

Free beta access. You help shape the product, we learn what breaks.

If you’re dealing with robot reliability headaches, reply here or send a DM. Would genuinely love to hear your toughest debugging stories.

Links:
https://ferronyx.com/

3 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/ferronyx-real-time-ros2-observability-automated-rca/51747

ROS Discourse General: ROS 2 Rust Meeting: January 2026

The next ROS 2 Rust Meeting will be Mon, Jan 12, 2026 2:00 PM UTC

The meeting room will be at https://meet.google.com/rxr-pvcv-hmu

In the unlikely event that the room needs to change, we will update this thread with the new info!

Agenda:

  1. Changes to generated message consumption (https://github.com/ros2-rust/ros2_rust/pull/556)
  2. Upgrade to Rust 1.85 (build!: require rustc 1.85 and Rust 2024 edition by esteve · Pull Request #566 · ros2-rust/ros2_rust · GitHub)
  3. Migration from Element to Zulip chat (Open Robotics launches Zulip chat server)

2 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/ros-2-rust-meeting-january-2026/51726

ROS Discourse General: Easier Protobuf and ROS 2 Integration

For anyone integrating ROS 2 with Protobuf-based systems, we at the RAI Institute want to highlight one of our open-source tools: proto2ros!

proto2ros generates ROS 2 message definitions and bi-directional conversion code directly from .proto files, reducing boilerplate and simplifying integration between Protobuf-based systems and ROS 2 nodes.
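
To illustrate the idea: a Protobuf message Temperature with a single field double degrees = 1 would be turned into a ROS 2 message roughly like the one below, along with conversion functions in both directions (an illustrative sketch; see the repository for the exact generated output):

# Temperature.msg, generated from the Temperature proto (illustrative)
float64 degrees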

Some highlights:

It is currently available for both Humble and Jazzy and can be installed with
apt install ros-<distro>-proto2ros

Check out the full repo here: https://github.com/bdaiinstitute/proto2ros

Thanks to everyone who has contributed to this project, including @hidmic, @khughes1, and @jbarry!
As always, feedback and contributions are welcome!

The RAI Institute

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/easier-protobuf-and-ros-2-integration/51712

ROS Discourse General: ROSCon Review Continued | Cloud Robotics WG Meeting 2026-01-14

Please come and join us for this coming meeting at Wed, Jan 14, 2026, 4:00 PM to 5:00 PM UTC, where we plan to dive deeper into the ROSCon talks collected during the last session. By examining more details about the talks, we can highlight any that are relevant to Logging & Observability, the current focus of the group. We can also pull out interesting tips to release as part of a blog post.

The details for the talks have been gathered into the Links/Notes column of this document. Please feel free to read ahead and take a look at the notes and videos ahead of the meeting, if you’re interested.

The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/roscon-review-continued-cloud-robotics-wg-meeting-2026-01-14/51710

ROS Discourse General: Goodbye RQt, Hello RQml [NEW RELEASE]

RQml announcement video

Greetings fellow roboticists,

During our transition to ROS 2 and the build of our new robot Athena, we’ve encountered quite a few issues, both in ROS 2 and its middleware, and also with rqt.
For instance, when testing our manipulator, we noticed that the ControllerManager in rqt gives you around 20 seconds before the application freezes completely when used over WiFi.
This is not the only issue, but that’s also not the point of this post.

You could chime in and say, “Hey, you could’ve fixed that and made a PR :index_pointing_up:”, and you would be right, and we did this in several instances.
But I’m not a fan of using Python for UI, and this presented the perfect opportunity to demonstrate how easy it is to create a nice ROS interface using my QML ROS 2 module.
So, instead, I’ve spent that time quickly developing a modern alternative, fixing all the issues that bothered me in rqt.

:waving_hand: Hello RQml :rocket:

Please note that this is still in beta and not all plugins exist yet.
You are very welcome to point me to the ones that you think would be great to have, or even implement them yourself and make a PR :blush:

Currently, the following plugins are available:

Notably, the ImageView now also uses transparency for depth image values that are not valid (instead of using black, which also represents very close values).

As always, I hope this is of interest to you, and I would love to hear from you if you build something cool with this :rocket:
If it wasn’t, my little turtle buddy will be very disappointed because he already considered you a special friend :worried:

3 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/goodbye-rqt-hello-rqml-new-release/51697

ROS Discourse General: New packages for Humble Hawksbill 2026-01-07

Package Updates for Humble

Added Packages [27]:

Updated Packages [390]:

Removed Packages [7]:

Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/new-packages-for-humble-hawksbill-2026-01-07/51696

ROS Industrial: ROSCon 2025 & RIC-AP Summit 2025 Blog Series: Singapore’s Defining Week for Open-Source Robotics

As we look back on 2025, this blog is a recap of one of the most impactful weeks for open-source robotics in the Asia-Pacific region.

On 30 October, the RIC-AP Summit expanded beyond conference halls into the real world with a series of curated site tours across Singapore. These tours showcased how ROS and Open-RMF are not just concepts but living deployments across manufacturing, healthcare, and smart infrastructure.

If the Summit sessions were about vision and strategy, the tours were about seeing robotics in motion—from factory floors to hospitals, airports, and digital districts.

Importantly, the tours brought together participants from different companies and countries, reflecting the truly international nature of the ROS-Industrial community and the collaborative spirit of Asia Pacific’s robotics ecosystem.

1. ROS in Manufacturing: SIMTech & ARTC + Black Sesame Technologies, Singapore Polytechnic

SIMTech & ARTC

Singapore Polytechnic – Robotics, Automation and Control (RAC) Hub

2. RMF Deployment in Healthcare & Reconfigurable Robotics: CHART, SUTD

CHART – Centre for Healthcare Assistive & Robotics Technology (CGH)

SUTD – Reconfigurable Robotics Showcase


3. RMF/ROS Deployments: CAG, CPCA, KABAM Robotics, Punggol Digital District – Panasonic

Panasonic – Fleet Management with RMF

KABAM Robotics

Changi Airport Group (CAG)

CPCA – Hospitality Robotics Integration

RIC-AP Summit Tour 2025: Key Takeaways

The tours underscored a powerful message: Singapore is not just hosting conversations about robotics—it is living them. From labs to live deployments, the RIC-AP Summit tours demonstrated how open-source robotics is shaping industries, communities, and everyday life.

https://rosindustrial.org/news/2026/1/6/roscon-2025-amp-ric-ap-summit-2025-blog-series-singapores-defining-week-for-open-source-robotics

ROS Discourse General: High frequency log persistence on Jetson Orin (Rosbag alternative?)

Hi everyone,

My team has been working on a storage engine specifically optimized for the Jetson Orin architecture to handle high-bandwidth sensor streams (lidar/cameras) that tend to choke rosbag record or MCAP writing at the edge.

The main architectural difference is that we bypass the kernel page cache and stream directly to NVMe using custom drivers. We are seeing sustained writes of ~1 GB/s with <10 µs latency on Orin AGX, while ensuring persistence even during power cuts (no RAM buffer loss).

We are looking for 3-5 teams running ROS 2 on hardware to test a binary adapter we wrote. It exposes a standard ROS 2 subscriber but pipes the data into our crash-proof storage instead of the standard recorder.

If you are hitting bottlenecks with dropped messages at high frequency or struggling with data corruption on power loss, this might solve it.

DM me or reply here and I can send over the binary for aarch64.

4 posts - 3 participants

Read full topic

https://discourse.openrobotics.org/t/high-frequency-log-persistence-on-jetson-orin-rosbag-alternative/51657

ROS Discourse General: Best practices for thermal camera intrinsics (FLIR A400) in sensor fusion

I’m working with a FLIR A400 thermal camera as part of a sensor-fusion pipeline (thermal + radar / LiDAR).

I’ve found that, unlike RGB cameras, FLIR does not expose factory intrinsics, and traditional OpenCV checkerboard calibration has proven unreliable due to thermal-contrast limitations.

I wanted to start a discussion on what practitioners typically do in this case:

I’m especially interested in what has worked in real robotic systems rather than textbook calibration.

Looking forward to hearing how others approach this.

8 posts - 5 participants

Read full topic

https://discourse.openrobotics.org/t/best-practices-for-thermal-camera-intrinsics-flir-a400-in-sensor-fusion/51651

