Planet ROS
Planet ROS - http://planet.ros.org
ROS Discourse General: Native ROS 2 Jazzy Debian packages for Raspberry Pi OS / Debian Trixie (arm64)
After spending some time trying to get ROS 2 Jazzy working reliably on Raspberry Pi via Docker and Conda (and losing several rounds to OpenGL, Gazebo plugins, and cross-arch issues), I eventually concluded:
On Raspberry Pi, ROS really only behaves when it’s installed natively.
So I built the full ROS 2 Jazzy stack as native Debian packages for Raspberry Pi OS / Debian Trixie (arm64), using a reproducible build pipeline:
- bloom → dpkg-buildpackage → sbuild → reprepro
- signed packages
- rosdep-compatible
The result:
- Native ROS 2 Jazzy on Pi OS / Debian Trixie
- Uses system Mesa / OpenGL
- Gazebo plugins load correctly
- Cameras, udev, and ros2_control behave
- Installable via plain apt
Public APT repository
GitHub - rospian/rospian-repo: ROS2 Jazzy on Raspberry OS Trixie debian repo
Build farm (if you want to reproduce or extend it)
Includes the full mini build-farm pipeline.
This was motivated mainly by reliability on embedded systems and multi-machine setups (Gazebo on desktop, control on Pi).
Feedback, testing, or suggestions very welcome.
2 posts - 2 participants
ROS Discourse General: Ros2_yolos_cpp High-Performance ROS 2 Wrappers for YOLOs-CPP [All models + All tasks]
Hi everyone!
I’m the author of ros2_yolos_cpp and YOLOs-CPP. I’m excited to share the first public release of this ROS 2 package!
Repository: ros2_yolos_cpp
What Is ros2_yolos_cpp?
ros2_yolos_cpp is a production-ready ROS 2 interface for the YOLOs-CPP inference engine — a high-performance, unified C++ library for YOLO models (v5 through v12 and YOLO26) built on ONNX Runtime and OpenCV.
This package provides composable and lifecycle-managed ROS 2 nodes for real-time:
- Object Detection
- Instance Segmentation
- Pose Estimation
- Oriented Bounding Boxes (OBB)
- Image Classification
All powered through ONNX models and optimized for both CPU and GPU inference.
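Downstream consumption uses standard messages. Here is a minimal sketch of a subscriber reading the detector output; the topic name /detections and the QoS choice are assumptions, so check the package's launch files for the real values:

#include <rclcpp/rclcpp.hpp>
#include <vision_msgs/msg/detection2_d_array.hpp>

class DetectionLogger : public rclcpp::Node
{
public:
  DetectionLogger() : Node("detection_logger")
  {
    // Subscribe to the detector output; sensor-data QoS suits streaming results.
    sub_ = create_subscription<vision_msgs::msg::Detection2DArray>(
      "/detections", rclcpp::SensorDataQoS(),
      [this](const vision_msgs::msg::Detection2DArray & msg) {
        for (const auto & det : msg.detections) {
          if (!det.results.empty()) {
            RCLCPP_INFO(
              get_logger(), "class=%s score=%.2f",
              det.results.front().hypothesis.class_id.c_str(),
              det.results.front().hypothesis.score);
          }
        }
      });
  }

private:
  rclcpp::Subscription<vision_msgs::msg::Detection2DArray>::SharedPtr sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<DetectionLogger>());
  rclcpp::shutdown();
  return 0;
}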
Key Features
- ROS 2 Lifecycle Nodes: full support for the ROS 2 managed node lifecycle (configure, activate, etc.; see the sketch after this list)
- Composable Nodes: efficient multi-model, multi-node setups in a single process
- Zero-Copy Image Transport: optimized subscriptions for high-throughput video pipelines
- All Major Vision Tasks: detection, segmentation, pose, OBB, and classification in one stack
- Standardized ROS 2 Messages: uses vision_msgs and custom OBB types for interoperability
- Production-Ready: CI/CD workflows, strict parameters, and reusable launch configurations
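Because the nodes are lifecycle-managed, they can be driven externally through the standard lifecycle services. A minimal sketch (the node name /yolo_detector is a placeholder, not necessarily the package's actual default):

#include <cstdint>
#include <memory>
#include <lifecycle_msgs/msg/transition.hpp>
#include <lifecycle_msgs/srv/change_state.hpp>
#include <rclcpp/rclcpp.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("lifecycle_driver");
  // Every lifecycle node exposes a <node_name>/change_state service.
  auto client = node->create_client<lifecycle_msgs::srv::ChangeState>(
    "/yolo_detector/change_state");
  client->wait_for_service();
  // Walk the node through configure, then activate.
  for (uint8_t transition_id :
       {lifecycle_msgs::msg::Transition::TRANSITION_CONFIGURE,
        lifecycle_msgs::msg::Transition::TRANSITION_ACTIVATE}) {
    auto request = std::make_shared<lifecycle_msgs::srv::ChangeState::Request>();
    request->transition.id = transition_id;
    auto future = client->async_send_request(request);
    rclcpp::spin_until_future_complete(node, future);
  }
  rclcpp::shutdown();
  return 0;
}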
Regards,
1 post - 1 participant
ROS Discourse General: The Canonical Observability Stack with Guillaume Beuzeboc | Cloud Robotics WG Meeting 2026-01-28
For this coming session on Wed, Jan 28, 2026, 4:00-5:00 PM UTC, the CRWG has invited Guillaume Beuzeboc from Canonical to present on the Canonical Observability Stack (COS). COS is a general observability stack for devices such as drones, robots, and IoT devices. It operates from telemetry data, and the COS team has extended it to support robot-specific use cases. Guillaume, a software engineer at Canonical, previously presented COS at ROSCon 2025 and has kindly agreed to join this meeting to discuss additional technical details with the CRWG.
At the previous meeting, the CRWG continued its review of the ROSCon 2025 talks, focusing on identifying the sessions most relevant to Logging and Observability. A blog post summarizing our findings will be published in the coming weeks. If you would like to watch the latest review meeting, it is available on YouTube.
The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.
Hopefully we will see you there!
1 post - 1 participant
ROS Discourse General: Deployment and Implementation of RDA_planner
We reproduce the RDA Planner project from the IEEE paper RDA: An Accelerated Collision-Free Motion Planner for Autonomous Navigation in Cluttered Environments. We provide a step-by-step guide to help you quickly reproduce the paper's RDA path planning algorithm, enabling efficient obstacle avoidance for autonomous navigation in complex environments.
Abstract
RDA Planner is a high-performance, optimization-based Model Predictive Control (MPC) motion planner designed for autonomous navigation in complex and cluttered environments. By leveraging the Alternating Direction Method of Multipliers (ADMM), RDA decomposes complex optimization problems into several simple subproblems.
This project is the open-source development of the RDA_ROS autonomous navigation project, proposed by researchers from the University of Hong Kong, Southern University of Science and Technology, University of Macau, Shenzhen Institutes of Advanced Technology of the Chinese Academy of Sciences, and Hong Kong University of Science and Technology (Guangzhou). It is developed based on the AgileX Limo simulator. Relevant papers have been published in IEEE Robotics and Automation Letters and IEEE Transactions on Mechatronics.
RDA planner: GitHub - hanruihua/RDA-planner: [RA-Letter 2023] RDA: An Accelerated Collision Free Motion Planner for Autonomous Navigation in Cluttered Environments
RDA_ROS: GitHub - hanruihua/rda_ros: ROS Wrapper of RDA planner
Tags
limo, RDA_planner, path planning
Repositories
- Navigation Repository: GitHub - agilexrobotics/Agilex-College: Agilex College
- Project Repository: https://github.com/agilexrobotics/limo/RDA_planner.git
Environment Requirements
System: Ubuntu 20.04
ROS version: Noetic
Python version: 3.9
Deployment Process
1. Download and Install Conda
Choose Anaconda or Miniconda based on your system storage capacity
After downloading, run the following commands to install:
- Miniconda: bash Miniconda3-latest-Linux-x86_64.sh
- Anaconda: bash Anaconda-latest-Linux-x86_64.sh
2. Create and Activate Conda Environment
conda create -n rda python=3.9
conda activate rda
3. Download RDA_planner
mkdir -p ~/rda_ws/src
cd ~/rda_ws/src
git clone https://github.com/hanruihua/RDA_planner
cd RDA_planner
pip install -e .
4. Download Simulator
pip install ir-sim
5. Run Examples in RDA_planner
cd RDA_planner/example/lidar_nav
python lidar_path_track_diff.py
The running effect is consistent with the official README.

Deployment Process of rda_ros
1. Install Dependencies in Conda Environment
conda activate rda
sudo apt install python3-empy
sudo apt install ros-noetic-costmap-converter
pip install empy==3.3.4
pip install rospkg
pip install catkin_pkg
2. Download Code
cd ~/rda_ws/src
git clone https://github.com/hanruihua/rda_ros
cd ~/rda_ws && catkin_make
cd ~/rda_ws/src/rda_ros
sh source_setup.sh && source ~/rda_ws/devel/setup.sh && rosdep install rda_ros
3. Download Simulation Components
This step will download two repositories: limo_ros and rvo_ros
limo_ros: robot model for simulation
rvo_ros: cylindrical obstacles used in the simulation environment
cd rda_ros/example/dynamic_collision_avoidance
sh gazebo_example_setup.sh
4. Run Gazebo Simulation
Run via Script
cd rda_ros/example/dynamic_collision_avoidance
sh run_rda_gazebo_scan.sh
Run via Individual Commands
Launch the simulation environment:
roslaunch rda_ros gazebo_limo_env10.launch
Launch RDA_planner:
roslaunch rda_ros rda_gazebo_limo_scan.launch

1 post - 1 participant
ROS Discourse General: iRobot's ROS benchmarking suite now available!
We’ve just open-sourced our ROS benchmarking suite! Built on top of iRobot’s ros2-performance framework, this is a containerized environment for simulating arbitrary ROS2 systems and graph configurations, both simple and complex, comparing the performance of various RMW implementations, and identifying performance issues and bottlenecks.
- Support for jazzy, kilted and rolling
- Fully containerized, with experimental support for ARM64 builds through docker bake
- Container includes fastdds, cyclonedds and zenoh out of the box.
- In-depth statistical analysis / performance graphs, wrapped up in a pretty PDF report (3.8 MB example linked in the original post)
Are you building a custom RMW or ROS executor not included in this tooling, and want to compare against the existing implementations? We provide instructions and examples for how to add them to this suite.
Huge shoutout to Leonardo Neumarkt Fernandez for owning and driving the development of this benchmarking suite!
2 posts - 2 participants
ROS Discourse General: Can anyone recommend a C++ GUI framework where I can embed or integrate a 3D engine?
I know that Qt makes this possible natively with Qt3D, but I didn’t find any examples demonstrating that I can rotate and view models in Qt3D with the mouse. There are also a lot of 3D engines which provide integration with Qt; they are listed here. But I don’t want to try each of them; maybe someone already knows which one is suitable for me.
I am using C++ for everything, so it is better to use C++ for easier integration, but Rust and Python are also acceptable.
I am a big fan of Open3D, so if somebody knows how to integrate it with some GUI framework, I will be glad.
3 posts - 2 participants
ROS Discourse General: Announcement: rclrs 0.7.0 Release
We’re happy to announce the release of rclrs v0.7.0!
Just like v0.6.0 landed right before ROSCon in Singapore, this release is arriving just in time for FOSDEM at the end of the month. Welcome to Conference-Driven Development (CDD)!
If you’re attending FOSDEM, come check out my talk on ros2-rust in the Robotics & Simulation devroom.
What’s New
Dynamic Messages
This release adds support for dynamic message publishers and subscribers. You can now work with ROS 2 topics without compile-time knowledge of message types, enabling tools like rosbag recorders, topic inspection utilities, and message bridges to be written entirely in Rust.
Best Available QoS
Added support for best available QoS profiles. Applications can now automatically negotiate quality of service settings when connecting to existing publishers or subscribers.
Other Changes
- Fixed mismatched lifetime syntax warnings
- Fixed duplicate typesupport extern declarations
Breaking Changes
- Minimum Rust version is now 1.85
For the next release, we are planning to switch to Rust 2024, but wanted to give enough notice.
Contributors
A huge thank you to everyone who contributed to this release! Your contributions make ros2-rust better for the entire community.
- Esteve Fernández
- Geoff Sokoll
- Jacob Hassold
- Kimberly N. McGuire
- Luca Della Vedova
- Michael X. Grey
- Nikolai Morin
- Sam Privett
Links
- GitHub: GitHub - ros2-rust/ros2_rust: Rust bindings for ROS 2
- Examples: GitHub - ros2-rust/examples: Example packages for ros2-rust
- Changelog: ros2_rust/rclrs/CHANGELOG.md at main · ros2-rust/ros2_rust · GitHub
As always, we welcome feedback and contributions!
1 post - 1 participant
ROS Discourse General: LinkForge: Robot modeling does not have to be complicated
I recorded a short video to show how easy it is to build a simple mobile robot with LinkForge, a Blender extension designed to bridge the gap between 3D modeling and robotics simulation.
All in a few straightforward steps.
The goal is simple: remove friction from robot modeling so engineers can focus on simulation, control, and behavior, not file formats and repetitive setup.
If you are working with ROS or robot simulation and want a faster, cleaner workflow, this is worth a look.
Blender Extensions: https://extensions.blender.org/add-ons/linkforge/
GitHub: https://github.com/arounamounchili/linkforge
Documentation: https://linkforge.readthedocs.io/
1 post - 1 participant
ROS Industrial: First of 2026 ROS-I Developers' Meeting Looks at Upcoming Releases and Collaboration
The ROS-Industrial Developers’ Meeting provided updates on open-source robotics tools, with a focus on advancements in Tesseract, helping developers still using MoveIt 2, and Trajopt. These updates underscore the global push to innovate motion planning, perception, and tooling systems for industrial automation. Key developments revolved around stabilizing existing frameworks, improving performance, and leveraging modern technologies like GPUs for acceleration.
The Tesseract project, designed to address traditional motion planning tools' limitations, is moving steadily toward a 1.0 release. With about half of the work complete, remaining tasks include API polishing, unit test enhancements, and transitioning the motion planning pipeline to a plugin-based architecture. Tesseract is also integrating improved collision checkers and tools like the Task Composer, which supports modular backends, making it more adaptable for high-complexity manufacturing tasks.
On the MoveIt 2 front, ongoing community support will be critical as the prior support team shifts to the commercial MoveIt Pro. To ensure Tesseract maintainability, updates include the migration of documentation directly into repositories via GitHub. This step simplifies synchronization between code and documentation, helping developers maintain robust, open-source solutions. There are plans to provide migration tutorials for those wanting to investigate Tesseract if MoveIt 2 is not meeting their development needs and they are not ready to move to MoveIt Pro. The ability to utilize MoveIt 2 components within Tesseract is also being investigated.
Trajopt, another critical component of the Tesseract ecosystem, is undergoing a rewrite to better handle complex trajectories and cost constraints. The new version, expected within weeks, will enable better time parameterization and overall performance improvements. Discussions also explored GPU acceleration, focusing on opportunities to optimize constraint and cost calculations using emerging GPU libraries, though some modifications will be needed to fully realize this potential.
Toolpath optimization also gained attention, with updates on the noether repository, which supports industrial toolpath generation and reconstruction. While still a work in progress, noether is set to play a pivotal role in enabling advanced workflows once the planned updates are implemented.
As the meeting concluded, contributors emphasized the importance of community engagement to further modernize and refine these tools. Upcoming events across Europe and Asia will foster collaboration and showcase advancements in the ROS-Industrial ecosystem. This collective effort promises to drive a smarter, more adaptable industrial automation landscape, ensuring open-source solutions stay at the forefront of global manufacturing innovation.
The next Developers' Meeting is slated to be hosted by the ROS-I Consortium EU. You can find all the info for Developers' Meetings over at the Developer Meeting page.
ROS Discourse General: Simple status webpage for a robot in localhost?
Hi, I’m just collecting info on how you’re building simple status pages that run locally on robots and show basic information like battery status, driver status, sensor health, etc. Nothing fancy like camera streaming or teleoperation. No cloud, everything local!
The use-case is just being able to quickly connect to a robot AP and see the status of important things. This can of course be done via rqt or remote desktop, but a status webpage is much more accessible from phones, tablets etc.
I’ve seen statically generated pages with autoreload (easiest to implement, but very custom).
I guess some people have something on top of rosbridge/RobotWebTools, right? But I haven’t found much info about this.
Introducing Robotics UI: A Web Interface Solution for ROS 2 Robots - sciota robotics seemed interesting, but it never made it past 8 commits…
So what do you use?
Is there some automatic /diagnostics_agg → HTML+JS+WS framework?
And no, I don’t count Foxglove, because self-hosting costs… who knows what.
12 posts - 6 participants
ROS Discourse General: Tbai - towards better athletic intelligence
Introducing tbai, a framework designed to democratize robotics and embodied AI and to help us move towards better athletic intelligence.

Drawing inspiration from Hugging Face (more specifically lerobot), tbai implements and makes fully open-source countless state-of-the-art methods for controlling various sorts of robots, including quadrupeds, humanoids, and industrial robotic arms.
With its well-established API and levels of abstraction, users can easily add new controllers while reusing the rest of the infrastructure, including utilities for time synchronization, visualization, config interaction, and state estimation, to name a few.
Everything is built out of lego-like components that can be seamlessly combined into a single, high-performing robot controller pipeline. Its wide pool of already implemented state-of-the-art controllers (many from Robotic Systems Lab), state estimators, and robot interfaces, together with simulation or real-robot deployment abstractions, allows anyone using tbai to easily start playing around and working on novel methods, using the existing framework as a baseline, or to change one component while keeping the rest, thus accelerating the iteration cycle.
No more starting from scratch, no more boilerplate code. Tbai takes care of all of that.
Tbai seeks to support as many robotic platforms as possible. Currently, there are nine robots that have at least one demo prepared, with many more to come. Specifically, we have controllers readily available for ANYmal B, ANYmal C, and ANYmal D from ANYbotics; Go2, Go2W, and G1 from Unitree Robotics; Franka Emika from Franka Robotics; and finally, Spot and Spot with arm from Boston Dynamics.
Tbai is an ongoing project that will continue making strides towards democratizing robotics and embodied AI. If you are a researcher or a tinkerer who is building cool controllers for a robot, be it an already supported robot or a completely new one, please do consider contributing to tbai so that as many people can benefit from your work as possible.
Finally, a huge thanks goes to all researchers and tinkerers who do robotics and publish papers together with their code for other people to learn from. Tbai would not be where it is now if it weren’t for the countless open-source projects it has drawn inspiration from. I hope tbai becomes an inspiration for other projects too.
Thank you all!
Link: https://github.com/tbai-lab/tbai
Link: https://github.com/tbai-lab/tbai_ros
3 posts - 2 participants
ROS Discourse General: [Humble] Upcoming behavior change: Improved log file flushing in rcl_logging_spdlog
Summary
The ROS PMC has approved backporting an improved log file flushing behavior to ROS 2 Humble. This change will be included in an upcoming Humble sync and affects how rcl_logging_spdlog flushes log data to the filesystem.
What’s Changing?
Previously, rcl_logging_spdlog did not explicitly configure flushing behavior, which could result in:
- Missing log messages when an application crashes
- Empty or incomplete log files during debugging sessions
After this update, the logging behavior will:
- Flush log files every 5 seconds (periodic flush)
- Immediately flush on ERROR level messages (flush on error)
This provides a much better debugging experience, especially when investigating crashes or unexpected application terminations.
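In plain spdlog terms, the new default corresponds roughly to the following configuration (an illustrative sketch, not the literal rcl_logging_spdlog source):

#include <chrono>
#include <spdlog/spdlog.h>
#include <spdlog/sinks/basic_file_sink.h>

int main()
{
  // File logger comparable to the one rcl_logging_spdlog creates.
  auto logger = spdlog::basic_logger_mt("ros_log", "/tmp/example.log");
  spdlog::flush_every(std::chrono::seconds(5));  // periodic flush every 5 s
  logger->flush_on(spdlog::level::err);          // immediate flush on ERROR
  logger->info("buffered; written out within 5 seconds");
  logger->error("flushed to disk immediately");
  return 0;
}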
Compatibility
API/ABI compatible — No rebuild of your packages is required
Behavior change — Log files will be flushed more frequently
How to Revert to the Old Behavior
If you need to restore the previous flushing behavior (no explicit flushing), you can set the following environment variable:
export RCL_LOGGING_SPDLOG_EXPERIMENTAL_OLD_FLUSHING_BEHAVIOR=1
Note: This environment variable is marked as EXPERIMENTAL and is intended as a temporary measure. It may be removed in future ROS 2 releases when full logging configuration file support is implemented. Please do not rely on this variable being available in future versions.
Related Links
- Original PR (rolling): https://github.com/ros2/rcl_logging/pull/95
- Backport PR (humble): change flushing behavior for spdlog log files, and add env var to use old style (no explicit flushing) (backport #95) by mergify[bot] · Pull Request #136 · ros2/rcl_logging · GitHub
- Future logging configuration plans: [rolling] Update maintainers - 2022-11-07 by audrow · Pull Request #96 · ros2/rcl_logging · GitHub
Questions or Concerns?
If you experience any issues with this change or have feedback, please:
- Comment on this thread
- Open an issue on the ros2/rcl_logging repository on GitHub
Thanks,
Tomoya
2 posts - 2 participants
ROS Discourse General: Guidance on next steps after ROS 2 Jazzy fundamentals for a hospitality robot project
I’m keenly working on a hospitality robot project driven by personal interest and a genuine enthusiasm for robotics, and I’m seeking guidance on what to focus on next.
I currently have a solid grasp of ROS 2 Jazzy fundamentals, including nodes, topics, services, actions, lifecycle nodes, URDF/Xacro, launch files, and executors. I’m comfortable bringing up a robot model and understanding how the ROS 2 system fits together.
My aim is to build a simulation-first MVP for a lobby scenario (greeter, wayfinding, and escort use cases). I’m deliberately keeping the scope practical and do not plan to add arms initially unless they become necessary.
At this stage, I would really value direction from more experienced practitioners on how to progress from foundational ROS knowledge toward a real, working robot.
In particular, I’d appreciate insights on:
- What are the most important areas to focus on after mastering ROS 2 basics?
- Which subsystems are best tackled first, and in what sequence?
- What level of completeness is typically expected in simulation before transitioning to physical hardware?
- Are there recommended ROS 2 packages, example bringups, or architectural patterns well suited for this type of robot?
Any advice, lessons learned, or references that could help shape the next phase of development would be greatly appreciated.
1 post - 1 participant
ROS Discourse General: [Announcing] LinkForge: A Native Blender Extension for Visual URDF/Xacro Editing (ROS 2 Support)
Hi everyone,
I’d like to share a tool I’ve been working on: LinkForge. It was just approved on the Blender Extensions Platform (v1.1.1).
The Problem
We all know the workflow: export meshes from CAD, write URDFs by hand, guess inertia tensors, launch Gazebo, realize a link is rotated 90 degrees, kill Gazebo, edit XML, repeat. It separates the “design” from the “engineering.”
The Solution
LinkForge allows you to rig, configure, and export simulation-ready robots directly inside Blender. It is not just a mesh exporter; it manages the entire URDF/Xacro structure.
Key Features for Roboticists:
- Visual Editor: Import/Export URDF & Xacro files seamlessly
- Physics: Auto-calculates mass & inertia tensors
- ROS2 Control Support: Automatically generates hardware interface configurations for ros2_control
- Complete Sensor Suite: Integrated support for Camera, Depth Camera, LiDAR, IMU, GPS, and Force/Torque sensors with configurable noise models
- Xacro Support: Preserves macros and properties where possible.
Workflow
- Import your existing .urdf or .xacro.
- Edit joints and limits visually in the viewport.
- Add collision geometry (convex hulls/primitives).
- Export valid XML.
Links
- Blender Extension: LinkForge — Blender Extensions
- GitHub: GitHub - arounamounchili/linkforge: Build simulation-ready robots in Blender. Professional URDF/XACRO exporter with validation, sensors, and ros2_control support.
- Documentation: https://linkforge.readthedocs.io/
This is an open-source project. I’m actively looking for feedback on the “Round-trip” capability and Xacro support.
Happy forging!
4 posts - 3 participants
ROS Discourse General: Update on ROS native buffers
Hello ROS community,
As you may have heard, NVIDIA has been working on proposing and prototyping a mechanism to add support for native buffer types in ROS 2, allowing ROS 2 to natively support APIs that use accelerated buffers like CUDA or Torch tensors efficiently. We briefly touched on this in a previous discourse post. Since then, a lot of design discussion in the SIG PAI, as well as prototyping on our side, has happened to turn that outline into a full-fledged proposal and prototype.
Below is a rundown of our current status, as well as an outlook of where the work is heading. We are looking forward to discussions and feedback on the proposal.
Native Buffers in ROS 2
Problem statement
Modern robots use advanced, high-resolution sensors to perceive their environment. Whether it’s cameras, LIDARs, time-of-flight sensors or tactile sensor arrays, data rates to be processed are ever-increasing.
Processing of those data streams has for the most part moved onto accelerated hardware that can exploit the parallel nature of the data. Whether that is GPUs, DSPs, NPUs/TPUs, ASICs or other approaches, those hardware engines have some common properties:
- They are inherently parallel, and as such well suited to processing many small samples at the same time
- They are dedicated hardware with dedicated interfaces and often dedicated memory
The second property of dedicated memory regions is problematic in ROS2, as the framework currently does not have a way to handle non-CPU memory.
Consider for example the sensor_msgs/PointCloud2 message, which stores data like this:
uint8[] data # Actual point data, size is (row_step*height)
A similar approach is used by sensor_msgs/Image. In rclcpp, this will map to a member like
std::vector<uint8_t> data;
This is problematic for large pieces of data that are never going to be touched by the CPU. It forces the data to be present in CPU memory whenever the framework handles it, in particular for message transport, and every time it crosses a node boundary.
For truly efficient, fully accelerated pipelines, this is undesirable. In cases where there are one or more hardware engines handling the data, it is preferable for the data to stay resident in the accelerator, and never be copied into CPU memory unless a node specifically requests to do so.
We are therefore proposing to add the notion of pluggable memory backends to ROS2 by introducing a concept of buffers that share a common API, but are implemented with vendor-specific plugins to allow efficient storage and transport with vendor-native, optimized facilities.
Specifically, we are proposing to map uint8[] in rosidl to a custom buffer type in rclcpp that behaves like a std::vector<uint8_t> if used for CPU code, but will automatically keep the data resident to the vendor’s accelerator memory otherwise. This buffer type is also integrated with rmw to allow the backend to move the buffer between nodes using vendor-specific side channels, allowing for transparent zero-copy transport of the data if implemented by the vendor.
Architecture overview
Message encoding
The following diagram shows the overview of a message containing a uint8[] array, and how it is mapped to C++, and then serialized:
It shows the following parts, which we will discuss in more detail later:
- Declaration of a buffer using uint8[] in a message definition, as before
- Mapping onto a custom buffer type in rclcpp, called Buffer<T> here
- The internals of the Buffer<T> type, in particular its std::vector<T>-compatible interface, as well as a pointer to a vendor-specific implementation
- A vendor-specific backend providing serialization, as well as custom APIs
- The message being encoded into a vendor-specific buffer descriptor message, which is serialized in place of the raw byte array in the message
Choice of uint8[] as trigger
It is worth noting the choice to utilize uint8[] as a trigger to generate Buffer<T> instances. An alternative approach would have been to add a new Buffer type to the IDL, and to translate that into Buffer<T>. However, this would not only introduce a break in compatibility of the IDL, but also force the introduction of a sensor_msgs/PointCloud3 and similar data types, fracturing the message ecosystem further.
We believe the cost of maintaining a std::vector compatible interface and the slight loss of semantics is outweighed by the benefit of being drop-in compatible with both existing messages and existing code bases.
Integration with rclcpp (and rclpy and rclrs)
rclcpp exposes all uint8[] fields as rosidl_runtime_cpp::Buffer<T> members in their respective generated C++ structs.
rosidl_runtime_cpp::Buffer<T> has a fully compatible interface to std::vector<T>, with size(), operator[](size_type pos), etc. If any of the std::vector<T> APIs are used, the vector is copied onto the CPU as necessary, and all members work as expected. This maintains full compatibility with existing code: any code that expects a std::vector<T> in the message will be able to use the corresponding fields as such without any code changes.
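As a sketch of what that guarantee means in practice (using the proposal's type names; the final API may differ), existing vector-style code keeps compiling unchanged:

#include <cstddef>
#include <sensor_msgs/msg/image.hpp>

// Under the proposal, msg.data is a rosidl_runtime_cpp::Buffer<uint8_t> rather
// than a std::vector<uint8_t>, but vector-style access keeps working; touching
// the data this way copies it to CPU memory on demand.
std::size_t count_saturated_pixels(const sensor_msgs::msg::Image & msg)
{
  std::size_t saturated = 0;
  for (std::size_t i = 0; i < msg.data.size(); ++i) {  // size() as on std::vector
    if (msg.data[i] == 255) {                          // operator[] as on std::vector
      ++saturated;
    }
  }
  return saturated;
}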
In order to access the underlying hardware buffers, the vendor-specific APIs are being used. Suppose a vendor backend named vendor_buffer_backend exists, then the backend would usually contain a static method to convert a buffer to the native type. Our hypothetical vendor backend may then be used as follows:
void topic_callback(const msg::MessageWithTensor & input_msg) {
  // extract the vendor-native handle backing the incoming buffer
  vendor_native_handle input_h = vendor_buffer_backend::from_buffer(input_msg.data);
  // allocate the output message through the vendor backend, so that its
  // buffer is registered with the backend from the start
  msg::MessageWithTensor output_msg =
    vendor_buffer_backend::allocate<msg::MessageWithTensor>();
  vendor_native_handle output_h =
    vendor_buffer_backend::from_buffer(output_msg.data);
  // operate on the native handles; the result shows up in output_msg.data
  // because the handle is linked to the buffer
  output_h = input_h.some_operation();
  publisher_.publish(output_msg);
}
This code snippet does the following:
First, it extracts the native buffer handle from the message using a static method provided by the vendor backend. Vendors are free to provide any interface they choose for providing this interface, but would be encouraged to provide a static method interface for ease of use.
It then allocates the output message to be published using another vendor-specific interface. Note that this allocation creates an empty buffer; it only sets up the relationship between output_msg.data and the vendor_buffer_backend by creating an instance of the backend buffer and registering it in the impl field of the rosidl_runtime_cpp::Buffer<T> class.
The native handle from the output message is also extracted, so it can be used with the native interfaces provided.
Afterwards, it performs some native operations on the input data, and assigns the result of that operation to the output data. Note that this is happening on the vendor native data types, but since the handles are linked to the buffers, the results show up in the output message without additional code.
Finally, the output message is published the same as any other ROS2 message. rmw then takes care of vendor-specific serialization, see the following sections on details of that process.
This design keeps any vendor-specific code completely out of rclcpp. All that rclcpp sees and links against is the generic rosidl_runtime_cpp::Buffer<T> class, which has no direct ties to any specific vendor. Hence there is no need for rclcpp to even know about all vendor backends that exist.
It also allows vendors to provide specific interfaces for their respective platforms, allowing them to implement allocation and handling schemes particular to their underlying systems.
A similar type would exist for rclpy and rclrs. We anticipate both of those will be easier to implement thanks to the duck typing facilities in rclpy and the traits-based object system in rclrs, respectively, which make it much easier to implement drop-in compatible systems.
Backends as plugins
Backends are implemented as plugins using ROS’s pluginlib. On startup, each rmw instance scans for available backend-compatible plugins on the system, and registers them through pluginlib.
A standard implementation of a backend using CPU memory to offer std::vector<T> compatibility is provided by default through the ROS2 distribution, to ensure that there is always a CPU implementation available.
Additional vendor-specific plugins are implemented by the respective hardware vendors. For example, NVIDIA would implement and provide a CUDA backend, while AMD might implement and provide a ROCm backend.
Backends can either be distributed as individual packages, or be pre-installed on the target hardware. As an example, the NVIDIA Jetson systems would likely have a CUDA backend pre-installed as part of their system image.
Instances of rosidl_runtime_cpp::Buffer<T> are tied to a particular backend at allocation time, as illustrated in the section above.
Integration with rmw
rmw implementations can choose to integrate with vendor backends to provide accelerated transports through the backends. rmw implementations that do not integrate with backends, including any existing legacy rmw implementations, automatically fall back to converting all data to CPU data, and will continue working without any changes.
A rmw implementation that chooses to integrate with vendor backends does the following. At graph startup when publishers and subscribers are being created, each endpoint shares a list of installed backends, alongside vendor-specific data to establish any required side channels, and establishes dedicated channels for passing backend-enabled messages based on 4 different data points:
- The message type for determining if it contains any buffer-typed fields
- The list of backends supported by the current endpoint
- The list of backends supported by the associated endpoint on the other side
- The distance between the two endpoints (same process, different process, across a network etc.)
rmw can choose any mechanism it wants to perform this task, since this step is happening entirely internal to the currently loaded rmw implementation. Side channel creation is entirely hidden inside the vendor plugins, and not visible to rmw.
For publishing a message type that contains buffer-typed fields, if the publisher and the subscriber(s) share the same supported backend list, and there is a matching serialization method implemented in the backend for the distance to the subscriber(s), then instead of serializing the payload of the buffer bytewise, the backend can choose to use a custom serialization method instead.
The backend is then free to serialize into a ROS message type of its choice. This backend-custom message type is called a descriptor. It should contain all information the backend needs to deserialize the message at the subscriber side, and reconstruct the buffer. This descriptor message may contain pointer values, virtual memory handles, IPC handles or even the raw payload if the backend chooses to send that data through rmw.
The descriptor message can be inspected as usual if desired since it is just a normal ROS2 message, but deserializing requires the matching backend. However, since the publisher knows the backends available to the subscriber(s), it is guaranteed that a subscriber only receives a descriptor message if it is able to deserialize it.
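To make the descriptor idea concrete, a hypothetical CUDA backend might describe its payload with an ordinary message along these lines (purely illustrative; these field names do not come from the proposal):

# CudaBufferDescriptor.msg (hypothetical, for illustration only)
uint8[64] ipc_handle   # opaque vendor IPC handle used to reopen the allocation
uint32 device_id       # GPU the allocation lives on
uint64 offset          # byte offset of the payload within the allocation
uint64 size_bytes      # payload size in bytes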
Integration with rosidl
While the above sections show the implications visible in rclcpp, the bulk of the changes necessary to make this happen go into rosidl. It is rosidl that generates the C++ message structures, and hence rosidl that maps fields to the Buffer type instead of std::vector; most of the work to make this scheme function is therefore done in rosidl, not in rclcpp.
Layering semantics on top
Having only a buffer is not very useful, as most robotics data has higher-level semantics: images, tensors, point clouds, etc.
However, all of those data types ultimately map to one or more large, contiguous regions of memory, in CPU or accelerator memory.
We also observe that a healthy ecosystem of higher-level abstractions already exists: there is PCL for point clouds, Torch for tensor handling, etc. Hence, we propose to not try to replicate those ecosystems in ROS, but instead allow those ecosystems to bridge into ROS, using the buffer abstraction as their backend for storage and transport.
As a demonstration of this, we are providing a Torch backend that allows linking (Py)Torch tensors to the ROS buffers. This allows users to use the rich ecosystem of Torch to perform tensor operations, while relying on the ROS buffers to provide accelerator-native storage and zero-copy transport between nodes, even across processes and chips if supported by the backend.
The Torch backend does not provide a raw buffer type itself, but relies on vendors implementing backends for their platforms (CUDA, ROCm, TPUs etc.). The Torch backend then depends on the vendor-specific backends, and provides the binding of the low-level buffers to the Torch tensors. The coupling between the Torch backend and the hardware vendor buffer types is loose, it is not visible from the node’s code, but is established after the fact.
From a developer’s perspective, all of this is hidden. All a developer writing a Node does is to interact with a Torch buffer, and it maps to the correct backend available on the current hardware automatically. An example of such a code could look like this:
void topic_callback(const msg::MessageWithTensor & input_msg) {
  // extract tensor from input message
  torch::Tensor input_tensor =
    torch_backend::from_buffer(input_msg.tensor);
  // allocate output message
  msg::MessageWithTensor output_msg =
    torch_backend::allocate<msg::MessageWithTensor>();
  // get handle to allocated output tensor
  torch::Tensor & output_tensor =
    torch_backend::from_buffer(output_msg.tensor);
  // perform some torch operations
  output_tensor = torch::abs(input_tensor);
  // publish message as usual
  publisher_.publish(output_msg);
}
Note how this code segment is using Torch-native datatypes (torch::Tensor), and is performing Torch-native operations on the tensors (in this case, torch::abs). There is no mention of any hardware backend in the code.
By keeping the coupling loose, this node can run unmodified on NVIDIA, AMD, TPU or even CPU hardware, with the framework, in this case Torch, being mapped to the correct hardware, and receiving locally available accelerations for free.
Prior work
NITROS
NITROS is NVIDIA’s implementation of a similar design based on type negotiation. It is specific to NVIDIA and not broadly compatible, nor is it currently possible to layer hardware-agnostic frameworks like Torch on top.
AgnoCast
https://github.com/tier4/agnocast
AgnoCast creates a zero-copy regime for CPU data. However, it is limited to CPU data, and does not have a plugin architecture for accelerator memory regions. It also requires kernel modifications, which some may find intrusive.
Future work
NVIDIA has been working on this proposal, alongside a prototype implementation that implements support for the mechanisms described above. We are working on CPU, CUDA and Torch backends, as well as integration with the Zenoh rmw implementation.
The prototype will move into a branch on the respective ROS repositories in the next two weeks, and continue development into a full-fledged implementation in public.
In parallel, a dedicated working group tasked with formalizing this effort is being formed, with the goal of reaching consensus on the design, and getting the required changes into ROS2 Lyrical.
5 posts - 4 participants
ROS Discourse General: Pixi as a co-official way of installing ROS on Linux
It’s that time of the year when someone with too much spare time on their hands proposes a radical change to the way ROS is distributed and built. This time, it’s my turn.
So let me start by acknowledging that without all the tooling the ROS community has developed over the years (rosdep, bloom, the buildfarm (donate if you can, I did!), colcon, etc.) we wouldn’t be here. Ten or twenty years ago it was almost impossible to run a multilanguage, federated, distributed project without these tools; nothing like that existed! So I’m really grateful for all that.
However, the landscape is different now. We now have projects like Pixi, conda-forge and so on.
As per the title of my post, I’m proposing that Pixi not only be the recommended way of installing ROS 2 on Windows, but also on Linux, or at least be co-recommended, for ROS 2 Lyrical Luth and onwards.
One of the first challenges that new users of ROS face is learning a new build tool and a development workflow that is ROS-specific. Although historically we really needed to develop all the tools I’ve mentioned, the optics of having our own build tool and package management system don’t help, reinforcing the perception that some users still have of ROS as a silo that doesn’t play nice with the outside world.
The main two tools that a user can replace with Pixi are colcon and rosdep, and to some extent bloom.
- colcon has noble goals, becoming the one build tool for multilanguage workspaces, and as someone who has contributed to it (e.g. extensions for colcon to support Gradle and Cargo) I appreciate having it all under the same tool. However, it hasn’t achieved widespread adoption outside ROS.
- rosdep makes it easy to install multilanguage dependencies; however, it still has some long-standing issues ( Add support for version_eq · Issue #803 · ros-infrastructure/rosdep · GitHub ) that are taken for granted in other package managers, and because of the distribution model we have, ROS packages are installed at a system level, not everything is available via APT, etc.
- bloom works great for submitting packages to the buildfarm. Pixi provides rattler-build; the process only requires a YAML file and can publish not only to prefix.dev, but also to Anaconda.org and JFrog Artifactory.
I’ve been using Pixi for over a year for my own projects, some of which use ROS and some of which don’t, and the experience couldn’t have been better:
- No need for vendor packages thanks to conda-forge and robostack (over 43k packages available!)
- No need for root access, all software is installed in a workspace, and workspaces are reproducible thanks to lockfiles, so I have the same environment on my CI as on my computer.
- Distro-independent. I’m running AlmaLinux and Debian, I no longer have to worry whether ROS supports my distro or not.
- Pixi can replace colcon thanks to the pixi build backends ( Building a ROS Package - Pixi )
- Pixi is fast! It’s written in Rust

Also, from the ROS side, this would reduce the burden of maintaining the buildfarm, the infrastructure, all the tools, etc., but that’s probably too far in the future, and realistically it’d take a while even if there’s consensus to replace it with something else.
Over the years, like the good open-source citizens we are, we have collaborated with other projects outside the ROS realm. For example, instead of rolling our own transport like we had in ROS 1, we’ve worked with FastDDS, OpenSplice, CycloneDDS and now Zenoh. I’d say this has been quite symbiotic and we’ve helped each other. I believe collaborating with the Pixi and Robostack projects would be extremely beneficial for everyone involved.
@ruben-arts can surely say more about the benefits of using Pixi for ROS
21 posts - 9 participants
ROS Discourse General: Ferronyx – Real-Time ROS2 Observability & Automated RCA
We’ve been building robots with ROS2 for years, and we hit the same wall every time a robot fails in production:
The debugging process:
- SSH into the machine
- Grep through logs
- Check ROS2 topics (which ones stopped publishing?)
- Replay bag files
- Cross-reference with deployment changes
- Try to correlate infrastructure issues with ROS state
This takes 3-4 hours. Every time.
The problem: ROS gives you raw telemetry, but zero intelligence connecting infrastructure metrics + ROS topology + deployment history. You’re manually stitching pieces together.
So we built Ferronyx to be that intelligence layer.
What we did:
- Real-time monitoring of ROS2 topics, nodes, actions + infrastructure (CPU, GPU, memory, network)
- When something breaks, AI analyzes the incident chain and suggests probable root causes
- Deployment markers show exactly which release caused the failure
- Track sensor health degradation before failures happen
Real results from our beta customers:
- MTTR: 3-4 hours → 12-15 minutes
- One customer caught sensor drift they couldn’t see manually
- Another correlated a specific firmware version with navigation failures
We’re looking for 8-12 more teams to beta test and help us refine this. We want teams that:
- Run ROS2 in production (warehouses, humanoids, autonomous vehicles)
- Actually deal with downtime/reliability issues
- Will give honest feedback
Free beta access. You help shape the product, we learn what breaks.
If you’re dealing with robot reliability headaches, reply here or send a DM. Would genuinely love to hear your toughest debugging stories.
Links:
https://ferronyx.com/
3 posts - 2 participants
ROS Discourse General: ROS 2 Rust Meeting: January 2026
The next ROS 2 Rust Meeting will be Mon, Jan 12, 2026 2:00 PM UTC
The meeting room will be at https://meet.google.com/rxr-pvcv-hmu
In the unlikely event that the room needs to change, we will update this thread with the new info!
Agenda:
- Changes to generated message consumption (https://github.com/ros2-rust/ros2_rust/pull/556)
- Upgrade to Rust 1.85 (build!: require rustc 1.85 and Rust 2024 edition by esteve · Pull Request #566 · ros2-rust/ros2_rust · GitHub)
- Migration from Element to Zulip chat (Open Robotics launches Zulip chat server)
- …
2 posts - 2 participants
ROS Discourse General: Easier Protobuf and ROS 2 Integration
For anyone integrating ROS 2 with Protobuf-based systems, we at the RAI Institute want to highlight one of our open-source tools: proto2ros!
proto2ros generates ROS 2 message definitions and bi-directional conversion code directly from .proto files, reducing boilerplate and simplifying integration between Protobuf-based systems and ROS 2 nodes.
Some highlights:
- Automatic ROS 2 message generation from Protobuf
- C++ and Python conversion utilities
- Supports Protobuf v2 and v3
It is currently available for both Humble and Jazzy and can be installed with
apt install ros-<distro>-proto2ros
Check out the full repo here: https://github.com/bdaiinstitute/proto2ros
Thanks to everyone who has contributed to this project, including @hidmic, @khughes1, and @jbarry!
As always, feedback and contributions are welcome!
The RAI Institute
1 post - 1 participant
ROS Discourse General: ROSCon Review Continued | Cloud Robotics WG Meeting 2026-01-14
Please come and join us for this coming meeting at Wed, Jan 14, 2026, 4:00-5:00 PM UTC, where we plan to dive deeper into the ROSCon talks collected together during the last session. By examining more details about the talks, we can highlight any that would be relevant to Logging & Observability, the current focus of the group. We can also pull out interesting tips to release as part of a blog post.
The details for the talks have been gathered into the Links/Notes column of this document. Please feel free to read ahead and take a look at the notes and videos ahead of the meeting, if you’re interested.
The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.
Hopefully we will see you there!
1 post - 1 participant
ROS Discourse General: Goodbye RQt, Hello RQml [NEW RELEASE]

Greetings fellow roboticists,
During our transition to ROS 2 and the build of our new robot Athena, we’ve encountered quite a few issues, both in ROS 2 itself (with the middleware) and with rqt.
For instance, when testing our manipulator, we noticed that the ControllerManager in rqt gives you around 20 seconds before the application freezes completely when used over WiFi.
This is not the only issue, but that’s also not the point of this post.
You could chime in and say, “Hey, you could’ve fixed that and made a PR”, and you would be right, and we did this in several instances.
But I’m not a fan of using Python for UI, and this presented the perfect opportunity to demonstrate how easy it is to create a nice ROS interface using my QML ROS 2 module.
So, instead, I’ve spent that time quickly developing a modern alternative, fixing all the issues that bothered me in rqt.
Hello RQml 
Please note that this is still in beta and not all plugins exist yet.
You are very welcome to point me to the ones that you think would be great to have, or even implement them yourself and make a PR!
Currently, the following plugins are available:
- ActionCaller: Interface for calling ROS 2 Actions.
- Console: A log viewer for ROS 2 messages.
- ControllerManager: Manage and switch ROS 2 controllers.
- ImageView: View camera streams and images.
- JointTrajectoryController: Interface for sending joint trajectory commands.
- MessagePublisher: Publish custom ROS 2 messages.
- RobotSteering: Teleoperation tool for mobile robots.
- ServiceCaller: Interface for calling ROS 2 Services.
Notably, the ImageView now also uses transparency for depth image values that are not valid (instead of using black, which also represents very close values).
As always, I hope this is of interest to you, and I would love to hear from you if you build something cool with this!
If it wasn’t, my little turtle buddy will be very disappointed, because he already considered you a special friend.
3 posts - 2 participants
ROS Discourse General: New packages for Humble Hawksbill 2026-01-07
Package Updates for Humble
Added Packages [27]:
- ros-humble-ardrone-sdk: 2.0.3-1
- ros-humble-ardrone-sdk-dbgsym: 2.0.3-1
- ros-humble-ardrone-sumo: 2.0.3-1
- ros-humble-ardrone-sumo-dbgsym: 2.0.3-1
- ros-humble-cloudini-lib: 0.11.1-2
- ros-humble-cloudini-lib-dbgsym: 0.11.1-2
- ros-humble-cloudini-ros: 0.11.1-2
- ros-humble-cloudini-ros-dbgsym: 0.11.1-2
- ros-humble-event-camera-tools: 3.1.1-1
- ros-humble-event-camera-tools-dbgsym: 3.1.1-1
- ros-humble-fibar-lib: 1.0.2-1
- ros-humble-frequency-cam: 3.1.0-1
- ros-humble-frequency-cam-dbgsym: 3.1.0-1
- ros-humble-hitch-estimation-apriltag-array: 0.0.1-1
- ros-humble-mavros-examples: 2.14.0-1
- ros-humble-mujoco-vendor: 0.0.6-1
- ros-humble-mujoco-vendor-dbgsym: 0.0.6-1
- ros-humble-olive-interfaces: 0.1.0-1
- ros-humble-olive-interfaces-dbgsym: 0.1.0-1
- ros-humble-persist-parameter-server: 1.0.4-1
- ros-humble-persist-parameter-server-dbgsym: 1.0.4-1
- ros-humble-pointcloud-to-ply: 0.0.1-1
- ros-humble-qml6-ros2-plugin: 0.25.121-1
- ros-humble-qml6-ros2-plugin-dbgsym: 0.25.121-1
- ros-humble-yasmin-editor: 4.2.2-1
- ros-humble-yasmin-factory: 4.2.2-1
- ros-humble-yasmin-factory-dbgsym: 4.2.2-1
Updated Packages [390]:
- ros-humble-ackermann-steering-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-ackermann-steering-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-admittance-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-admittance-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-apriltag-detector: 3.0.3-1 → 3.1.0-1
- ros-humble-apriltag-detector-dbgsym: 3.0.3-1 → 3.1.0-1
- ros-humble-apriltag-detector-mit: 3.0.3-1 → 3.1.0-1
- ros-humble-apriltag-detector-mit-dbgsym: 3.0.3-1 → 3.1.0-1
- ros-humble-apriltag-detector-umich: 3.0.3-1 → 3.1.0-1
- ros-humble-apriltag-detector-umich-dbgsym: 3.0.3-1 → 3.1.0-1
- ros-humble-apriltag-draw: 3.0.3-1 → 3.1.0-1
- ros-humble-apriltag-draw-dbgsym: 3.0.3-1 → 3.1.0-1
- ros-humble-apriltag-tools: 3.0.3-1 → 3.1.0-1
- ros-humble-apriltag-tools-dbgsym: 3.0.3-1 → 3.1.0-1
- ros-humble-aruco-opencv: 2.3.1-1 → 2.4.1-1
- ros-humble-aruco-opencv-dbgsym: 2.3.1-1 → 2.4.1-1
- ros-humble-aruco-opencv-msgs: 2.3.1-1 → 2.4.1-1
- ros-humble-aruco-opencv-msgs-dbgsym: 2.3.1-1 → 2.4.1-1
- ros-humble-automatika-ros-sugar: 0.4.1-1 → 0.4.2-1
- ros-humble-automatika-ros-sugar-dbgsym: 0.4.1-1 → 0.4.2-1
- ros-humble-autoware-internal-debug-msgs: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-debug-msgs-dbgsym: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-localization-msgs: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-localization-msgs-dbgsym: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-metric-msgs: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-metric-msgs-dbgsym: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-msgs: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-msgs-dbgsym: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-perception-msgs: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-perception-msgs-dbgsym: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-planning-msgs: 1.12.0-2 → 1.12.1-1
- ros-humble-autoware-internal-planning-msgs-dbgsym: 1.12.0-2 → 1.12.1-1
- ros-humble-behaviortree-cpp: 4.7.1-1 → 4.8.3-1
- ros-humble-behaviortree-cpp-dbgsym: 4.7.1-1 → 4.8.3-1
- ros-humble-beluga: 2.0.2-1 → 2.1.0-1
- ros-humble-beluga-amcl: 2.0.2-1 → 2.1.0-1
- ros-humble-beluga-amcl-dbgsym: 2.0.2-1 → 2.1.0-1
- ros-humble-beluga-ros: 2.0.2-1 → 2.1.0-1
- ros-humble-bicycle-steering-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-bicycle-steering-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-camera-calibration: 3.0.8-1 → 3.0.9-1
- ros-humble-camera-ros: 0.5.0-1 → 0.5.2-1
- ros-humble-camera-ros-dbgsym: 0.5.0-1 → 0.5.2-1
- ros-humble-clearpath-common: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-control: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-customization: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-description: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-generator-common: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-generator-common-dbgsym: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-manipulators: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-manipulators-description: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-mounts-description: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-platform-description: 1.3.7-1 → 1.3.8-1
- ros-humble-clearpath-sensors-description: 1.3.7-1 → 1.3.8-1
- ros-humble-control-toolbox: 3.6.2-1 → 3.6.3-1
- ros-humble-control-toolbox-dbgsym: 3.6.2-1 → 3.6.3-1
- ros-humble-controller-interface: 2.52.2-1 → 2.53.0-1
- ros-humble-controller-interface-dbgsym: 2.52.2-1 → 2.53.0-1
- ros-humble-controller-manager: 2.52.2-1 → 2.53.0-1
- ros-humble-controller-manager-dbgsym: 2.52.2-1 → 2.53.0-1
- ros-humble-controller-manager-msgs: 2.52.2-1 → 2.53.0-1
- ros-humble-controller-manager-msgs-dbgsym: 2.52.2-1 → 2.53.0-1
- ros-humble-depth-image-proc: 3.0.8-1 → 3.0.9-1
- ros-humble-depth-image-proc-dbgsym: 3.0.8-1 → 3.0.9-1
- ros-humble-depthai: 2.30.0-1 → 2.31.0-1
- ros-humble-depthai-bridge: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-bridge-dbgsym: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-dbgsym: 2.30.0-1 → 2.31.0-1
- ros-humble-depthai-descriptions: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-examples: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-examples-dbgsym: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-filters: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-filters-dbgsym: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-ros: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-ros-driver: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-ros-driver-dbgsym: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-ros-msgs: 2.11.2-1 → 2.12.1-1
- ros-humble-depthai-ros-msgs-dbgsym: 2.11.2-1 → 2.12.1-1
- ros-humble-diff-drive-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-diff-drive-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-dynamixel-hardware-interface: 1.4.16-1 → 1.5.0-2
- ros-humble-dynamixel-hardware-interface-dbgsym: 1.4.16-1 → 1.5.0-2
- ros-humble-effort-controllers: 2.50.2-1 → 2.52.0-1
- ros-humble-effort-controllers-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-event-camera-codecs: 2.0.1-1 → 3.0.0-1
- ros-humble-event-camera-codecs-dbgsym: 2.0.1-1 → 3.0.0-1
- ros-humble-event-camera-msgs: 2.0.0-1 → 2.0.1-1
- ros-humble-event-camera-msgs-dbgsym: 2.0.0-1 → 2.0.1-1
- ros-humble-event-camera-py: 2.0.1-1 → 3.0.0-1
- ros-humble-event-camera-renderer: 2.0.1-1 → 3.0.0-1
- ros-humble-event-camera-renderer-dbgsym: 2.0.1-1 → 3.0.0-1
- ros-humble-examples-tf2-py: 0.25.17-1 → 0.25.18-1
- ros-humble-fastcdr: 1.0.24-2 → 1.0.29-1
- ros-humble-fastcdr-dbgsym: 1.0.24-2 → 1.0.29-1
- ros-humble-fastrtps: 2.6.10-1 → 2.6.11-1
- ros-humble-fastrtps-cmake-module: 2.2.3-1 → 2.2.4-1
- ros-humble-fastrtps-dbgsym: 2.6.10-1 → 2.6.11-1
- ros-humble-force-torque-sensor-broadcaster: 2.50.2-1 → 2.52.0-1
- ros-humble-force-torque-sensor-broadcaster-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-forward-command-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-forward-command-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-generate-parameter-library: 0.5.0-1 → 0.6.0-1
- ros-humble-generate-parameter-library-py: 0.5.0-1 → 0.6.0-1
- ros-humble-geometry2: 0.25.17-1 → 0.25.18-1
- ros-humble-gpio-controllers: 2.50.2-1 → 2.52.0-1
- ros-humble-gpio-controllers-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-gripper-controllers: 2.50.2-1 → 2.52.0-1
- ros-humble-gripper-controllers-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-hardware-interface: 2.52.2-1 → 2.53.0-1
- ros-humble-hardware-interface-dbgsym: 2.52.2-1 → 2.53.0-1
- ros-humble-hardware-interface-testing: 2.52.2-1 → 2.53.0-1
- ros-humble-hardware-interface-testing-dbgsym: 2.52.2-1 → 2.53.0-1
- ros-humble-husarion-components-description: 0.0.2-1 → 0.1.0-1
- ros-humble-image-pipeline: 3.0.8-1 → 3.0.9-1
- ros-humble-image-proc: 3.0.8-1 → 3.0.9-1
- ros-humble-image-proc-dbgsym: 3.0.8-1 → 3.0.9-1
- ros-humble-image-publisher: 3.0.8-1 → 3.0.9-1
- ros-humble-image-publisher-dbgsym: 3.0.8-1 → 3.0.9-1
- ros-humble-image-rotate: 3.0.8-1 → 3.0.9-1
- ros-humble-image-rotate-dbgsym: 3.0.8-1 → 3.0.9-1
- ros-humble-image-view: 3.0.8-1 → 3.0.9-1
- ros-humble-image-view-dbgsym: 3.0.8-1 → 3.0.9-1
- ros-humble-imu-sensor-broadcaster: 2.50.2-1 → 2.52.0-1
- ros-humble-imu-sensor-broadcaster-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-joint-limits: 2.52.2-1 → 2.53.0-1
- ros-humble-joint-limits-dbgsym: 2.52.2-1 → 2.53.0-1
- ros-humble-joint-state-broadcaster: 2.50.2-1 → 2.52.0-1
- ros-humble-joint-state-broadcaster-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-joint-trajectory-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-joint-trajectory-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-kitti-metrics-eval: 2.2.1-1 → 2.4.0-1
- ros-humble-kitti-metrics-eval-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-kompass: 0.3.2-1 → 0.3.3-1
- ros-humble-kompass-interfaces: 0.3.2-1 → 0.3.3-1
- ros-humble-kompass-interfaces-dbgsym: 0.3.2-1 → 0.3.3-1
- ros-humble-launch-pal: 0.19.0-1 → 0.20.0-1
- ros-humble-libmavconn: 2.12.0-1 → 2.14.0-1
- ros-humble-libmavconn-dbgsym: 2.12.0-1 → 2.14.0-1
- ros-humble-mapviz: 2.5.10-1 → 2.6.0-1
- ros-humble-mapviz-dbgsym: 2.5.10-1 → 2.6.0-1
- ros-humble-mapviz-interfaces: 2.5.10-1 → 2.6.0-1
- ros-humble-mapviz-interfaces-dbgsym: 2.5.10-1 → 2.6.0-1
- ros-humble-mapviz-plugins: 2.5.10-1 → 2.6.0-1
- ros-humble-mapviz-plugins-dbgsym: 2.5.10-1 → 2.6.0-1
- ros-humble-mavlink: 2025.9.9-1 → 2025.12.12-1
- ros-humble-mavros: 2.12.0-1 → 2.14.0-1
- ros-humble-mavros-dbgsym: 2.12.0-1 → 2.14.0-1
- ros-humble-mavros-extras: 2.12.0-1 → 2.14.0-1
- ros-humble-mavros-extras-dbgsym: 2.12.0-1 → 2.14.0-1
- ros-humble-mavros-msgs: 2.12.0-1 → 2.14.0-1
- ros-humble-mavros-msgs-dbgsym: 2.12.0-1 → 2.14.0-1
- ros-humble-mecanum-drive-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-mecanum-drive-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-metavision-driver: 2.0.1-1 → 3.0.0-1
- ros-humble-metavision-driver-dbgsym: 2.0.1-1 → 3.0.0-1
- ros-humble-mola: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-bridge-ros2: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-bridge-ros2-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-demos: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-gnss-to-markers: 0.1.0-1 → 0.1.2-1
- ros-humble-mola-gnss-to-markers-dbgsym: 0.1.0-1 → 0.1.2-1
- ros-humble-mola-imu-preintegration: 1.14.0-1 → 1.14.1-1
- ros-humble-mola-imu-preintegration-dbgsym: 1.14.0-1 → 1.14.1-1
- ros-humble-mola-input-euroc-dataset: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-euroc-dataset-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-kitti-dataset: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-kitti-dataset-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-kitti360-dataset: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-kitti360-dataset-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-lidar-bin-dataset: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-lidar-bin-dataset-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-mulran-dataset: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-mulran-dataset-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-paris-luco-dataset: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-paris-luco-dataset-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-rawlog: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-rawlog-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-rosbag2: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-rosbag2-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-video: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-input-video-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-kernel: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-kernel-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-launcher: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-launcher-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-lidar-odometry: 1.2.2-1 → 1.3.1-1
- ros-humble-mola-lidar-odometry-dbgsym: 1.2.2-1 → 1.3.1-1
- ros-humble-mola-metric-maps: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-metric-maps-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-msgs: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-msgs-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-pose-list: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-pose-list-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-relocalization: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-relocalization-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-traj-tools: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-traj-tools-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-viz: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-viz-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-yaml: 2.2.1-1 → 2.4.0-1
- ros-humble-mola-yaml-dbgsym: 2.2.1-1 → 2.4.0-1
- ros-humble-mp2p-icp: 2.1.1-1 → 2.2.0-1
- ros-humble-mp2p-icp-dbgsym: 2.1.1-1 → 2.2.0-1
- ros-humble-mrpt-apps: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-apps-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libapps: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libapps-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libbase: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libbase-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libgui: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libgui-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libhwdrivers: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libhwdrivers-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libmaps: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libmaps-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libmath: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libmath-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libnav: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libnav-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libobs: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libobs-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libopengl: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libopengl-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libposes: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libposes-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libros-bridge: 3.0.2-1 → 3.1.1-1
- ros-humble-mrpt-libros-bridge-dbgsym: 3.0.2-1 → 3.1.1-1
- ros-humble-mrpt-libslam: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libslam-dbgsym: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-libtclap: 2.15.1-2 → 2.15.4-1
- ros-humble-mrpt-path-planning: 0.2.3-1 → 0.2.4-1
- ros-humble-mrpt-path-planning-dbgsym: 0.2.3-1 → 0.2.4-1
- ros-humble-multires-image: 2.5.10-1 → 2.6.0-1
- ros-humble-multires-image-dbgsym: 2.5.10-1 → 2.6.0-1
- ros-humble-mvsim: 0.14.2-1 → 0.15.0-1
- ros-humble-mvsim-dbgsym: 0.14.2-1 → 0.15.0-1
- ros-humble-parameter-traits: 0.5.0-1 → 0.6.0-1
- ros-humble-pid-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-pid-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-plotjuggler: 3.13.2-1 → 3.15.0-1
- ros-humble-plotjuggler-dbgsym: 3.13.2-1 → 3.15.0-1
- ros-humble-plotjuggler-ros: 2.3.1-1 → 2.3.1-2
- ros-humble-plotjuggler-ros-dbgsym: 2.3.1-1 → 2.3.1-2
- ros-humble-pluginlib: 5.1.2-1 → 5.1.3-1
- ros-humble-pluginlib-dbgsym: 5.1.2-1 → 5.1.3-1
- ros-humble-pose-broadcaster: 2.50.2-1 → 2.52.0-1
- ros-humble-pose-broadcaster-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-position-controllers: 2.50.2-1 → 2.52.0-1
- ros-humble-position-controllers-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-python-mrpt: 2.15.1-1 → 2.15.3-1
- ros-humble-range-sensor-broadcaster: 2.50.2-1 → 2.52.0-1
- ros-humble-range-sensor-broadcaster-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-rclcpp: 16.0.16-1 → 16.0.17-1
- ros-humble-rclcpp-action: 16.0.16-1 → 16.0.17-1
- ros-humble-rclcpp-action-dbgsym: 16.0.16-1 → 16.0.17-1
- ros-humble-rclcpp-components: 16.0.16-1 → 16.0.17-1
- ros-humble-rclcpp-components-dbgsym: 16.0.16-1 → 16.0.17-1
- ros-humble-rclcpp-dbgsym: 16.0.16-1 → 16.0.17-1
- ros-humble-rclcpp-lifecycle: 16.0.16-1 → 16.0.17-1
- ros-humble-rclcpp-lifecycle-dbgsym: 16.0.16-1 → 16.0.17-1
- ros-humble-rcutils: 5.1.7-1 → 5.1.8-1
- ros-humble-rcutils-dbgsym: 5.1.7-1 → 5.1.8-1
- ros-humble-realtime-tools: 2.14.1-1 → 2.15.0-1
- ros-humble-realtime-tools-dbgsym: 2.14.1-1 → 2.15.0-1
- ros-humble-rko-lio: 0.1.6-1 → 0.2.0-1
- ros-humble-rko-lio-dbgsym: 0.1.6-1 → 0.2.0-1
- ros-humble-robotraconteur: 1.2.6-1 → 1.2.7-1
- ros-humble-robotraconteur-dbgsym: 1.2.6-1 → 1.2.7-1
- ros-humble-ros-babel-fish: 0.25.2-1 → 0.25.120-1
- ros-humble-ros-babel-fish-dbgsym: 0.25.2-1 → 0.25.120-1
- ros-humble-ros-babel-fish-test-msgs: 0.25.2-1 → 0.25.120-1
- ros-humble-ros-babel-fish-test-msgs-dbgsym: 0.25.2-1 → 0.25.120-1
- ros-humble-ros2-control: 2.52.2-1 → 2.53.0-1
- ros-humble-ros2-control-test-assets: 2.52.2-1 → 2.53.0-1
- ros-humble-ros2-controllers: 2.50.2-1 → 2.52.0-1
- ros-humble-ros2-controllers-test-nodes: 2.50.2-1 → 2.52.0-1
- ros-humble-ros2cli-common-extensions: 0.1.1-4 → 0.1.2-1
- ros-humble-ros2controlcli: 2.52.2-1 → 2.53.0-1
- ros-humble-ros2plugin: 5.1.2-1 → 5.1.3-1
- ros-humble-rosbag2rawlog: 3.0.2-1 → 3.1.1-1
- ros-humble-rosbag2rawlog-dbgsym: 3.0.2-1 → 3.1.1-1
- ros-humble-rosidl-adapter: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-cli: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-cmake: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-generator-c: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-generator-cpp: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-parser: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-runtime-c: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-runtime-c-dbgsym: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-runtime-cpp: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-typesupport-fastrtps-c: 2.2.3-1 → 2.2.4-1
- ros-humble-rosidl-typesupport-fastrtps-c-dbgsym: 2.2.3-1 → 2.2.4-1
- ros-humble-rosidl-typesupport-fastrtps-cpp: 2.2.3-1 → 2.2.4-1
- ros-humble-rosidl-typesupport-fastrtps-cpp-dbgsym: 2.2.3-1 → 2.2.4-1
- ros-humble-rosidl-typesupport-interface: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-typesupport-introspection-c: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-typesupport-introspection-c-dbgsym: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-typesupport-introspection-cpp: 3.1.7-1 → 3.1.8-1
- ros-humble-rosidl-typesupport-introspection-cpp-dbgsym: 3.1.7-1 → 3.1.8-1
- ros-humble-rqt-controller-manager: 2.52.2-1 → 2.53.0-1
- ros-humble-rqt-joint-trajectory-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-rviz-assimp-vendor: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-common: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-common-dbgsym: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-default-plugins: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-default-plugins-dbgsym: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-ogre-vendor: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-ogre-vendor-dbgsym: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-rendering: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-rendering-dbgsym: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-rendering-tests: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz-visual-testing-framework: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz2: 11.2.23-1 → 11.2.25-1
- ros-humble-rviz2-dbgsym: 11.2.23-1 → 11.2.25-1
- ros-humble-septentrio-gnss-driver: 1.4.5-1 → 1.4.6-1
- ros-humble-septentrio-gnss-driver-dbgsym: 1.4.5-1 → 1.4.6-1
- ros-humble-simple-launch: 1.11.0-1 → 1.11.1-1
- ros-humble-slider-publisher: 2.4.1-1 → 2.4.2-1
- ros-humble-steering-controllers-library: 2.50.2-1 → 2.52.0-1
- ros-humble-steering-controllers-library-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-stereo-image-proc: 3.0.8-1 → 3.0.9-1
- ros-humble-stereo-image-proc-dbgsym: 3.0.8-1 → 3.0.9-1
- ros-humble-tcb-span: 1.0.2-2 → 1.2.0-1
- ros-humble-tf2: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-bullet: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-dbgsym: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-eigen: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-eigen-kdl: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-eigen-kdl-dbgsym: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-geometry-msgs: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-kdl: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-msgs: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-msgs-dbgsym: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-py: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-py-dbgsym: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-ros: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-ros-dbgsym: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-ros-py: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-sensor-msgs: 0.25.17-1 → 0.25.18-1
- ros-humble-tf2-tools: 0.25.17-1 → 0.25.18-1
- ros-humble-tile-map: 2.5.10-1 → 2.6.0-1
- ros-humble-tile-map-dbgsym: 2.5.10-1 → 2.6.0-1
- ros-humble-tl-expected: 1.0.2-2 → 1.2.0-1
- ros-humble-tracetools-image-pipeline: 3.0.8-1 → 3.0.9-1
- ros-humble-tracetools-image-pipeline-dbgsym: 3.0.8-1 → 3.0.9-1
- ros-humble-transmission-interface: 2.52.2-1 → 2.53.0-1
- ros-humble-transmission-interface-dbgsym: 2.52.2-1 → 2.53.0-1
- ros-humble-tricycle-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-tricycle-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-tricycle-steering-controller: 2.50.2-1 → 2.52.0-1
- ros-humble-tricycle-steering-controller-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-turtlebot3: 2.3.3-1 → 2.3.6-1
- ros-humble-turtlebot3-bringup: 2.3.3-1 → 2.3.6-1
- ros-humble-turtlebot3-cartographer: 2.3.3-1 → 2.3.6-1
- ros-humble-turtlebot3-description: 2.3.3-1 → 2.3.6-1
- ros-humble-turtlebot3-example: 2.3.3-1 → 2.3.6-1
- ros-humble-turtlebot3-navigation2: 2.3.3-1 → 2.3.6-1
- ros-humble-turtlebot3-node: 2.3.3-1 → 2.3.6-1
- ros-humble-turtlebot3-node-dbgsym: 2.3.3-1 → 2.3.6-1
- ros-humble-turtlebot3-teleop: 2.3.3-1 → 2.3.6-1
- ros-humble-ur: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-bringup: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-calibration: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-calibration-dbgsym: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-client-library: 2.6.0-1 → 2.6.1-1
- ros-humble-ur-client-library-dbgsym: 2.6.0-1 → 2.6.1-1
- ros-humble-ur-controllers: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-controllers-dbgsym: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-dashboard-msgs: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-dashboard-msgs-dbgsym: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-description: 2.8.0-1 → 2.9.0-1
- ros-humble-ur-moveit-config: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-robot-driver: 2.10.0-1 → 2.11.0-1
- ros-humble-ur-robot-driver-dbgsym: 2.10.0-1 → 2.11.0-1
- ros-humble-vector-pursuit-controller: 1.0.1-1 → 1.0.2-2
- ros-humble-vector-pursuit-controller-dbgsym: 1.0.1-1 → 1.0.2-2
- ros-humble-velocity-controllers: 2.50.2-1 → 2.52.0-1
- ros-humble-velocity-controllers-dbgsym: 2.50.2-1 → 2.52.0-1
- ros-humble-yasmin: 3.5.1-1 → 4.2.2-1
- ros-humble-yasmin-dbgsym: 3.5.1-1 → 4.2.2-1
- ros-humble-yasmin-demos: 3.5.1-1 → 4.2.2-1
- ros-humble-yasmin-demos-dbgsym: 3.5.1-1 → 4.2.2-1
- ros-humble-yasmin-msgs: 3.5.1-1 → 4.2.2-1
- ros-humble-yasmin-msgs-dbgsym: 3.5.1-1 → 4.2.2-1
- ros-humble-yasmin-ros: 3.5.1-1 → 4.2.2-1
- ros-humble-yasmin-ros-dbgsym: 3.5.1-1 → 4.2.2-1
- ros-humble-yasmin-viewer: 3.5.1-1 → 4.2.2-1
- ros-humble-yasmin-viewer-dbgsym: 3.5.1-1 → 4.2.2-1
- ros-humble-zmqpp-vendor: 0.0.2-1 → 0.1.0-3
- ros-humble-zmqpp-vendor-dbgsym: 0.0.2-1 → 0.1.0-3
Removed Packages [7]:
- ros-humble-feetech-ros2-driver
- ros-humble-feetech-ros2-driver-dbgsym
- ros-humble-generate-parameter-library-example
- ros-humble-generate-parameter-library-example-dbgsym
- ros-humble-generate-parameter-library-example-external
- ros-humble-generate-parameter-library-example-external-dbgsym
- ros-humble-generate-parameter-module-example
Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:
- Adam Serafin
- Automatika Robotics
- Autoware
- Bence Magyar
- Berkay Karaman
- Bernd Pfrommer
- Chris Lalancette
- Christian Rauch
- Davide Faconti
- Felix Exner
- Fictionlab
- Gerardo Puga
- Haroon Rasheed
- Husarion
- Ivan Paunovic
- Jacob Perron
- Jeremie Deray
- John Wason
- Jordan Palacios
- Jose Luis Blanco-Claraco
- Jose-Luis Blanco-Claraco
- José Luis Blanco-Claraco
- Kostubh Khandelwal
- Luis Camero
- M. Fatih Cırıt
- Markus Bader
- Masaya Kataoka
- Meher Malladi
- Michel Hidalgo
- Miguel Ángel González Santamarta
- Olivier Kermorgant
- Pyo
- Raul Sanchez Mateos
- Ryohsuke Mitsudome
- Sai Kishor Kothakota
- Samuel Hafner
- Shane Loretz
- Southwest Research Institute
- Stefan Fabian
- Steven! Ragnarök
- Temkei Kem
- Tibor Dome
- Tomoya Fujita
- Tyler Weaver
- Vincent Rabaud
- Vladimir Ermakov
- Víctor Mayoral-Vilches
- Yukihiro Saito
- bmagyar
- li9i
- miguel
- victor
1 post - 1 participant
ROS Industrial: ROSCon 2025 & RIC-AP Summit 2025 Blog Series: Singapore’s Defining Week for Open-Source Robotics
As we look back on 2025, this blog recaps one of the most impactful weeks for open-source robotics in the Asia-Pacific region.
On 30 October, the RIC-AP Summit expanded beyond conference halls into the real world with a series of curated site tours across Singapore. These tours showcased how ROS and Open-RMF are not just concepts but living deployments across manufacturing, healthcare, and smart infrastructure.
If the Summit sessions were about vision and strategy, the tours were about seeing robotics in motion—from factory floors to hospitals, airports, and digital districts.
Importantly, the tours brought together participants from different companies and countries, reflecting the truly international nature of the ROS-Industrial community and the collaborative spirit of Asia Pacific’s robotics ecosystem.
1. ROS in Manufacturing: SIMTech & ARTC + Black Sesame Technologies, Singapore Polytechnic
SIMTech & ARTC
Spotlight on smart manufacturing innovations.
Demonstrations of autonomous material handling and intelligent inspection systems.
ROS-powered robotics showing how open-source frameworks are shaping industrial transformation.
Reinforced Singapore’s role as a hub for advanced automation and digitalisation.
Singapore Polytechnic – Robotics, Automation and Control (RAC) Hub
Cutting-edge RAC Hub at the School of Electrical and Electronic Engineering.
Co-location labs with industry partners like ShenHao and JP Neura.
Demonstrations of collaborative and inspection robotics powered by ROS.
Clear example of academia-industry collaboration driving automation and intelligent control systems.
2. RMF Deployment in Healthcare & Reconfigurable Robotics: CHART, SUTD
CHART – Centre for Healthcare Assistive & Robotics Technology (CGH)
Demonstration of RoMi-H (Robotic Middleware for Healthcare), built on Open-RMF.
Multi-fleet interoperability enabling ROS and non-ROS robots to work seamlessly in hospitals.
Integration with lifts, automatic doors, and enterprise systems for streamlined operations.
Showcased how robotics enhances patient care and operational efficiency in smart hospitals.
SUTD – Reconfigurable Robotics Showcase
Outdoor mosquito-catching robot “Dragonfly” and snake-repelling “Naja.”
Infrastructure-focused robots like “Meerkat” and “Panthera 2.0.”
Nested reconfigurable robots demonstrating adaptability across environments.
A creative exploration of embodied AI, blending research ingenuity with real-world challenges.
3. RMF/ROS Deployments: CAG, CPCA, KABAM Robotics, Punggol Digital District – Panasonic
Panasonic – Fleet Management with RMF
Proprietary AI-enhanced RMF integration.
Features like congestion detection, human presence recognition in elevators, and prevention of unintended person-following.
Practical, operationally relevant fleet management for smart districts.
KABAM Robotics
Smart+ RMF Solution integrating multi-robot coordination with PABX and access systems.
Security robots tied into surveillance, access control, and facility management.
Tour of R&D facilities showcasing innovation in robotics for secure, automated environments.
Changi Airport Group (CAG)
Firsthand insights into CAG’s Open-RMF journey.
Live demonstrations of RMF features supporting airport operations.
Strategic vision for scaling interoperability across one of the world’s busiest airports.
CPCA – Hospitality Robotics Integration
Work-in-progress deployment of cleaning and delivery robots in hotel operations.
Robots integrated with lifts and automated doors via RMF dashboard.
Future vision: hotel staff requesting ad hoc robot tasks via StayPlease app.
Demonstrations of robots performing floor cleaning, restaurant bussing, and seamless interaction with smart infrastructure.
RIC-AP Summit Tour 2025: Key Takeaways
Manufacturing track: ROS is powering industrial transformation, bridging academia and industry.
Healthcare track: Open-RMF is operationalised in hospitals, enhancing patient care and efficiency.
Smart infrastructure track: Airports, hotels, and digital districts are adopting RMF for multi-robot orchestration.
The tours underscored a powerful message: Singapore is not just hosting conversations about robotics—it is living them. From labs to live deployments, the RIC-AP Summit tours demonstrated how open-source robotics is shaping industries, communities, and everyday life.
ROS Discourse General: High frequency log persistence on Jetson Orin (Rosbag alternative?)
Hi everyone,
My team has been working on a storage engine optimized specifically for the Jetson AGX Orin architecture to handle high-bandwidth sensor streams (LiDAR/cameras) that tend to choke ros2 bag record or MCAP writing at the edge.
The main architectural difference is that we bypass the kernel page cache and stream directly to NVMe using custom drivers. We are seeing sustained writes of ~1 GB/s with <10 µs latency on the AGX Orin, while still ensuring persistence through power cuts (no RAM buffer loss).
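For readers unfamiliar with the technique, a minimal sketch of what "bypassing the kernel page cache" usually means on Linux is shown below: O_DIRECT writes with block-aligned buffers. This only illustrates the general approach, not the engine described above (which uses custom drivers); the file path and block size are hypothetical placeholders.

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
      constexpr size_t kBlock = 4096;  // must match the device's logical block size
      // O_DIRECT skips the page cache; O_SYNC makes write() return only after
      // the data reaches the device, which is what matters for power-cut safety.
      int fd = open("/mnt/nvme/stream.bin",
                    O_WRONLY | O_CREAT | O_DIRECT | O_SYNC, 0644);
      if (fd < 0) { perror("open"); return 1; }

      // O_DIRECT requires the buffer, offset, and length to be block-aligned.
      void * buf = nullptr;
      if (posix_memalign(&buf, kBlock, kBlock) != 0) { close(fd); return 1; }
      std::memset(buf, 0xAB, kBlock);  // stand-in for a sensor payload

      if (write(fd, buf, kBlock) != static_cast<ssize_t>(kBlock)) perror("write");
      std::free(buf);
      close(fd);
      return 0;
    }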
We are looking for 3-5 teams running ROS 2 on hardware to test a binary adapter we wrote. It exposes a standard ROS 2 subscriber but pipes the data into our crash-proof storage instead of the standard recorder.
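As a rough picture of what such an adapter can look like on the ROS 2 side (not the vendor's actual code), a type-erased rclcpp generic subscription can hand raw serialized bytes to any storage backend. StorageEngine below is a hypothetical stand-in for the crash-proof writer, and the topic and type are placeholders:

    #include <memory>
    #include <rclcpp/rclcpp.hpp>
    #include <rclcpp/generic_subscription.hpp>
    #include <rclcpp/serialized_message.hpp>

    // Hypothetical stand-in for the vendor's crash-proof NVMe writer.
    struct StorageEngine {
      void append(const uint8_t * data, size_t len) {
        (void)data; (void)len;  // a real implementation would write to NVMe here
      }
    };

    class StorageAdapter : public rclcpp::Node {
    public:
      StorageAdapter() : Node("storage_adapter") {
        // Type-erased subscription: messages arrive already serialized, so they
        // can be appended to storage without a deserialize/reserialize cycle.
        sub_ = create_generic_subscription(
          "/points", "sensor_msgs/msg/PointCloud2", rclcpp::SensorDataQoS(),
          [this](std::shared_ptr<rclcpp::SerializedMessage> msg) {
            const auto & raw = msg->get_rcl_serialized_message();
            engine_.append(raw.buffer, raw.buffer_length);
          });
      }

    private:
      StorageEngine engine_;
      rclcpp::GenericSubscription::SharedPtr sub_;
    };

    int main(int argc, char ** argv) {
      rclcpp::init(argc, argv);
      rclcpp::spin(std::make_shared<StorageAdapter>());
      rclcpp::shutdown();
      return 0;
    }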
If you are hitting bottlenecks with dropped messages at high frequency or struggling with data corruption on power loss, this might solve it.
DM me or reply here and I can send over the binary for aarch64.
4 posts - 3 participants
ROS Discourse General: Best practices for thermal camera intrinsics (FLIR A400) in sensor fusion
I’m working with a FLIR A400 thermal camera as part of a sensor-fusion pipeline (thermal + radar/LiDAR).
I recently found that, unlike RGB cameras, FLIR does not expose factory intrinsics, and traditional OpenCV checkerboard calibration has proven unreliable due to the limited thermal contrast of standard targets.
I wanted to start a discussion on what practitioners typically do in this case:
- Using FOV-derived pinhole intrinsics (fx, fy from the datasheet FOV; see the sketch after this list)
- Optimizing intrinsics during downstream tasks (SLAM / NeRF / reconstruction)
- Avoiding explicit intrinsics and relying on extrinsics only
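For the first option, the pinhole model gives the intrinsics directly from geometry: a ray at half the field of view projects to the image edge, so tan(hfov/2) = (W/2)/fx. A minimal sketch, assuming placeholder resolution and FOV values rather than actual FLIR A400 datasheet numbers:

    #include <cmath>
    #include <cstdio>

    int main() {
      // Placeholder values; substitute the actual resolution and lens FOV
      // from the camera datasheet.
      const double W = 320.0, H = 240.0;        // image resolution in pixels
      const double deg = std::acos(-1.0) / 180.0;
      const double hfov = 24.0 * deg;           // horizontal field of view
      const double vfov = 18.0 * deg;           // vertical field of view

      // tan(hfov/2) = (W/2)/fx  =>  fx = (W/2)/tan(hfov/2), likewise for fy.
      const double fx = (W / 2.0) / std::tan(hfov / 2.0);
      const double fy = (H / 2.0) / std::tan(vfov / 2.0);
      const double cx = W / 2.0, cy = H / 2.0;  // assume a centered principal point

      std::printf("fx=%.1f fy=%.1f cx=%.1f cy=%.1f\n", fx, fy, cx, cy);
      return 0;
    }

Distortion is left at zero here; these values are only a seed, and the second option above can then treat them as an initialization to be refined inside SLAM or reconstruction.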
I’m especially interested in what has worked in real robotic systems rather than textbook calibration.
Looking forward to hearing how others approach this.
8 posts - 5 participants