
Motivation

The aim of this project is to create a supervisory interface that will allow human-in-the-loop control of multiple robots within the Willow Garage building. The system will allow users to schedule tasks to be performed by the robots, will implement a sliding autonomy scheme that allows the robots to ask for human assistance when needed, and will provide a supervisory interface through which humans can interact with the robots and see the data that they gather. This project builds directly on the continuous operations project, currently underway at Willow Garage, and also on the RIDE game-based robot control interface, prototyped at Washington University.

This project will provide

  1. a supervisory, task-level interface to a team of mobile robots that supports sliding autonomy operations and human-in-the-loop control;
  2. a framework in which to test and showcase autonomous capabilities of our robots, allowing them to be sequenced by a human operator;
  3. extensions to ROS allowing it to deal with multi-robot applications in a robust way;
  4. sharing of persistent data structures among several robots in the building; and
  5. a system that actually performs useful tasks around Willow Garage on a continuous and on-going basis.

This project is the first step into an exploration of the application area of using robots for long-lived janitorial functions in a large office building. The main customers for this project will be individuals without technical knowledge of robots, using them for janitorial functions in such a building. Additional customers include Willow Garage staff, who can use the framework to test various tele-operation and autonomous modules developed for our robots. The Building Manager is an enabling project; it provides a framework in which autonomous capabilities can be inserted and tested.

Another Perspective: The building manager can also be seen as being an assistive interface for persons with disabilities. If the task set of the Building Manager includes functions that a person with disabilities has difficulty with (such as fetching items from high shelves, checking the status of doors at the other side of the building, etc), it can be an effective tool to empower that individual and to make him less dependent on caregivers. In the case where an individual with disabilities is capable of tasking the robot (accomplished with, essentially, button-pushes in the interface), but is not capable of the direct teleoperation often needed to get the robot out of trouble, this supervisory function could be forwarded to another operator, most likely at a remote location.

Goals

The specific goals of the project are as follows.

  1. Implement the supervisory interface based on the prototype RIDE interface developed at Washington University. This interface draws heavily on ideas from computer games, and provides real-time strategy, first- and third-person views of the robots and their environments. Robots can be selected by the user (either singly or in groups), and tasks (drawn from a context-sensitive list) assigned to them. The robots then perform these tasks autonomously, allowing the user to attend to other things. When a task is complete, or the robot detects that it needs help, the user is alerted and can either re-task the robot or take direct control of it to resolve the problem. The interface will be enhanced to allow semantic tagging of objects and locations in the world, either by the robot or by the human. These semantic tags can include blackout areas into which the robot is not allowed to drive or record data (a sketch of how such zones might be enforced appears after this list). A video of the prototype interface is available at http://www.youtube.com/watch?v=Uu5yuMgAiOk/. Deliverable: A working interface, with all of the current features of the RIDE prototype, fully integrated with ROS and rviz2, capable of displaying and tasking ten heterogeneous robots.

  2. Implement a set of example tasks that can be assigned to the robots. We list a set of possible tasks below, and note that the goal of this project is not to implement the complete set of useful tasks but, rather, to develop the framework to enable the easy addition of new tasks, and to verify this framework with some illustrative examples (a sketch of one possible task framework appears after this list). The tasks listed below represent a set that we believe we can implement today, with little risk. Deliverable: A set of ten tasks that can be assigned to the robots. These tasks will include data collection, navigation around the building, environment monitoring, and simple human interaction (see list below for examples).

  3. Implement a task manager that can sequence and prioritize a set of tasks, and allocate them appropriately to a heterogeneous collection of robots. Since this is an open research problem, we will limit ourselves to a simple FIFO scheme in the first instance, and ensure that the task manager is modular enough to allow upgrades to a more advanced scheduling policy at a later date (the framework sketch after this list shows one way to make the policy pluggable). Deliverable: A modular task management system that initially implements FIFO scheduling. Once this is in place, we will experiment with other common scheduling schemes, including a probabilistic MDP-based scheduler developed by Bill at WU.

  4. Implement multi-master support in ROS and test it in the context of the interface, where the Building Manager and the robots must share data and communicate with each other to accomplish their tasks. There are two facets to this goal: retrieving data from another robot, and pushing data to another robot, which will hopefully be provided by the same solution. We will also investigate implementing TTL support in the master data structures (the Redis-based sketch after this list illustrates one way TTLs could behave). Deliverable: New multi-master aware ROS master implementation, possibly based on Redis, that supports bidirectional communication between the Manager and robots.

  5. Develop multi-master aware APIs and naming specifications for ROS. The ROS name specification currently provides for running multiple robots within child namespaces on a single master. We will investigate a new name specification that incorporates referring to remote master resources, such as topics, services, and parameters (a hypothetical syntax is sketched after this list). We will also amend client library APIs based on this new specification. Deliverable: Multi-master naming specification for ROS with implementation in roscpp, rospy, and roslisp.

  6. Improve communication in multi-robot scenarios in ROS. ROS currently relies heavily on TCP transports, which perform poorly over WiFi and require stateful connections. There is rudimentary support for UDP in ROS, but there are a variety of improvements that can be made at the transport and protocol level. These include reliable UDP transport, polled topics, robust connectivity, and other transport integration (a stop-and-wait sketch of reliable UDP appears after this list). This goal will potentially require the work in (5) to be done first. Deliverable: Various improvements to the ros_comm stack, implemented in both roscpp and rospy, along with performance studies using tc to simulate adverse network conditions.

  7. Implement a framework for sharing persistent data structures between the robots and the Building Manager. An example of such a data structure is a map of the building. Currently, each robot has a locally-stored map. The Building Manager system will share a single map between all robots in the building, allowing each of them to update it as they gather new data (a minimal merge scheme is sketched after this list). This will ensure that each robot always has a current map, and that this map can be viewed by the human supervisor in the interface. Deliverable: A mechanism for sharing persistent data structures among all the robots in the building, allowing each robot to use and update these data structures. In particular, we will focus on a shared map, annotated with the states of doors, lights, temperature, and WiFi signal strength. The map will be kept current by continually incorporating new data from the robots, and mechanisms will be developed to ensure that each robot has an up-to-date version.

  8. Allow computation to be performed by the Building Manager. Since the Building Manager has access to data from the robots, it can potentially perform useful calculations for them. This will be useful for computationally-intensive processes, for processes that need access to special-purpose hardware (such as graphics cards), or for platforms lacking in computational resources. For example, a simple robot with a laser range-finder could send the raw laser data (along with odometry) to the Building Manager, which could then run AMCL on it, returning the most likely pose to the robot. There is clearly a trade-off between on-board capability and wireless network traffic here, which we will explore as part of the project (rough arithmetic after this list suggests the scale of the traffic involved). Deliverable: Demonstration of off-board computation (perhaps AMCL), with a study of trade-offs, and relative quality (compared to an on-board version).

  9. Evaluate the interface with both a naive and an experienced pool of users. We will perform formal user studies with the interface, assessing the effectiveness of the interface with individuals selected from the local community. We will also run a set of user studies at the PAX computer games conference, to see how experienced gamers (who are likely to be very familiar with this sort of interface) fare. The studies will measure effectiveness of task performance as well as other factors, such as cognitive load and situational awareness. Deliverable: Design, prototyping, and execution of two large-scale user studies, one with naive users (from the local community), and one with experienced gamers.
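
As a concrete illustration of the blackout areas mentioned in goal 1, the sketch below shows how tagged zones might be enforced, assuming zones are stored as 2D polygons in the map frame. All names and coordinates are made up; the real interface may represent tags quite differently.

    # Hypothetical sketch: blackout zones as polygons in the map frame.
    def point_in_polygon(x, y, polygon):
        """Standard ray-casting test: cast a ray to the right of (x, y)
        and count edge crossings; an odd count means the point is inside."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                # x-coordinate where this edge crosses the ray's height
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    # Zones tagged through the interface (coordinates are invented).
    blackout_zones = {
        "private_office": [(10.0, 4.0), (14.0, 4.0), (14.0, 8.0), (10.0, 8.0)],
    }

    def goal_allowed(x, y):
        """Reject navigation goals that fall inside any blackout zone."""
        return not any(point_in_polygon(x, y, poly)
                       for poly in blackout_zones.values())

    print(goal_allowed(12.0, 6.0))  # False: inside the private office
    print(goal_allowed(2.0, 2.0))   # True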
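
Goals 2 and 3 interact: new tasks should be easy to add, and the scheduling policy should be swappable. The sketch below suggests one way they could fit together, with tasks as plug-in classes and FIFO as the initial policy. Every name here is illustrative, not a committed API.

    from collections import deque

    class Task:
        """Base class for pluggable tasks; adding a task means subclassing this."""
        required_capability = None  # e.g. "navigation", "manipulation"

        def execute(self, robot):
            raise NotImplementedError

    class NavigateTo(Task):
        required_capability = "navigation"

        def __init__(self, goal):
            self.goal = goal

        def execute(self, robot):
            print("%s navigating to %s" % (robot.name, self.goal))

    class FIFOPolicy:
        """Initial policy: first in, first out, skipping tasks the robot
        cannot perform. A smarter (e.g. MDP-based) policy only needs to
        implement pick() with the same signature."""
        def pick(self, queue, robot):
            for task in queue:
                if task.required_capability in robot.capabilities:
                    queue.remove(task)
                    return task
            return None

    class Robot:
        def __init__(self, name, capabilities):
            self.name = name
            self.capabilities = set(capabilities)

    class TaskManager:
        def __init__(self, policy):
            self.policy = policy
            self.queue = deque()

        def submit(self, task):
            self.queue.append(task)

        def dispatch(self, robot):
            """Called whenever a robot reports itself free."""
            task = self.policy.pick(self.queue, robot)
            if task is not None:
                task.execute(robot)

    manager = TaskManager(FIFOPolicy())
    manager.submit(NavigateTo("front door"))
    manager.dispatch(Robot("pr2_1", ["navigation", "manipulation"]))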
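
Goal 4 mentions a master possibly based on Redis, with TTL support in the master data structures. As one assumption of how that could look, the sketch below uses the redis-py client to register a robot's topics under a key that expires unless periodically re-advertised, so robots that fall off the network age out of the shared registry. The host name and key layout are invented.

    import redis

    # Assumes a Redis server reachable from every robot; the host is made up.
    r = redis.Redis(host="building-manager", port=6379)

    def advertise(robot, topics, ttl=30):
        """Register this robot's topics; the entry expires after ttl seconds
        unless re-advertised, giving the TTL behavior described in goal 4."""
        key = "masters:%s:topics" % robot
        for name, msg_type in topics.items():
            r.hset(key, name, msg_type)
        r.expire(key, ttl)

    def lookup(robot):
        """Fetch another robot's advertised topics from the shared registry."""
        key = "masters:%s:topics" % robot
        return {k.decode(): v.decode() for k, v in r.hgetall(key).items()}

    advertise("pr2_1", {"/pr2_1/base_scan": "sensor_msgs/LaserScan"})
    print(lookup("pr2_1"))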
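
For goal 5, one candidate syntax (purely hypothetical, not an adopted specification) is to address remote-master resources with a leading authority component, by analogy with URLs:

    def parse_name(name):
        """Split a candidate multi-master name into (master, local name).
        The '//robot/resource' form is a hypothetical proposal, not the
        current ROS name specification."""
        if name.startswith("//"):
            master, _, rest = name[2:].partition("/")
            return master, "/" + rest
        return None, name  # ordinary name, resolved against the local master

    assert parse_name("//pr2_1/base_scan") == ("pr2_1", "/base_scan")
    assert parse_name("/base_scan") == (None, "/base_scan")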
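
The reliable UDP transport of goal 6 amounts to layering sequence numbers, acknowledgments, and retransmission on top of datagrams. Below is a minimal stop-and-wait sketch; a production transport would pipeline a window of packets, and none of this is actual ros_comm code.

    import socket
    import struct

    def send_reliable(sock, addr, seq, payload, timeout=0.2, retries=5):
        """Stop-and-wait: prepend a sequence number, then retransmit until
        the peer echoes it back as an ACK or we run out of retries."""
        packet = struct.pack("!I", seq) + payload
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(packet, addr)
            try:
                ack, _ = sock.recvfrom(4)
                if struct.unpack("!I", ack)[0] == seq:
                    return True
            except socket.timeout:
                continue  # lost packet or lost ACK: retransmit
        return False

    def recv_reliable(sock):
        """Yield datagrams in arrival order, ACKing each sequence number
        and dropping retransmitted duplicates."""
        seen = set()
        while True:
            packet, addr = sock.recvfrom(65535)
            seq = struct.unpack("!I", packet[:4])[0]
            sock.sendto(packet[:4], addr)  # ACK duplicates too
            if seq not in seen:
                seen.add(seq)
                yield seq, packet[4:]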
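
The heart of goal 7 is merging timestamped observations so that every copy of the shared map converges on the freshest data. A minimal last-write-wins sketch, under the assumption that annotations are keyed by feature id:

    import time

    class SharedAnnotations:
        """Map annotations (door state, lights, WiFi strength, temperature)
        keyed by feature id; the newest timestamp wins on merge."""
        def __init__(self):
            self.data = {}  # feature id -> (timestamp, value)

        def update(self, feature, value, stamp=None):
            stamp = time.time() if stamp is None else stamp
            old = self.data.get(feature)
            if old is None or stamp > old[0]:
                self.data[feature] = (stamp, value)

        def merge(self, other):
            """Fold another copy (e.g. a robot's) into this one."""
            for feature, (stamp, value) in other.data.items():
                self.update(feature, value, stamp)

    manager, robot = SharedAnnotations(), SharedAnnotations()
    robot.update("door_2f_east", "open")
    manager.merge(robot)
    print(manager.data["door_2f_east"])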
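
The trade-off noted in goal 8 can be made concrete with rough arithmetic. Assuming a Hokuyo-class scanner producing about 1080 range readings per scan at 40 Hz, with 4-byte floats (illustrative figures, not measurements):

    # Back-of-the-envelope traffic estimate for off-board localization.
    readings_per_scan = 1080   # assumed planar laser resolution
    scan_rate_hz = 40
    bytes_per_float = 4

    raw_bps = readings_per_scan * bytes_per_float * scan_rate_hz
    print("raw scans:  %.1f kB/s up the wireless link" % (raw_bps / 1000.0))

    # Returning only a pose estimate: x, y, yaw plus a 3x3 covariance,
    # at an assumed 10 Hz localization update rate.
    pose_floats = 3 + 9
    pose_rate_hz = 10
    pose_bps = pose_floats * bytes_per_float * pose_rate_hz
    print("poses back: %.2f kB/s down" % (pose_bps / 1000.0))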

Example tasks that we will implement in the Building Manager system include the following. Initially, we will implement the simplest forms of these tasks. We give examples of harder, more autonomous versions of the tasks for completeness, and to give a sense of where the system proposed here could go in the future.

  1. Watching external doors. The system will monitor the four external doors to the building, using the security cameras. When someone enters the building (possibly in a particular time-window), a robot will be sent to meet them. In the simplest case, the human will wait for the robot to arrive, and the task will be done. In a more difficult version, the human will walk slowly away from the door, and the robot (or robots) will have to find him. In yet another version, a robot will itself attach to the building’s cameras and watch the door for an expected visitor.

  2. Data collection. This can be performed in the background as the robots perform other tasks. The robots will record data from the building (such as WiFi strength, temperature, congestion, conference room door status (open/closed), lights on or off, etc) and transmit these data to the Building Manager (a minimal record type for these observations is sketched after this list).

  3. Attract the attention of a human, and get them to follow the robot. In the simplest case, the user will identify the human in the robot's camera image (displayed in the interface) by drawing a bounding box around them. The robot will approach the human and say “please follow me”, and then lead the human to a place in the building selected by the user. In a harder version, the robot will detect the presence of people, and ask the user to select one. In a much harder version, the robot will be tasked with finding a specific person in the building, and bringing them to a place.

  4. Multi-robot teleoperation. In the simplest case, drive multiple robots to different locations in the building, and have them perform a task in parallel (e.g. pick up an item off the floor). This will guide the general UI framework for maneuvering robots, assigning them a task, and integrating specialized interfaces for that task (e.g. showing a camera image so that a pickup item can be selected).

  5. Robot allocation. A user will be able to go to the Building Manager UI, request a robot with a particular capability, and have it “claimed” for them. In a slightly harder version, the claimed robot will autonomously navigate to the user’s desired work location.

  6. Multi-robot Continuous Operation. In the current continuous operation framework, an individual robot is given a task list to complete in its free time. In the multi-robot version of this, the Building Manager will maintain a task list and assign tasks from it to “free” robots.

  7. Multi-robot cleanup. In the simplest version, a PR2 will place a trash item on an iRobot Create, which will then drive to another location, where another PR2 will take the trash item and throw it in the trash. In a more difficult version, the Building Manager will continuously allocate new iRobot Creates to transport the trash. In an even more difficult version, a scout robot will map out trash around the building, and the Building Manager will attempt to dynamically generate a cleanup plan based on available robot resources.

  8. Tour guide. A robot tour guide will guide visitors around the building. Whenever it is in range of another operating robot, it will announce to the visitors what that robot is currently working on. In a more difficult version, the robot tour guide will have a display that will show the visitors what the other robot sees.

  9. Recycle the bottle. This is a lower-level task that involves the robot picking up a bottle and dropping it in the nearest recycle bin. In the simplest case, the bottle is sitting on an uncluttered tabletop, and the bottle and the bin are identified (through the interface) by the human supervisor. In the most autonomous version of the task, the bottle is found by the robot, and the recycle bin locations are retrieved from a persistent semantic map of the world.

  10. Open the door. The robot opens a door, and passes through it. In the simplest case, the robot starts in front of the door (which opens away from it), and the doorframe and handle are identified by the supervisor (through the interface). In the most autonomous version, the robot starts from an arbitrary position near the door, and identifies everything without assistance.
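
Task 2 above is essentially a logging problem. As a starting point, a record type for the observations it lists might look like the following; the field names and units are assumptions.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class BuildingObservation:
        """One background observation transmitted to the Building Manager."""
        robot: str
        x: float                 # map-frame position where it was taken
        y: float
        wifi_dbm: float = None
        temperature_c: float = None
        door_states: dict = field(default_factory=dict)  # door id -> "open"/"closed"
        lights_on: bool = None
        stamp: float = field(default_factory=time.time)

    obs = BuildingObservation(robot="pr2_1", x=12.3, y=4.5,
                              wifi_dbm=-61.0, door_states={"conf_2": "closed"})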

