Autonomous Navigation for Mobile Service Robots in Urban Pedestrian Environments
Journal of Field Robotics (JFR), 2011

This paper presents a fully autonomous navigation solution for urban, pedestrian environments. This work was part of URUS (Ubiquitous Networking Robotics in Urban Settings, 2006-2009), an EU STREP project whose main objective was the development of an adaptable network architecture to allow robots to perform tasks in urban areas. As part of the project, we at IRI built two robots based on Segway RMP200 platforms and outfitted a large section of a university campus as an experimental area for mobile robotics research. A basic requirement was that the robots navigate the area robustly. Some pictures of the area can be seen below. Note the many variations in height (ramps, steps), the small obstacles (bicycle stands, trashcans, glass windows and objects made of transparent plastic), and the ubiquitous presence of pedestrians.


Experimental area at the UPC Campus Nord

The two-wheeled Segway RMP200 is a very interesting platform to build urban robots on: it is highly mobile and yet very powerful, with nominal speeds over 4 m/s, a relatively small footprint and a payload of around 50 kg (depending on the model). Even so, its self-balancing nature creates problems for robotics applications. On one hand, there is a perception issue: sensors such as cameras or laser scanners point higher or lower as the robot pitches forward and backward to gain or lose momentum. This is especially critical when 2D laser scanners mounted at foot level, a very common solution for navigation and the one we adopted, point to the ground and sense "spurious" features or obstacles. On the other hand, there is a control issue: the platform's control algorithm takes precedence over the operator's instructions, so it is difficult or impossible to execute carefully planned trajectories with precision. Both problems are aggravated as weight is added to the platform. In practice, our rather overweight robot sees its visibility reduced to as little as 2-5 meters and takes 1-2 seconds to respond to commands, and the situation is notably worse when traversing slopes. We deal with these issues by using 3D data for localization and low-level navigation on one hand, and by implementing a loose navigation scheme on the other. See the paper for further details.
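To make the pitch problem concrete, the sketch below shows one simple way to discard scan points that hit the ground when the robot tilts. It is only an illustration, not the method from the paper: the function name, the sensor height, the height threshold, and the sign convention for pitch are all assumptions made for this example.

```python
import math

def filter_ground_hits(ranges, angles, pitch, sensor_height=0.3, z_min=0.05):
    """Discard scan points that hit the ground because the robot is pitching.

    ranges, angles: polar laser readings in the scanner frame (x forward, y left).
    pitch: robot pitch in radians (positive = nose down; an assumed convention).
    sensor_height: assumed height of the scanner above the ground plane, in meters.
    z_min: points whose estimated height falls below this are treated as ground.
    Returns the (range, angle) pairs kept as genuine obstacles.
    """
    kept = []
    for r, a in zip(ranges, angles):
        x = r * math.cos(a)                       # forward component in the scanner frame
        z = sensor_height - x * math.sin(pitch)   # estimated height after tilting by the pitch
        if z > z_min:
            kept.append((r, a))
    return kept
```

Note how, for a nose-down pitch, distant points straight ahead are rejected first, which matches the observed reduction of the effective sensing range to a few meters.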


Left: Tibi (left) facing Dabo (right). Right: on-board devices.

Our navigation framework is diagrammed in the figure below. It is divided into four blocks, in decreasing level of abstraction: path planning, path execution, localization, and obstacle avoidance. The latter integrates three components: traversability inference, local planning, and motion control. The robots receive go-to requests in XY global map coordinates, or semantic requests linked to coordinates (e.g., "take me to (25,11)", or "pick me up in the cafeteria", which happens to be at (50,50)). The path planning module is typically executed once per go-to request, and finds a path from the robot's current position to the goal as a set of waypoints. The other blocks make up two control loops. Obstacle avoidance is a reactive loop whose mission is to get the robot to a local goal (that is, one expressed in robot coordinates) while using the laser scans to avoid static and dynamic obstacles. To do this, we use an RRT-based local planner and motion control based on the Dynamic Window approach. The localization algorithm keeps track of the robot's position in the map using a 3D map-based particle filter. The localization estimate is then used by the path execution algorithm to transform the global waypoints computed by path planning into robot coordinates, which are sent to the obstacle avoidance module, thus closing the deliberative loop.
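The step that closes the deliberative loop, expressing a global waypoint in the robot's frame given the localization estimate, is a standard planar rigid-body transform. The sketch below is a minimal illustration of that step; the function name and the (x, y, theta) pose convention are assumptions for this example, not the paper's actual interface.

```python
import math

def waypoint_to_robot_frame(waypoint, pose):
    """Express a global (x, y) waypoint in the robot's local frame.

    waypoint: (x, y) in map coordinates.
    pose: the localization estimate (x, y, theta) in map coordinates,
          with theta the robot's heading in radians.
    Returns (local_x, local_y): the waypoint ahead-of/left-of the robot.
    """
    wx, wy = waypoint
    px, py, theta = pose
    dx, dy = wx - px, wy - py
    # Rotate the map-frame offset by -theta to land in the robot frame.
    local_x = math.cos(theta) * dx + math.sin(theta) * dy
    local_y = -math.sin(theta) * dx + math.cos(theta) * dy
    return local_x, local_y
```

In this scheme the output pair is exactly the kind of local goal the reactive obstacle-avoidance loop consumes, so path execution only needs the current pose estimate and the next waypoint.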


This solution was tested in two different urban settings: the Campus setting and a very busy public avenue in the Gràcia district, also in Barcelona, Spain. Our results total over 6 km of autonomous navigation, with a success rate on go-to requests of nearly 99%, a marked improvement over previous published efforts by the same team (Corominas, 2010). The results presented in the paper correspond to four experimental sessions: one at the Gràcia site and three at the Campus site. Two videos document the session at Gràcia and the first session at the Campus, respectively. The Campus video features the experimental session in its entirety; some segments are sped up to trim it down from over 18 minutes. The Gràcia video is much shorter, but it highlights how our navigation framework can work in different environments with little to no prior on-site testing. Note the lack of clear features for localization and the presence of many pedestrians and bicyclists. The videos are available in high definition (see bottom bar).

CAMPUS VIDEO (Second experimental session, June 3, 2010)

The full version (18 minutes) can be seen here: part 1 and part 2.

GRÀCIA VIDEO (First experimental session, May 20, 2010)


For more information about the project, please refer to the URUS website. Several videos are available, among them one documenting the demonstration at Gràcia (with URUS partner ETH Zürich and their SmartTer robot), which received TV coverage.