Autonomous navigation for robots at live events: technologies & challenges

The goal of fully autonomous robots that can be trusted to safely share spaces with humans is an understandable one.  Indeed, in certain highly controlled situations it has been realised, but in the event industry this goal remains, for the most part, elusive.  In this article we’ll explore the technical reasons why.

Let’s learn a bit about what’s involved in successfully traversing a space.  But first, let’s define what’s meant by “success” in this context! A definition that would satisfy most is surely “collision avoidance”.  “Collision detection” is the last line of defence, triggering a shutdown of all motion, but we would like to (no pun intended) avoid that situation.

What does a robot need in order to traverse a space, assuming we’ve taken care of collision avoidance?  It needs a map of some description- a spatial representation of the real world. This map will contain not only representations of relatively permanent objects such as walls and furniture, but also higher-level concepts such as ‘Stallholder A’ or ‘Event Entrance’.  The map will also need to dynamically include moving objects such as people (and other robots).
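Such a layered map can be sketched in a few lines of code. This is a minimal, illustrative sketch only- the grid size, cell values, and landmark names are all assumptions, not a real mapping system:

```python
# A layered event-space map: a static layer for permanent objects, a
# semantic layer for higher-level ideas, and a dynamic layer for people.

FREE, WALL = 0, 1

class EventMap:
    def __init__(self, width, height):
        self.width, self.height = width, height
        # Static layer: relatively permanent objects (walls, furniture).
        self.static = [[FREE] * width for _ in range(height)]
        # Semantic layer: higher-level concepts mapped to coordinates.
        self.landmarks = {}
        # Dynamic layer: moving objects, refreshed on every sensor update.
        self.dynamic = set()

    def add_wall(self, x, y):
        self.static[y][x] = WALL

    def add_landmark(self, name, x, y):
        self.landmarks[name] = (x, y)

    def update_dynamic(self, detections):
        # Replace the whole dynamic layer each cycle; people keep moving.
        self.dynamic = set(detections)

    def is_free(self, x, y):
        return self.static[y][x] == FREE and (x, y) not in self.dynamic

m = EventMap(10, 10)
m.add_wall(3, 3)
m.add_landmark("Event Entrance", 0, 5)
m.update_dynamic([(4, 4)])
print(m.is_free(3, 3), m.is_free(4, 4), m.is_free(5, 5))  # False False True
```

The key design point is that the three layers change at very different rates: walls almost never, landmarks occasionally, people constantly.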

Savioke - Autonomous, Secure Delivery in Dynamic Public Spaces

A map is of little use without an accurate representation of where the robot is on that map.  Ideally, our robot should be able to ascertain where it is in the space by gathering inputs from the real world.  It might do this using GPS, LASER scanning, RFID positional tags, image-recognition glyphs, or magnetic markers in the floor, to name just a few.

Sometimes it suffices for the robot simply to be aware of the direction in which its target lies, and to move towards it whilst avoiding obstacles.  However, without some form of higher-level route finding, such a simple strategy is likely to leave the robot stuck in navigational dead-ends.

There is also a social aspect to consider here.  It would be very odd indeed for a human to move in that blunt, obstacle-dodging way; generally, human ‘highways’ form through a phenomenon known as emergence, and what we observe is a clear demarcation of where it’s acceptable to walk.  In order to share a space with humans, our robot would need at least some concept of this.

Pepper robot

However, it’s a fallacy to assume that our robot will fit in like any other event participant. Reactions will range from “Wow, how cool!” to “Great. Just one more step closer to the robot uprising”.  We’d like to avoid both extremes. Certainly, wowing event participants is the name of the game, but we’d like the robot to attract attention for the purpose it’s serving, not just for the fact that it’s a robot; in most cases that’s a distraction.

Let’s dig a little deeper into the techniques and tools used to give a robot a sense of its surroundings.  First, sensors:

Bump sensors

One of the simplest possible sensors is a bump sensor- essentially just a momentary switch attached to a piece of material (usually rubber) that trips the switch when it comes into contact with something.  Recalling the distinction between collision detection and collision avoidance from earlier in the article, we would rather these sensors were never triggered, but it’s essential to have them as a last resort.

rotary encoder

Rotary Encoders

It’s very helpful for a robot to know exactly how many rotations its wheels have made.  One way of achieving this is to fit a specially coded disc around the motor’s axle and use an optical sensor to read off that code.  Say we want to be able to distinguish 256 possible positions for the wheel: an 8-bit code will suffice, because 2 to the power of 8 is 256.
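Once we can count encoder positions, turning counts into distance is simple geometry. A minimal sketch, assuming a 256-position encoder and an illustrative wheel diameter:

```python
import math

COUNTS_PER_REV = 256       # 2**8 positions per full wheel rotation
WHEEL_DIAMETER_M = 0.15    # assumed wheel diameter in metres

def distance_travelled(encoder_counts):
    """Convert raw encoder counts into metres travelled."""
    revolutions = encoder_counts / COUNTS_PER_REV
    # One revolution covers one wheel circumference (pi * diameter).
    return revolutions * math.pi * WHEEL_DIAMETER_M

# After 512 counts the wheel has turned exactly twice:
print(round(distance_travelled(512), 3))  # ≈ 0.942 m
```

This simple conversion is the basis of wheel odometry- and, as we’ll see later, its weakness is that it assumes the wheels never slip.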

Ultrasonic transducers

These devices send out pulses at a frequency much higher than humans can hear, and precisely measure the time it takes for the echo of that sound to return, thereby measuring the distance from the sensor to the target object.  The calculation essentially involves working out how far the sound must travel to the object and back, given the speed of sound. You can engage with a similar idea the next time you experience a thunderstorm: count how many seconds pass between seeing the flash of a lightning strike and hearing the thunder.  Given that the speed of sound is roughly 343 metres per second, all you need to do is multiply the number of seconds by 343 to estimate how far away the strike was. You could round down to 330 or 300 if you’re doing the calculation in your head and want to make the arithmetic a bit easier.
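The echo calculation is a one-liner. This sketch assumes we already have the round-trip echo time in seconds (on real hardware this would come from timing the sensor’s echo pin):

```python
SPEED_OF_SOUND_M_S = 343.0  # at roughly 20 °C

def range_m(echo_round_trip_s):
    """Distance to the target: the pulse travels out AND back, so halve it."""
    return SPEED_OF_SOUND_M_S * echo_round_trip_s / 2

# An echo arriving 10 ms after the pulse means the object is ~1.7 m away:
print(round(range_m(0.010), 3))  # 1.715
```

Note the division by two- forgetting it is a classic bug, since the measured time covers the journey to the object and back again.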

Roomba 980

Infrared sensors

An important aspect of sharing space with humans is not interfering with their perceptual frequency range.  OK, that’s probably not a term that’s immediately obvious, but what I mean is that humans can normally see within a certain range of light frequencies and hear within a certain range of sound frequencies.  Just as ultrasonic transducers work within a range of sound that is inaudible to humans, so too do infrared sensors work outside the range of light that is visible to humans- below it, in fact.

Speaking of facts, here’s a fun one- you can use many smartphone cameras (or other digital camera sensors) to see infrared signals, although some have filters that block them.  Simply use the camera’s preview function to look at the infrared sensor. Infrared sensors work in a way analogous to ultrasonic transducers: they emit energy and detect the reflection of that energy.  There will therefore be an infrared source (usually an LED), and you should be able to see that source through your camera. This trick works with TV remotes too- often that’s an easier way to demonstrate it.

Our robot can use infrared sensors to build up its map of its surroundings.  For example, cliff sensors are infrared emitters located under the front bumper; they actively look for drops and prevent the robot from falling off a step or other steep edge. The famous Roomba robot vacuum cleaner is equipped with cliff sensors.
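Cliff-sensor logic can be sketched very simply. This assumes an IR sensor that reports the strength of the reflected signal (floor close by = strong reflection, drop ahead = weak or none); the threshold and sensor names are illustrative, not Roomba’s actual implementation:

```python
CLIFF_THRESHOLD = 0.2  # below this, too little IR is reflected back

def is_cliff(ir_reflection_strength):
    """True when the reading suggests the floor has dropped away."""
    return ir_reflection_strength < CLIFF_THRESHOLD

def drive_step(front_left, front_right):
    """Very simple reactive policy: stop before any detected drop."""
    if is_cliff(front_left) or is_cliff(front_right):
        return "stop"
    return "forward"

print(drive_step(0.9, 0.8))   # solid floor under both sensors: forward
print(drive_step(0.9, 0.05))  # weak echo on the right: stop
```

In practice the threshold would be calibrated per floor surface, since dark carpets reflect far less infrared than pale tiles.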

LIDAR - Self driving car using 3D laser scanning

LASER scanners

Again, this form of scanner is analogous to the previous two- indeed, the range of light they employ is often in the infrared range.  The difference here is that LASER light is coherent, meaning that it does not spread out in the manner that regular light does.  As such, very detailed maps of a robot’s surroundings can be built up. Typically, the LASER sits on a rapidly rotating platform, allowing the scanner to sample many positions per second and build a very detailed map of its surroundings.
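Each sample from the rotating scanner is an angle and a range; turning a revolution of readings into map points is just trigonometry. A minimal sketch with illustrative sample data:

```python
import math

def scan_to_points(readings, robot_x=0.0, robot_y=0.0):
    """Convert polar (angle_deg, range_m) readings to Cartesian map points."""
    points = []
    for angle_deg, range_m in readings:
        theta = math.radians(angle_deg)
        points.append((robot_x + range_m * math.cos(theta),
                       robot_y + range_m * math.sin(theta)))
    return points

# Four readings, one per quadrant, each reporting an object 2 m away:
pts = scan_to_points([(0, 2.0), (90, 2.0), (180, 2.0), (270, 2.0)])
for x, y in pts:
    print(round(x, 2), round(y, 2))
```

A real scanner produces hundreds or thousands of such readings per revolution, which is what allows the detailed maps described above.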

Line Followers

This is a rather specialised style of sensor; it’s designed to keep the robot ‘on rails’ so to speak.  A simple optical sensor lets the robot know if it is over the line, or to one side of it, allowing it to adjust its heading accordingly.
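The heading correction can be done with simple proportional control. This sketch assumes two optical sensors straddling the line (readings near 1.0 = over the dark line, near 0.0 = over the light floor); the gain and speeds are illustrative:

```python
BASE_SPEED = 0.5  # nominal forward speed for both wheels
GAIN = 0.4        # how aggressively to steer back towards the line

def wheel_speeds(left_sensor, right_sensor):
    """Steer towards the darker sensor, i.e. back over the line."""
    error = left_sensor - right_sensor   # >0 means the line is to our left
    correction = GAIN * error
    # Slow the wheel on the line's side, speed up the other: the robot turns.
    return BASE_SPEED - correction, BASE_SPEED + correction

print(wheel_speeds(0.5, 0.5))  # centred on the line: both wheels equal
print(wheel_speeds(0.9, 0.1))  # drifted right: left wheel slows, robot turns left
```

A refinement used in practice is PID control, which adds terms for accumulated and rate-of-change error to stop the robot weaving from side to side.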


GPS

Useful for navigation (think self-driving cars and Amazon delivery drones), GPS gives our robot an idea of its position in absolute space.  The resolution of GPS is such that it is less applicable to navigation within a room, and better suited to navigating larger areas.  In addition, the signal from GPS satellites is blocked by heavy concrete and steel structures.
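To see why GPS suits larger areas, consider turning two latitude/longitude fixes into a ground distance with the haversine formula. Consumer GPS is only accurate to a few metres, so distances on the scale of a room vanish into the noise; the coordinates below are illustrative:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Two fixes a thousandth of a degree of latitude apart- about 111 m:
print(round(haversine_m(51.500, -0.120, 51.501, -0.120)))
```

A few metres of receiver error is negligible over 111 m, but overwhelming when the distances of interest are the width of an exhibition stall.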

Microphone /  Voice recognition

There are few things that put humans more at ease than being able to speak to a robot and, perhaps more importantly, being understood.  Speech-to-text is now ubiquitous across a range of devices, so it’s understandable that humans should expect their words to be understood.  However, it’s their sentiment that presents the greater challenge for artificially intelligent systems to recognise.


Visual odometry

The humble rotary encoder is useful where there is a strong coupling between a robot’s locomotive system (i.e. its wheels) and the encoder used to track the change in that system.  But what is a robot to do if it doesn’t use wheels, or if those wheels are likely to slip? Visual odometry is an ideal solution to this limitation. It is a more sophisticated technique that uses optical sensors to track the movement of markers in the robot’s surroundings.  By ‘movement’ in this context we mean relative movement- movement of the environment relative to the robot as the robot moves through it.

Imagine you’re sitting on a stationary train, looking out of the window. Einstein did this, but that’s another story. Now imagine the train is in the station, about to start its journey.  The platform is covered in tiles, and you’re able to make out individual tiles, and indeed count them as they pass the window. As the train starts to move, you count the number of tiles that pass you in a given time period. From just this information you can calculate how fast the train is travelling, and how far it has travelled since it started moving.  This, in essence, is the technique of visual odometry. (Remember that your car’s odometer measures distance travelled.)
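The tile-counting thought experiment can be written down directly. A minimal sketch, assuming an illustrative tile length- a real visual odometry system tracks arbitrary image features rather than tiles of known size:

```python
TILE_LENGTH_M = 0.3  # assumed length of one platform tile

def odometry(tiles_counted, elapsed_s):
    """Return (distance travelled in metres, speed in m/s)."""
    distance = tiles_counted * TILE_LENGTH_M
    speed = distance / elapsed_s
    return distance, speed

# 50 tiles pass the window in 10 seconds: 15 m covered at 1.5 m/s.
distance, speed = odometry(50, 10.0)
print(distance, speed)
```

The hard part in practice is not this arithmetic but reliably recognising and tracking the ‘tiles’- visual features- from frame to frame, which is where the processing cost discussed below comes from.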

Scene Flow demonstration

The benefit of visual odometry is that it needs no specialised markers placed in the robot’s environment- the environment itself provides the markers.  The technique has its challenges, however, foremost of which is the considerable processing power required to determine movement through the space.  But improvements in hardware processing power and algorithmic efficiency make this challenge ever less daunting.

In the next article we’ll explore actuators- devices that facilitate a robot’s reach into the physical world.  As we’ll discover, they are very strongly associated with a human’s experience of interacting with a robot- both positive and negative.