Wall-following is a relatively simple and useful method for an autonomous mobile robot to explore its environment.
However, wall-following can be tricky for a number of reasons:
- Incomplete sensor coverage
- Limiting behaviors of range-finding infrared (IR) sensors
- Variety and complexity of environment
- Dealing with memory and state
First I will give you some idea of why these are issues, and then describe a possible solution. Throughout the post, I use ‘we’ for things that I did jointly with my MASLAB team, and ‘I’ for my own musings and reflections.
Incomplete Sensor Coverage
On our robot we had 5 IR sensors roughly covering an arc at the front of the robot. Three were long-range IRs: one facing forward (0 degrees) and two facing at -45 and 45 degrees. There were also two short-range IR sensors at -90 and 90 degrees. It should not be surprising, then, that the robot has blind spots with this arrangement of IR sensors. It is possible to move in such a way that you hit an obstacle without any of your IR sensors knowing about it. For example, if the robot drives head-on toward a thin pole, it is very likely that none of the IR sensors will catch it.
We embraced the fact that the robot would hit random things from time to time if it just relied on IR sensor data, and decided to let other things (like bump sensors) deal with it.
Limiting Behavior of IR Sensors
IR sensors don’t just magically give you the correct distance all the time. The reality is that IR sensors have limited effective ranges. We found that short-range IRs start giving garbage readings above roughly 0.25 m, and long-range IRs start giving garbage readings below roughly 0.17 m. Garbage readings from the short-range IR were always either too low for the robot to ever experience them or higher than 0.25 m, so we could safely trust any reading between 0.1 m (the minimum short-range reading we could get, given how the short-range IRs were mounted) and 0.25 m.
Unlike the short-range IR, whose sets of garbage readings and good readings are effectively disjoint, long-range IRs are a bit more tricky. If a long-range IR is less than about 0.17 m from an obstacle, it will start producing readings as high as 0.60 m. Thus it is difficult to tell whether we are too close to an obstacle or actually at 0.60 m. We considered long-range IR readings above 0.45 m out of range (and in the case of wall following, 0.75 m, since we would lose walls too often).
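These trusted bands are easy to encode as simple validity checks. Here is a minimal sketch in Python, using the thresholds above (the function names are my own):

```python
# Trusted bands from our measurements (meters); tune for your sensors.
SHORT_MIN, SHORT_MAX = 0.10, 0.25   # short-range IR
LONG_MIN, LONG_MAX = 0.17, 0.45     # long-range IR

def valid_short(reading):
    """True if a short-range IR reading falls in its trusted band."""
    return SHORT_MIN <= reading <= SHORT_MAX

def valid_long(reading, max_range=LONG_MAX):
    """True if a long-range IR reading falls in its trusted band.
    Pass max_range=0.75 when wall following, to avoid losing walls."""
    return LONG_MIN <= reading <= max_range
```

Anything outside these bands is simply discarded rather than interpreted.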
As you can see, the short-range IR is nice because it is easy to filter out junk. If we wanted to, we could use only short-range IRs and do wall following very close to the wall. However, following close to the wall is bad for exploring and seeing things in wide open spaces. We wanted to follow the walls from relatively far away.
We ended up combining the short and long IRs to filter out garbage readings and do wall following far from the wall. Here’s how it worked. When we were somewhat far from the wall but still capable of wall following, we would get garbage readings from the short-range IR, but our long-range IR would be in range and we would get closer to the wall. If we got too close, we would start getting garbage readings on the long-range IR sensor and good readings on the short-range sensor, which would push us away from the wall.
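One way to sketch this fusion (the helper name and preference order are my own; the thresholds are the ones given above):

```python
def side_distance(short_reading, long_reading):
    """Fuse one side's short- and long-range IR readings into a single
    distance estimate. Prefer the short-range IR when it is in its
    trusted band (we are close); otherwise fall back to the long-range
    IR; return None when neither reading is trustworthy."""
    if 0.10 <= short_reading <= 0.25:
        return short_reading
    if 0.17 <= long_reading <= 0.75:  # extended cutoff for wall following
        return long_reading
    return None
```

The controller downstream then only ever sees a distance it can trust, or knows that it has none.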
Variety and Complexity of Environment
A good wall following algorithm must be robust under a seemingly endless set of situations. I found it tempting to try to come up with a magical formula, based on IR sensors, or vision, or other sensory data that just produces the correct wheel velocities at each instant. My first idea was roughly to generate attractive and repulsive forces based on actual IR ranges compared to ideal IR ranges. One can think of the robot as moving in the direction of steepest descent to minimize an “energy” function based on its IR ranges. The ideal ranges would be adjusted dynamically to keep the robot moving and prevent it from getting stuck in local minima. This method seems elegant, but it is clear that it would have some subtleties. How do we dynamically change the ideal IR ranges? When do we increase or decrease them? How do we choose which ones to change? Energy-based methods have a special place in my heart and this is a topic I hope to explore in the near future.
What I actually ended up doing to do wall following under the enormous variety of situations was to break up the space of situations into categories and apply particular behaviors for each one. This isn’t as elegant (in my view) as having a physics-based approach with artificial attractive and repulsive forces, but it works.
General Strategy for Wall Following
When you’re following a wall, you most likely want to stay at a fixed distance from it. We used a proportional controller to stay at a fixed distance from, and roughly parallel to, the wall. Each side of the robot had an IR sensor perpendicular to the wall and an IR sensor at roughly 45 degrees to the wall. If one of the side sensors is in range of a wall, we start wall following. We set a desired distance for each of the two IR sensors. The robot moves forward at a constant velocity, and each in-range IR produces a desired rotational velocity by multiplying a gain with the difference between the actual and desired distances. The desired rotational velocities are then averaged, and the average is commanded to the motors. We did not explicitly calculate the angle to the wall and try to remain parallel; the distance control implicitly took care of that.
The above strategy works fine if you just have a very long straight wall and nothing else. What if you sense other objects on the opposite side of the wall you are following, or the wall you are following suddenly turns at a 90 (or negative 90!) degree angle? A more general strategy for wall following would need to handle these complexities.
1. In general, we try to stick to the wall that we are currently following, or go straight if we are not currently following a wall.
2. If we encounter an obstacle, we scan in the direction of the wall we are currently following (or in a random direction if we are not currently following a wall) until our front sensors become unblocked. Then we can be reasonably sure that we can move forward.
There are two interesting cases of this. An example of the first is when you are following a wall on the right, and another wall, perpendicular to the one you are following, blocks your path forward. However, there is still space between where your wall ends and where the perpendicular wall is (imagine a T shape, with vertical space between the horizontal and vertical lines). In fact, if your robot were smart, it would turn to the right and go through that space. This is exactly why we first scan in the direction of the wall we are following. With this strategy, we will be able to make it into the other space.
Another case of this strategy is when you are following a wall on the right (the parallel wall), and the wall turns 90 degrees in the positive direction, blocking your path forward (the perpendicular wall). There is no space between the parallel and perpendicular walls for you to pass through to the right. You would then scan to the right, not see any opportunity to move forward, then scan to the left, and start following the perpendicular wall.
The order of scanning takes care of both of these common cases.
3. If we are following a wall and the wall ends, then we try to turn in an arc in the direction of the wall that we think just ended. The reason for this goes back to Rule 1 — we always want to try to stick to the current wall. It could be that the wall curved very sharply, so sharply that our side sensors went out of range. It could also be that we were following a dividing wall between two rooms, and we reached the doorway between these two rooms.
Either way, sticking to the wall turns out to be a crucial element for reliable exploration.
Imagine that we have two rooms, a wall separating them, and a wide doorway in the middle of the separating wall, enough for the robot to pass very comfortably through to the other room. We follow the wall dividing the two rooms and, all of a sudden, the wall ends when we reach the doorway. Suppose we ignored this doorway and just kept going forward. Following Rule 2, we would eventually run into a wall, turn in some direction, and keep following that wall, all while ignoring the doorway. You would never be able to exit that room (in theory). In practice, you might, but it would take a very long time, and your robot would look like it doesn’t know what it’s doing. If instead you were to turn in an arc in the direction of the wall that just ended, you would turn right through the doorway into the other room.
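The scan-order decision in Rule 2 can be sketched as follows (`following_side` and the direction labels are my own naming):

```python
import random

def scan_order(following_side):
    """Return the order in which to scan when the path ahead is blocked.
    following_side is 'left', 'right', or None. Scan toward the wall we
    are following first (Rule 2); pick randomly if not following one."""
    first = following_side or random.choice(['left', 'right'])
    second = 'left' if first == 'right' else 'right'
    return [first, second]
```

A scan only stops once the front sensors report an unblocked path, so trying the wall's side first is what lets the robot slip through T-shaped gaps before giving up and turning away.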
Dealing With Memory and State
In order for the robot to know what it should do in a given situation, it is necessary for it to store some kind of state. For example, if the robot is in a wall-following state, it should be measuring distances from its sensors and trying to stay close to the wall. In most situations, it is sufficient for the robot to decide what to do based only on its current state, without regard to its previous states. Sometimes, though, it is useful to have a longer memory and base your actions on previous states as well. For example, if we are following a wall on the left for 10 seconds and we suddenly lose the wall (e.g. if it ends), what should we do? If we look at Rule 3 in the previous section, a good solution is to turn in an arc to the left, to either try to get back to the wall or enter another room. But Rule 3 relies on the robot’s memory of more than just its current state. The robot knows that it is currently not following a wall, but right before that it was, so it turns in the right direction. If we did not know that we were just following a wall, it might be more reasonable to just keep going forward. With more memory of past states, we have a richer model of our world.
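This kind of memory can be sketched as a small state machine. The state names and transition function below are my own invention; a real control loop would also consult bump sensors and the scanning behavior of Rule 2:

```python
from enum import Enum, auto

class State(Enum):
    GO_STRAIGHT = auto()
    FOLLOW_LEFT = auto()
    FOLLOW_RIGHT = auto()
    ARC_LEFT = auto()
    ARC_RIGHT = auto()

def next_state(state, left_in_range, right_in_range):
    """One transition step that uses the previous state as memory.
    Losing the wall while following triggers an arc toward the side
    we were just on (Rule 3) rather than defaulting to going straight."""
    if left_in_range:
        return State.FOLLOW_LEFT
    if right_in_range:
        return State.FOLLOW_RIGHT
    if state is State.FOLLOW_LEFT:
        return State.ARC_LEFT
    if state is State.FOLLOW_RIGHT:
        return State.ARC_RIGHT
    if state in (State.ARC_LEFT, State.ARC_RIGHT):
        return state  # keep arcing until a wall reappears
    return State.GO_STRAIGHT
```

Without the `FOLLOW_LEFT`/`FOLLOW_RIGHT` distinction carried over from the previous tick, the robot could not know which way to arc when both side sensors go out of range.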