What’s going through the mind of those autonomous vacuum cleaning robots as they traverse a room? There are different ways to find out, such as covering the floor with dirt and seeing what remains afterwards (a less desirable approach) or mounting an LED on top and taking a long-exposure photo. [Saulius] decided to do it by videoing his robot with a fisheye lens from near the ceiling and then making a heatmap of the result. Not satisfied with just a finished photo, he made a video showing the path taken as the room is traversed, giving us a glimpse of the algorithm itself.
The robot he used was the Vorwerk VR200, which he’d borrowed for testing. In preparation he cleared the room and strategically placed a few obstacles, some of which he knew the robot wouldn’t fit between. He started the camera and let the robot do its thing. The resulting video file was then loaded into some quickly written Python code that uses the OpenCV library to do background subtraction, normalizing, grayscaling, and then heatmapping. The individual frames were then rendered into an animated GIF and the video, which you can see below.
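[Saulius]’s actual script isn’t shown here, but the pipeline he describes — background subtraction, then normalizing the accumulated result — can be sketched in a few lines. His version used OpenCV’s routines; this toy sketch uses plain NumPy, and the tiny 5×5 “room”, the moving bright pixel standing in for the robot, and the threshold value are all made up for illustration:

```python
import numpy as np

def motion_heatmap(frames, background, thresh=30):
    """Accumulate a heatmap of where motion occurred across frames.

    frames: iterable of 2D uint8 grayscale arrays
    background: 2D uint8 grayscale array of the empty room
    """
    acc = np.zeros(background.shape, dtype=np.float64)
    for frame in frames:
        # Background subtraction: flag pixels that differ from the empty room
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        acc += (diff > thresh)
    # Normalize the accumulator to 0..255 so it can be colour-mapped later
    if acc.max() > 0:
        acc = acc / acc.max()
    return (acc * 255).astype(np.uint8)

# Toy demo: a 5x5 "room" with the robot sweeping along the top row
background = np.zeros((5, 5), dtype=np.uint8)
frames = []
for x in range(5):
    f = background.copy()
    f[0, x] = 255  # the robot shows up as one bright pixel per frame
    frames.append(f)

heat = motion_heatmap(frames, background)
# heat is bright along the top row (visited) and zero elsewhere
```

With real footage you’d feed in grayscaled video frames instead of synthetic arrays, and pass the result through a colormap to get the familiar blue-to-red rendering.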
Watching the video, it’s clear that the robot first finds an obstacle and then traverses its perimeter until it gets back to where it first encountered it. If the obstacle defines the outer boundary of a reachable area, the robot then fills in that area by crossing back and forth. It makes us wonder if the robot’s programmers could get some optimization hints from the fill routines used in drawing programs. If you want to experiment with other algorithms yourself, you can do so without too much trouble using iRobot’s hackable Roomba, the iRobot Create, though you’d have to add the vacuum back. You’ll also notice that it couldn’t fit between the two boxes placed in the middle of the room. We’re guessing that it’s aware of this missed area. How would you solve this problem?
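The back-and-forth fill pattern seen in the video is often called boustrophedon (ox-plough) coverage. The VR200’s firmware is proprietary, so this is only a toy sketch of the idea over an idealized open grid, not the robot’s actual algorithm:

```python
def boustrophedon_path(rows, cols):
    """Back-and-forth coverage path over an open rows x cols grid,
    reversing direction on each row like the robot's fill pattern."""
    path = []
    for r in range(rows):
        # Even rows sweep left-to-right, odd rows right-to-left
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cells:
            path.append((r, c))
    return path

path = boustrophedon_path(3, 4)
# Every cell is visited exactly once, and consecutive cells are adjacent,
# so the "robot" never has to jump or re-cover ground on an open grid.
```

A real robot additionally has to split the room into obstacle-free regions first (the perimeter-tracing phase presumably helps with that) and stitch the per-region sweeps together, which is where gaps like the one between the two boxes come from.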