Mobile Robot Navigation – Localization – Mike Boulet

At a 2017 MIT Lincoln Laboratory Beaver Works Summer Institute seminar, Mike Boulet gave a lecture on localization for mobile robot navigation. Looky here:


Mobile robot navigation has many parts. Because this information is used directly in programming the MIT RACECAR, an NVIDIA Jetson based robot, we cover it here.

Mobile robot navigation can be broken into these main categories:

  • Perception
  • Localization
  • Task and Route Planning
  • Motion Planning and Execution

This lecture covers the localization aspect of the task.

There are many more lectures available in this summer series, with a wide range of subject matter. We will be providing pointers to the lectures that directly address the RACECAR, but it’s worth going through the playlist to find other topics which may interest you.

Note that these lectures are given to high school seniors.

Note: In case there are browser issues, the YouTube address is:

Note: Some people find it helpful to set the playback speed for these types of videos to 1.25X on YouTube; the setting is available in the settings menu. This saves a little time while watching, and the fidelity is still good enough to follow the lecture. You can always put it back to normal speed for the tricky bits.

1 Comment

  1. For localization, Kalman filters are very powerful and easy to program. The Kalman filter was a key algorithm in the Apollo space program, and it is used in many navigation systems today. It is also used in many tracking applications and supports the fusion of data from multiple sources. The filter is based on a set of equations that describe the state of a system and the way observations are mapped to that state, under Gaussian probability distributions. The "filtering" aspect of the algorithm is that noisy observations are weighted by their uncertainty, so measurement noise is smoothed out over time.

    Here are some links for further learning:
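The idea in the comment above can be sketched with a minimal one-dimensional Kalman filter: estimating a constant true value from noisy measurements. This is an illustrative toy, not code from the lecture; all names and numbers here are assumptions.

```python
# Minimal 1-D Kalman filter sketch: estimate a constant value from noisy
# observations. State model: x_k = x_{k-1} (nothing moves), z_k = x_k + noise.
# Illustrative only -- a real robot localizer tracks pose and uses motion models.

def kalman_1d(observations, q=1e-5, r=0.1**2):
    """q: process noise variance, r: measurement noise variance."""
    x_est = 0.0   # initial state estimate
    p = 1.0       # initial estimate uncertainty (variance)
    estimates = []
    for z in observations:
        # Predict step: the state is modeled as constant, so only
        # the uncertainty grows by the process noise.
        p = p + q
        # Update step: blend prediction and observation by the Kalman gain.
        k = p / (p + r)               # gain: how much to trust the measurement
        x_est = x_est + k * (z - x_est)
        p = (1 - k) * p
        estimates.append(x_est)
    return estimates

# Example: noisy readings of a true value of 1.0
readings = [1.2, 0.9, 1.05, 1.1, 0.95, 1.0, 1.02, 0.98]
est = kalman_1d(readings)
print(est[-1])  # estimate converges toward 1.0
```

Note how the gain `k` falls as the filter becomes more confident (smaller `p`), so later measurements nudge the estimate less than early ones.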
