Multisensor Pedestrian Navigation System
A recent field of research is the “Multisensor Pedestrian Navigation System with Mapping Capabilities”. We study the navigation and guidance of sensor-equipped individuals in indoor scenarios. Application areas include:
- Disaster management
- Firefighters and Rescue teams
- Guidance for visually impaired people
- Consumer applications
In urban areas, and especially inside buildings, navigation systems based on Global Navigation Satellite Systems (GNSS) suffer from poor reliability and positioning accuracy due to signal attenuation and multipath effects. Furthermore, digital maps of buildings are usually not available. In disaster and firefighting scenarios, building infrastructure such as Wi-Fi, radio-frequency identification (RFID), or ultra-wideband (UWB) systems is typically not available or not operational. Therefore, pedestrian navigation systems for urban areas have to provide highly precise localization and mapping capabilities independent of building infrastructure and external signals.
Dual IMU Inertial System
The basis of our Multisensor Pedestrian Navigation System is an inertial sensor that records the movements of the user with accelerometers and gyroscopes; the trajectory is computed by strapdown calculations. One Inertial Measurement Unit (IMU) is mounted on the foot of the user to record its movements. Due to weight and cost constraints, only micro-electro-mechanical systems (MEMS) sensors can be used, which have poor noise and bias-drift performance. An estimate based only on strapdown calculations without aiding information drifts on the order of hundreds of meters after one minute. Only by an intelligent combination of sensors and innovative algorithms can the drift be eliminated. This is done by applying Zero Velocity Updates during the stance phase of the equipped foot. With this aiding technique, a large portion of the sensor errors can be eliminated.
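The stance detection and velocity reset behind a Zero Velocity Update can be sketched as follows. The 100 Hz sample rate, the 0.3 m/s² detection threshold, and the function names are illustrative assumptions, not our actual tuning:

```python
import numpy as np

G = 9.81        # gravity magnitude, m/s^2
DT = 0.01       # IMU sample interval, assuming 100 Hz
THRESH = 0.3    # stance threshold on | |a| - g |, m/s^2 (assumed value)

def detect_stance(acc_raw):
    """Flag samples where the measured acceleration magnitude stays
    near gravity, i.e., the foot is assumed to be on the ground."""
    return np.abs(np.linalg.norm(acc_raw, axis=1) - G) < THRESH

def integrate_velocity(acc_free, stance):
    """Integrate gravity-compensated acceleration to velocity; apply a
    zero-velocity update (velocity reset) during detected stance phases,
    which cancels the drift accumulated since the last stance."""
    vel = np.zeros_like(acc_free)
    for k in range(1, len(acc_free)):
        vel[k] = vel[k - 1] + acc_free[k] * DT
        if stance[k]:
            vel[k] = 0.0
    return vel
```

In a full system the ZUPT would enter a Kalman filter as a pseudo-measurement rather than a hard reset, so that position and attitude errors are corrected as well.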
Furthermore, by using a second IMU rigidly attached to the torso together with additional vision or laser sensors, the torso dynamics can also be recorded. This platform is additionally equipped with barometer and compass sensors, yielding long-term aiding for height and heading. The resulting “Dual IMU” approach yields good results for indoor scenarios; see figure 2 with seven independent test runs.
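As an illustration of how the barometer provides long-term height aiding, a minimal sketch using the standard-atmosphere formula; the sea-level reference pressure of 1013.25 hPa is the textbook default, which in practice would be calibrated on site:

```python
def baro_altitude(p_hpa, p0_hpa=1013.25):
    """Altitude in meters from static pressure (hPa) via the
    standard-atmosphere formula; p0_hpa is the sea-level reference
    pressure. Only relative height changes matter for indoor aiding."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

Differencing two such altitudes gives floor-level changes, which can bound the vertical drift of the inertial solution.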
But as there is no long-term aiding, the solution will still drift over time. Therefore, we propose the use of additional aiding sensors such as laser rangers, cameras, or a combination of both. With this innovative configuration of inertial and aiding sensors in the torso unit, a tightly coupled fusion of the single subsystems is obtained. This combines the particular advantages of the different sensors in an optimal way, yielding maximum reliability and accuracy.
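The fusion step underlying such a coupled system is the standard Kalman filter measurement update, sketched here generically; the state layout, noise values, and the helper name `kf_update` are illustrative, not our actual implementation:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman filter measurement update: fuse an aiding-sensor
    measurement z (e.g., barometric height) into the inertial state x
    with covariance P, given measurement matrix H and noise R."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # corrected state estimate
    P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x, P
```

In a tightly coupled design, each subsystem contributes raw measurements through its own H and R to one common filter, instead of fusing separately computed navigation solutions.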
For additional long-term aiding as well as for mapping, a portable laser ranger is integrated into the multi-sensor system. Simultaneous localization and mapping (SLAM) is performed using our Dual IMU Pedestrian Navigation System. Hence, the slowly drifting inertial navigation solution is improved, and a map of the rooms already visited is generated. An optimized OrthoSLAM algorithm is used, which enables real-time execution on portable hardware. Combining laser and inertial sensors yields a very robust and highly accurate navigation estimate for indoor scenarios. The survey of buildings for the automatic generation of indoor maps is currently under investigation, with promising results.
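The core assumption of OrthoSLAM is that most indoor walls are parallel or perpendicular to a main building direction. That constraint can be illustrated by snapping extracted wall-line orientations to the nearest multiple of 90 degrees; this is a simplified sketch of the idea, not the published algorithm:

```python
import math

def snap_to_orthogonal(angles_rad, ref_rad):
    """Snap wall-line orientations (radians) to the nearest multiple of
    90 degrees relative to a reference building direction. Constraining
    map lines this way removes most of the heading drift."""
    snapped = []
    for a in angles_rad:
        k = round((a - ref_rad) / (math.pi / 2))
        snapped.append(ref_rad + k * math.pi / 2)
    return snapped
```

Lines whose orientation is far from any snapped direction would be treated as clutter (furniture, people) and excluded from the map.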
Optical cameras are used as sensors for navigation aiding and mapping, an approach known as VisualSLAM. Based on the images of a camera integrated in the navigation system, the ego-motion of the user and the structure of the environment are estimated. Video-based navigation yields a complete navigation solution with position, velocity, and attitude information as well as a three-dimensional feature map (see figure 3).
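The measurement model at the heart of VisualSLAM is the pinhole projection of a 3D feature into the image; ego-motion and structure are recovered by matching such predicted projections against tracked image features. A minimal sketch, with illustrative focal length and principal point:

```python
import numpy as np

def project(point_cam, f=100.0, cx=320.0, cy=240.0):
    """Pinhole projection: map a 3D point in the camera frame
    (Z pointing forward) to pixel coordinates (u, v)."""
    X, Y, Z = point_cam
    return np.array([f * X / Z + cx, f * Y / Z + cy])
```

The residual between this prediction and the observed feature position is what the filter or optimizer minimizes over camera pose and feature positions.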
Our research covers a wide spectrum of vision-based navigation techniques with different advantages. Several kinds of image features, such as point and line features, as well as different data-fusion methods based on Kalman filters or numerical optimization are used. Our goal is to optimally combine inertial, laser, and vision sensors for robust and highly accurate indoor navigation in different scenarios.