Monocular Visual Localization and Mapping using 3D Point Cloud and Feature Map
Tae-Ki Jung, Gyu-In Jee



It is important to know the vehicle's position accurately for the safety of autonomous driving. Over the past several years, researchers have studied sensor-based localization to enhance safety, using GPS, INS, lidar, cameras, and other sensors. In particular, vehicle localization using a camera, which is comparable to high-accuracy lidar and similar to the human eye, has been actively studied. In this study, a lidar and a monocular camera are used. The lidar is used to build an accumulated 3D point cloud, and the lidar and camera together are used to create a feature map. Positioning is performed with the monocular camera alone: both visual inertial odometry and map matching use the monocular images. Visual inertial odometry estimates the inverse depth of each feature point and computes the position from the projection error. Map matching performs localization by matching feature points in the image with feature points in the map. In addition, the depth of a feature point is obtained by matching it to the 3D point cloud, and visual inertial odometry is performed using this correct depth when map matching is not available.
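Both the odometry and the map-matching steps rely on the same underlying operation: estimating the camera pose by minimizing the projection error of feature points whose 3D positions are known, either from the inverse-depth estimate or from the matched point cloud. The following is a minimal sketch of that reprojection-error minimization, assuming a pinhole camera with known intrinsics and 2D-3D correspondences already established; the function names, the use of SciPy's least-squares solver, and the synthetic data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

# Sketch: camera pose estimation by reprojection-error minimization.
# Assumes a pinhole camera with known intrinsics K and that 2D image
# feature points are already associated with 3D map points.
# All names here are illustrative, not taken from the paper.

def reprojection_residuals(pose, pts3d, pts2d, K):
    """pose = [rx, ry, rz, tx, ty, tz]: axis-angle rotation and translation
    mapping map-frame points into the camera frame."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    t = pose[3:]
    pts_cam = pts3d @ R.T + t            # 3D map points in the camera frame
    proj = pts_cam @ K.T                 # pinhole projection
    proj = proj[:, :2] / proj[:, 2:3]    # perspective division -> pixel coords
    return (proj - pts2d).ravel()        # stacked projection errors

def estimate_pose(pts3d, pts2d, K, pose0=None):
    """Estimate the camera pose from matched 3D map points and 2D features."""
    if pose0 is None:
        pose0 = np.zeros(6)              # e.g. the previous odometry estimate
    result = least_squares(reprojection_residuals, pose0, args=(pts3d, pts2d, K))
    return result.x

# Synthetic check: project map points at a known pose, then recover it.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
pts3d = rng.uniform([-5, -5, 4], [5, 5, 20], size=(50, 3))
true_pose = np.array([0.02, -0.01, 0.03, 0.5, -0.2, 0.1])
# Noiseless observations: residuals against zeros are just the pixel coords.
pts2d = reprojection_residuals(true_pose, pts3d, np.zeros((50, 2)), K).reshape(-1, 2)
print(estimate_pose(pts3d, pts2d, K))    # should recover true_pose
```

In practice the same residual would be fed to a sliding-window or filtering back end together with inertial measurements; the sketch only isolates the projection-error term the abstract refers to.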

Keywords: autonomous vehicle, map based localization, vision based localization