Semantic segmentation based on multimodal information is used for object detection during autonomous driving at night

Authors

  • Haoqi Wu

DOI:

https://doi.org/10.61173/0s386b39

Keywords:

Autonomous driving, multimodal information semantic segmentation, object detection, Dual U-net network, Dual-way adversarial network

Abstract

Object detection for autonomous driving at night remains difficult, mainly because of poor illumination: the RGB images captured by the onboard camera carry far less information than in daylight, and the image content is seriously degraded. Blurred object outlines reduce semantic-segmentation accuracy and lead to missed detections. To address this, this study proposes the following approach: point cloud data collected by lidar is introduced at the data-acquisition stage and fused with the camera's image information to form precise multimodal three-dimensional information. A dual-way adversarial network then preprocesses the data, and a U-Net performs semantic segmentation on the multimodal 3D information. Simulation experiments are used to test and evaluate the segmentation results, and after training is complete, the model is applied to object detection during autonomous driving. Compared with conventional methods, this approach detects objects accurately, is insensitive to illumination intensity, and generalizes well.
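The fusion step the abstract describes, combining lidar point clouds with camera RGB into one multimodal input, can be sketched as projecting each lidar point into the image plane and attaching its depth as an extra channel. This is a minimal illustration only, not the paper's implementation: it assumes points are already in camera coordinates and that a pinhole intrinsic matrix `K` is available; the function name and shapes are illustrative.

```python
import numpy as np

def fuse_rgb_depth(rgb, points, K):
    """Build a multimodal RGB-D tensor (H, W, 4) from an RGB image
    (H, W, 3) and lidar points (N, 3) given in camera coordinates,
    using a 3x3 pinhole intrinsic matrix K.  Illustrative sketch."""
    h, w, _ = rgb.shape
    depth = np.zeros((h, w), dtype=np.float32)

    pts = points[points[:, 2] > 0]        # keep points in front of the camera
    proj = (K @ pts.T).T                  # (N, 3) homogeneous image coordinates
    uv = proj[:, :2] / proj[:, 2:3]       # perspective divide -> pixel coords
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # points that land in frame
    depth[v[ok], u[ok]] = pts[ok, 2]      # sparse depth; last write wins

    # concatenate depth as a fourth channel alongside R, G, B
    return np.concatenate([rgb.astype(np.float32), depth[..., None]], axis=-1)

# Usage: one lidar point on the optical axis, 2 m ahead, with a toy intrinsic
K = np.array([[500.0, 0.0, 32.0],
              [0.0, 500.0, 32.0],
              [0.0,   0.0,  1.0]])
rgb = np.zeros((64, 64, 3))
points = np.array([[0.0, 0.0, 2.0]])
fused = fuse_rgb_depth(rgb, points, K)    # shape (64, 64, 4)
```

A tensor of this form is what a U-Net-style segmentation network could then consume, with the depth channel supplying geometry that survives poor lighting.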

Published

2024-08-14

Section

Articles