Self-Driving Car Using Simulator
DOI: https://doi.org/10.32628/IJSRSET2411269
Keywords: Self-driving car, CNN, Image Processing, Dataset Generation, Real Time Data, Augmentation Techniques
Abstract
With the rapid technological growth in transportation, self-driving cars have become a topic of wide interest. The main purpose of this project is to train a convolutional neural network (CNN) to drive a car autonomously in a simulator environment. The car's front camera captures images, and those images are used to train the model; in short, the project applies the concept of behavioural cloning. In behavioural cloning, the system mimics human driving behaviour by learning the steering angle: a dataset is generated in the simulator by a user-driven car in training mode, and the trained deep neural network then drives the car in autonomous mode. The car is trained on one track and drives autonomously on other tracks. The dataset for Track 1, which was straightforward to drive and had good road conditions, was used as the training set so that the car could drive itself on Track 2, which has abrupt curves, barriers, elevation changes, and shadows. Image processing and augmentation techniques were used to address this difficulty, allowing as much data and as many features as possible to be extracted. In the end, the vehicle performed admirably on Track 2. In future work, the team hopes to achieve the same level of accuracy using real-time data.
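To make the behavioural-cloning pipeline described above concrete, the following is a minimal illustrative sketch, not the authors' actual implementation. It assumes a Keras/TensorFlow setup, a PilotNet-style CNN that regresses the steering angle from a front-camera frame, and a Udacity-style driving_log.csv / IMG directory recorded in training mode; the file layout, crop region, and brightness range are hypothetical choices for illustration.

    # Illustrative behavioural-cloning sketch (assumed Keras/TensorFlow pipeline;
    # driving_log.csv layout follows the common Udacity simulator convention).
    import csv
    import numpy as np
    import cv2
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Lambda, Cropping2D, Conv2D, Flatten, Dense

    def load_samples(log_path="driving_log.csv"):
        """Read (centre-image path, steering angle) pairs logged in training mode."""
        samples = []
        with open(log_path) as f:
            for row in csv.reader(f):
                samples.append((row[0], float(row[3])))  # assumed column layout
        return samples

    def augment(image, angle):
        """Simple augmentations: horizontal flip (negating steering) and brightness shift."""
        if np.random.rand() < 0.5:
            image, angle = cv2.flip(image, 1), -angle
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[:, :, 2] *= np.random.uniform(0.4, 1.2)  # mimic shadows / lighting changes
        image = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
        return image, angle

    def build_model(input_shape=(160, 320, 3)):
        """CNN regressing a single steering angle from one front-camera frame."""
        model = Sequential([
            Lambda(lambda x: x / 127.5 - 1.0, input_shape=input_shape),  # normalise pixels
            Cropping2D(cropping=((60, 25), (0, 0))),  # drop sky and bonnet regions
            Conv2D(24, 5, strides=2, activation="relu"),
            Conv2D(36, 5, strides=2, activation="relu"),
            Conv2D(48, 5, strides=2, activation="relu"),
            Conv2D(64, 3, activation="relu"),
            Conv2D(64, 3, activation="relu"),
            Flatten(),
            Dense(100, activation="relu"),
            Dense(50, activation="relu"),
            Dense(10, activation="relu"),
            Dense(1),  # steering angle
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    if __name__ == "__main__":
        build_model().summary()

Augmentations of this kind are one plausible way to generalise from a Track 1 dataset to a harder track: flipping counteracts any turn-direction bias in the training laps, and brightness perturbation exposes the network to shadow-like lighting it never saw on the well-lit training track.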
License
Copyright (c) 2024 International Journal of Scientific Research in Science, Engineering and Technology
This work is licensed under a Creative Commons Attribution 4.0 International License.