
CNN-based Model Facilitates Classification and Damage Detection After Car Accidents

By Busayamas Pimpunchat

The automobile industry’s rapid and continuous growth has intensified the need for efficient car classification and damage detection systems. For instance, car accidents are prevalent in Bangkok, Thailand, due to congested roads, poor road conditions, and reckless driving. These accidents can result in significant property damage, serious injuries, and even fatalities. To address this issue, we developed an automated system that analyzes vehicle images; classifies car components such as the left headlight, right windshield, and hood; and detects damage to the car body. Because traditional methods are often time-consuming and error-prone, we sought an advanced solution that leverages the power of artificial intelligence (AI). Convolutional neural networks (CNNs) are increasingly popular in the field of computer vision because they can accurately process visual data, and our model utilizes CNNs and state-of-the-art deep learning techniques to provide valuable insights on accidents for insurance companies and regulatory authorities.

Overview of Our Model and Its Performance

Figure 1. Our automated classification and damage detection system can provide two damage locations with accuracy values of 91 percent (large square) and 78 percent (smaller rectangle on right windshield). Image courtesy of the Thai General Insurance Association.
Our CNN-based model leverages large datasets of car images to learn and extract meaningful features. Specifically, it automates the classification and damage detection process to offer faster and more objective assessments and reduce human error. After generating the model, we tested its performance in various scenarios with single and multiple cars (see Figures 1-3). While the model effectively classifies the position of a single car and detects visible damage, its predictions are less accurate when multiple cars appear in an image. Cases that involve black image data or distant photographs also pose a challenge, though the model detects damage in other scenarios with a high level of accuracy. These limitations indicate that further research and improvements are necessary to enhance the model’s performance.
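
This article does not detail our network architecture, but the following minimal sketch illustrates the general idea of a CNN classifier for car-component images. The class names, layer sizes, and input resolution are illustrative placeholders rather than the configuration of our actual model.

```python
# A minimal, illustrative CNN classifier for car-component images.
# The component classes, layer widths, and 224x224 input size are
# assumptions for this sketch, not the model described in the article.
import torch
import torch.nn as nn

COMPONENT_CLASSES = ["left_headlight", "right_windshield", "hood", "other"]  # hypothetical labels

class ComponentCNN(nn.Module):
    def __init__(self, num_classes: int = len(COMPONENT_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # (N, 128, 1, 1)
        return self.classifier(x.flatten(1))  # class logits

model = ComponentCNN()
logits = model(torch.randn(1, 3, 224, 224))   # one dummy 224x224 RGB image
print(COMPONENT_CLASSES[logits.argmax(dim=1).item()])
```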

Another noteworthy capability of the model is its ability to detect and classify various types of vehicular damage based on images of the car in question. For instance, it can identify and differentiate between minor scratches, dents, broken windows, and more significant structural damage. This feature significantly expedites the inspection and assessment process for both insurance companies and car owners. Additionally, the speed and performance of models that process vehicle images in real time will continue to improve; this capability is particularly beneficial in scenarios that demand quick decisions, such as insurance claims processing, pre-purchase inspections, or accident assessments.
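
The figures illustrate the form of this output: bounding boxes around damaged regions, each with a confidence value. As a rough illustration only (not our model), the snippet below runs a generic, COCO-pretrained Faster R-CNN detector from torchvision to produce boxes with confidence scores; in practice such a detector would be fine-tuned on labeled damage regions, and the 0.7 threshold here is an arbitrary choice.

```python
# A hedged illustration (not the article's model) of bounding-box detection
# output: a generic Faster R-CNN from torchvision, pretrained on COCO.
# Fine-tuning on labeled damage regions would be required for damage classes.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

image = torch.rand(3, 480, 640)            # stand-in for a real car photograph
with torch.no_grad():
    detections = detector([image])[0]      # dict with boxes, labels, and scores

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score >= 0.7:                       # keep only confident detections
        print(f"class {label.item()} at {box.tolist()} (confidence {score:.0%})")
```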

Training, Optimization, and Evaluation Metrics

Figure 2. Our automated classification and damage detection system can provide three damage locations (blue and green squares) with accuracy values of 94, 81, and 83 percent. Image courtesy of the Thai General Insurance Association.
Training a CNN-based model for car classification and damage detection involves optimizing its internal parameters to minimize the prediction error. Our optimization process utilizes backpropagation, which updates the model’s parameters based on the calculated gradients of the loss function. The choice of optimization algorithm, learning rate, and regularization technique can all significantly impact the model’s convergence and generalization performance. Common optimization algorithms in deep learning include stochastic gradient descent, Adam, and RMSprop [3, 4]. We can customize these algorithms with appropriate hyperparameters to improve our model’s training efficiency and stability.
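
As a concrete sketch of this training loop, the following PyTorch code pairs backpropagation with a configurable optimizer (stochastic gradient descent, Adam, or RMSprop). The learning rate, momentum, and weight decay values are illustrative defaults, not the hyperparameters of our model.

```python
# A sketch of one training epoch with a configurable optimizer.
# Hyperparameter values below are illustrative assumptions.
import torch
import torch.nn as nn

def make_optimizer(model: nn.Module, name: str = "adam", lr: float = 1e-3):
    if name == "sgd":
        return torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    if name == "rmsprop":
        return torch.optim.RMSprop(model.parameters(), lr=lr)
    return torch.optim.Adam(model.parameters(), lr=lr, weight_decay=1e-4)

def train_one_epoch(model: nn.Module, loader, optimizer, device: str = "cpu"):
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:              # loader yields (image batch, class labels)
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # prediction error
        loss.backward()                          # backpropagate gradients
        optimizer.step()                         # update the model's parameters
```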

Various evaluation metrics can assess the model’s ability to correctly classify cars and detect damage by quantitatively measuring its accuracy, precision, recall, and F1 score. Similar studies have utilized a variety of these metrics, as well as mean average precision [1, 2].
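
For example, given predicted and ground-truth class labels, these metrics can be computed directly with scikit-learn; the labels below are toy values for illustration only.

```python
# Toy example of the evaluation metrics mentioned above, computed with scikit-learn.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1, 0]   # ground-truth classes (illustrative data)
y_pred = [0, 1, 2, 1, 1, 0]   # model predictions (illustrative data)

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} F1={f1:.2f}")
```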

Potential Impacts of the Model

Our CNN-based model holds considerable potential for both the automotive industry and the insurance sector. Car manufacturers and dealerships could benefit from this AI-based technology by automating the car classification process, improving inventory management, and enhancing customer experience during sales transactions. Meanwhile, insurers could streamline their claim settlement procedures by leveraging our model's accurate damage detection capabilities. Doing so may lead to faster claim assessments, reduced costs, and improved customer satisfaction.

Prospective buyers in the used car market could even leverage this technology to assess the condition and history of pre-owned vehicles and make informed decisions about their purchases.

Concluding Thoughts

Figure 3. Our automated classification and damage detection system can identify three locations of damage to these two cars: two locations at the front of a black car with accuracy levels of 98 and 76 percent, and one location on the white car with an accuracy of 72 percent. Image courtesy of the Thai General Insurance Association.
We hope that our findings will lay the foundation for further advancements in the automotive industry and contribute to ongoing efforts to mitigate car accidents and improve road safety. Accurate identification and assessment of car damage are crucial for insurance claims processing, accident investigations, determination of collision liability, and regulatory compliance. Our CNN-based model has the potential to expedite the claims process, reduce fraudulent claims, improve overall customer satisfaction, and even facilitate updates to vehicle regulations that promote road safety.

More broadly, this research showcases AI’s potential to transform the automotive industry. As our work continues to evolve, we anticipate exciting advancements in computer vision and its application to the automotive domain.


Busayamas Pimpunchat delivered a minisymposium presentation on this research at the 2023 SIAM Conference on Computational Science and Engineering, which took place in Amsterdam, the Netherlands, earlier this year.

Acknowledgments: We gratefully acknowledge project funding from the Thai General Insurance Association. Case study data on examples of car accidents are courtesy of non-life insurance companies.

References
[1] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE conference on computer vision and pattern recognition. Miami, FL: Institute of Electrical and Electronics Engineers.
[2] Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., & Zisserman, A. (2010). The PASCAL visual object classes (VOC) challenge. Int. J. Comput. Vis., 88, 303-338.
[3] Kingma, D.P., & Ba, J. (2014). Adam: A method for stochastic optimization. Preprint, arXiv:1412.6980.
[4] Tieleman, T., & Hinton, G. (2012). Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neur. Net. Mach. Learn., 4(2), 26-31.

Busayamas Pimpunchat is an assistant professor in the Department of Mathematics and chair of the master's degree program in Data Science and Analytics at King Mongkut’s Institute of Technology Ladkrabang in Thailand. She is also a council member of the Asia Pacific Consortium of Mathematics for Industry. Pimpunchat’s research focuses on stochastic optimization, risk assessment, mathematical modeling with environmental contexts, finance, and actuarial science.
