Cybersecurity Challenges in Smart Transportation Systems

Artificial intelligence (AI) plays a critical role in modern transportation modeling, and the outputs of AI models are often taken for granted as accurate and trustworthy. However, the robustness and vulnerability of the underlying deep learning models have not been investigated for traffic applications. Recent studies have shown that neural networks are vulnerable to deliberately crafted inputs known as adversarial samples. In general, an adversarial sample is generated by adding an imperceptible perturbation to an original data sample; although it is nearly indistinguishable from its original counterpart, it can significantly degrade the performance of a deep learning model. Szegedy et al. (2013) first reported this phenomenon for deep neural networks (DNNs), observing that adversarial samples occupy low-probability regions yet are densely distributed in the input space. Goodfellow et al. (2014) further argued that the linear behavior of DNNs in high-dimensional spaces is itself sufficient for adversarial samples to arise, and to generate them efficiently. Because such samples exist, potential attackers can exploit deployed deep learning models and degrade their performance. Although the related theory and applications have been studied in areas such as computer vision, social networks, and recommendation systems, few studies have investigated the vulnerability and robustness of deep learning models in transportation systems.
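To make the perturbation mechanism concrete, the sketch below implements the fast gradient sign method (FGSM) of Goodfellow et al., which follows directly from the linearity argument above: a single step in the sign of the loss gradient suffices to shift the model's output. The code is a minimal illustration in PyTorch; the classifier, its dimensions, and the traffic-state input are hypothetical stand-ins, not a model from the literature.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.05):
    """Generate an adversarial sample via the fast gradient sign method.

    Perturbs x in the direction that locally maximizes the loss, exploiting
    the approximately linear behavior of DNNs in high-dimensional spaces.
    For small epsilon the perturbation is imperceptible, yet it can change
    the model's prediction.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step: x_adv = x + epsilon * sign(grad_x loss).
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical traffic-state classifier: a 16-dimensional sensor reading
# mapped to 3 congestion levels (illustrative only).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
x = torch.randn(1, 16)            # original traffic sensor sample
y = torch.tensor([2])             # its true label
x_adv = fgsm_attack(model, x, y)  # nearby sample that may flip the prediction
```

Under this view, an attacker who can slightly corrupt sensor inputs needs no access to the model's parameters beyond gradient estimates, which is precisely why the vulnerability matters for deployed traffic models.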

[Figure: an example of attacking traffic signals.]