Research Article | Open Access

CASAformer: Congestion-aware sparse attention transformer for traffic speed prediction

Yifan Zhang (a,b), Qishen Zhou (c,b), Jianping Wang (d), Anastasios Kouvelas (b), Michail A. Makridis (b)

(a) Department of Computer Science, City University of Hong Kong (Dongguan), Dongguan, 523808, China
(b) Institute for Transport Planning and Systems, ETH Zurich, Zurich, 8093, Switzerland
(c) Institute of Intelligent Transportation Systems, College of Civil Engineering and Architecture, Zhejiang University, Hangzhou, 310058, China
(d) Department of Computer Science, City University of Hong Kong, Hong Kong, 999077, China

Highlights

• Comprehensive evaluation of State-Of-The-Art (SOTA) deep learning traffic prediction models under different traffic regimes.

• Design of a transformer-based model with a congestion-aware and informative sparse attention layer, improving low-speed prediction accuracy.

• Evaluation on two public datasets shows the model outperforms alternatives, especially for low-speed prediction.
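The congestion-aware attention idea in the highlights above can be illustrated with a minimal sketch. The paper's exact CASA layer design is not given on this page, so the biasing mechanism below (adding a positive logit bias at congested key positions so they receive more attention weight) and all parameter values are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def congestion_aware_attention(q, k, v, speeds, threshold=40.0, bias=1.0):
    """Illustrative single-head attention that emphasizes congested keys.

    A positive bias is added to the attention logits of key positions whose
    observed speed is below `threshold` (mph), so congested timesteps receive
    more weight after the softmax. `bias` and the biasing scheme itself are
    hypothetical, not the paper's exact CASA layer.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                               # scaled dot-product scores
    logits = logits + np.where(speeds < threshold, bias, 0.0)   # boost congested keys
    # numerically stable softmax over the key axis
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

With identical queries and keys, the congested position (speed below the threshold) ends up with a strictly larger attention weight than the free-flow one, which is the qualitative behavior the highlight describes.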

Abstract

Accurate and efficient traffic speed prediction is crucial for improving road safety and efficiency. With the emergence of deep learning and extensive traffic data, data-driven methods with increasingly complicated structures and progressively deeper neural networks are widely adopted for this task. Regardless of their designs, these models aim to optimize the overall average performance without discriminating between different traffic states. However, predicting traffic speed under congestion is normally more important than under free flow, since downstream tasks such as traffic control and optimization are more interested in congestion than in free flow. Unfortunately, most State-Of-The-Art (SOTA) models do not differentiate between traffic states during training and evaluation. To this end, we first comprehensively study the performance of SOTA models under different speed regimes to illustrate the low accuracy of low-speed prediction. We then propose a novel Congestion-Aware Sparse Attention transformer (CASAformer) to enhance prediction performance under low-speed traffic conditions. Specifically, the CASA layer emphasizes congestion data and reduces the impact of free-flow data. Moreover, we adopt a new congestion-adaptive loss function so that the model learns more from congestion data. Extensive experiments on real-world datasets show that CASAformer outperforms SOTA models for predicting speeds under 40 mph across all prediction horizons.
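The congestion-adaptive loss described in the abstract can be sketched as a weighted error that up-weights congested samples. The abstract only says the loss makes the model "learn more from the congestion data" and that the paper evaluates speeds under 40 mph, so the weighted-MAE form, the weight value, and the function name below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

CONGESTION_THRESHOLD_MPH = 40.0  # the paper evaluates prediction under 40 mph
CONGESTION_WEIGHT = 2.0          # hypothetical up-weighting factor, not from the paper

def congestion_adaptive_mae(y_true, y_pred,
                            threshold=CONGESTION_THRESHOLD_MPH,
                            weight=CONGESTION_WEIGHT):
    """Weighted mean absolute error that emphasizes congested (low-speed) samples.

    Samples whose ground-truth speed is below `threshold` contribute `weight`
    times as much to the loss as free-flow samples.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    w = np.where(y_true < threshold, weight, 1.0)
    return float(np.sum(w * np.abs(y_true - y_pred)) / np.sum(w))
```

For example, with a 5 mph error on a congested sample and a perfect free-flow prediction, the weighted loss counts the congested error twice as heavily as a plain MAE would, steering gradient-based training toward the low-speed regime.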

Communications in Transportation Research
Article number: 100174

Cite this article:
Zhang Y, Zhou Q, Wang J, et al. CASAformer: Congestion-aware sparse attention transformer for traffic speed prediction. Communications in Transportation Research, 2025, 5(2): 100174. https://doi.org/10.1016/j.commtr.2025.100174

542 Views | 24 Downloads | 7 Crossref | 7 Web of Science | 7 Scopus

Received: 14 October 2024
Revised: 08 December 2024
Accepted: 16 December 2024
Published: 10 April 2025
© 2025 The Authors.

This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).