News

This page shares some of our latest work. Click a publication title to view the article. We welcome your comments.

Multi-Vehicle Cooperative Decision

Game in Graph: Distributed Cooperative Decision-Making Framework for Multi-Level Equilibrium in Mixed Traffic.

GIG

Shiyu Fang, Bing Zhu, Peng Hang, Chen Lv, Jian Sun*.

Abstract: This paper proposes the Game in Graph (GIG) framework for cooperative decision-making. Traffic scenarios are modeled as a Spatiotemporal Conflict Topological Graph (SCTG) with heuristic-refined edges. The global graph is partitioned into high-conflict subgraphs via modularity maximization to enable distributed optimization. A bottom-up system utility is then decomposed across subgraphs, achieving multi-level equilibrium among vehicles, subgroups, and the overall system.
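As a rough illustration of the partitioning step, the following sketch groups a toy conflict graph into subgroups by greedy modularity maximization using `networkx`. The vehicle names, edge weights, and graph structure are all illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: partitioning a conflict graph into high-conflict
# subgroups via modularity maximization, in the spirit of GIG's
# distributed step. All nodes and weights here are toy values.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy conflict graph: nodes are vehicles, weighted edges encode the
# spatiotemporal conflict intensity between their planned trajectories.
G = nx.Graph()
G.add_weighted_edges_from([
    ("v1", "v2", 0.9), ("v2", "v3", 0.8),   # one tight conflict cluster
    ("v4", "v5", 0.7),                      # a second cluster
    ("v3", "v4", 0.1),                      # weak cross-cluster conflict
])

# Greedy modularity maximization yields subgraphs that can each be
# solved as a smaller, local game.
communities = greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(communities):
    print(f"subgroup {i}: {sorted(c)}")
```

On this toy graph, the strongly coupled vehicles end up in the same subgroup while the weak cross-cluster edge is cut, which is exactly what makes per-subgraph optimization tractable.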

PG

Shiyu Fang, Xiaocong Zhao, Xuekai Liu, Peng Hang*, Jianqiang Wang, Yunpeng Wang, Jian Sun*.

Abstract: This paper proposes an adaptive potential game (APG)–based cooperative driving framework. A system utility function is constructed from a general individual utility form to jointly optimize individual and system objectives. The Shapley value is adopted to quantify each vehicle’s marginal contribution. Moreover, human-driven vehicle (HDV) preference estimation is iteratively refined by comparing observed behaviors with APG-predicted actions, enhancing overall safety and efficiency.
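The Shapley value averages each player's marginal contribution over all orders in which players could join the coalition. A minimal self-contained sketch, with a toy utility function that is our own assumption rather than the paper's:

```python
# Hypothetical sketch: Shapley values as average marginal contributions
# to a system utility, as used conceptually in the APG framework.
# The utility function below is a toy stand-in, not the paper's.
from itertools import permutations

def shapley_values(players, utility):
    """Average each player's marginal contribution over all join orders."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = utility(frozenset(coalition))
            coalition.add(p)
            values[p] += utility(frozenset(coalition)) - before
    return {p: v / len(orders) for p, v in values.items()}

# Toy system utility: v1 and v2 cooperating yields a super-additive bonus.
def utility(coalition):
    base = {"v1": 3.0, "v2": 2.0, "v3": 1.0}
    bonus = 2.0 if {"v1", "v2"} <= coalition else 0.0
    return sum(base[p] for p in coalition) + bonus

sv = shapley_values(["v1", "v2", "v3"], utility)
print(sv)  # the cooperation bonus is split equally between v1 and v2
```

By symmetry, the 2.0 bonus is split evenly between v1 and v2, so their Shapley values are 4.0 and 3.0 while v3 keeps its stand-alone 1.0; this exhaustive-permutation form is exponential in the number of players, so practical systems sample permutations instead.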

Single-Vehicle Interactive Decision

CoReVLA: A Dual-Stage End-to-End Autonomous Driving Framework for Long-Tail Scenarios via Collect-and-Refine.

CoReVLA

Shiyu Fang, Yiming Cui, Haoyang Liang, Chen Lv, Peng Hang*, Jian Sun.

Abstract: CoReVLA is a continual end-to-end driving framework targeting long-tail scenarios. It is first fine-tuned on mixed driving QA datasets, then deployed in the Transportation Cave Automatic Virtual Environment (TransCAVE) to collect driver takeover data as failure signals. Finally, Direct Preference Optimization (DPO) refines the model using human preferences, avoiding reward hacking.
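The DPO objective in the refinement stage can be written on scalar log-probabilities of the human-preferred (chosen) and dispreferred (rejected) actions. The sketch below shows the standard DPO loss form; the numeric inputs are illustrative assumptions only.

```python
# Hypothetical sketch of the Direct Preference Optimization (DPO) loss
# used in refinement stages like CoReVLA's, on scalar log-probabilities.
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """-log sigmoid(beta * (policy log-ratio margin - reference margin))."""
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# If the policy prefers the human-chosen action more strongly than the
# frozen reference model does, the margin is positive and the loss
# drops below log(2) (the value at zero margin).
loss = dpo_loss(-1.0, -3.0, -1.5, -2.5, beta=1.0)
print(loss)
```

Because the loss depends only on log-probability ratios against a frozen reference model, there is no learned reward model to exploit, which is the sense in which DPO sidesteps reward hacking.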