Publications



Recent Work

---------------------------------------------------------------------------------------------------------------

Dream to Adapt: Meta Reinforcement Learning by Latent Context Imagination and MDP Imagination

Lu Wen, H. Eric Tseng, Huei Peng, and Songan Zhang

IEEE RA-L
  • We introduce MetaDreamer, a novel context-based Meta Reinforcement Learning (RL) algorithm that addresses the high data and task density requirements of existing Meta RL methods. By leveraging meta-imagination through interpolating learned latent context space and MDP-imagination via a generative world model with added physical knowledge, MetaDreamer significantly improves data efficiency and generalization, outperforming current approaches.
---------------------------------------------------------------------------------------------------------------

Safe and Human-Like Autonomous Driving: A Predictor–Corrector Potential Game Approach

Mushuang Liu, H. Eric Tseng, Dimitar Filev, Anouck Girard, and Ilya Kolmanovsky

IEEE TCST
  • We propose PCPG (Predictor-Corrector Potential Game), a novel decision-making framework for autonomous vehicles. PCPG uses a predictor for multi-agent interaction and a corrector to adapt to real-world, unpredictable agent behaviors by measuring and correcting prediction errors. This framework guarantees Nash equilibrium, is computationally scalable, ensures ego-vehicle safety, and approximates true Nash equilibrium despite unknown agent cost functions.
---------------------------------------------------------------------------------------------------------------

Bi-Level Transfer Learning for Lifelong-Intelligent Energy Management of Electric Vehicles

Hao Zhang, Nuo Lei, Wang Peng, Bingbing Li, Shujun Lv, Boli Chen, and Zhi Wang

IEEE TITS
  • We propose a bi-level transfer approach based on MAML to realize a cross-platform transferable and online-adaptive energy management strategy (EMS) for range-extended electric vehicles (REEVs). The method contributed to a successful industry deployment of RL, implemented at a leading automotive company, BYD Auto, significantly enhancing REEV efficiency.
---------------------------------------------------------------------------------------------------------------

Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles

Songan Zhang, Huei Peng, Subramanya Nageshrao, and H. Eric Tseng

CVPRW
  • We introduce a novel approach to evaluate the robustness of deep reinforcement learning-based autonomous vehicle (AV) decision-making. We train a "challenger" agent using deep reinforcement learning to generate Socially Acceptable Perturbations (SAPs), aiming to induce crashes where the AV is primarily at fault, even when the AV's policy performs safely in naturalistic environments.
---------------------------------------------------------------------------------------------------------------

Improved Robustness and Safety for Pre-Adaptation of Meta Reinforcement Learning with Prior Regularization

Lu Wen, Songan Zhang, H. Eric Tseng, Baljeet Singh, Dimitar Filev, and Huei Peng

IEEE IROS
  • We developed PEARL+, a Meta-Reinforcement Learning algorithm that significantly enhances safety for autonomous systems. Unlike prior methods, PEARL+ explicitly optimizes for pre-adaptation safety and post-adaptation performance in new tasks, showing improved robustness in critical applications.
---------------------------------------------------------------------------------------------------------------

Multi-Scale Reinforcement Learning of Dynamic Energy Controller for Connected Electrified Vehicles

Hao Zhang, Nuo Lei, Shengbo Eben Li, Junzhi Zhang, Zhi Wang

IEEE TITS
  • This study proposes a multi-horizon reinforcement learning (MHRL) framework featuring a novel state representation and coordinated training of sub-networks across multiple time scales, greatly improving fuel economy in real-world driving.
---------------------------------------------------------------------------------------------------------------

Prospective Role of Foundation Models in Advancing Autonomous Vehicles

Jianhua Wu, Bingzhao Gao, Jincheng Gao, Jianhao Yu, Hongqing Chu, Qiankun Yu, Xun Gong, Yi Chang, H. Eric Tseng, Hong Chen, and Jie Chen

Research
  • We present an example of an LLM-driven pipeline for autonomous driving, aiming to advance the application and development of foundation models in the autonomous vehicle domain.
---------------------------------------------------------------------------------------------------------------

Autonomous Highway Driving using Deep Reinforcement Learning

Subramanya Nageshrao, H. Eric Tseng, and Dimitar Filev

IEEE SMC
  • We propose a reinforcement learning (RL)-based method in which an autonomous vehicle learns to make decisions by interacting directly with simulated traffic, using a deep neural network to select actions for given system states, as demonstrated in highway driving scenarios with varying traffic densities.
---------------------------------------------------------------------------------------------------------------

Selected Book Chapters

---------------------------------------------------------------------------------------------------------------

Selected Journal Papers

---------------------------------------------------------------------------------------------------------------

Selected Conference Papers



---------------------------------------------------------------------------------------------------------------

Selected Patents

---------------------------------------------------------------------------------------------------------------

More publications and patents can be found on Prof. Tseng's Google Scholar homepage.