Publications



Recent Work

---------------------------------------------------------------------------------------------------------------

Learning Human-Robot Collaboration via Heterogeneous-Agent Lyapunov Policy Optimization

Hao Zhang, Yaru Niu, Yikai Wang, Ding Zhao and H. Eric Tseng

Preprint
  • We propose heterogeneous-agent Lyapunov policy optimization (HALyPO), which establishes formal stability directly in the policy-parameter space by enforcing a per-step Lyapunov decrease condition on a parameter-space disagreement metric.
---------------------------------------------------------------------------------------------------------------

C2C: A Cognition-to-Control Hierarchy for Human-Robot Collaboration via Multi-Agent Learning

Hao Zhang, Ding Zhao and H. Eric Tseng

Preprint
  • In multi-agent human-robot collaboration, long-horizon coordination decisions and physical execution must co-evolve under contact, feasibility, and safety constraints. We address this challenge with cognition-to-control (C2C), a three-layer hierarchy that makes the deliberation-to-control pathway explicit.
---------------------------------------------------------------------------------------------------------------

IO-WBC: Interaction-Orientated Whole-Body Control for Compliant Object Transport

Hao Zhang, Yves Tseng, Ding Zhao and H. Eric Tseng

Preprint
  • We propose a bio-inspired, interaction-oriented whole-body control (IO-WBC) framework that functions as an artificial cerebellum: an adaptive motor agent that translates upstream (skill-level) commands into stable, physically consistent whole-body behavior under contact.
---------------------------------------------------------------------------------------------------------------

Dream to Adapt: Meta Reinforcement Learning by Latent Context Imagination and MDP Imagination

Lu Wen, H. Eric Tseng, Huei Peng, and Songan Zhang

IEEE RA-L
  • We introduce MetaDreamer, a novel context-based Meta Reinforcement Learning (RL) algorithm that addresses the high data and task density requirements of existing Meta RL methods. By leveraging meta-imagination through interpolating learned latent context space and MDP-imagination via a generative world model with added physical knowledge, MetaDreamer significantly improves data efficiency and generalization, outperforming current approaches.
---------------------------------------------------------------------------------------------------------------

Bi-Level Transfer Learning for Lifelong-Intelligent Energy Management of Electric Vehicles

Hao Zhang, Nuo Lei, Wang Peng, Bingbing Li, Shujun Lv, Boli Chen, and Zhi Wang

IEEE TITS
  • We propose a bi-level transfer approach based on MAML to realize a cross-platform transferable and online-adaptive EMS for REEVs. The method contributed to a successful industrial deployment of RL, implemented at BYD Auto, a leading automotive company, and significantly improved REEV efficiency.
---------------------------------------------------------------------------------------------------------------

Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles

S. Zhang, H. Peng, S. Nageshrao, and H. Eric Tseng

CVPRW
  • We introduce a novel approach to evaluate the robustness of deep reinforcement learning-based autonomous vehicle (AV) decision-making. We train a "challenger" agent using deep reinforcement learning to generate Socially Acceptable Perturbations (SAPs), aiming to induce crashes where the AV is primarily at fault, even when the AV's policy performs safely in naturalistic environments.
---------------------------------------------------------------------------------------------------------------

Improved Robustness and Safety for Pre-Adaptation of Meta Reinforcement Learning with Prior Regularization

Lu Wen, Songan Zhang, H. Eric Tseng, Baljeet Singh, Dimitar Filev, and Huei Peng

IEEE IROS
  • We developed PEARL+, a Meta-Reinforcement Learning algorithm that significantly enhances safety for autonomous systems. Unlike prior methods, PEARL+ explicitly optimizes for pre-adaptation safety and post-adaptation performance in new tasks, showing improved robustness in critical applications.
---------------------------------------------------------------------------------------------------------------

Multi-Scale Reinforcement Learning of Dynamic Energy Controller for Connected Electrified Vehicles

Hao Zhang, Nuo Lei, Shengbo Eben Li, Junzhi Zhang, Zhi Wang

IEEE TITS
  • This study proposes a multi-horizon reinforcement learning (MHRL) framework featuring a novel state representation and coordinated training of sub-networks across multiple time scales, which greatly improves fuel economy in real-world driving.
---------------------------------------------------------------------------------------------------------------

Prospective Role of Foundation Models in Advancing Autonomous Vehicles

Jianhua Wu, Bingzhao Gao, Jincheng Gao, Jianhao Yu, Hongqing Chu, Qiankun Yu, Xun Gong, Yi Chang, H. Eric Tseng, Hong Chen, and Jie Chen

Research
  • We present an example of an LLM-driven pipeline for autonomous driving, aiming to advance the application and development of foundation models in the autonomous vehicle domain.
---------------------------------------------------------------------------------------------------------------

Autonomous Highway Driving using Deep Reinforcement Learning

Subramanya Nageshrao, H. Eric Tseng, and Dimitar Filev

IEEE SMC
  • We proposed a reinforcement learning (RL)-based method where an autonomous vehicle learns to make decisions by interacting directly with simulated traffic, using a deep neural network to select actions for given system states, as demonstrated in highway driving scenarios with varying traffic densities.
---------------------------------------------------------------------------------------------------------------

Selected Book Chapters

---------------------------------------------------------------------------------------------------------------

Selected Journal Papers

---------------------------------------------------------------------------------------------------------------

Selected Conference Papers



---------------------------------------------------------------------------------------------------------------

Selected Patents

---------------------------------------------------------------------------------------------------------------

More publications and patents can be found on Prof. Tseng's Google Scholar homepage.