The long-term goal of our research is to advance intelligent autonomous systems by integrating reinforcement learning, modern control theory, and game-theoretic approaches for trustworthy human-robot collaboration. Our work centers on developing advanced control strategies for micro-mobility robots, including wheeled, humanoid, and quadrupedal platforms, to enhance their agility and manipulation capabilities in complex environments. We also explore multi-robot coordination and swarm intelligence to enable seamless teamwork among robot fleets. In addition, we develop safe and energy-efficient control methods for connected and autonomous vehicles to promote sustainable urban mobility at scale. Leveraging diverse robotic platforms and experimental facilities, ETAIC bridges cutting-edge theory with real-world applications to create intelligent agents capable of complex tasks.
Our research addresses the core challenges of automated driving, including long-tail road scenarios, dynamic interactions with human agents, and reliable system validation. We explore the intersection of model-based control and data-driven intelligence to build resilient and adaptive autonomous vehicles.
We explore how to empower robots to make robust decisions through learning in uncertain environments, and to achieve efficient, safe, and trustworthy human-robot interaction. This includes ensuring that robots maintain high performance and safety when facing unknown or abnormal situations, and applying cutting-edge artificial intelligence techniques, such as reinforcement learning and large language models, to enable deeper autonomous learning and adaptation. These capabilities allow robots to coexist and collaborate naturally with humans, laying the foundation for future intelligent automation.