Robotics/Embodied AI

Embodied AI, or Robotics AI, focuses on building intelligent robots that can understand and interact with the physical world. Using artificial intelligence techniques like machine learning and computer vision, these robots process data from sensors to recognize objects, navigate spaces, and make decisions in real time. This field connects AI with real-world applications, enabling robots to perform tasks like delivering goods, assisting in surgeries, or working in hazardous environments. By combining AI with robotics, Embodied AI aims to create systems that can think and act like humans in practical settings.
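To make the sense-decide-act cycle above concrete, here is a minimal sketch of the kind of control loop such systems run: sensor data is turned into object detections, a simple decision is made, and an action is sent to the robot in real time. The SimCamera, SimRobot, and detect_objects components are hypothetical stand-ins for illustration, not any particular robot's API.

```python
# A minimal sense-plan-act loop illustrating the perception-to-decision cycle
# described above. All components (SimCamera, SimRobot, detect_objects) are
# hypothetical stand-ins, not a real robot stack.
import random
import time
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    position: tuple  # (x, y) in the robot's frame, metres

class SimCamera:
    def read(self):
        return "frame"  # stand-in for a real sensor frame

class SimRobot:
    def __init__(self):
        self.steps = 0
    def execute(self, action, arg):
        print(f"executing {action} with {arg}")
        self.steps += 1
    def done(self):
        return self.steps >= 5

def detect_objects(frame):
    # Stand-in for a learned vision model (e.g. an object detector).
    labels = ["cup", "box"]
    return [Detection(random.choice(labels), (random.random(), random.random()))]

def plan(detections, goal):
    # Simple decision rule: move to the goal object if it was seen, else search.
    targets = [d for d in detections if d.label == goal]
    return ("move_to", targets[0].position) if targets else ("search", None)

def control_loop(robot, camera, goal, hz=10.0):
    # Sense -> plan -> act, repeated at a fixed rate.
    while not robot.done():
        detections = detect_objects(camera.read())  # sense
        action = plan(detections, goal)             # plan
        robot.execute(*action)                      # act
        time.sleep(1.0 / hz)

if __name__ == "__main__":
    control_loop(SimRobot(), SimCamera(), goal="cup")
```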


Key Researchers

Key Projects

Human-Robot Collaborative AI Programme

The Human-Robot Collaborative Artificial Intelligence (Collab AI) programme by A*STAR's Institute of High Performance Computing (IHPC) focuses on using advanced AI to create robots that can work seamlessly with people. These robots leverage AI to learn tasks by observing humans, identify objects through self-exploration, and respond to instructions using vision, touch, and speech. By enabling robots to handle complex, customized tasks in dynamic, real-world environments, the programme has the potential to transform industries like manufacturing, healthcare, and logistics.
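As a rough illustration of the "learn tasks by observing humans" idea, the sketch below fits a simple policy to recorded human demonstrations (behavioural cloning with a linear least-squares model). The task, the simulated demonstration data, and every function name are assumptions for illustration only, not the Collab AI programme's actual methods.

```python
# A minimal learning-from-demonstration sketch: fit a linear policy to
# (state, action) pairs recorded while a human performs the task.
# All data and names here are illustrative assumptions.
import numpy as np

def collect_demonstrations(n=200, rng=np.random.default_rng(0)):
    # Hypothetical demonstrations: the human moves a gripper toward a target,
    # so the recorded action is roughly the direction from gripper to target.
    gripper = rng.uniform(-1, 1, size=(n, 2))
    target = rng.uniform(-1, 1, size=(n, 2))
    states = np.hstack([gripper, target])   # observed state
    actions = target - gripper              # demonstrated motion
    return states, actions

def fit_policy(states, actions):
    # Behavioural cloning with a linear map: action ~ states @ W.
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

def run_policy(W, gripper, target):
    state = np.hstack([gripper, target])
    return state @ W  # predicted action for a new situation

states, actions = collect_demonstrations()
W = fit_policy(states, actions)
print(run_policy(W, gripper=np.array([0.2, -0.3]), target=np.array([0.5, 0.4])))
# The learned action points from the gripper toward the target,
# mimicking what was demonstrated.
```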


Improving 3D Recognition Performance with Minimum Extra Costs for Vision Guided Robotics (VGR)

Vision-guided robotics (VGR) relies on advanced perception techniques to navigate and interact with complex environments, with applications in autonomous vehicles, navigation systems, surveillance, and more. By leveraging abundant unlabelled data and developing few-shot learning models, VGR systems can recognize and adapt to new objects from only a few labelled examples. This approach improves accuracy and flexibility while keeping the extra annotation and retraining costs low, making VGR more robust and scalable across diverse applications.
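One common way to realise this kind of few-shot adaptation is a nearest-prototype classifier over embeddings produced by a backbone pretrained on unlabelled data; a minimal sketch is below. The simulated embeddings, class counts, and names are assumptions for illustration, not the project's actual model.

```python
# A minimal few-shot recognition sketch using nearest class prototypes.
# The embeddings are simulated stand-ins for features from a backbone
# pretrained on unlabelled data; this is not the project's actual model.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLASSES, K_SHOT = 64, 3, 5  # 3 new object classes, 5 labelled examples each

# Simulated embeddings: each new class forms a cluster around an unknown centre.
centres = rng.normal(size=(N_CLASSES, DIM))
support = centres[:, None, :] + 0.3 * rng.normal(size=(N_CLASSES, K_SHOT, DIM))

# "Training" on the few labelled examples is just averaging them per class.
prototypes = support.mean(axis=1)  # shape (N_CLASSES, DIM)

def classify(query_embedding):
    # Assign the query to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(prototypes - query_embedding, axis=1)
    return int(np.argmin(dists))

# A query drawn near class 1 should be assigned to prototype 1.
query = centres[1] + 0.3 * rng.normal(size=DIM)
print("predicted class:", classify(query))
```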