Amap, Alibaba’s location-based services platform, debuted its first quadruped robot, Tutu, as a performing guest at the finish line of the 2026 Beijing E-Town Humanoid Robot Half-Marathon, China’s premier annual robotics showcase. As the world’s first fully autonomous robotic guide dog capable of navigating complex, open environments, Tutu successfully led visually impaired individuals through bustling marathon crowds and obstacles during a live demonstration.
Unlike a controlled laboratory environment, guiding in the open world is among the most complex scenarios in robotics: the machine must make real-time decisions amid moving pedestrians, traffic, uneven terrain, and changing weather, all while ensuring user safety.
Tutu operates without preset routes or remote controls, identifying paths and walking autonomously while perceiving changes in road conditions up to three kilometers away.
The showcase demonstrated Amap’s technological capabilities in AI and spatial computing and embodied its vision of bridging spatial intelligence with the physical world to deliver tangible, real-world support.
Translating Digital Mapping Expertise into Spatial Intelligence with AI
The intelligence behind the robotic guide showcase is Amap’s proprietary ABot full-stack technology framework, a three-layer architecture that converges high-fidelity spatial data, model capabilities, and agentic orchestration for execution.
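Amap has not published ABot’s internal APIs, but the three-layer split described here can be sketched as a simple pipeline in which each layer consumes the one below it. All class and method names below are hypothetical, not Amap’s actual code:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a three-layer stack (data -> model -> agent).
# None of these names come from Amap's codebase.

@dataclass
class DataLayer:
    """World model supplying training/evaluation scenarios (ABot-World's role)."""
    scenarios: list = field(default_factory=list)

    def sample_scenario(self) -> dict:
        # Return a scene; here just a stub with a default fallback.
        return self.scenarios[0] if self.scenarios else {"scene": "empty street"}

@dataclass
class ModelLayer:
    """Foundation models for manipulation and navigation (ABot-M0 / ABot-N0's role)."""
    def plan_route(self, scene: dict, goal: str) -> list:
        # A real navigation model would emit waypoints; we fake a single step.
        return [f"walk toward {goal} through {scene['scene']}"]

@dataclass
class AgentLayer:
    """Agentic orchestration: turns a user goal into model calls over live data."""
    data: DataLayer
    models: ModelLayer

    def handle(self, command: str) -> list:
        scene = self.data.sample_scenario()
        return self.models.plan_route(scene, goal=command)

agent = AgentLayer(DataLayer(), ModelLayer())
print(agent.handle("crosswalk"))  # one planned step for the stub scene
```

The point of the sketch is the dependency direction: the agent layer never touches raw sensor or map data directly; it always goes through the model layer, which in turn is trained and grounded by the data layer.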
The data layer is powered by Amap’s self-developed ABot-World, a world model that provides tens of millions of real-world training scenarios for embodiment models. It gives Tutu the ability to understand the physical world and reason through solutions, marking a fundamental shift from merely following instructions to making autonomous decisions.
Building upon this foundation is the model layer, consisting of Amap’s two core foundation models for embodiment launched earlier this year: ABot-M0, the world’s first universal manipulation model that works across diverse robot morphologies, and ABot-N0, a navigation model that integrates essential navigation tasks into a unified system, enabling robots to handle long, complex tasks in the real world.
Powered by ABot-N0 and a deep fusion of large-scale traffic data with advanced machine vision, Tutu provides a dual-layer safety assurance covering both line-of-sight and beyond-visual-range navigation, enabling it to tackle complex route-planning decisions across a wide range of real-world mobility scenarios.
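The dual-layer assurance described here, on-board perception for what the robot can see plus map and traffic data for what it cannot, could be combined in a gate like the following. Everything in this sketch (function names, thresholds, data shapes) is illustrative, not Amap’s API:

```python
def line_of_sight_clear(obstacles_m: list, min_gap_m: float = 1.5) -> bool:
    """Perception layer: True if no detected obstacle is closer than min_gap_m."""
    return all(d >= min_gap_m for d in obstacles_m)

def route_hazards_ahead(traffic_feed: dict, horizon_km: float = 3.0) -> list:
    """Map layer: hazards reported on the planned route within horizon_km."""
    return [h["type"] for h in traffic_feed.get("hazards", [])
            if h["distance_km"] <= horizon_km]

def safe_to_proceed(obstacles_m: list, traffic_feed: dict) -> bool:
    # Both layers must agree: a clear sightline AND no reported hazards ahead.
    return line_of_sight_clear(obstacles_m) and not route_hazards_ahead(traffic_feed)

# A pedestrian 0.8 m away blocks progress even when the mapped route is clear.
print(safe_to_proceed([0.8, 4.2], {"hazards": []}))  # False
# A clear sightline is not enough if roadwork is reported 1.2 km ahead.
print(safe_to_proceed([2.0, 4.2], {"hazards": [{"type": "roadwork", "distance_km": 1.2}]}))  # False
print(safe_to_proceed([2.0, 4.2], {"hazards": []}))  # True
```

The design choice being illustrated is conjunction: either layer alone can veto movement, so a failure of perception or of the traffic feed degrades to caution rather than to a wrong decision.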
Finally, the agent layer acts as an operating system that translates these capabilities into autonomous action. Equipped with an agentic architecture featuring self-reflective reasoning and self-correction, Tutu can process natural-language commands to meet everyday needs. Rather than merely following a path, it interprets user intent: given the command “I’m thirsty,” it recognizes the need, identifies nearby venues, and brings a bottle of water back to the user, bridging model capabilities with real-world applications.
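The “I’m thirsty” example follows a classic agent loop: interpret intent, plan, act, then reflect on failure and retry. A toy sketch of that loop, with every name and behavior hypothetical rather than Amap’s implementation:

```python
def interpret(command: str) -> str:
    """Map a natural-language need to a concrete goal (toy intent parser)."""
    intents = {"i'm thirsty": "fetch water", "i'm lost": "guide home"}
    return intents.get(command.lower().strip(), "await clarification")

def attempt(goal: str, venue_open: bool) -> bool:
    """Try one action; a real robot would navigate and manipulate here."""
    return venue_open  # the attempt succeeds only if the chosen venue is open

def run_agent(command: str, venues: list, max_retries: int = 3) -> str:
    goal = interpret(command)
    for i, venue_open in enumerate(venues[:max_retries]):
        if attempt(goal, venue_open):
            return f"done: {goal} (venue {i})"
        # Self-reflection: the attempt failed, so move on to the next candidate.
    return f"failed: {goal}"

# First venue closed, second open -> the agent self-corrects and succeeds.
print(run_agent("I'm thirsty", venues=[False, True]))  # done: fetch water (venue 1)
```

The self-correction step is what distinguishes this loop from simple instruction-following: failure is an expected branch that re-enters planning rather than terminating the task.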
To support a collaborative industry ecosystem, Amap has open-sourced ABot-M0, including the data, algorithms, model, and training frameworks, allowing the broader robotics industry to access a high-performance intelligence baseline and operate more effectively.
The global standing of this architecture was recently validated when ABot-World outperformed entries from Google and NVIDIA to take the top position on WorldArena, an international world-model benchmark.
Tech for Good: Addressing Accessibility Gaps through Physical AI
For Amap, high-level technical benchmarks are valuable only if they solve real-world problems, so the company is focusing its technical edge on critical societal gaps. Tutu, for instance, can help address the dire shortage of guide dogs in China, where over 17 million visually impaired individuals are supported by only about 400 active guide dogs.
The successful showcase at the marathon served as a vital stress test. Amap demonstrated that its intelligent model is ready to provide meaningful support in real-life scenarios. It aligns with Alibaba’s overarching strategy: prioritizing the practical applicability of AI to enhance human life and turning world-class expertise into a helpful, actionable tool for society.
This new breakthrough also extends Amap’s long-standing commitment to inclusive mobility. In November 2022, Amap launched Wheelchair Navigation, which has since expanded to 71 cities. In 2024, it introduced Visually Impaired Navigation, which prioritizes tactile pavements and provides audio narration. Together, these features have helped plan over 300 million accessible routes since launch.