Robohub.org
 

Meet the AI-powered robotic dog ready to help with emergency response


07 January 2026




Prototype robotic dogs built by Texas A&M University engineering students and powered by artificial intelligence demonstrate their advanced navigation capabilities. Photo credit: Logan Jinks/Texas A&M University College of Engineering.

By Jennifer Nichols

Meet the robotic dog with a memory like an elephant and the instincts of a seasoned first responder.

Developed by Texas A&M University engineering students, this AI-powered robotic dog doesn’t just follow commands: it reasons about its surroundings. Designed to navigate chaos with precision, the robot could help revolutionize search-and-rescue missions, disaster response and many other emergency operations.

Sandun Vitharana, an engineering technology master’s student, and Sanjaya Mallikarachchi, an interdisciplinary engineering doctoral student, spearheaded the invention of the robotic dog. It can process voice commands and uses AI and camera input to perform path planning and identify objects.

A roboticist would describe it as a terrestrial robot with a memory-driven navigation system powered by a multimodal large language model (MLLM). The system interprets visual inputs and generates routing decisions, integrating environmental image capture, high-level reasoning and path optimization with a hybrid control architecture that supports both strategic planning and real-time adjustments.
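The hybrid idea, a slow, deliberative planner layered over a fast reactive layer, can be sketched in a few lines. This is a minimal illustration only: the class and method names are hypothetical, and the MLLM call is stubbed out with a fixed route rather than a real model query.

```python
from dataclasses import dataclass, field

@dataclass
class HybridNavigator:
    """Sketch of a hybrid control loop: a high-level planner (a stubbed
    stand-in for the MLLM) proposes a route from the current camera view,
    while a reactive layer adjusts each step in real time."""
    memory: list = field(default_factory=list)  # snapshots of past views

    def plan(self, camera_view: str) -> list:
        # Stand-in for the MLLM: the real system reasons over the image
        # (and remembered views) to propose a route. Here we just record
        # the view and return a canned plan.
        self.memory.append(camera_view)
        return ["forward", "left", "forward"]

    def react(self, step: str, obstacle_ahead: bool) -> str:
        # Real-time adjustment layer: override the plan when a
        # collision is imminent.
        return "stop" if obstacle_ahead else step

nav = HybridNavigator()
route = nav.plan("rubble field, clear path to the left")
adjusted = [nav.react(s, obstacle_ahead=(i == 2)) for i, s in enumerate(route)]
print(adjusted)  # ['forward', 'left', 'stop']
```

The split matters because the two layers run at very different speeds: an MLLM query may take seconds, while obstacle avoidance must react in milliseconds, so the reactive layer always has the last word on each step.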

A pair of robotic dogs that navigate using artificial intelligence climb concrete obstacles during a demonstration of their capabilities. Photo credit: Logan Jinks/Texas A&M University College of Engineering.

Robot navigation has evolved from simple landmark-based methods to complex computational systems that integrate many sensory sources. Yet autonomous exploration of unpredictable, unstructured environments, such as disaster zones or remote areas, remains difficult because it demands both efficiency and adaptability.

Robot dogs and large language model-based navigation each exist in other contexts, but combining a custom MLLM with a visual memory-based system, particularly within a general-purpose, modular framework, is a novel approach.

“Some academic and commercial systems have integrated language or vision models into robotics,” said Vitharana. “However, we haven’t seen an approach that leverages MLLM-based memory navigation in the structured way we describe, especially with custom pseudocode guiding decision logic.”

Mallikarachchi and Vitharana began by exploring how an MLLM could interpret visual data from a camera in a robotic system. With support from the National Science Foundation, they combined this idea with voice commands to build a natural and intuitive system to show how vision, memory and language can come together interactively. The robot can quickly respond to avoid a collision and handles high-level planning by using the custom MLLM to analyze its current view and plan how best to proceed.

“Moving forward, this kind of control structure will likely become a common standard for human-like robots,” Mallikarachchi explained.

The robot’s memory-based system allows it to recall and reuse previously traveled paths, making navigation more efficient by reducing repeated exploration. This ability is critical in search-and-rescue missions, especially in unmapped areas and GPS-denied environments.
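The path-reuse idea can be illustrated with a simple route cache. This is a hedged sketch, not the paper's actual data structure: all names are hypothetical, and the real system keys its memory on visual snapshots rather than string labels.

```python
class PathMemory:
    """Sketch of memory-driven navigation: store routes the robot has
    already traversed, keyed by (start, goal), so a repeated request
    reuses the stored path instead of re-exploring."""

    def __init__(self):
        self._routes = {}

    def remember(self, start, goal, path):
        # Record a successfully traversed route for later reuse.
        self._routes[(start, goal)] = path

    def recall(self, start, goal):
        # Return a known route, or None to signal fresh exploration.
        return self._routes.get((start, goal))

mem = PathMemory()
mem.remember("entrance", "stairwell", ["forward", "right", "forward"])
print(mem.recall("entrance", "stairwell"))  # known route, no re-exploration
print(mem.recall("entrance", "basement"))   # None: must explore
```

Because the lookup replaces GPS or a prior map, this kind of recall is exactly what keeps navigation efficient in unmapped, GPS-denied spaces: only genuinely new goals trigger exploration.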

The potential applications extend well beyond emergency response. Hospitals, warehouses and other large facilities could use the robots to improve efficiency, and the advanced navigation system might also assist people with visual impairments, explore minefields or perform reconnaissance in hazardous areas.

Nuralem Abizov, Amanzhol Bektemessov and Aidos Ibrayev from Kazakhstan’s International Engineering and Technological University developed the ROS2 infrastructure for the project. HG Chamika Wijayagrahi from the UK’s Coventry University supported the map design and the analysis of experimental results.

Vitharana and Mallikarachchi presented the robot and demonstrated its capabilities at the recent 22nd International Conference on Ubiquitous Robots, where the research was published as “A Walk to Remember: MLLM Memory-Driven Visual Navigation.”





            AUAI is supported by:



Subscribe to Robohub newsletter on substack






©2026.02 - Association for the Understanding of Artificial Intelligence