Natural Intelligence

In a world increasingly reliant on ever more data and compute, the natural world continues to show that autonomy can be both low-power and remarkably robust. Insects, with brains no larger than a pinhead, tackle many of the same challenges we aim to solve in robotics—navigating complex environments and adapting to changing conditions with extraordinary efficiency.

Thanks to advances in behavioral science, neuroscience, genetics, and data science, we are rapidly uncovering the secrets of insect brains—challenging long-held engineering assumptions about how to solve autonomy. Through the experiences of Opteran’s researchers and our collaborators, we will explore these groundbreaking insights into Natural Intelligence.

What can desert ants teach kidnapped robots?

Dr Mike Mangan, VP of Research, Opteran

The first time I observed desert ants foraging through their arid scrub habitat, I was convinced that this was the place to look for solutions to robot navigation.

As a PhD student, my collaborators, Dr Fernando Amor and Prof Xim Cerda, and I scouted the outskirts of Seville, Spain, for field sites to study these incredible creatures. The lesson you quickly learn is that you don’t find desert ant nests by looking for them—they are often smaller than a coin, hidden beneath the vegetation-covered floor. Instead, we looked for foragers (the ants that scurry across the landscape in search of food) in likely locations.

We set out a pile of crushed cookies near where we had spotted a forager, and when we returned ten minutes later, we found a stream of ants shuttling back and forth from their nest.

 

Image courtesy of Hugh Pastol and Michael Mangan

 

Desert ants forage during the hottest parts of the day, in temperatures that can exceed 70°C—conditions that would challenge most creatures, let alone robotics systems. Unlike other ant species, they cannot rely on pheromone trails for navigation, as these evaporate quickly in the extreme heat. Instead, desert ants depend on their low-resolution compound eyes, using visual cues to guide their paths.

For perspective, desert ants essentially see the world through a “2000-pixel camera,” a stark contrast to the 48MP (roughly 8000×6000-pixel) resolution of an iPhone 16. Despite this low visual fidelity, they navigate vast, featureless landscapes, returning to their nests with pinpoint accuracy time and again.

Even more impressive is their brain, which contains fewer than one million neurons and weighs about as much as a grain of rice. Yet, with these limited resources, desert ants reliably navigate between nest and food source, trip after trip. They are a real-world example of a fully autonomous agent, solving precisely the tasks that we want robots to solve, but in an ultra-low-power, lightweight system.

 

Rapid learning and place recognition

One of the most astonishing traits of desert ants that I observed in that first study in Seville is that individual ants learn visually guided routes through their maze-like habitats. These routes are unique to the individual despite the journey start and end points being the same for all ants, which shows that they are not using shared information to navigate. Instead, each individual forms its own robust visual memories, in the same way you might go to the shops by one route while your neighbor follows a different route, even though you both travel to and from the same places.

What was incredible was that I could pick up a foraging ant and, when I placed her in a different part of her familiar route, she would re-localize almost instantly—recognizing the scene and adjusting her orientation before continuing along the correct path as if nothing had happened.

With my appetite whetted, my colleagues and I designed a follow-up study to ask how many times an ant needs to travel a route to learn it. The answer: only once. Yes, desert ants can robustly encode their entire visual route having traveled it completely just a single time [2].

 

The foraging history of an individual desert ant, showing its outward paths in orange and its homeward paths in blue. The black route on the right shows the ant recognizing its homeward path, solving the kidnapped robot problem. Data from [1].

 

To show this, we tracked individual ants from their first foraging journey using an overhead camera. We let each individual forage freely, explore the environment, interact with bushes, and even return to the nest. However, once they reached 8m from the nest, we provided them with what they were looking for: some crushed cookie. Then the interesting stuff happened.

What you can see in the graphic above is the learning history of one of our ants. On her first foraging trip (shown in dark orange) she searched the environment, reaching 8m, where she was fed (area marked F). She then returned home by a direct path while avoiding bushes (shown in dark blue) and dropped the cookie off in the nest. On her next outward journey (dark orange) she didn’t follow her previous outward path but instead went quickly and directly back to the feeder. After grabbing a second cookie crumb, she went home via a path identical to her first trip, indicating the potential for one-time learning of that homeward route. To test whether she had indeed locked in those visual memories, we picked her up just before she entered the nest and returned her to the feeder. After an initial search around (common in ants that have just been moved by a human), her route home (shown in black) adhered almost identically to those followed previously.

In the video below, you’ll see in high fidelity how the ant behaved when it was moved from the nest back to the start of its route. Initially the ant is a little panicked and disoriented, having been captured in a small tub and transported by a giant human. For the first 30 seconds or so, she searches around, hides in the bush, and calms down. However, observe what happens at 50 seconds—this is the moment she recognizes her visual surroundings and speeds up as she quickly and confidently navigates back to the nest.

 

 

What’s remarkable is that there are no large, obvious landmarks in this environment. The ant is working with a maze of similar-looking bushes, yet its low-resolution compound eyes are enough to memorize and recall this visually guided route.
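To build intuition for how such low-resolution vision can support route recall, the insect-navigation literature often models it as snapshot matching: panoramic views are stored and later compared pixel by pixel. The sketch below is purely illustrative (the function names and resolution choices are mine, not Opteran’s pipeline or the ants’ actual algorithm): it downsamples a grayscale panorama to roughly the ant’s ~2000-pixel acuity and scores how well two views match.

```python
import numpy as np

def downsample(image, out_h=20, out_w=100):
    """Reduce a grayscale panorama to ant-like resolution
    (20 x 100 = 2000 pixels) by block-averaging.
    Assumes the image dimensions are multiples of the output dimensions."""
    h, w = image.shape
    blocks = image.reshape(out_h, h // out_h, out_w, w // out_w)
    return blocks.mean(axis=(1, 3))

def image_difference(view_a, view_b):
    """Root-mean-square pixel difference between two snapshots:
    low values mean the views look alike."""
    return np.sqrt(np.mean((view_a - view_b) ** 2))
```

The point of the sketch is that a stored 2000-pixel snapshot is tiny, yet comparing it against the current view still yields a usable familiarity signal for following a route.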

This behavior highlights the robustness of desert ants’ visual recognition. Multiple studies have shown their ability to navigate remains strong even when the environment changes drastically—for example, when researchers surrounded them with enormous bed sheets to obscure familiar landmarks [3]. Even more impressively, some ants have been shown to recall their routes after hibernating for an entire winter [4].

 

Solving the kidnapped robot problem

The test described above demonstrates that desert ants naturally solve what roboticists call the “kidnapped robot problem.” Imagine a robot in your office running out of battery and powering down. A helpful cleaner moves it back to its charging station on another floor. When the robot powers back up, can it recognize that it’s not in the location where it shut down but instead back in its familiar charging bay?

This challenge—also known as visual place recognition (VPR)—is an ongoing area of research in robotics [5], with multiple competitions and benchmarks dedicated to solving it [6]. Current solutions often depend on high-definition cameras and computationally expensive visual processing algorithms, which are typically trained on vast amounts of data. Desert ants, on the other hand, solve this problem with a fraction of the resources.
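As a rough sketch of what the ant’s feat amounts to computationally, re-localization after a “kidnapping” can be framed as finding the stored view that best matches the current one. The toy example below uses hypothetical names and a deliberately minimal matching rule (mean squared pixel difference); real VPR systems add learned descriptors, sequence matching, and viewpoint handling on top of this idea.

```python
import numpy as np

def relocalize(current_view, route_memories):
    """Return the index of the stored snapshot most similar to the
    current low-resolution view.
    route_memories: array of shape (n_snapshots, height, width)."""
    errors = np.mean((route_memories - current_view) ** 2, axis=(1, 2))
    return int(np.argmin(errors))
```

For example, querying with a slightly noisy copy of one stored snapshot recovers that snapshot’s index, which is the essence of recognizing “I am back at my charging bay” rather than where I powered down.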

 

Natural Intelligence

At Opteran, we start by identifying where nature has already solved the problem we want machines to solve, then reverse-engineer key aspects of the visual sensing and processing pathways observed in insects to create low-dimensional yet highly robust visual systems. For example, our cameras capture a super-wide field of view, which inherently provides robustness to environmental changes. Combined with our stabilization algorithms—which account for physical tilt and variations in illumination—we minimize variance in the input data at the source, allowing computational efficiencies to propagate down the line.

By taking our inspiration from natural systems, we approach machine autonomy from a completely different perspective. That is what differentiates Opteran from conventionally engineered systems: we start from a system that we can see solves the problem, and then reverse-engineer the correct mix of sensing and processing algorithms that evolution has found.

 

References

  1. Mangan M, Webb B. Spontaneous formation of multiple routes in individual desert ants (Cataglyphis velox). Behavioral Ecology. 2012 Sep 1;23(5):944-54. [link]
  2. Haalck L, Mangan M, Wystrach A, Clement L, Webb B, Risse B. Cater: Combined animal tracking & environment reconstruction. Science Advances. 2023 Apr 21;9(16):eadg2094. [link]
  3. Wystrach A, Beugnon G, Cheng K. Landmarks or panoramas: what do navigating ants attend to for guidance? Frontiers in Zoology. 2011 Dec;8:1-1. [link]
  4. Rosengren R, Fortelius W. Ortstreue in foraging ants of the Formica rufa group-hierarchy of orienting cues and long-term memory. [link]
  5. Schubert S, Neubert P, Garg S, Milford M, Fischer T. Visual Place Recognition: A Tutorial [Tutorial]. IEEE Robotics & Automation Magazine. 2024 Jan 1. [link]
  6. Zaffar M, Garg S, Milford M, Kooij J, Flynn D, McDonald-Maier K, Ehsan S. VPR-Bench: An open-source visual place recognition evaluation framework with quantifiable viewpoint and appearance change. International Journal of Computer Vision. 2021 Jul;129(7):2136-74. [link]

 

Dr Michael Mangan
VP of Research, Opteran