Beyond SLAM: redefining autonomy with neuromorphic software

What’s the best way to give a robot the intelligence it needs to behave autonomously?

We believe the answer is to create natural intelligence using neuromorphic algorithms that draw on the deep study of how brains process the world around them. And we do that by starting with what brains evolved to solve in the first place: movement.

Before I explain why these natural algorithms are superior, let’s take a look at alternatives: SLAM, deep learning and Generative AI.

SLAM (Simultaneous Localization and Mapping)

SLAM is location-focused technology that involves generating and updating a detailed map, while tracking an object’s movement within the mapped area. This approach is core to the way many autonomous mobile robots work.

It’s very cheap to train, because there is no training. Yet, as I explain below, it’s very expensive to deploy because of the hardware and the compute requirements involved.

SLAM requires lots of complex 3D math to figure out how to build a map of an environment and navigate around it at the same time.

Building a detailed map and working out exactly how far away everything is, while simultaneously tracking the robot’s own movement through the world, is a very hard problem. The SLAM approach creates a ‘metric map’, which can be very accurate. But doing all of the math required to produce the map can come at a huge computational, and hence financial, cost.
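To see why that joint estimation is so demanding, here’s a toy 2D sketch in Python (purely illustrative, not any particular vendor’s method): the robot places a landmark on its map using its own dead-reckoned pose, and even with perfect sensing the map inherits the odometry drift – which is exactly the coupling the full SLAM math exists to untangle.

import numpy as np

# Toy illustration: mapping a landmark with a drifting pose estimate.
rng = np.random.default_rng(0)
true_pose = np.array([0.0, 0.0, 0.0])   # x, y, heading -- ground truth, unknown to the robot
est_pose = true_pose.copy()             # dead-reckoned estimate from noisy odometry
landmark = np.array([5.0, 2.0])         # a fixed point in the world
mapped = []                             # where the robot *thinks* the landmark is, over time

for step in range(50):
    v, w = 0.2, 0.05                    # commanded forward speed and turn rate
    true_pose += [v * np.cos(true_pose[2]), v * np.sin(true_pose[2]), w]
    v_odo = v + rng.normal(0, 0.01)     # odometry reports the motion with small errors
    w_odo = w + rng.normal(0, 0.005)
    est_pose += [v_odo * np.cos(est_pose[2]), v_odo * np.sin(est_pose[2]), w_odo]

    # Perfect range/bearing measurement of the landmark, taken in the robot frame...
    dx, dy = landmark - true_pose[:2]
    r_meas = np.hypot(dx, dy)
    b_meas = np.arctan2(dy, dx) - true_pose[2]
    # ...but projected into the world frame using the *estimated* pose.
    mapped.append(est_pose[:2] + r_meas * np.array([np.cos(est_pose[2] + b_meas),
                                                    np.sin(est_pose[2] + b_meas)]))

# Even with a perfect sensor, odometry drift smears the mapped landmark position.
print("spread of mapped landmark positions (x, y):", np.ptp(np.array(mapped), axis=0))

In a real system the pose and the map are estimated jointly, with filters or graph optimization over thousands of landmarks, which is where the heavy computation comes from.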

The sensors required, such as high-definition cameras and Lidar lasers, are expensive, and the data they collect needs to be processed by powerful and typically costly hardware.

SLAM can also struggle with certain situations, such as a tiled wall, where the corner of each tile looks the same, or a factory where lighting conditions change throughout the day.

What’s more, the complex 3D maps that SLAM produces for larger environments are often too large to store locally in a robot’s onboard memory. This means they need to be streamed on demand from the cloud as the robot goes about its tasks, adding greater complexity and cost to the solution.

Deep learning

The process of training software to recognize specific objects and other patterns within a data set makes sense if you want to give a robot awareness of what is around it. Once a neural network trained with deep learning has seen enough chairs, it becomes better at recognizing new ones the robot encounters.
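As a rough illustration of what inference looks like once training is done (this uses an off-the-shelf pretrained ImageNet classifier from torchvision, nothing robot-specific, and the image filename is hypothetical):

import torch
from torchvision import models
from PIL import Image

# Load a pretrained classifier and the preprocessing it expects.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("chair.jpg").convert("RGB")   # hypothetical input image
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
print(weights.meta["categories"][logits.argmax().item()])  # best-guess class label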

But this approach comes with tradeoffs. Deep learning can be cheap to deploy these days, with the advent of AI accelerators that work on the edge of networks. However, the training process can be very expensive and time-consuming.

And while deep learning is often referred to as ‘neuromorphic’ (modeled on the way real brains work), this is only superficially true. In reality, the neural networks at the heart of deep learning are based on little more than an out-of-date sketch of how a tiny part of the brain, the visual cortex, works. That’s why they can still be so unreliable. But, more importantly, you don’t need to classify every object in an environment in order to behave sensibly – when you duck because you see something hurtling towards your head, your brain only works out what it was afterwards.

Generative AI

In robotics automation, as with pretty much every other field of technology right now, there’s a lot of excitement about the potential of Generative AI.

A large language model can work with any kind of tokenized data. Given enough training information, you can create a model that apparently extracts the semantics from audio, video and text, allowing robots to behave intelligently in their environment.

But to achieve that, you need huge amounts of video in your training data. This is very expensive to acquire.

What’s more, generative AI is famous for its hallucinations, where it assembles data in ways that aren’t quite right. That’s because Generative AI actually doesn’t extract any meaning at all, but rather finds statistical patterns in its training data.
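A toy word-level bigram generator makes the point in miniature: it produces locally fluent text purely by following statistical patterns in its (made-up) training corpus, with no notion of meaning or truth.

import random
from collections import defaultdict

# A tiny, made-up corpus for illustration only.
corpus = ("the robot moved to the dock . the robot charged at the dock . "
          "the forklift moved to the bay .").split()

# Record which words have been seen following each word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate text by repeatedly sampling a plausible next word.
random.seed(1)
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))   # fluent-looking locally, but nothing is actually 'understood'

An LLM is incomparably more sophisticated than this, but the underlying principle – predicting what plausibly comes next – is the same, which is why fluent output can still be factually wrong.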

These hallucinations can take many forms. You might have encountered an AI-generated image of a hand with six fingers, or a satirical news story presented as fact by a social media app, or heard about an entirely non-existent legal ruling presented as case law by a chatbot.

Just as Generative AI can hallucinate in those ways, it could also misinterpret the world around it, creating danger for robots and the people and objects around them.

Developers are trying to solve this by adding additional layers of AI as safeguards. But what about when you have to safeguard the safeguards from their own hallucinations? All you’re doing there is adding more complexity and potential points of failure.

What’s so good about Neuromorphic?

Neuromorphic computing involves drawing on the efficient, unrivaled biological computing package that is the brain to solve autonomy in a way that gracefully adapts to the unpredictability of the world.

Opteran’s approach is based on fundamental neurological study of insect brains, which often raises eyebrows. Many people assume that insects are stupid, and that their brains don’t do very much as they’re so small. But if you look at a tiny insect brain, which is made up of a million or fewer neurons, it’s exquisitely structured. It’s a brilliant lesson in what you can achieve in terms of generating rich, robust behavior with very, very little in the way of resources.

We spent 10 years doing fundamental research, collaborating with neuroscientists, animal behaviorists, and computational neuroscientists to figure out how the brain solves the autonomy problem.

While you can’t put an EEG cap on a fly like you can on a human, you can genetically manipulate flies to study their brain activity. This can be used to do things like getting neurons in their brains to light up fluorescently every time they fire. And two-photon microscopy can be used to study an insect’s brain without invasive procedures, to name just a couple of techniques we use. We used all of that research to figure out how insects’ brains solve specific problems.

For example, we modeled how fruit flies navigate with a kind of ‘visual compass’, giving them a mental bearing that allows them to avoid getting lost, even as they turn around.

Researchers were able to record the activity of neurons in the central complex region of a fly’s brain, while it was navigating. That gave us enough information to then build a model that reproduced the behavior of the animal.
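As a loose sketch of the idea (a simplified heading estimator built on a ring of direction-tuned units – not the published neural model, and not Opteran’s production code): self-motion rotates a bump of activity around the ring, and an optional visual landmark cue pulls the estimate back to correct accumulated drift.

import numpy as np

N = 16                                                 # direction-tuned units around the ring
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)  # each unit's preferred heading
activity = np.exp(np.cos(prefs - 0.0))                # bump of activity centred on heading 0
activity /= activity.sum()

def decode(a):
    """Population-vector readout: where the bump currently sits (the heading estimate)."""
    return np.angle(np.sum(a * np.exp(1j * prefs)))

def step(a, ang_vel, landmark_heading=None, dt=0.05, gain=0.3):
    heading = decode(a) + ang_vel * dt                # rotate the bump with self-motion
    if landmark_heading is not None:                  # a visual cue nudges the estimate back
        err = np.angle(np.exp(1j * (landmark_heading - heading)))
        heading += gain * err
    a = np.exp(np.cos(prefs - heading))               # re-form the bump at the new heading
    return a / a.sum()

for _ in range(100):                                  # turn at 1 rad/s for 5 seconds
    activity = step(activity, ang_vel=1.0)
print("estimated heading (rad):", decode(activity))

The appeal of this kind of representation is its frugality: a handful of units and simple local operations maintain a usable sense of direction without any metric map.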

We also worked out how honeybees perceive motion in the world around them, and use it to work out how they’re moving in their environment, including avoiding collisions with obstacles. Their solution is so different from human and deep-learning approaches that it is both more robust and more efficient than either.
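The flavour of that strategy can be sketched with off-the-shelf computer vision (this uses OpenCV’s dense optical flow and a hypothetical camera index; it only illustrates flow balancing and is not Opteran’s algorithm): steer away from whichever side of the image is moving faster, because faster image motion generally means nearer surfaces.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                 # hypothetical forward-facing camera
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera frames available")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)    # per-pixel image speed
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    steer = right - left                  # positive: right side looms faster, so turn left
    print(f"steer command: {steer:+.3f}")
    prev = gray

Balancing flow like this needs no depth sensor and no 3D map, which hints at why the insect solution is so cheap to run.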

That’s just scratching the surface of the findings we’ve built into Opteran’s technology. We translated the brain behaviors we found through years of research into a form suitable for running on standard ultra-low cost computer hardware.

We don’t simulate an insect’s brain. Instead, we translate its brain activity into mathematical operations that can be transferred to standard compute hardware in a straightforward way.

The result? An approach to automation that is more robust, more reliable, and cheaper to develop and deploy than the alternatives.

And what we can do today is just the start.

Neuromorphic intelligence has the potential to achieve a higher level of autonomy, with more abstract reasoning capability. This can lead us towards far more capable robots that will take our understanding of ‘autonomy’ to a whole new level.

Prof. James Marshall
Founder & Chief Science Officer, Opteran