Navigating complexity: the challenges facing mobile robots

When we think of autonomy, we think of efficiency. How can we create a simple machine that augments the human workforce?

In the race for greater efficiency, mobile robots have evolved to better navigate their environments, but this evolution has brought complexity, expense, and layers of technology. In reaction, there is now a myopic focus on reducing bill-of-materials (BOM) costs to compensate. This approach fails to consider the bigger picture.

Ultimately, end users care about practicalities rather than the underlying technology:

– Time to deploy: How easy is it to set up my robot?
– Mean Time To Failure (MTTF): How robust is the machine, and how often does it break?
– Cost: If a robot meets the above criteria but its price is prohibitive, adoption will be slow.

If a machine becomes less cost-effective than humans, it fails its purpose. For successful integration of mobile robots into the workforce, it is crucial to address these practicalities while keeping the costs in check.

The path to complexity

Automated Guided Vehicles (AGVs) were the pioneers of mobile warehouse robots, following predetermined paths. While effective, the approach has its limitations. AGVs are constrained by their reliance on fixed infrastructure, which is costly to install and maintain. Any change to the markers disrupts the service, requiring a human to step in.

How do you add more flexibility? By allowing a mobile robot to navigate freely within the environment.

With freedom of movement comes far greater complexity. Once a robot can navigate anywhere, it needs more information about the world: seeing obstacles, and knowing where it can and can’t go.

Autonomous Mobile Robots (AMRs) promised greater adaptability, but they rely on sophisticated sensors such as 2D and 3D LiDAR and high-definition cameras to detect obstacles and navigate accurately. The cost of this hardware adds up quickly.

An average depth camera costs between $400 and $600, and typically, two are needed per robot. An Nvidia compute unit might cost around $2,000, and a 2D LiDAR sensor can range from $2,000 to $4,000. A 3D LiDAR sensor can cost up to $7,000. The cost per machine can easily reach between $5,000 and $8,000 for an average setup, and up to $27,000 for a top-end system.
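As a rough sanity check, the figures above can be totalled in a few lines. This is a hedged sketch: the component mix and quantities below are illustrative assumptions based on the costs quoted, not an actual BOM.

```python
# Illustrative AMR sensor stack, using the cost ranges quoted above.
# Component names, quantities, and (low, high) USD prices are assumptions.
bom = {
    "depth cameras (2 x $400-$600)": (2 * 400, 2 * 600),
    "Nvidia compute unit":           (2000, 2000),
    "2D LiDAR":                      (2000, 4000),
}

# Sum the low and high ends of each component's price range.
low = sum(lo for lo, _ in bom.values())
high = sum(hi for _, hi in bom.values())
print(f"Estimated sensor stack: ${low:,} to ${high:,}")
```

Even this conservative mix lands near the $5,000 to $8,000 range; swapping the 2D LiDAR for a 3D unit at up to $7,000 and adding redundant sensors is how a top-end system climbs toward $27,000.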

A high sensor cost might be easier to accept if the result were robust and reliable autonomy. The problem is that the reality of warehouse spaces is challenging for mobile robots.

A brittle system

The real world is not a lab, and warehouses are dynamic spaces. Think of autonomous cars loaded with sensors that operate well on the structured roads of California but make errors amid the unpredictability of busy city streets.

Current visual SLAM systems can struggle to recognize a layout if the lighting has changed since it was mapped. Simple things like reflective surfaces or windows letting in daylight that changes as the day goes on can stop a robot in its tracks. Maintaining high-resolution sensors and ensuring they operate correctly in harsh, dusty, or variable lighting conditions increases the operational costs and risk of downtime.

A warehouse can also see rapid changes in its environment, such as shifting inventory or altered layouts. A shelving aisle might contain large square boxes one day and small round ones the next. Navigation systems that rely on static visual cues or maps struggle to adapt to these changes without extensive recalibration. And while LiDAR can easily detect obstacles made of metal or wood, it may generate false measurements when encountering materials like plexiglass or some plastics. This brittleness can lead to mislocalization, and that’s a problem.

In the automotive industry, the impact of such brittleness can be severe: some estimates put average automotive factory downtime at $10,000 per minute of stoppage. For mobile robots, the cost of failing to adapt to scene changes, or of the time required to remap a facility, could be substantial.

Mapping challenges

If mapping flexible warehouses presents a significant challenge, it only multiplies with a fleet of mobile robots. Each time the environment changes, it must be rescanned and the map updated. The infrastructure cost required for this can be incredibly high. One major challenge is the volume of data generated: typical maps used by AMRs can run to hundreds of megabytes per square meter, resulting in astronomical data sizes in large factories. Handling that data requires significant network infrastructure and cloud storage, adding to the overall cost.

A simple route forward

The real world is a complex place, but adding layer upon layer of technical complexity onto mobile robots is not the only solution. Quick deployment, robustness, and low cost are all achievable with the simplicity of nature. As we’ve previously explored, the brain of an insect can offer us a new form of autonomy.

Rather than solve every problem with more technology, why not make things simpler?

Insects achieve complex navigational feats with simple brains and sensors. Their eyes can have an incredibly low-resolution view of the world, but that’s all they need.

That’s what we can harness. With the lowest-resolution cameras, the Opteran Mind can tolerate over 60% of the scene changing. The maps it generates are just one kilobyte per square meter, which makes city-scale mapping possible over low-end Wi-Fi.
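To put the map sizes in perspective, here is a back-of-the-envelope comparison. The per-square-meter figures come from the text; the 10,000 m² warehouse footprint is an assumption for illustration.

```python
# Map storage for a hypothetical 10,000 m^2 warehouse.
# Per-square-meter figures are from the text; the footprint is assumed.
area_m2 = 10_000
conventional_map_mb = area_m2 * 100  # ~100 MB/m^2, low end of "hundreds of MB"
one_kb_map_kb = area_m2 * 1          # 1 KB/m^2

print(f"Conventional map: ~{conventional_map_mb / 1024:.0f} GB")
print(f"1 KB/m^2 map:     ~{one_kb_map_kb / 1024:.1f} MB")
```

Even at the low end of the conventional figure, that is roughly a terabyte of map data versus under ten megabytes: the difference between dedicated network infrastructure and a single email attachment.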

By drawing inspiration from nature, we can create more robust and cost-effective solutions for warehouse automation. We can create natural machines that operate efficiently in the most dynamic and unpredictable environments.

In our next blog, we’ll look beyond SLAM to redefining autonomy with Natural Intelligence: neuromorphic, general-purpose autonomy.

Charlie Rance
CPO, Opteran