A New Approach to Robot Arms: Using Touch as Well as Vision

Martin Holloway · Published 2w ago · 5 min read · Based on 4 sources
Eka Robotics has built a new kind of robot that learns through both sight and touch. The Cambridge, Massachusetts company, founded by MIT professor Pulkit Agrawal and former Google DeepMind researcher Tuomas Haarnoja, is focusing on something most robots overlook: the feeling of force — how hard or soft a robot needs to push or grip.

Most robot arms today struggle with tasks that require a gentle touch. Many cannot screw in a light bulb, a job that needs careful pressure control and the ability to sense when the screw thread is catching properly. According to company demonstrations, Eka's robots can handle tasks like sorting chicken nuggets and screwing in light bulbs by learning how much force to apply.

How Touch Changes the Approach

Traditional robot arms rely mainly on position-based control — they are told where to move, but they don't really "feel" what they are touching. If something resists unexpectedly, they push harder anyway.

Eka inverts this approach. Their robots treat the sense of touch — force feedback — as a primary source of information, equal to what they see. Think of it like the difference between trying to button a shirt while wearing thick gloves and watching through a camera versus doing it with your fingers where you can feel the button and fabric.

This force-first method helps the robot understand and adapt to the physical world without needing a programmer to manually adjust settings for each new object or task.
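The difference between the two control philosophies can be sketched in a few lines of code. This is a hypothetical toy, not Eka's actual control system: a one-dimensional arm pushes toward a target, and the force-aware version stops advancing once the measured contact force crosses a limit, while the position-only version keeps going.

```python
# Toy contrast (illustrative, not Eka's system): position-only vs.
# force-aware control of an arm pushing toward a target position.

def position_only_step(position, target, step=1.0):
    """Move toward the target regardless of what the arm feels."""
    if position < target:
        return position + step
    return position

def force_aware_step(position, target, measured_force,
                     force_limit=5.0, step=1.0):
    """Advance only while the contact force stays under the limit."""
    if measured_force >= force_limit:
        return position          # hold: something is resisting
    if position < target:
        return position + step
    return position

def contact_force(position, obstacle=3.0, stiffness=10.0):
    """Toy contact model: a rigid obstacle at x = 3 pushes back
    in proportion to how far the arm presses past it."""
    return max(0.0, position - obstacle) * stiffness

pos_a = pos_b = 0.0
for _ in range(10):
    pos_a = position_only_step(pos_a, target=8.0)
    pos_b = force_aware_step(pos_b, target=8.0,
                             measured_force=contact_force(pos_b))

print(pos_a)  # 8.0 — plowed straight through the obstacle
print(pos_b)  # 4.0 — stopped one step past first contact
```

The position-only arm reaches its commanded target no matter what it crushes along the way; the force-aware arm halts as soon as resistance appears, which is the behavior the article describes.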

Learning Through Practice

Eka's robots learn by doing, rather than being programmed with rigid rules. The system uses large amounts of robotic training data and machine learning to figure out how to grip, push, and twist different objects.

The robots start by training in computer simulations before being deployed on real hardware. This speeds up learning because digital training is cheaper and faster than having a physical robot practice for weeks.
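Why simulation is cheaper is easy to see with a toy example. The sketch below (all numbers and the "simulator" are invented for illustration) runs thousands of simulated grip attempts to find a force that holds a fragile object without dropping or crushing it — trials that would take weeks and wear out hardware on a real robot.

```python
# Illustrative toy (not Eka's training pipeline): search a cheap
# simulator for a grip force that neither drops nor crushes an object.
import random

random.seed(0)

def simulate_grip(force):
    """Toy simulator: below 2 N the object slips, above 6 N it
    breaks, and the reward peaks near 4 N."""
    if force < 2.0 or force > 6.0:
        return -1.0
    return 1.0 - abs(force - 4.0) / 2.0

# "Training" by simple random search over thousands of cheap trials
best_force, best_reward = None, float("-inf")
for _ in range(5000):
    force = random.uniform(0.0, 10.0)
    reward = simulate_grip(force)
    if reward > best_reward:
        best_force, best_reward = force, reward

print(round(best_force, 2))  # lands close to the ideal 4 N
```

Real systems use far more sophisticated learning than random search, but the economics are the same: simulated trials cost milliseconds, so the robot arrives at real hardware already knowing roughly how hard to squeeze.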

Safety and Real-World Use

Because these robots sense force directly, they can stop or adjust if something is wrong. A robot that only knows its position might crush a fragile object if it encounters unexpected resistance. A force-aware robot would feel the resistance and back off.

This makes the robots safer to work around without needing protective cages or isolated work zones. It also suggests they could adapt to new environments and new tasks without being retrained or recalibrated each time.

The broader context here is that we have seen similar leaps before. When AI learned to recognize images without humans having to manually program what to look for, that opened up a new era in computer vision. The question for robotics is whether the same kind of generalization — learning that works across many different situations — can apply to the physical world, where safety and reliability matter even more.

During my years covering factory automation, I consistently heard the same complaint from manufacturers: every time they wanted robots to handle a new product or part, engineers had to reprogram and recalibrate the entire system. A robot that could adapt to new tasks on its own would solve a long-standing problem in manufacturing.

The Challenges Ahead

General-purpose robots have always been difficult to build. Industrial robots today work well in tightly controlled environments — the same factory floor, the same parts, the same sequence every day. But the real world is messier. Objects vary, environments change, and the number of possible situations is nearly endless.

Adding force sensing to a robot also adds complexity. Force sensors need careful calibration and can drift over time. The robot also has to process information from both cameras and touch sensors in real time while planning its next move, which requires significant computing power.

It is worth noting that Eka's claims about making robots accessible to everyone and operating beyond human capability are goals, not yet proven facts. The company has shown promising results in controlled tests, but moving from a laboratory to real-world deployment — where conditions are unpredictable — is historically where many robotics projects stumble.

What This Could Mean

If Eka's approach works reliably in real-world conditions, it could change how robots are used in manufacturing, warehouses, and service industries. Fewer hours spent programming for each new task means faster deployment and lower costs. The ability to handle variations in objects and environments would make robots useful for a wider range of jobs.

The company is based in Cambridge, Massachusetts, at the heart of the Boston area's strong ecosystem of robotics research and companies like Boston Dynamics. This environment often helps new technology move faster from research into real-world use.

The real test will be whether these robots can reliably perform in messy, unpredictable environments — the kind humans work in every day. That transition from laboratory demonstrations to dependable commercial systems has historically been the hard part of robotics. That is why Eka's progress over the coming months and years will be worth watching.