Robotics 101: Sensors that allow robots to see, hear, touch, and move | Possibility

Vision, audio, movement, and touch sensors allow modern robots to perform increasingly sophisticated tasks.

By Possibility Editorial | March 3, 2022

Have you ever seen a humanoid robot doing parkour?

Two arms, two legs, a square head and a thick rectangular torso hopping hibbity dibbity over boxes and wooden pallets. It even did a backflip.

Yeah, most robots can’t do that.

Most robots don’t need to do that.

The cool parkour robot, Atlas from Boston Dynamics, is an experimental research platform that the company uses to test and evolve robot bodily dexterity. It learns its sweet moves primarily through simulation and mimicry. And while Atlas is interesting and makes for a good viral video, humanoid robots are the exception in the world of robotics. For most practical applications, a humanoid body is simply not the most useful design.

Robots are typically used to perform tasks that are considered "dull, dirty, or dangerous." The most common type of robot you will find in the world is an industrial robot, like the articulated arms manufacturers use on assembly lines. A robotic arm has a stable, immovable base, typically two to six moving joints, and a tool at the end (called an end effector) that functions like a hand. Other common robots are professional service robots, like the hockey-puck-shaped logistics robots Amazon uses to move goods through its warehouses, or domestic robots that vacuum your house.

None of these robots are doing parkour.

Robots are often defined as physical machines that can "sense, think, and act." Sensors measure external conditions and deliver that data to a processor or controller inside the robot, which interprets the data and then decides how to act. For example, a vacuum cleaning robot will use a vision sensor to image a room, identify a bit of dirt, then move to vacuum it up.
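
That sense-think-act loop is simple enough to sketch in a few lines of code. Below is a minimal Python illustration of the control loop for a vacuum robot like the one just described; the robot object and every method on it are invented stand-ins for whatever a real vendor's SDK would expose, not an actual API.

```python
import time

def control_loop(robot):
    """Minimal sense-think-act loop for a hypothetical vacuum robot.

    The `robot` object and all of its methods are illustrative
    stand-ins, not a real vendor API.
    """
    while robot.is_powered_on():
        # Sense: read the vision sensor for visible dirt.
        dirt_spots = robot.read_dirt_sensor()

        # Think: decide which spot, if any, to clean next.
        if dirt_spots:
            target = min(dirt_spots, key=robot.distance_to)
            # Act: move to the target and vacuum it up.
            robot.drive_to(target)
            robot.vacuum()
        else:
            robot.wander()  # no dirt seen; keep exploring

        time.sleep(0.1)  # run the loop at roughly 10 Hz
```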

Robots, in one form or another, can perform just about any task that a human can dream up. As such, robots need a wide array of sensors to sense the world and move through it.

Vision sensors

Robots usually need to see what they're doing. But robots don't typically see the way a human does. The human eye is one of the most sophisticated sensors ever created; robots don't need nearly that much clarity and focus. Simple vision sensors, coupled with machine learning software, can let a robot do just about anything we want.

For instance, iRobot's Roomba vacuum robots do not use high-tech sensors, but rather inexpensive cameras of the kind found in smartphones more than 10 years ago. For the most part, robots are not trying to make out images in high definition (which takes more computing power to process and can strain battery life); they are looking for general shapes, outlines, or colors that a computer vision algorithm can identify.

Common types of vision sensors in robotics include 2D and 3D cameras, lidar, radar, infrared sensors, and phototransistors. If the vision system works in the visible range, lighting becomes necessary, whereas other vision sensors allow robots to work in the dark. Common visible-light tasks include pick-and-place and assembly work, object detection, navigation, and some types of inspection.

One interesting example is the "delta" style robot you might find over the conveyor belt of a consumer packaged goods factory. The robot hangs above the belt and uses simple vision sensors, such as a camera and/or a barcode scanner, to identify items coming down the line. It then uses its end effector, often a set of small suction cups, to pick up items and pack them into boxes for shipping.

Some common use cases in which a robot needs vision sensors include (a minimal detection sketch follows the list):

  • Quality assurance: using sensors to quickly assess if a product has a defect.
  • Object detection: see an object and determine what it is.
  • Material handling: identify material and move it from place to place.
  • Navigation: avoid obstacles while moving through a room to get from one specific point to another.
  • Mapping: see an area and create a computer-generated model of it.
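
As a toy illustration of the object-detection case, here is a short Python sketch using OpenCV to find red blobs in a camera frame by color thresholding. The color ranges and the 500-pixel noise cutoff are arbitrary values chosen for the example; a production robot would more likely run frames through a trained detection model.

```python
import cv2
import numpy as np

def find_red_objects(frame):
    """Return bounding boxes (x, y, w, h) of red blobs in a BGR frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0/180 in OpenCV's HSV space, so combine two ranges.
    low = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    high = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(low, high)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Drop tiny specks of noise; the 500-pixel cutoff is arbitrary.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```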

Audio sensors

Why does a robot need to hear? The realm of robotics is changing rapidly. For many years, most robots were the articulated arms found on factory floors, or other professional service robots working in factories and warehouses. Robots in those environments have little need to hear.

As robots interact more with humans, audio sensors have become more important. Audio sensors (mostly microphones) are often used for speech recognition, so that a person can talk to a robot and it can "understand" and then act. Audio sensors can also be used for navigation (sonar or echolocation, for example) or to detect pressure differences in an environment.
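
To make the speech-recognition case concrete, here is one way a hobbyist might wire a microphone into a robot using Python's open-source SpeechRecognition package. The choice of library, and of Google's free web recognizer, is an assumption made for the demo, not something any particular robot platform prescribes.

```python
import speech_recognition as sr

def listen_for_command():
    """Capture one utterance from the default microphone and return text.

    Uses the third-party SpeechRecognition package; returns None if the
    speech was unintelligible or the recognizer was unreachable.
    """
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
        audio = recognizer.listen(source)            # block until speech ends
    try:
        return recognizer.recognize_google(audio)
    except (sr.UnknownValueError, sr.RequestError):
        return None
```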

Common types of audio sensors and microphone varieties include acoustic pressure sensors, pressure microphones, high-amplitude pressure microphones, and probe microphones.

An interesting question: are smart speakers with virtual assistants robots? Siri, Alexa, Cortana, and Google Assistant do what a robot does according to our definition above. They sense with microphones, then compute the speech, then respond (sense, think, and act). And yet most people would not classify them as robots.

Movement sensors

Movement sensors often work in conjunction with a robot's moving parts (its actuators) to assist in the robot's mobility. One of the most common movement sensors is the incremental encoder, often found inside an industrial robotic arm. It measures the rotation of each joint so that the arm moves at the right angle and speed.
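
To show what an incremental encoder actually reports, here is a hedged Python sketch of quadrature decoding. The encoder's two channels (A and B) are 90 degrees out of phase, so the order in which they change reveals the direction of rotation; the counts-per-revolution figure below is arbitrary, and which direction counts as positive depends on wiring.

```python
class QuadratureDecoder:
    """Track joint rotation from an incremental encoder's A/B channels."""

    # Transition table: (previous AB state, new AB state) -> step.
    # Any transition not listed (no change, or an invalid jump) counts as 0.
    STEPS = {
        (0b00, 0b01): +1, (0b01, 0b11): +1,
        (0b11, 0b10): +1, (0b10, 0b00): +1,
        (0b00, 0b10): -1, (0b10, 0b11): -1,
        (0b11, 0b01): -1, (0b01, 0b00): -1,
    }

    def __init__(self, counts_per_rev=2048):  # 2048 CPR is an arbitrary example
        self.count = 0
        self.state = 0b00
        self.counts_per_rev = counts_per_rev

    def update(self, a, b):
        """Call on every sensor read with the current A and B bits (0 or 1)."""
        new_state = (a << 1) | b
        self.count += self.STEPS.get((self.state, new_state), 0)
        self.state = new_state

    def angle_degrees(self):
        """Accumulated joint angle since power-on."""
        return 360.0 * self.count / self.counts_per_rev
```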

Other movement sensors include accelerometers, gyroscopes, inertial sensors, and GPS sensors.

Touch sensors

You wouldn't think that a robot needs to "feel" the way a human does. But touch sensors give robots more nuanced capabilities than the typical "see an object, move an object" work they are often used for.

Robots use touch sensors for a variety of tasks. For example, bump sensors are used for navigation to tell the robot that it bumped into an object and thus must change course. Force sensors allow the robot to know when pressure or mechanical stress is being applied. Temperature sensors tell the robot if something is hot or cold (and thus must be avoided).
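
As a small illustration of bump-sensor navigation, the sketch below implements the classic "bump and turn" behavior. As with the earlier control loop, the robot object and its methods are hypothetical placeholders standing in for a real platform's SDK.

```python
import random

def bump_and_turn(robot):
    """Reactive navigation using only a bump sensor.

    The `robot` interface here is a hypothetical placeholder,
    not a real vendor API.
    """
    while robot.is_running():
        if robot.bumper_pressed():
            robot.stop()
            robot.reverse(distance_cm=5)  # back away from the obstacle
            robot.turn(degrees=random.randint(90, 180))  # pick a new heading
        else:
            robot.forward(speed=0.2)  # cruise until the next collision
```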

Sensor fusion: combining sensors to make more complete robots

The dramatic increase in computer processing power (along with cloud computing) over the decades has made robots much more useful than before. While every individual sensor in a robot has its own feedback loop in the robot's controller, combining disparate sensor systems ("sensor fusion") is giving us sophisticated robots that are better at performing a wide variety of tasks.

The most prominent example of sensor fusion right now is probably the autonomous vehicle. While a car that drives itself without human input is not yet practical, we are moving in that direction by combining powerful computers with sensor systems like lidar, radar, 2D and 3D cameras, accelerometers, gyroscopes, and more. Whereas simpler robots may have one or two of those sensors to perform singular tasks, sensor fusion allows the vehicle to take in all those data inputs and make split-second decisions.
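
For a concrete taste of sensor fusion on a much smaller scale than a car, the complementary filter below blends a gyroscope (accurate over short intervals, but it drifts) with an accelerometer (noisy, but drift-free) into a single tilt estimate. The 0.98 blend weight is a typical but arbitrary choice for this kind of filter.

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into one tilt estimate.

    angle      -- previous tilt estimate, in degrees
    gyro_rate  -- angular velocity from the gyro, degrees per second
    accel_x/z  -- accelerometer readings along two axes (any consistent unit)
    dt         -- seconds since the last update
    alpha      -- blend weight; higher trusts the gyro more in the short term
    """
    # Gyro path: integrate angular velocity (smooth, but drifts over time).
    gyro_angle = angle + gyro_rate * dt
    # Accelerometer path: tilt from gravity (noisy, but never drifts).
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))
    # Blend: gyro dominates short-term, accelerometer corrects long-term.
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```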

Barring some dramatic breakthroughs in artificial intelligence, we will likely never be welcoming our new robot overlords. In fact, robotic systems often work better when paired with human capabilities. We have all the tools to build more powerful and capable robots that will help us as we tackle the problems of the present, and well into the future.
