
The human brain vs. the digital brain: A case for visual inspections


by Audrey Boucher-Genesse

16 June 2016



Production line of coffee cups. Source: CC0 Public Domain

I was once at a conference on image processing where the speaker discussed the perception of color. He described a conversation with a potential client who said, “well, it’s pretty easy: the automated visual system just has to check if the part is green or not”. Sounds simple enough, right? Now, when was the last time you had to decide on a color to paint a room in your house? Which green did you pick: seaweed, khaki, pistachio or cucumber? What about turquoise, is that green? One of the toughest challenges when automating a visual inspection process is to clearly define the boundaries between what is accepted and what is not. The machine does not have your intuition.

What is this “machine” anyway? There are various options available, but we will define the automated visual inspection system as a non-contact system that detects visual defects and/or checks for desired features. It comprises one or more visual sensors (1D, 2D or 3D) and a data processor (either a separate computer, or a processor embedded within the sensor, as is the case with intelligent cameras). The system provides an output (e.g. “good/bad part”). Depending on the complexity of the part, it can also include a handling system (e.g. a robot). This article will focus mainly on inspections made with 2D images, which are widely used in industry.
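To make that definition a little more concrete, here is a minimal sketch of the sensor-processor-output loop in Python, assuming a 2D camera reachable through OpenCV and a placeholder is_defective() routine standing in for the real image analysis (neither the device index nor the function name comes from the article):

```python
import cv2

def is_defective(image) -> bool:
    """Placeholder for the real processing: thresholds, blob detection,
    pattern matching, etc. (some of which are sketched later on)."""
    return False

cap = cv2.VideoCapture(0)     # the visual sensor: a 2D camera on device 0
ok, frame = cap.read()        # acquire one image of the part
if ok:
    verdict = "bad part" if is_defective(frame) else "good part"
    print(verdict)            # the system's output
else:
    print("no image captured")
cap.release()
```

With an intelligent camera, the same loop runs inside the sensor itself and only the good/bad output leaves the device.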

Well, that’s all pretty similar to the way we humans work: a sensor, a processor and a handling system. What is so different then? Quite a few things, actually.

Reliability

Let’s begin with a small video (it’s only 1:20):

Spoiler alert: watch the video before continuing… otherwise I will spoil it! 

Now, what’s the typical situation for an inspection? The inspector is trained and focuses on a specific task. In theory they should inspect all surfaces of a part; in practice, they know what the usual defects are and where they appear. They have a guide that lists a set of known defects. They know the potential flaws, because these are often based on failures in previous processes: a die that has reached its maximal wear, an oven that was not at its optimal temperature and caused cracks, and so on. All of these failures can leave marks on the part being inspected; they even have defect names and categories, because they have occurred quite a few times in the past. But what if the defect is something that has never occurred before (or occurs so rarely that this inspector has never seen it)? What if the visual defect is not in its usual location? The human inspector, not because they are not good enough for the job, but simply because they are human, might miss something obvious – like the gorilla (if you didn’t get the gorilla reference, now would be a good time to watch the video… but as mentioned before, I have spoiled it).

An automated visual inspection system that has been trained to inspect all surfaces of the part will always look at all of them, and will therefore notice this type of “unusual defect”. It most definitely will not be able to classify it, never mind know its potential cause. But it will flag it and bring it to the attention of the human inspector, who can then investigate further.

So automated visual inspection systems are always best, right? Not so fast. Our human brain is pretty sophisticated; it has multiple capabilities. Let’s look at a few of them.

Adaptability

Let’s say you’re inspecting a part, and the visual defect guide stipulates “any black spot on the gray surface is considered a defect”. I give you 20 parts in various shades of gray – some darker, some lighter. Would you still be able to detect black marks on them? I’m pretty sure you could. Now give the same parts to an automated system. You will need to specify exactly what counts as black (geekly speaking, color code 0x000000 is considered black… but what about 0x090909? That also looks pretty dark to me!), versus what is dark gray.
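To see how explicit that definition has to be, here is a small Python/OpenCV sketch; the file name, the fixed cutoff of 30 (out of 255) and the 60%-of-median rule are all arbitrary assumptions made for illustration, not values from the article:

```python
import cv2
import numpy as np

# Load a grayscale image of the part (hypothetical file name).
part = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
if part is None:
    raise SystemExit("part.png not found")

# Fixed definition: any pixel darker than 30 (out of 255) counts as "black".
fixed_mask = part < 30

# Adaptive definition: darker than 60% of this part's own median gray level,
# so the cutoff follows a lighter or darker batch.
adaptive_mask = part < 0.6 * np.median(part)

print("fixed-threshold 'black' pixels:   ", int(fixed_mask.sum()))
print("adaptive-threshold 'black' pixels:", int(adaptive_mask.sum()))
```

The second rule is one common way to buy back a bit of the adaptability a human brings for free, but someone still has to pick the 60%.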

Now here’s another example of our super-adaptability: you’ve probably come across security questions when completing a transaction online, like: “please prove you’re not a robot”:

Source: Robotiq

Why is that? Because character recognition is hard for a digital brain, while our human brain can accomplish wonderful things: it can complete a character that has been partially erased or twisted, or fill in a word with missing characters… G33z, y0u c4n 3v3n r34d th1s! Talk about adaptable!

OK, so now you’re thinking: is it impossible to automate an inspection process? Not at all! But you can do a lot to limit the need for adaptability, thereby simplifying the automation process. One possible answer is to look at the process flow and maybe change it. Swapping the order of the inspection and tumbling steps, for example, might be a strategic change: by tumbling the part first, its surface becomes more uniform, making it easier for the machine to inspect.

Detecting patterns

What is a normal feature and what is a defect? Humans score high at detecting patterns and flagging what is suspicious.

A human might need only one part to detect a pattern, e.g. a row of aligned holes.

Source: Robotiq

The machine, on the other hand, would probably flag these holes (and many others) as individual “suspicious black dots” unless it had learned otherwise. The automated visual system will thus need some training in order to know what is considered a normal pattern and what is not (is there a size threshold? a color threshold?). Here again, the concept of adaptability comes in. If you do not use enough samples to train the system, a pattern that is not EXACTLY like the one before it will be flagged by the machine as incorrect. Using many samples is a good way to go, but beware of overtraining with very different parts, as this may cause desensitization.

Let’s look at an Optical Character Recognition (OCR) example: you have taught the system that the pattern to look for is an 8, but a B would also be okay, because you know the left-hand side of the character sometimes has problems being punched correctly. Now let’s say the machine reads a 3… would that be acceptable? The left-hand side is different, but you have trained the system to be less picky on that side because of the known punching problems… That’s a good example of desensitization: the more varied the examples you feed the system as “normal”, the less sensitive it becomes. Bottom line, training the system can be tricky and may need to be done in partnership with the integrator.

Image: punched characters
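To see how that desensitization creeps in, here is a toy Python sketch using made-up 5×3 bitmaps for the characters 8, B and 3; it is not a real OCR engine, just an illustration of the match-threshold trade-off:

```python
import numpy as np

# Made-up 5x3 bitmaps (1 = punched, 0 = blank), for illustration only.
EIGHT = np.array([[1,1,1],[1,0,1],[1,1,1],[1,0,1],[1,1,1]])
BEE   = np.array([[1,1,0],[1,0,1],[1,1,0],[1,0,1],[1,1,0]])
THREE = np.array([[1,1,1],[0,0,1],[1,1,1],[0,0,1],[1,1,1]])

def match_score(candidate, template):
    """Fraction of pixels that agree with the template."""
    return (candidate == template).mean()

for name, bitmap in [("B", BEE), ("3", THREE)]:
    print(f"'{name}' vs '8': {match_score(bitmap, EIGHT):.0%} match")

# With a strict 90% threshold, both B and 3 are rejected as "not an 8".
# Loosen it to 80% so that B (the known punching flaw) is accepted,
# and the 3 now passes as well.
```

In this toy example the 3 actually matches the 8 template better than the B does (87% versus 80%), which is exactly the kind of surprise an over-tolerant definition of “normal” produces.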

‘Unofficial’ sensors and actuators in visual inspection

Now, let’s get back to our human inspectors. They take a part and turn it around while constantly looking at it. They notice a small black dot on the part. What happens next? Chances are good that they’ll either blow on the part to check whether it’s just dust, or scratch it with a fingernail to verify its solidity. Sitting down with inspectors and observing the actual process (not the official one written down in the manual) is good practice when trying to automate a process.

Repeatability

When it comes to repeatability, the digital brain scores higher. This is a key concept that comes up in pretty much any process being automated: the result will be repeatable, whether the part is inspected on a Monday morning or a Friday afternoon.

What can you actually do to automate a visual inspection?

We have seen that the human is very adaptable, good at detecting patterns, and has other tools that help them achieve an accurate inspection. The machine, on the other hand, is reliable and repeatable… so how can we get the best results by combining the two?

Here are a few clues that might help:

  • Choose your integrator carefully: you will be working closely with them to fully understand your inspection process and its variables, train the system, train the inspectors to use the system, etc.
  • Sit down with inspectors and document the actual process. If “blowing on the part” is not written anywhere in the inspection guidelines, but inspectors do it every day because the parts are always dusty when they get them, then an air blower could be integrated into the automated system.
  • If the parts’ normal surface appearance is very variable, keep in mind that the machine won’t be as adaptable as you are, and consider reordering some process steps (see our tumbling example above).
  • If possible, define numerically the boundary between what is acceptable and what is not: the maximal defect length, the accepted color… When you cannot define it numerically, you will have to use more examples to train the system (a small sketch of such a numeric rule follows this list).
  • Include your inspectors in the automation process: they know the real deal about normal parts, defect definitions, variables that will influence the appearance of the part, etc.
  • Train the system with parts from various batches, made on different days, in order to have multiple “normal” surface appearances… The system will thus be adjusted to take some variations into account. Keep in mind, however, that there’s a balance to be reached: you want an accurate system that can be flexible, but not too desensitized.
  • Work hand-in-hand with the integrator, as a close collaboration will enable a faster delivery, plus more reliable results.
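Where that boundary really can be written down numerically, the acceptance rule itself becomes only a few lines of code. The sketch below assumes hypothetical spec values and that some upstream image analysis already reports a list of measured defect lengths; both are illustrative, not taken from the article:

```python
MAX_DEFECT_LENGTH_MM = 0.5   # hypothetical spec: longest tolerated mark
MAX_DEFECT_COUNT = 3         # hypothetical spec: marks tolerated per part

def accept_part(defect_lengths_mm) -> bool:
    """Return True if the measured defects are within the written spec."""
    if len(defect_lengths_mm) > MAX_DEFECT_COUNT:
        return False
    return all(length <= MAX_DEFECT_LENGTH_MM for length in defect_lengths_mm)

print(accept_part([0.2, 0.1]))   # True: two small marks, within spec
print(accept_part([0.2, 0.8]))   # False: one mark exceeds 0.5 mm
```

Everything that cannot be reduced to such a rule is where the training samples, and the inspectors’ knowledge, have to carry the load.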

Now that you have had a chance to understand the pros and cons of a visual system, you can think about how one might be integrated with your other robotic devices, force-torque sensors or grippers, for example, to really automate your system.





Robotiq Inc. Robotiq's mission is to free human hands from tedious tasks so companies and workers can focus where they truly create value.




