1.CCD (Pixel) and Image Processing Basics
1.1 CCD image sensor
A digital camera has almost the same structure as a conventional (analog) camera, but differs in that it is equipped with an image sensor called a CCD. The image sensor plays the role of the film in a conventional camera and captures images as digital information. But how does it convert images into digital signals?
CCD stands for Charge-Coupled Device, a semiconductor element that converts images into digital signals. It measures approximately 1 cm in both height and width and consists of small pixels aligned in a grid.
When taking a picture with a camera, the light reflected from the target is transmitted through the lens, forming an image on the CCD. When a pixel on the CCD receives the light, an electric charge corresponding to the light intensity is generated. The electric charge is converted into an electric signal to obtain the light intensity (concentration value) received by each pixel.
This means that each pixel is a sensor (a photodiode) that can detect light intensity, and a 2 million-pixel CCD is a collection of 2 million photodiodes.
A photoelectric sensor can detect presence/absence of a target of a specified size in a specified location. A single sensor, however, is not effective for more complicated applications such as detecting targets in varying positions, detecting and measuring targets of varying shapes, or performing overall position and dimension measurements. The CCD, which is a collection of hundreds of thousands to millions of sensors, greatly expands possible applications including the four major application categories on the first page.
Review questions:
1. What is a conventional (analog) camera?
2. Why is a single sensor not effective for more complicated applications, such as detecting targets in varying positions?
1.2 Use of pixel data for image processing
This section briefly details how light intensity is converted into usable data by each pixel and then transferred to the controller for processing.
Individual pixel data (in the case of a standard black-and-white camera):
In many vision sensors, each pixel transfers data in 256 levels (8 bits) according to the light intensity. In monochrome (black-and-white) processing, black is considered to be “0” and white is considered to be “255”, which allows the light intensity received by each pixel to be converted into numerical data. This means that every pixel of a CCD has a value between 0 (black) and 255 (white). For example, a gray containing exactly half white and half black is converted into “127”.
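The 256-level conversion above can be sketched as a small helper. This is an illustrative model only; the function name and the normalized 0.0 to 1.0 intensity scale are assumptions, not part of any camera SDK.

```python
# Sketch: quantizing a normalized light intensity (0.0 = no light,
# 1.0 = full intensity) into the 8-bit value a monochrome pixel reports.

def to_8bit(intensity: float) -> int:
    """Convert a normalized intensity [0.0, 1.0] to a 0-255 gray level."""
    intensity = min(max(intensity, 0.0), 1.0)  # clamp out-of-range values
    return int(intensity * 255)

print(to_8bit(0.0))  # → 0   (black)
print(to_8bit(0.5))  # → 127 (mid-gray)
print(to_8bit(1.0))  # → 255 (white)
```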
1.3 An image is a collection of 256-level data
Image data captured with a CCD is a collection of data from the pixels that make up the CCD, with each pixel reproduced as 256-level contrast data.
As in the example above, image data is represented with values between 0 and 255 levels per pixel. Image processing is processing that finds features on an image by calculating the numerical data per pixel with a variety of calculation methods as shown below.
1.4 Example: Stain/defect inspection
The average intensity of a segment (4 pixels x 4 pixels) is compared with that of the surrounding area. Stains are detected in the red segment in the above example.
1.5 Summary of CCD and image processing basics
Machine vision systems can detect areas (number of pixels), positions (points of change in intensity), and defects (changes in intensity) using the 256-level intensity data from each pixel of a CCD image sensor. By selecting systems with higher pixel counts and higher speeds, you can easily expand the number of possible applications for your industry.
2. Basics of lens selection
Image processing roughly consists of the following four steps.
1. Capturing an image
Release the shutter and capture an image
2. Transferring the image data
Transfer the image data from the camera to the controller
3. Enhancing the image data
Pre-process the image data to enhance the features
4. Measurement processing
Measure flaws or dimensions on the image data and output the results as signals to the connected control device (PLC, etc.)
2.1 Image processing flow chart
Many vision sensor manufacturers focus on explaining Step 3, “Enhancing the image data”, and emphasize the processing capability of the controller in their catalogs. Step 1, “Capturing an image”, however, is the most important piece for accurate and stable image processing. The key to making Step 1 a success is proper selection of a lens and illumination system. This basic guide details how to successfully capture an image by selecting a suitable lens.
2.2 Creating a highly focused image
Application example: Detecting foreign objects/flaws inside of a cup
Q:When detecting foreign objects/flaws inside of a cup, which of the following two images is more suitable for detecting small defects over the entire inspection area?
A:The image on the right
It will be difficult to consistently detect the defects in the image on the left, even if a high-performance controller is used. With the right combination of knowledge, it will be easy to create a highly focused image like the image on the right.
Clear images are the most important part of image processing.
The following three points are essential for high-accuracy, stable inspection.
Capture a large image of the target;
Focus the image;
Ensure the image is bright and clear.
2.3 Lens basics and selection methods
A camera lens consists of multiple lens elements, an iris diaphragm (brightness) ring, and a focus ring.
The iris diaphragm and focus should be adjusted by an operator looking at the camera’s monitor screen to make sure the image is “bright and clear”.
(Some lenses have fixed adjustment systems.)
There are various points that need to be considered when selecting a lens, such as field of view, focal distance, focus and distortion. This guide focuses on two points important for all applications, “Selecting a lens to match the field of view” and “Focusing an image with a large depth of field”.
(2)Focal distance and field of view of lenses
Focal distance is one lens specification. Typical lenses for factory automation have focal distances of 8 mm 0.32”/ 16 mm 0.63”/ 25 mm 0.98”/ 50 mm 1.97”. From the necessary field of view of the target and the focal distance of the lens, the WD (working distance) can be determined.
The WD and view size are determined by the focal distance and the CCD size. When NOT using a close up ring, the following proportional expression can be applied.
Working distance : Field of view = Focal distance : CCD size
Example 1: When the focal distance is 16 mm 0.63” and the CCD size is 3.6 mm 0.14”, the WD should be 200 mm 7.87” to make the field of view 45 mm 1.77”.
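The proportional expression can be checked with a short helper, assuming no close-up ring is used. The function name is illustrative; it simply rearranges the proportion WD / field of view = focal distance / CCD size.

```python
# Sketch: working distance from the lens proportional expression
# (valid only without a close-up ring).

def working_distance(focal_mm, ccd_mm, field_of_view_mm):
    """Working distance (mm) needed for a given field of view."""
    return field_of_view_mm * focal_mm / ccd_mm

# Example 1: 16 mm focal distance, 3.6 mm CCD, 45 mm field of view
print(working_distance(16, 3.6, 45))  # → 200.0 mm
```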
(3)Focusing an image with a large depth of field (range in which a lens can focus on objects)
1)The shorter the focal distance, the larger the depth of field
2)The longer the distance from the lens to the object, the larger the depth of field
Close-up rings and macro lenses make the depth of field smaller.
3)The smaller the aperture, the larger the depth of field
A small aperture and bright illumination make focusing easy
A camera is installed as shown in the illustration. A graduated tape that indicates the height is attached on a slope. In this situation, the pictures are taken to compare the apertures.
(4)Contrast differences due to lens performance
The following images are captured with KEYENCE’s high-resolution CA-LH16 lens and standard CV-L16 lens. The difference in the image quality is caused by the lens materials and structures. Higher-contrast images can be produced by using a high-resolution lens.
Field of view:60 mm 2.36”/ Stain size: Approx. 0.3 mm 0.01”
Comparison between a 240,000-pixel CCD and a 2 million-pixel CCD
The following images of the same target were captured with KEYENCE’s 240,000-pixel and 2 million-pixel cameras and magnified on a PC. Which image shows the characters more clearly? The 2 million-pixel camera, of course. The difference in image quality directly affects inspection accuracy in image processing, so camera selection according to the application is also important.
What is distortion?
Distortion is the ratio of change between the center and edge areas of a captured image. Due to lens aberration, distortion is more noticeable at the edges of a captured image. There are two types of distortion: barrel distortion and pincushion distortion. As a general rule, the smaller the absolute value of the distortion, the higher the accuracy the lens offers. Lenses with smaller distortion should be used for dimension measurement, for example. Lenses with a long focal distance generally have smaller distortion.
2.4 Summary of lens selection and image processing basics
High-quality images are fundamental for image processing. With some basic knowledge of lens selection:
1)The suitable field of view for the target is ensured
2)The entire image can be focused
3)The contrast between the target and background can be enhanced with a suitable brightness
1.When the focal distance is 16 mm 0.63” and the CCD size is 3.6 mm 0.14”, the WD should be 200 mm 7.87” to make the field of view 45 mm 1.77”.
2.As a general rule, the smaller the absolute value of the distortion, the higher the accuracy the lens offers. Lenses with smaller distortion should be used for dimension measurement, for example, and lenses with a long focal distance generally have smaller distortion.
3.Basics of Lighting Selection
Three steps for selecting Lighting:
1. Determine the type of lighting (specular reflection/diffuse reflection/transmitted light).
Confirm the characteristics of the inspection (flaw, shape, presence/absence, etc.).
Check if the surface is flat, curved, or uneven.
2. Determine the shape and size of the necessary light.
Check the dimensions of the target and installation conditions.
Examples: ring, low-angle, coaxial, dome.
3. Determine the color (wavelength) of lighting
Check the material and color of the target and background.
Examples: red, white, blue.
3.1 Lighting selection: Step 1 (specular reflection, diffuse reflection, transmitted light)
LED Lights can be roughly divided into the following three types:
– 1 Specular reflection type:
Light is applied to the target and the lens receives the direct reflection.
– 2 Diffuse reflection type:
Light is applied to the target and the lens receives uniform ambient light.
– 3 Transmitted light type:
Light is applied from behind the target and the lens receives the transmitted silhouette.
(1)Sample image of specular reflection
Inspecting for the presence or absence of inscriptions on metal surfaces
It is necessary to bring out the contrast between the flat metal surface and the depressions of the inscription.
Since a metal surface reflects light easily and the inscription does not, the optimum method is to use specular reflection to enhance the difference between the surface and inscription.
(2)Sample image of diffuse reflection
Inspecting the print on a chip through transparent film
It is necessary to bring out the contrast between the surface of the chip and the print by eliminating the reflection from the transparent film (halation).
The optimum method is to use diffuse reflection to prevent specular reflection on the transparent tape.
(3)Sample image of transmitted light
Inspecting foreign matter on nonwoven fabric
It is necessary to bring out the contrast between the target surface and the foreign matter, which is difficult to recognize because of the subtle difference in color.
Even when no difference can be detected with reflected light, applying transmitted light from behind the target will show the foreign matter as a black silhouette.
The first step of light selection is to select the lighting method, specular reflection, diffuse reflection, or backlighting, according to the shape of the target and the inspection purpose. Then, select the size and color of light that allows you to capture an optimum image for processing.
3.2 Lighting selection: Step 2 (Lighting method and shape)
(1)Sample image of specular lighting
(2)Detection example of diffuse reflection: Inspecting chips in rubber packing
(3)Detection example of transmitted light: Inspecting lead shapes
After you choose the lighting method, select the type of light based on the detection purpose, background, and surrounding environment.
Basic selections are: coaxial illumination, ring illumination, or bar lights for specular reflection; low angle lights, ring illumination, or bar lights for diffuse reflection; and area illumination or bar lights for backlighting. Ring illumination and bar lights are generally used in particular because they can be used for various purposes by adjusting the installation distance.
3.3 Lighting selection: Step 3 (Color and wavelength of lighting)
The last step is to determine the color of illumination according to the target and background. When a color camera is used, the normal selection is white. When a monochrome camera is used, the following knowledge is required.
(1)Detection using complementary colors
A red candy wrapper is in a cardboard box. The following is a comparison of the contrast when LED illumination is used to detect the presence or absence of the candy.
(2)Detection using wavelength
The following is an image comparison of print on a chip in carrier tape taken through a transparent film. The contrast is higher with red illumination than with blue illumination, because of its higher transmittance (lower scattering rate).
Lights of different wavelengths appear as different colors. The wavelength determines the characteristics of a particular color, such as being transmitted easily (red light, long wavelength) or scattered easily (blue light, short wavelength).
3.4 Summary of logical steps for lighting selection
Light selection determines the conditions of captured images, which is most important for image processing. Instead of trying every light without consideration, you can select the right one efficiently by following the procedure below:
– Decide which of specular reflection, diffuse reflection, or backlighting is most appropriate.
– Decide on the type and size of the light.
– Decide on the color of the light.
4.Effects of Color Cameras and Image Enhancement
4.1 Effects of a color camera
Inspection of a gold label attached to a cap
As shown above, when the target is glossy and has a curved surface, a monochrome camera cannot process the image in the same way as the human eye. This is because the brightness of the label is not uniform, as you can see in the actual image.
With a color camera, however, it is possible to extract only the gold color of the label as shown in the rightmost image.
This is because a color camera processes an image using hue (color) data, instead of intensity (brightness) data used by a monochrome camera.
Color cameras often allow stable inspection even for applications that are difficult with monochrome cameras, which were the most common type used in the past. To this point, we have mainly covered image capture (picture reproduction). This section explains the use of color cameras for capturing images closer to those seen by the human eye, and image enhancement to modify images to ensure stable image processing with the controller.
4.2 What is a color camera?
A color camera used in a vision system is generally a single-chip camera which contains a single CCD. Since capturing a color image requires information involving three primary colors, Red, Green, and Blue (R, G, and B), a color filter of R, G, or B is attached to each pixel of the CCD. Each pixel sends the intensity information in 256 levels of R, G, or B to the controller.
A color system describes colors numerically. It is generally represented in 3D space with three axes. The HSB color system, using the three elements of Hue, Saturation, and Brightness, is the closest to the human eye and is best suited to handle image processing.
4.3 Image optimization by camera gain adjustment
Camera gain adjustment is an effective method of color differentiation. By adjusting the gain of the individual R, G, and B components, better contrast is obtained between close shades of the same color.
Example of camera gain adjustment
4.4 Color binary processing
A color camera offers 16,777,216 levels of shade information (256 levels each of R, G, and B). That is 65,536 times more information than a monochrome camera (only 256 levels of gray). ‘Color binary processing’ is a function that extracts only a specified range from these 16.7 million levels.
Example 1 of color binary processing
Color vision systems use data that has 16.77 million possible values based on 256-level shade data for R, G and B respectively. This allows detection of color differences that cannot be recognized with the 256-level gray scale of monochrome cameras.
Color cameras greatly expand the applications of vision systems.
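The extraction described above can be sketched as a per-channel range test. The color ranges here are illustrative assumptions; actual systems typically let the operator teach the range from a sample image.

```python
# Sketch of color binary processing: keep only pixels whose R, G, and B
# values each fall inside a specified range, producing a binary image.

def color_binary(image, lo, hi):
    """Return a binary image: 1 where each channel is within [lo, hi]."""
    return [[1 if all(l <= v <= h for v, l, h in zip(px, lo, hi)) else 0
             for px in row]
            for row in image]

img = [[(250, 40, 30), (120, 120, 120)],
       [(240, 50, 20), (30, 200, 40)]]
red_lo, red_hi = (200, 0, 0), (255, 80, 60)  # an assumed "red" range
print(color_binary(img, red_lo, red_hi))  # → [[1, 0], [1, 0]]
```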
4.5 Color shade processing
Current demand for vision systems used in high-speed production lines requires a processing time of one-hundredth of a second. “Color shade-scale processing” is a pre-processing method developed to solve problems associated with the tremendously long processing times of color cameras as well as noise interference from excessive information and inconsistent illumination.
(1)Color shade processing
Color shade-scale processing is a method to convert a color image with an enormous amount of data into a 256-level gray image by setting a specified color to be the brightest level (white). Since images are processed with not only brightness but also color information, difficult applications, such as differentiation between gold and silver, are no longer a problem.
(2)Example of color shade processing
Pale color patterns are not easily recognizable with conventional gray processing (as shown on the left). Color shade-scale processing creates a gray image based on color information, resulting in a clearly visible, strong gray image on a black background. This method offers stable results for inspection of different patterns or position deviation.
The advantage of color cameras is the large amount of information, which is directly related to the disadvantage of slow processing time. The image enhancement option, color-to-gray processing, was developed to overcome this disadvantage, and color cameras can now achieve high-speed processing on the order of 1/100 second.
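One way to realize "a specified color becomes the brightest level" is to map each pixel to a gray level by its closeness to a reference color. This is only a sketch of the principle: the Euclidean RGB distance metric is an assumption, and real systems may weight channels or work in another color space.

```python
# Sketch of color-to-gray (color shade) processing: gray level 255 at the
# reference color, falling off with RGB distance from it.

def color_to_gray(image, ref):
    """Convert an RGB image to 0-255 gray, brightest at the reference color."""
    max_dist = (3 * 255 ** 2) ** 0.5  # farthest possible RGB distance
    out = []
    for row in image:
        gray_row = []
        for px in row:
            dist = sum((v - r) ** 2 for v, r in zip(px, ref)) ** 0.5
            gray_row.append(int(255 * (1 - dist / max_dist)))
        out.append(gray_row)
    return out

gold = (220, 180, 60)  # assumed reference color for a "gold" target
img = [[(220, 180, 60), (120, 120, 120)]]
print(color_to_gray(img, gold))  # the gold pixel maps to 255
```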
4.6 Other pre-processing methods
A vision system is equipped with a variety of pre-processing functions to optimize images according to their various applications. These functions can be used for both monochrome and color images after color binary processing and color shade scale processing have been applied.
Contrast conversion: Surface image adjusted to better detect flaws.
The basics of image processing involve capturing a clear image.
– A color camera enables extraction of color differences in much the same way as the human eye.
– A variety of pre-processing filters are available to optimize image contrast according to the specific requirements of the application.
– Inspection stability will improve greatly when either color processing or pre-processing filters are properly applied to the image.
2. How Do CCDs Capture Images?
Most cameras used in machine-vision systems employ charge-coupled devices (CCDs) to convert an image into the electrical signals a computer can capture. But how does that conversion take place, and what are the implications of the conversion techniques for camera users? To understand the capabilities of various cameras, you need to understand how different types of CCDs work.
CCDs were developed as data-storage devices in the early 1970s, but the devices’ light sensitivity at visible wavelengths led researchers to use them as sensors. The CCDs used in cameras are simply ICs that provide regular arrays of individual photodetector elements, or sensors, that measure only a few tens of microns on a side. CCD manufacturers place the devices in standard IC packages and cover them with a transparent window that passes visible light (Fig. 1).
Figure 1. Rectangular CCDs come in standard IC packages with transparent lids that let visible light shine on the thousands of sensors on the device’s surface. (Courtesy of Kodak.)
The real work in a CCD takes place at individual sensors made up of light-sensitive MOS capacitors. Each capacitor forms an individual sensor for a picture element, or pixel. Some photons reflect off the surface and a few others may get absorbed deep in the silicon substrate, but most hit the MOS capacitor and create hole-electron pairs. As long as light reaches the MOS capacitor, electrons continue to collect in a charge well (Fig. 2) formed by potentials applied to the CCD. In effect, the sensors integrate the light they receive.
Figure 2. A capacitor formed as a MOS device produces electron-hole pairs when “hit” by a photon. The electrons accumulate in a potential well formed by the charge on the gate.
2.1 Shift Those Electrons
After the electrons have accumulated at each sensor for a set exposure time, the CCD must convert these “packets” of electrons into a useful electrical signal. The simplest CCD array provides a single line of sensors, a linear array, that finds use in a linescan camera. Each sensor connects to a parallel-in, serial-out shift register (Fig. 3).
Control circuitry on the CCD loads the shift register with the electrons from all sensors simultaneously. Then, the shift register quickly moves the electron packets to a charge-to-voltage amplifier. By converting the amplifier’s voltage to a digital value in sync with the shift register’s clock, an analog-to-digital converter (ADC) can produce a digital value corresponding to the light intensity at each sensor in the array. (A camera provides buffering circuits; a frame grabber would not connect directly to a CCD.)
CCD manufacturers employ several techniques that use multi-phase clocks to shift the charges from place to place in an array. Suffice it to say that the shift register acts like a bucket brigade, moving charges along like a line of people passing buckets of water. Usually the clock, driver, timing, and logic circuits that control a CCD array exist as separate devices; in most cases, it proves impractical to integrate them into the CCD. Depending on the circuits used in a camera, the camera may produce a raw video output signal, a standard video output signal such as RS-170/NTSC, or digitized signals. (See “For Further Reading”.)
Figure 3. A linear array transfers charge packets to two shift registers that transport the charges to two amplifiers. The amplifiers convert the charges to voltages that, after further signal conditioning, a frame grabber can read.
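The linescan readout described above can be modeled as a toy simulation: a parallel load into a shift register, serial shifting to a charge-to-voltage amplifier, and 8-bit digitization. The full-well capacity and scaling are illustrative assumptions, not figures for any real device.

```python
# Toy model of linear-CCD readout: charge packets from all sensors load
# into a shift register in parallel, then shift out one per clock to a
# charge-to-voltage amplifier and an ADC.

from collections import deque

def read_out(charges, full_well=10000):
    """Shift charge packets out serially; return 8-bit ADC codes."""
    shift_register = deque(charges)  # parallel load of all sensors
    codes = []
    while shift_register:
        q = shift_register.popleft()       # serial shift, one packet per clock
        voltage = min(q / full_well, 1.0)  # amplifier output (normalized)
        codes.append(int(voltage * 255))   # ADC: 8-bit digitization
    return codes

line = [0, 2500, 5000, 10000, 12000]  # electrons collected per sensor
print(read_out(line))  # → [0, 63, 127, 255, 255] (last sensor saturates)
```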
2.2 Two Registers Double the Speed
To speed the charge-shifting process, CCD manufacturers may provide two shift registers, one on each side of a linear array. The register on one side connects to the even-numbered sensors and the other connects to the odd-numbered sensors. The CCD loads both shift registers simultaneously and shifts out the charges two pixels at a time, one per shift register. As a result, the CCD puts out the video information twice as fast as a single shift-register device. External circuitry can recombine the two voltages to furnish a single video output, or a frame grabber can digitize the two signals and combine the results to re-form a vector of pixel intensities.
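The even/odd split and recombination can be sketched as list slicing. The helper names are illustrative; the point is that interleaving the two streams restores the original pixel order.

```python
# Sketch of dual-register readout: even-numbered sensors feed one shift
# register, odd-numbered sensors the other; the two output streams are
# then re-interleaved into a single line of pixel intensities.

def split_registers(pixels):
    """Load even- and odd-indexed sensors into two registers."""
    return pixels[0::2], pixels[1::2]

def recombine(even, odd):
    """Re-interleave the two output streams into one pixel line."""
    out = []
    for e, o in zip(even, odd):
        out.extend((e, o))
    out.extend(even[len(odd):])  # handle an odd pixel count
    return out

line = [10, 20, 30, 40, 50]
even, odd = split_registers(line)
print(recombine(even, odd))  # → [10, 20, 30, 40, 50]
```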
In an area-array CCD, which captures an entire image at a time, retrieving the charges from the array of sensors proves a bit more difficult. The array must provide shift registers for each column and a horizontal shift register to shift the charge packets to the amplifier (Fig. 4). In a full-frame array, the control circuits move the charges in unison down each column from one MOS capacitor to the next. After a row’s worth of charges gets shifted out, the charges from the row above get moved down and out to the amplifier. In this way, the charges from successive rows get put out as voltages from the CCD.
A full-frame CCD requires a shutter to block light during the entire read-out operation. As charges get shifted down a column they move from sensor to sensor, and they will gather additional electrons from illuminated sensors. The added electrons act to smear the image, thus the need for a shutter.
By incorporating the sensors right in the shift register, a full-frame CCD requires no extra space for shift registers. Thus, these devices work well in applications that require high-resolution images because sensors can cover almost all of the CCD’s surface with no gaps between sensors. Almost all the light that hits the full-frame CCD gets converted to electrons. Speed can suffer, though, particularly in large arrays. Keep in mind that the CCD must shift out all the video information before it can start to acquire a new image.
To circumvent this time-delay problem, some CCD manufacturers offer frame-transfer devices that integrate two adjacent and equivalent arrays of sensors on one CCD. One array uses its MOS capacitors to gather light. The second array, which is not light sensitive, provides memory cells for the image information.
After the array acquires an image, circuitry quickly shifts the charge packets for the entire image, or frame, into the memory. The memory can then put out the image information using shift registers as described for a full-frame device. Although the CCD rapidly transfers a complete image to the memory, the charges still pass through the other sensors on their way, thus integrating some light during transfer. As a result, if a camera without a shutter uses a frame-transfer device, users can expect some image smearing. Also, because frame-transfer devices require twice the area of full-frame devices, they cost more. But the frame-transfer devices work well in applications that require high-speed image acquisition.
You may find cameras that offer split frame-transfer CCDs. These CCDs simply divide the imaging array in half and provide a memory for each half. The memories may be divided again so they feed four shift registers and produce four output voltages. Dividing the array into several smaller portions complicates the circuitry slightly and it means that the camera’s circuits must cope with several simultaneous video-output signals. Either a frame grabber or the camera must reconstruct an array of intensity values from the multiple outputs.
Figure 4. A full-frame CCD moves the charge packets down each column into a shift register along the bottom edge of a device. The shift register delivers the charges to an amplifier for conversion to a voltage.
2.3 Separate Registers from Sensors
A third type of CCD places columns of shift registers between the sensors. After acquiring an image, the array quickly transfers all of the charges into the column shift registers. These registers then feed a shift register that transfers the charges to an amplifier. As soon as the camera transfers an image to these interline shift registers, it can start to acquire another image.
By separating the shift registers from the sensors, the design eliminates smearing. But the shift registers in these interline devices take space. As a result, the sensors have spaces between them and they convert only a fraction of the light that reaches the CCD into useful information.
No matter which CCD comes in a camera, all CCDs are subject to an effect called “blooming,” which occurs when a bright light shines on a sensor or sensors. The bright light causes the sensor to quickly fill, or saturate, its charge well. The excess electrons can flow into adjacent wells and saturate them, too. When saturation occurs, the image obtained from the CCD shows a large white splotch at the place of bright illumination. The size of the splotch indicates how much blooming occurred in the CCD.
CCD manufacturers can overcome blooming. One technique provides a CCD with an electronic overflow that operates much like an overflow drain in a bathroom sink. The electron well is set up so any excess electrons combine with holes, thus draining the electrons into the CCD substrate. A second approach lets users adjust the ratio of photon hits to electrons produced. Producing fewer electrons per burst of photons will reduce blooming.
Some CCDs and cameras let you control anti-blooming effects. Anti-blooming lets the CCD withstand much more light than it could otherwise before saturating charge wells. Of course, you could reduce the aperture of a camera, or add a filter to reduce light levels, too.
2.4 Watch Out for Noise
You might think that when no light shines on a CCD’s sensors, they generate no electrons. Unfortunately, thermal and electrical effects always produce electrons. CCD manufacturers refer to these currents as dark currents. When the shutter is closed, dark currents cause some electrons to get trapped in charge wells. When the shutter opens to capture an image, the dark-current electrons add to those produced by any incident light, thus adding noise to the video information.
The noise reduces the CCD’s dynamic range, a measure of its ability to accurately resolve changes in light levels. The ratio of the CCD’s maximum output signal to the output signal due only to dark current represents the signal-to-noise ratio (in dB) of the device. A dynamic range of 48 dB indicates that a CCD can resolve light levels with an accuracy of about one part in 256 (2^8), while a dynamic range of 60 dB indicates an accuracy of one part in 1024 (2^10). A camera manufacturer may improve dynamic range by electrically cooling the CCD, usually with a Peltier device, but such a camera usually ends up in a research lab.
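The dB figures above follow from the standard voltage-ratio formula: a dynamic range of D dB corresponds to roughly 10^(D/20) resolvable levels. A small helper makes the arithmetic explicit; the function names are illustrative.

```python
# Sketch: converting a dynamic range in dB to resolvable gray levels.
# 48 dB ≈ 1 part in 256 (2^8); 60 dB ≈ 1 part in 1024 (2^10).

import math

def resolvable_levels(dynamic_range_db):
    """Approximate number of distinguishable gray levels."""
    return 10 ** (dynamic_range_db / 20)

def equivalent_bits(dynamic_range_db):
    """Bits of gray-level resolution the dynamic range supports."""
    return math.log2(resolvable_levels(dynamic_range_db))

print(round(resolvable_levels(48)))  # → 251, close to 256
print(round(equivalent_bits(48)))    # → 8 bits
print(round(equivalent_bits(60)))    # → 10 bits
```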
CCD manufacturers specify dynamic ranges for their devices, so camera manufacturers should provide this information, too. The dynamic range will help you determine how many shades of gray a camera can accurately detect: the higher the dynamic range, the more shades of gray. When comparing cameras, be sure that the manufacturers specify the conditions at which they performed the noise tests, usually a temperature and a light sensitivity (for example, 25 °C and 0.5 lux). T&MW
Review questions:
1. What does the amplifier do during readout?
2. Is there a triangle in Figure 4?
3. Why is a shutter needed?
4. Why does separating the shift registers from the sensors eliminate smearing?
5. Does blooming occur because excess charge from a saturated pixel overflows into adjacent pixels?