Selection of an industrial CCD camera in machine vision applications

- Mar 30, 2020 -

In machine vision system applications, many questions arise when selecting the industrial camera, industrial lens, image acquisition card, machine vision light source, and machine vision system platform software. Today we will share some experience in selecting industrial cameras, in particular industrial CCD cameras.

Selecting the signal type of an industrial camera

Industrial cameras can be divided into two types: analog-signal and digital-signal cameras.

An analog camera must be paired with an image acquisition card. The resolution of a standard analog camera is low, typically 768 × 576, and the frame rate is fixed at 25 frames per second; choose according to your actual needs. The analog signal captured by the camera is converted into a digital signal by the acquisition card for transmission and storage. Analog signals can be distorted by electromagnetic interference from other equipment in the factory, such as motors or high-voltage cables. As the noise level increases, the dynamic range of an analog camera (the ratio of the original signal to the noise) decreases, and the dynamic range determines how much information can be transmitted from the camera to the computer. An industrial digital camera outputs a digital signal, which is not affected by electrical noise; its dynamic range is therefore higher, and it can deliver a more accurate signal to the computer.
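To make that ratio concrete, here is a minimal sketch in Python that expresses dynamic range in decibels as 20·log10(signal/noise). The 1 V signal level and the 10 mV / 1 mV noise figures are illustrative assumptions, not values from this article:

```python
import math

def dynamic_range_db(signal_v: float, noise_v: float) -> float:
    """Dynamic range in dB for a given signal level and noise floor."""
    return 20 * math.log10(signal_v / noise_v)

# Hypothetical 1 V video signal:
print(dynamic_range_db(1.0, 0.010))  # analog link with 10 mV induced noise -> 40 dB
print(dynamic_range_db(1.0, 0.001))  # digital camera, ~1 mV effective noise -> 60 dB
```

The same signal with ten times less noise gains 20 dB of dynamic range, which is why electrical noise on an analog link directly limits how much image information reaches the computer.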

Choosing the resolution of an industrial camera

Choose the camera resolution according to the needs of the system. The following application case shows how to analyze the requirement.

Application case: suppose we need to detect scratches on the surface of an object. The object to be photographed is 10 × 8 mm, and the required detection accuracy is 0.01 mm. If the field of view we shoot is 12 × 10 mm, the minimum camera resolution is (12 / 0.01) × (10 / 0.01) = 1200 × 1000, about 1.2 million pixels. In other words, if one pixel corresponds to one detected defect, the resolution must be no less than 1.2 megapixels; since 1.3-megapixel cameras are common on the market, a 1.3-megapixel camera would normally be chosen.

In practice, however, a system in which one pixel corresponds to one defect is extremely unstable, because any interfering pixel may be mistaken for a defect. To improve the accuracy and stability of the system, the defect should cover an area of at least 3-4 pixels. The camera we choose then needs 1.3 million × 3 pixels, that is, no less than about 3 million pixels, so a 3-megapixel camera is usually the best choice.

A caution about sub-pixel claims: many people tout sub-pixel accuracy with several zeros after the decimal point and argue that you therefore do not need such a high-resolution camera. But if, say, 0.1-pixel accuracy were achieved, one defect would correspond to 0.1 pixel, and defect size is calculated from the number of pixels it covers; how would you express an area of 0.1 pixel? People who sell sub-pixel figures this way often show a lack of common sense. Sub-pixel algorithms are meant for measurement: for pure measurement, a 1.3-megapixel camera with a sub-pixel algorithm can basically meet the need. Sometimes, though, because edge definition is imperfect, the extracted edge randomly shifts by a pixel, and the accuracy suffers greatly. If we choose a 3-megapixel camera instead, the extracted edge can deviate by about 3 pixels while the measurement accuracy is still guaranteed.
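The arithmetic above can be captured in a short helper. This is a minimal sketch: the function name min_resolution is hypothetical, and it assumes the 3-4 pixel rule applies to defect *area*, so each dimension scales by the square root of that factor:

```python
import math

def min_resolution(fov_w_mm, fov_h_mm, accuracy_mm, pixels_per_defect=1):
    """Minimum sensor resolution for a given field of view and accuracy.

    pixels_per_defect is treated as an area requirement, so each
    dimension scales by its square root (an assumption of this sketch).
    """
    scale = math.sqrt(pixels_per_defect)
    w = math.ceil(fov_w_mm / accuracy_mm * scale)
    h = math.ceil(fov_h_mm / accuracy_mm * scale)
    return w, h, w * h

# The worked example from the text: 12 x 10 mm field of view, 0.01 mm accuracy.
print(min_resolution(12, 10, 0.01))     # (1200, 1000, 1200000) -> ~1.2 MP
print(min_resolution(12, 10, 0.01, 3))  # ~3.6 MP, in line with the "3x" rule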