Fundamentals of Image Processing Systems
Selecting the camera, the lens and the lighting source, evaluating image quality, choosing PC hardware and software, and configuring all components: each of these is an important step toward an effective image processing system.
What to consider when designing a vision system
Imagine an apple grower asks you to design a machine vision system for inspecting apples. The grower wants to deliver uniform quality, meaning the ability to sort out bad apples reliably while still working fast. This raises the following questions:
What are the precisely defined requirements for the system?
Which resolution and sensors do I need?
Do I want to use a color or monochrome camera?
What camera functions do I need, and what level of image quality is sufficient?
Which lens should I use, and at what image scale?
Which lighting should I use?
What PC hardware is required?
What software is required?
What exactly should the system deliver and under which conditions?
This question sounds so obvious that it's frequently overlooked and not answered in the proper detail. But the fact remains: If you are clear up front about precisely what you want, you'll save time and money later.
Should your system
Only show images of the object being inspected, with tools like magnification or special lighting used to reveal product characteristics that cannot be detected with the human eye?
Calculate objective product features such as size and dimensional stability?
Check correct positioning — such as on a pick-and-place system?
Determine properties that are then used to assign the product into a specific product class?
Resolution and Sensor
Which camera is right for a given application? The requirements definition is used to derive target specifications for the camera's resolution and sensor size.
But first: What exactly is resolution? In classic photography, resolution refers to the minimum distance between two real points or lines in an image such that they can be perceived as distinct.
In the realm of digital cameras, terms like "2 megapixel resolution" are often used. This refers to something entirely different, namely the total count of pixels on the sensor, not strictly speaking its resolution. The actual resolution can only be determined once the overall package of camera, lens and geometry, i.e. the distances required by the setup, is in place. That is not to say the pixel count is irrelevant: a high number of pixels is genuinely needed to achieve high resolutions. In essence, the pixel count indicates the maximum resolution attainable under optimal conditions.
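To make the distinction concrete, here is a small calculation sketch. The field of view, sensor width and feature size are invented example values for the apple scenario, and the rule of thumb of sampling a feature with at least three pixels is a common guideline, not a figure from this article:

```python
def pixel_resolution(field_of_view_mm, sensor_pixels):
    """Object-side pixel resolution: millimetres of object covered by one pixel."""
    return field_of_view_mm / sensor_pixels

def required_pixels(field_of_view_mm, smallest_feature_mm, pixels_per_feature=3):
    """Minimum pixel count so the smallest feature still spans several pixels."""
    return field_of_view_mm * pixels_per_feature / smallest_feature_mm

# Hypothetical apple line: 100 mm field of view, sensor 1920 pixels wide.
print(round(pixel_resolution(100, 1920), 4))   # mm of apple per pixel -> 0.0521

# To sample a 0.5 mm blemish with at least 3 pixels across the 100 mm field:
print(required_pixels(100, 0.5))               # -> 600.0 pixels needed
```

In other words, the megapixel figure only sets an upper bound; whether a 0.5 mm blemish is actually resolved depends on the field of view and the optics in front of the sensor.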
If the characteristics can be detected via their color (such as red blemishes on an apple), then a color camera is often, but not always, needed. In many cases these characteristics can also be picked up in black and white images from a monochrome camera if colored lighting is used. Experiments on reference samples can help here. If color isn't relevant, then monochrome cameras are preferable, since color cameras are inherently less sensitive than monochrome cameras.
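As a rough illustration of the colored-lighting trick, the sketch below simulates, with invented pixel values, how a red blemish appears dark to a monochrome camera under green light. The simplification that such a camera sees roughly the green channel of the scene is an assumption for illustration only:

```python
# Synthetic 4x4 patch of (R, G, B) tuples: green apple skin, one red blemish.
skin, blemish = (60, 200, 40), (190, 30, 20)
apple = [[skin] * 4 for _ in range(4)]
apple[1][2] = blemish

# A monochrome camera under green lighting sees (roughly) the green channel:
mono_under_green = [[px[1] for px in row] for row in apple]

# The red blemish reflects little green light, so it shows up dark:
dark_pixels = [(r, c) for r in range(4) for c in range(4)
               if mono_under_green[r][c] < 100]
print(dark_pixels)  # [(1, 2)]
```

The blemish is clearly separable in the monochrome image, without any color processing at all.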
Are you working with a highly complex inspection task? If so, you may want to consider using multiple cameras, especially if a range of different characteristics need to be recorded, each requiring a different lighting or optics configuration.
Camera functions and image quality
When evaluating the image quality of a digital camera, the resolution is one important factor alongside:
Light sensitivity
Dynamic range
In terms of camera functions, one of the most important is the speed, typically stated in frames per second (fps). It defines the maximum number of frames that can be recorded per second.
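To get a feel for what frame rate implies for the rest of the system, this small sketch estimates the raw data rate that the camera interface and PC must sustain. The camera parameters are hypothetical example values:

```python
def data_rate_mbytes_per_s(width, height, fps, bits_per_pixel=8):
    """Raw sensor data rate in megabytes per second (1 MB = 10^6 bytes)."""
    return width * height * fps * bits_per_pixel / 8 / 1e6

# Hypothetical 1920 x 1200 monochrome camera running at 60 fps:
print(data_rate_mbytes_per_s(1920, 1200, 60))  # 138.24 MB/s
```

A rate like this already constrains the choice of camera interface and PC hardware, which is why speed appears so early among the selection criteria.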
The eye of the camera: Scale and lens performance
Good optical systems are expensive. In many cases, a standard lens is powerful enough to handle the task. To decide what’s needed, we need information about parameters such as
Lens interface
Pixel size
Sensor size
Image scale, meaning the ratio between image size and object size. This equals the size of an individual pixel divided by the pixel resolution. (The pixel resolution is the edge length of a square on the object being inspected that should fill up precisely one pixel of the camera sensor.)
Focal length of the lens, which determines the image scale together with the distance between camera and object
Lighting intensity
Once this information is available, it becomes much easier to examine the spec sheets from lens makers to review whether an affordable standard lens is sufficient or whether a foray into the higher-end lenses is needed.
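As a rough sketch of how these parameters interlock, the following calculation estimates the image scale and a suitable focal length using the thin-lens approximation. The pixel size, pixel resolution and working distance are invented example values, not recommendations:

```python
def image_scale(pixel_size_um, pixel_resolution_mm):
    """Image scale m = pixel size on the sensor / pixel resolution on the object."""
    return (pixel_size_um / 1000.0) / pixel_resolution_mm

def focal_length_mm(working_distance_mm, m):
    """Thin-lens estimate: f = d * m / (1 + m), with d the object-to-lens distance."""
    return working_distance_mm * m / (1 + m)

m = image_scale(4.8, 0.1)                 # 4.8 um pixels, 0.1 mm resolved on the object
print(round(m, 3))                        # image scale 0.048
print(round(focal_length_mm(300, m), 1))  # about 13.7 mm at 300 mm working distance
```

With a target focal length in hand, comparing lens spec sheets becomes a matter of checking which candidates cover the required sensor size and image scale.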
Lens properties like distortion, resolution (described using the MTF curve), chromatic aberration, and the spectral range for which a lens has been optimized serve as additional selection criteria.
There are, for example, special lenses for near infrared, extreme wide angle lenses ("fisheye") and telecentric lenses that are specially suited for length measurements. These lenses typically come at a high price, though.
Here too the rule is: Tests and sample shots are the best way to clear up open questions.
Lighting
It’s hard to see anything in poor light: It may seem obvious, but it holds true for image processing systems as well.
Optimizing Image Brightness
High inspection speeds often require sensitive cameras and fast lenses. However, it is often possible to modify or optimize the lighting setup with less effort and achieve the same increase in image brightness. There are several ways to achieve higher image brightness: increasing the ambient light, shaping the light (for example with lenses), or using flashes with a suitable light source. However, it is not only the intensity of the light that matters, but also the path the light takes from the light source via the object to the camera.
We all know the example from photography: ambient light is usually diffuse, but if it is not enough, a flash is used, which is much more directional. This often produces unwanted reflections from smooth surfaces that obscure the actual details in the image. In machine vision, however, such effects can be desirable, for example to achieve high light intensities on weakly reflective, flat surfaces. Diffuse light is better suited for objects with many surfaces that reflect in different directions.
Requirements for PC hardware and software for image processing
PC Hardware
The hardware required depends on the task and the required processing speed. While simple tasks can be performed with standard PC hardware and standard image processing packages, complex and fast image processing tasks may require specialized hardware.
Software
Software is required to assess the images. Most cameras come with software to display images and configure the camera. That's enough to get the camera up and running. Special applications and image processing tasks require special software, either purchased or custom developed.
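As a minimal sketch of what such inspection software might do internally, consider a simple dark-pixel check. This is not any particular product's algorithm; the thresholds and the synthetic frames are invented for illustration:

```python
def inspect_frame(gray_frame, dark_threshold=100, max_defect_fraction=0.01):
    """Reject a frame if too many pixels are darker than expected.

    A deliberately simple stand-in for real inspection software; the
    thresholds are illustrative, not taken from any real system.
    """
    pixels = [px for row in gray_frame for px in row]
    defect_fraction = sum(px < dark_threshold for px in pixels) / len(pixels)
    return "reject" if defect_fraction > max_defect_fraction else "pass"

# Synthetic 100 x 100 grayscale frames: a clean apple, and one with a blemish.
clean = [[180] * 100 for _ in range(100)]
blemished = [row[:] for row in clean]
for r in range(40, 55):
    for c in range(40, 55):
        blemished[r][c] = 30          # 225 dark pixels = 2.25 % of the frame

print(inspect_frame(clean))      # pass
print(inspect_frame(blemished))  # reject
```

Real systems add calibration, segmentation and classification on top, but the basic pattern of measuring image features and comparing them against tolerances stays the same.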
Summary
Before launching into the design of an image processing system, a variety of factors must be considered for every component involved, from the camera and its optics to the lighting and the PC hardware and software supporting the system.
These tasks are fully manageable when tackled step by step, so long as the time is taken in advance to clarify the task and the framework conditions. To learn more, please see our more comprehensive white paper on this topic.
Assemble your Vision System
With a camera, lens and lighting, you are well on your way to your vision system. Use our Vision System Configurator to easily assemble your system.