Discussion - What is a "structured light source"? Let's discuss the definition of "structured light source"
Structured light is the projection of a light pattern (plane, grid, or more complex shape) at a known angle onto an object. This technique can be very useful for imaging and acquiring dimensional information. The most often used light pattern is generated by fanning out a light beam into a sheet-of-light. When a sheet-of-light intersects with an object, a bright line of light can be seen on the surface of the object. By viewing this line of light from an angle, the observed distortions in the line can be translated into height variations.
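The triangulation behind that last sentence is simple enough to sketch. Below is a minimal illustration (my own, not from any vendor's documentation), assuming the laser sheet is projected straight down and the camera views it from a known angle, so a raised surface point shifts the imaged line sideways in proportion to its height:

```python
# Minimal sheet-of-light triangulation sketch. Illustrative assumptions:
# vertical laser sheet, camera viewing at angle theta from the sheet,
# and a calibrated scale of mm per image pixel.
import math

def height_from_shift(pixel_shift, mm_per_pixel, theta_deg):
    """Convert the observed sideways shift of the laser line (pixels)
    into surface height (mm) for a given triangulation angle."""
    lateral_mm = pixel_shift * mm_per_pixel        # shift in world units
    return lateral_mm / math.tan(math.radians(theta_deg))

# Example: a 12-pixel shift at 0.1 mm/pixel, camera 30 degrees off the sheet.
print(height_from_shift(12, 0.1, 30.0))           # ~2.08 mm of relief
```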

Acquiring 3-dimensional information
Scanning the object with the light constructs 3-D information about the shape of the object. This is the basic principle behind depth perception for machines, or 3-D machine vision. In this case, structured lighting is sometimes described as active triangulation.
Because structured lighting can determine the shape of an object in machine vision applications, it can also help recognize and locate that object in an environment. These capabilities make structured lighting useful on assembly lines implementing process control or quality control, where it is used for alignment and inspection. Structured light systems can drastically transform a manufacturing plant by decreasing process variation, reducing production time, allowing automation of assembly lines, increasing precision, and generally decreasing overall cost. Although other types of light can be used for structured lighting, laser light is the best choice when precision and reliability are important.
Structured Light Applications
StockerYale's Lasiris™ uniform-intensity laser projectors are especially useful for structured light applications, including machine vision, inspection, and alignment.
Machine vision combines structured lighting, a detector, and a computer to precisely gather and analyze data. For example, it is used on robots as a 3-D guiding system to place or insert a part on a car, such as a windshield wiper or a door.
Structured light lasers used in inspection minimize process variation by drawing attention to parts that do not conform to specifications. They can pick out vegetables with blemishes on food-processing lines or ensure that the right colored capsule goes into the correct bottle on drug-packaging lines.
Another laser application is alignment. In computer assembly, a laser system can help an operator determine whether a computer chip is perfectly positioned on a circuit board.
Lasiris™ lasers can be useful for contour mapping of parts, surface defect detection, depth measurements, guidelines, edge detection, and alignment. StockerYale has standard laser configurations available off-the-shelf for a wide variety of applications, but lasers can also be custom manufactured for OEM clients, designed for specific applications.
Companies that are interested in integrating laser technology for machine vision and industrial inspection into the manufacturing process should involve our engineers at the design stage.
By Andrew Wilson, Editor, andyw@pennwell.com
For a number of years, structured-light techniques have been used to extract depth information from scenes. In building systems, developers use laser line lights that are projected onto a scene. Reflected light from the object is captured by a solid-state camera and rendered using simple triangulation. In systems that require stationary 3-D objects to be digitized, the structured light is moved across the field of view (FOV) of the object while the camera remains stationary. Where the objects themselves are in motion, such as on a conveyor belt, the structured line light is projected as a cross section across the object. As the object moves through the line light, the reflected laser light can be analyzed to determine surface and volume characteristics. This technique has found wide acceptance in applications ranging from inspecting the surface of automobile brake pads to determining how fish-slicing machines can be optimized to make the best cut of filleted fish before vacuum-packing (see Vision Systems Design, February 2005, p. 13).
“In the past,” says Karl Gunnarsson, business development manager at SICK (Minneapolis, MN, USA; www.sickusa.com), “system integrators needed to select the correct structured-light system, camera, computer, and I/O systems from OEM components. Not only was this time-consuming, it required an intimate knowledge of the 3-D resolution needed.”
To save the expense of developing 3-D structured-light-measurement systems, SICK has introduced the IVC-3D, a camera that incorporates illumination, image capture, and computer-based machine control (left). To program the system, the developer uses a menu-driven interface that runs under an embedded browser (top right). 3-D images of objects such as automobile brake pads can be digitized and analyzed, and the results used to trigger external conveyor belts or production-line devices (bottom right).
Resolution along the conveyor will depend on the velocity of the conveyor belt and the frame rate of the camera. “For a conveyor moving at 1 m/s and a camera capable of 5000 profiles/s, 5 samples/mm can be achieved.” To compute height or depth resolution, the FOV of the camera must be known. This will depend on the imager format, the focal length, and focus distance of the camera. If a camera is placed above the conveyor and has a 30-in. FOV, for example, it will be capable of resolving height differences down to 0.01 in.
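The along-track figure is easy to verify; a two-line helper (purely illustrative arithmetic, with the numbers taken from the quote above):

```python
def along_track_samples_per_mm(belt_speed_m_s, profiles_per_s):
    # profiles captured per millimetre of belt travel
    return profiles_per_s / (belt_speed_m_s * 1000.0)

print(along_track_samples_per_mm(1.0, 5000))   # 5.0 samples/mm, as quoted
```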
Once images are captured, system integrators are still faced with deciding how to process the captured images. In many systems, low-cost smart cameras are used. However, because many of these contain FPGA- or DSP-based processors, specialized code must be written to perform image analysis. Alternatively, PC-based frame grabbers can be used and images processed using off-the-shelf software-development kits. In either case, the developer is forced to produce specialized code for image analysis and process control.
To alleviate the integration problem faced when building 3-D measurement and control systems, SICK|IVP has developed the IVC-3D, a system that the company claims is the world’s first 3-D smart camera. “By combining lighting, camera, and computer into one, the IVC-3D can detect and compute 3-D geometrical features of objects, as well as control an external machine, robot, or conveyor without the use of an external PC,” says Gunnarsson.
At present, the IVC-3D is available in two versions that can image 150 × 50-mm or 600 × 200-mm typical measurement areas. To image the 150 × 50-mm area, a structured laser light with a 38° fan angle from StockerYale (Salem, NH, USA; www.stockeryale.com) is incorporated into the system. For 600 × 200 mm, a similar laser with a 68° fan angle is used. And, because the FOV is a trapezoid, each version can image larger objects with a smaller width, depending on the fan angle. To capture images, the system incorporates a camera based on a CMOS imager originally developed by Integrated Vision Products (IVP; Linköping, Sweden), now part of SICK. To control both the illumination and camera, the system incorporates an embedded XScale-based CPU that provides I/O, serial, and Ethernet interfaces to the 3-D camera system.
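The trapezoidal FOV follows from simple fan geometry: the projected line width grows linearly with stand-off distance. A quick sanity check (my own arithmetic; the 218-mm stand-off is back-solved to match the 150-mm measurement area, not a figure from the article):

```python
import math

def line_width_mm(fan_angle_deg, standoff_mm):
    # full width of the projected laser line at a given working distance
    return 2 * standoff_mm * math.tan(math.radians(fan_angle_deg / 2))

print(line_width_mm(38, 218))   # ~150 mm, matching the smaller measurement area
```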
To program the unit, SICK has developed a menu-driven image-processing software package called IVC Studio that runs in a browser window on the IVC-3D. This software is accessed from a PC over the camera’s Ethernet interface. Camera set-up and inspection tasks can then be arranged by stringing together a number of different tools in a set-up window (see figure). Parameters for each tool are then displayed in another window and can be set either by movement of the mouse or by entering values in parameter fields. After programming, the IVC-3D can operate in a stand-alone mode without the use of a PC.
“Because the IVC-3D toolbox contains tools for image processing on standard gray-scale imaging and special tools used in 3-D measurements, systems that require 3-D measurement and control can be set up rapidly,” says Gunnarsson. “With a price of $14,000 in single quantities, the IVC-3D may at first appear to be more expensive than systems based on off-the-shelf components. However, because of the several months of development time required to build systems using stand-alone components, the integrated solution provided by the IVC-3D is, in fact, more reliable, accurate, and cost-effective.”
StockerYale's Lasiris series of structured-light lasers:

A distinctive feature is the uniform intensity distribution of the beam; this non-Gaussian profile offers many advantages in practical applications:
Lasiris™ Line Advantage
The Non-Gaussian Distribution
Most laser line generators on the market today use cylindrical optics to generate a line. We use a special Lasiris lens system that makes the lines non-Gaussian (uniform) in intensity, even when the laser is off-axis. The Gaussian or non-Gaussian distribution refers to the distribution of power over the projected laser line. The phrase Gaussian distribution (also called normal distribution) is a statistical term that refers to a bell-shaped graph.
Non-Gaussian Lines are Efficient

The light intensity of a Gaussian line fades away towards the ends of the line, eventually falling below the threshold level of the detector and becoming invisible to the system. Depending on the settings of the detector and the level of uniformity required by the application, as much as 50% of the available power can be lost.
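How much is lost depends entirely on where the detector threshold sits on the bell curve. A toy numerical model (my own, not vendor data) shows the mechanism: with the cutoff at half the peak intensity, roughly a quarter of the line's power is already invisible, and stricter uniformity requirements push the figure toward the 50% quoted above.

```python
import numpy as np

x = np.linspace(-3, 3, 10001)            # position along the line, in sigmas
intensity = np.exp(-x**2 / 2)            # Gaussian profile, peak = 1
threshold = 0.5                          # detector cutoff at 50% of peak

lost = intensity[intensity < threshold].sum() / intensity.sum()
print(f"power below the cutoff: {lost:.0%}")   # ~24% in this toy case
```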
Non-Gaussian Lines are Easy to Calibrate
Because the light intensity of Gaussian lines is non-uniform, the calibration of CCDs can become very difficult. Separate calibrations must be made for pixels in the bright central area and for those in the transition area. The low-intensity area cannot contribute to the calibration because it is invisible to the system.
Non-Gaussian Lines can Eliminate Safety-Class Problems
The hot spot (central part) of a Gaussian line is often a great deal brighter than the rest of the line, pushing the laser into the next level of safety ratings.
Another distinctive feature of StockerYale's lasers is the wide variety of available beam patterns:


etc.

At the same time, StockerYale lasers are in wide use on the modern production lines of internationally known manufacturers:

-----------------------------------------------------------------------------
Mechanical guiding tools have become obsolete, and lasers are now the standard tools in high-end alignment and positioning applications. In computer assembly, for example, a laser system can help an operator determine whether a computer chip is perfectly positioned on a circuit board. Unlike other laser manufacturers, who use cylindrical optics to produce a Gaussian line profile, Lasiris™ patented optics spread the light into an evenly illuminated line, creating a non-Gaussian line with uniform intensity. If you are looking for a high degree of line uniformity, we are able to bring the line uniformity of your laser to within ±15%, depending on the laser model.
Line laser used as a guide in lumber-cutting alignment applications.
In addition, Lasiris™ lasers are central components of patient positioning systems in nuclear medicine, medical oncology, radiation therapy, and diagnostic radiology. Unlike conventional crosshair patterns that are formed either by using two lasers or by splitting and recombining one beam to form a cross, Lasiris™ crosshair projectors use a patented single-optical-component set-up to create the illuminated pattern.
Visit our structured light lasers page for more information on our product offerings.
Reading the Shapes
By Andrew Wilson
Machine-vision algorithm improves 3-D modeling
Structured-light techniques using laser and camera-based systems are often used to extract depth information from parts under test. To build a three-dimensional (3-D) profile of these parts, laser line illumination is projected onto the part and the reflected laser profile is captured by a high-speed CMOS camera. Image data are then read from the camera, and the point-cloud data are used to reconstruct a 3-D model of the part. While these systems are often used to reverse-engineer existing parts, they have also found widespread use in pharmaceutical and automotive applications. Just two years ago, Comovia Group demonstrated how, using a smart camera from Automation Technology, the technology could be used for high-speed tire inspection (see Vision Systems Design, February 2006, p. 31).
“One of the most important aspects of designing an accurate structured-light-based system,” says Josep Forest, technical director at AQSENSE, “is determining the center of the reflected Gaussian curve from each point of the reflected laser line profile.” A number of different techniques can be used to do this, including detecting the peak pixel intensity across the laser line (resulting in pixel accuracy) or determining a threshold of the Gaussian and computing an average (resulting in subpixel accuracy). In the camera developed by Automation Technology, a more computationally expensive approach samples multiple points along the laser line and determines the center of gravity (COG) of the Gaussian curve.
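The three approaches differ only in how much of the profile they use. A minimal sketch of each, applied to one image column crossing the laser line (illustrative only; this is not AQSENSE's or Automation Technology's code). The synthetic line is centred at 14.3, so the three printed estimates illustrate the accuracy ladder:

```python
import numpy as np

def peak_pixel(col):
    """Brightest pixel: pixel-level accuracy."""
    return int(np.argmax(col))

def thresholded_mean(col, thresh):
    """Mean position of pixels above a threshold: subpixel accuracy."""
    return np.nonzero(col >= thresh)[0].mean()

def center_of_gravity(col):
    """Intensity-weighted mean (COG) over the whole column."""
    idx = np.arange(len(col))
    return float((idx * col).sum() / col.sum())

col = np.exp(-(np.arange(32) - 14.3)**2 / 8.0)   # synthetic Gaussian line profile
print(peak_pixel(col), thresholded_mean(col, 0.5), center_of_gravity(col))
```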
“To date, this has been the most widely known and used method of determining the center point of the Gaussian, within the machine-vision community,” says Forest. AQSENSE, a spin-off from the University of Girona, is about to propose an improvement with a newly developed method that is claimed to be more accurate than COG methods. “In materials that exhibit some level of transparency,” says Forest, “light propagation from the inside of the material results in Gaussian profiles that are not completely symmetrical.” Especially in such cases, methods such as peak pixel detection, thresholding, and COG analysis may not accurately determine the peak position of the profile.
Figure 1. Methods such as peak pixel detection, thresholding, and COG analysis may not accurately determine the peak position of the profile. To overcome this, AQSENSE identifies the point of maximum intensity of the Gaussian. Using nonlinear interpolation techniques, up to 64x more pixel values within the ones forming the Gaussian can be inserted and a better estimate of the maximum intensity point obtained.
To overcome this, Forest and his colleagues have developed a simple, yet elegant, solution. “Our purpose is to identify the point of maximum intensity of the Gaussian. Using nonlinear interpolation techniques, up to 64x more pixel values within the ones forming the Gaussian can be inserted, and, therefore, a better estimate of the maximum intensity point can be obtained,” says Forest (see Fig. 1).
“However, because noise is present, the operation cannot be performed in every situation unless filtering is performed as part of the interpolation process. By adjusting finite impulse response (FIR) filters for a given type of material,” continues Forest, “different surfaces with different optical properties and noise levels can be digitized with a more accurate numerical peak detector.”
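AQSENSE's interpolator and FIR designs are proprietary, but the upsample-then-search idea can be sketched with generic tools: low-pass filter the profile, resample it on a grid up to 64x finer with a nonlinear (here cubic-spline) interpolant, and take the maximum of the refined curve. The kernel below is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def refined_peak(col, upsample=64, kernel=np.array([1, 4, 6, 4, 1]) / 16):
    smoothed = np.convolve(col, kernel, mode="same")   # simple FIR low-pass
    coarse = np.arange(len(col))
    fine = np.linspace(0, len(col) - 1, upsample * (len(col) - 1) + 1)
    dense = CubicSpline(coarse, smoothed)(fine)        # nonlinear interpolation
    return fine[np.argmax(dense)]

rng = np.random.default_rng(0)
col = np.exp(-(np.arange(32) - 14.3)**2 / 8.0) + 0.02 * rng.standard_normal(32)
print(refined_peak(col))    # close to 14.3 despite the added noise
```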
At Vision 2007 in Stuttgart, Germany, AQSENSE privately demonstrated the technology to a panel of machine-vision experts. In the demonstration, an MV1024/80CL 1024 × 1024 CMOS camera from Photonfocus was used to capture reflected light from a structured laser light from StockerYale (see Fig. 2). Captured images from the camera were transferred over a Camera Link interface to a host computer using a PROCStar II frame grabber from Gidel fitted with a Camera Link interface board.
Figure 2. At Vision 2007, AQSENSE showed a CMOS camera used to capture reflected light from a structured laser light. Captured images from the camera were transferred over a Camera Link interface to a host computer fitted with a Camera Link interface board.
“Because the PROCStar II features a Stratix II FPGA and is offered with a developer’s kit that includes the Altera Quartus II FPGA development tools,” says Forest, “the peak detector and cloud-point reconstruction algorithms could be embedded within the FPGA, although the peak detector design fits in much smaller and simpler FPGAs.” The resulting data were then displayed on the system’s host PC.
To compare the results of such point-cloud reconstruction, scans of a machine part made with a simple peak-detection algorithm with no subpixel accuracy, with the COG approach, and with the algorithm developed by AQSENSE were compared. While simple peak detection results in aliasing effects, the COG approach shows some improvement, and the AQSENSE method results in the most accurate model (see Fig. 3). To promote the use of the technology, AQSENSE also announced that the algorithm can now be ordered as an option for Photonfocus' latest 1024 × 1024 CMOS Camera Link camera that incorporates a Xilinx-based FPGA.
Figure 3. To compare the results of point-cloud reconstruction, scans of a machine part made with a simple peak-detection algorithm with no subpixel accuracy, with the COG approach, and with the algorithm developed by AQSENSE were compared. Simple peak detection results in aliasing effects (top), the COG approach shows some improvement (middle), and the AQSENSE method results in the most accurate model (bottom).
In addition, the company will soon roll out its Shape Processor software, a C++ application programming interface that allows dense 3-D point clouds generated by any 3-D acquisition means to be aligned and compared. The alignment is based on a best-fit approach, and the software ships with source-code examples, binary GUIs, and sample 3-D point clouds.
Using this software, 3-D models of scanned images can be compared with known-good models. To compare two surfaces, they must first be accurately aligned; the comparison is then performed as a subtraction of one surface from the other. On a production line, however, it is very difficult to ensure that all objects are scanned in the same position and orientation, so very expensive and complex mechanisms are usually used to fix the position of the object to the desired accuracy.
Alternatively, a mathematical alignment can be performed to compute the misalignment, in six degrees of freedom, between the two surfaces. In general, mathematical alignment of 3-D point clouds is complex and slow, and therefore not applicable on a production line.
Faster alignment
AQSENSE accelerates this alignment to fit production-line requirements. Based on an improvement of the best-fit algorithm, the alignment is performed in a few milliseconds, depending on the initial misalignment between the two surfaces. After that, a disparity map is generated by subtracting the two aligned surfaces. The time required for this operation is linear in the number of points; for example, subtracting two surfaces of 1 million points each on a 1.8-GHz Core 2 Duo takes 200 ms.
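The best-fit machinery AQSENSE accelerates is proprietary, but the closed-form core that such methods iterate is well known: given matched point pairs, the rigid 6-DOF transform that best aligns them comes from an SVD (the Kabsch solution). Here is a sketch under the simplifying assumption that correspondences are already known; a full best-fit/ICP loop re-estimates them and repeats this step:

```python
import numpy as np

def kabsch(src, dst):
    """Rotation R and translation t minimizing ||src @ R.T + t - dst||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

# Toy check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(1)
src = rng.random((1000, 3))
a = np.radians(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = kabsch(src, dst)
disparity = dst - (src @ R.T + t)                # per-point residuals: the
print(np.abs(disparity).max())                   # "disparity map", ~0 here
```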
The software automatically detects the number of CPU cores and uses all of the machine's computing power to decrease the computation time. In the case of significant defects in the scan (broken or missing parts, or a large discrepancy with respect to the model), alignment accuracy is not affected as long as the overlapping areas cover at least 50% of the object.
If a proper 3-D calibration of the acquisition system is available, accurate 3-D measurements can be performed. Surface inspection, however, can be undertaken without acquisition calibration, highlighting an object's defects.
AQSENSE also demonstrated this technology using a C3-1280 3-D camera from Automation Technology. The setup for this demonstration included a structured laser light from StockerYale, as well as a linear motorized stage from Bosch Rexroth (see Fig. 4).
Figure 4. AQSENSE also demonstrated the technology using a camera from Automation Technology, a structured laser light, and a linear motorized stage (left). In this example (inset), a known-good part, shown in green (top left), is compared with a sample scanned part, shown in red (top right).
“As can be seen,” says Forest, “the parts are misaligned and must be properly registered for the operator to discern any difference between them.” To do this, Shape Processor software is used to generate a disparity map between the two. In this manner the differences between the two parts become apparent. “By performing this analysis (alignment + surface differences) in less than 300 ms,” says Forest, “inspection automation systems can more quickly pick out parts that may be defective.”
-------------------------------------------------------------------------------------------------------------------------------
StockerYale's MFL series of structured-light lasers

Product literature for the MAGNUM2:
Application example: road inspection
3-D system profiles highway surfaces
By Cor Maas
A custom laser line and CMOS-based imaging system measure road-surface smoothness
Road-surface analysis is conducted by contractors and state and federal departments of transportation as part of safety and quality checks of new roads and as part of inventory assessments of existing roadways. The smoothness of a road surface impacts many aspects of highway travel. For example, too much surface variance from the norm can result in an uneven ride, increased vehicle wear, a shortened road-surface lifetime, and safety concerns. Producing the correct road surface can affect as much as 5% of a multimillion-dollar paving contract.
The basic road-surface analysis procedure measures the smoothness of roads in the vehicle wheel path as a test vehicle drives the road at highway speed. The problem is complicated by the need to make surface measurements under uncontrolled lighting conditions and on surfaces that range from black asphalt to white concrete in varying environmental conditions.
In the past, engineers developed time-of-flight ultrasonic transmitters to measure the road's smoothness or profile, but at travel speeds these systems could collect only 1 data point per foot. The next-generation system was a laser triangulation sensor, which bounced a single laser point off the road and collected the same triangulation data. Using pulsed-laser illumination with a fast read-out sensor enabled the second-generation system to collect data every millimeter, but still only at one point.
“The single-point system worked great on a lot of surfaces, but when the road had longitudinal structures, such as grooves for greater traction, the laser point would wander in and out of the groove as the car drifted across the lane, yielding unrepeatable data,” explains Daniel Howe, US market manager for LMI Transportation Division. Asphalt roads have built-in texture because of the aggregate used, but for concrete roads a texture has to be created. In many cases the texture is longitudinal in nature, which is why this problem needed to be solved, specifically for the American Concrete Pavement Association (ACPA; see Fig. 1).
FIGURE 1. The texture of concrete roads may have longitudinal structures such as grooves for greater traction; however, this can create challenges for road-surface profilers that must measure surface variance from the norm. The variance can produce an uneven ride, increased vehicle wear, a shortened road-surface lifetime, and safety concerns.
Transportation engineers at the University of Michigan Transportation Research Institute (UMTRI) tackled the problem by using the Selcom RoLine sensor developed by LMI Technologies. The RoLine system generates up to 100 data points across the wheel path with submillimeter precision. The measurement data from the RoLine are collected by OEM profiler machine builders and combined with two external analog interfaces, an accelerometer for vehicle dynamics and an encoder for vehicle travel distance, to achieve a true road profile. By moving from laser-point to laser-line illumination, the road profiler delivers 10-mm forward spatial resolution at highway speeds and provides 100 data points across a 100-mm width, or approximately the width of a standard automobile tire.
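That forward-resolution figure is consistent with the sensor's frame rate, as quick arithmetic shows (my own check, assuming a highway speed of 100 km/h):

```python
def required_profile_rate(speed_kmh, spacing_mm):
    speed_mm_s = speed_kmh * 1e6 / 3600.0     # km/h converted to mm/s
    return speed_mm_s / spacing_mm

print(required_profile_rate(100, 10))   # ~2778 profiles/s, within the
                                        # 3-kHz readout mentioned below
```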
Lasing the Road
UMTRI recently tested the RoLine sensor under roadway conditions as part of the ACPA Profiler Repeatability Tests. The system was positioned in the wheel path in a test vehicle and driven at below highway speeds (see Fig. 2).
FIGURE 2. UMTRI recently tested the RoLine sensor on roadway conditions as part of the ACPA Profiler Repeatability Tests. The system was positioned in the wheel path in a test vehicle and driven at below highway speeds for test purposes.
The sensor housing includes a custom StockerYale line generator at 660 nm with line-generating optics. LMI uses both CMOS and CCD imagers in its laser-triangulation systems. However, this system uses a CMOS imager optimized for dynamic range, among other proprietary parameters, and custom optics to accommodate changes in focal depth as the car moves up and down while it travels over the road. An on-board Xilinx FPGA reads out the sensor data at frame rates up to 3 kHz while providing initial image correction and processing, such as gain, sensor timing, and other camera-control functions. FPGAs provide additional processing speed for multiple operations (see Fig. 3).
FIGURE 3. The RoLine laser-triangulation system generates height information along the z axis by projecting a flat laser line onto a surface, collecting an image of the line, and measuring the deflection of the line from the horizontal norm. When the system is properly calibrated and the stand-off distance is determined, the distance the line varies from the norm translates directly to the surface height at that point.
The Xilinx FPGA outputs a digital signal to a dedicated Motorola DSP, which runs LMI's proprietary image-processing algorithms. Laser-triangulation systems generate height information along the z axis by projecting a flat laser line onto a surface, collecting an image of the line, and measuring the deflection of the line from the horizontal norm. When properly calibrated and mounted at the correct stand-off location, the distance the line varies from the norm translates directly to the surface height at that point.
For the purposes of the test, the RoLine sensor data were run through a bridge-height algorithm that filters the data points to generate an “intelligent” mean height average of the tire contact point. LMI's Howe says, “While we have a basic bridging algorithm embedded in the sensor, most of our OEMs are choosing to receive the full profile data and, using their knowledge and experience, are creating their own bridging algorithms. UMTRI's Steve Karamihas is looking to test and determine the best approach, and, once that is done, we can incorporate this algorithm into the sensor to offload computing tasks from the OEM's host computer to the sensor itself.”
FIGURE 4. To generate a repeatable road-surface map, vertical and horizontal movement data of the vehicle are required. The accelerometer measures up-and-down movements (top) and a DMI measures travel distance (middle). These signals are combined and filtered to obtain a true elevation profile of the pavement surface relative to distance (bottom).
Vehicle-mounted systems that move require two additional pieces of data to generate a repeatable road-surface map: vertical and horizontal movement data on the vehicle itself. In this case, an accelerometer is attached to the top of the system enclosure to measure the up-and-down movements of the vehicle, and a distance-measuring instrument (DMI) is used to get the vehicle travel distance. OEM profiler manufacturers combine these signals and, through filtering techniques, achieve a true elevation profile of the pavement surface relative to distance (see Fig. 4). Profiler machine builders process the data streams on a host computer, based on various accelerometers and DMIs (some optical, some mechanical) and many different approaches to filtering and software.
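Each OEM's filtering is different, but the general shape of the computation can be sketched (a rough illustration of the idea only, with crude polynomial detrending standing in for a proper high-pass filter design): double-integrate the accelerometer to estimate vehicle heave, subtract the downward-looking laser reading, and index the result by DMI travel distance.

```python
import numpy as np

def elevation_profile(laser_mm, accel_mm_s2, dt, dmi_mm):
    """Relative road elevation vs. travel distance (arbitrary datum)."""
    heave = np.cumsum(np.cumsum(accel_mm_s2) * dt) * dt   # vehicle vertical motion
    t = np.arange(len(heave))
    heave -= np.polyval(np.polyfit(t, heave, 2), t)       # crude drift removal in
                                                          # place of a real filter
    return dmi_mm, heave - laser_mm                       # sensor looks down at road

# Toy example: steady vehicle, 300-mm stand-off, 2-mm road undulation.
dt, n = 1e-3, 5000
road = 2.0 * np.sin(np.arange(n) * dt * 5)
x, z = elevation_profile(300.0 - road, np.zeros(n), dt, np.arange(n) * 27.8)
```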
Speedy Networks
The existing RoLine system is mainly concerned with determining road height at any given point and how the frequency of movement will impact the natural frequency of a car's vertical movement. Future improvements to the road-profiler system might include using multiple sensors and an improved network system to keep all devices in sync, including the Global Positioning System.
Since this profiler system was tested last year, Selcom has developed a proprietary bus called FireSync for sensor input. When multiple units are used in a system, synchronizing them becomes a difficult issue: when driving on the road, all inputs should be sampled synchronously. FireSync utilizes a dedicated network control box with proprietary Gigabit Ethernet cabling that adds additional control wires for the laser and guarantees data synchronization from multiple systems. Using FireSync, the skew between data points from different sensors is reduced to less than 1 μs.
“The road profiler with LMI/Selcom RoLine lasers demonstrated vastly improved repeatability over the prevailing fleet of road profilers,” notes UMTRI’s Karamihas. “The testing was performed at the request of the ACPA and included four surfaces of diverse macrotexture that, together, are very challenging to most profilers. This application is difficult, because it requires measurement to tight tolerances on very smooth pavements with a high level of macrotexture.”
“The availability of line lasers for road profilers is a recent development,” says Karamihas. “It will provide us with the opportunity to write new standards for accuracy that ensure relevance of the measurements to end-user satisfaction, that is, vehicle dynamic response to road roughness. Since the RoLine is very flexible in the way that it interprets each reading, I expect it to be compatible with any relevant standard that is developed in the near future.”
COR MAAS is president of Selcom Sensors That See, a division of LMI Technologies, Heerlen, The Netherlands; www.sensorsthatsee.com.
------------------------------------------------------------------------------------------------------------------------------------
StockerYale's general distributor for China: 北京路科锐威科技有限公司
Tel: 010-58858423 ext. 114
It's all in English... Please post it again once it's been translated into Chinese...
Overview of machine vision applications: diode lasers in industrial inspection
Because machine vision systems can acquire large amounts of information quickly, and that information is easy to process automatically and easy to integrate with design data and process-control data, machine vision is widely used in modern automated production for process monitoring, finished-product inspection, and quality control. Machine vision systems are characterized by increasing the flexibility and the degree of automation of production. In hazardous working environments unsuited to manual labor, or where human vision cannot meet the requirements, machine vision is often used in place of human vision. At the same time, in high-volume industrial production, inspecting product quality by eye is inefficient and imprecise, whereas machine vision inspection greatly improves production efficiency and the degree of automation. Machine vision also lends itself to information integration, making it a foundational technology for computer-integrated manufacturing.
In short, as machine vision technology matures and develops, it can be expected to find ever wider application in manufacturing enterprises of the present and the future.
The main components of a machine vision system include: the light source, the lens (sometimes with filters), the camera, the frame grabber, and the image-processing platform.
Every part of the system plays a pivotal role, and the light source is no exception.
In machine vision system design, the controllable lighting parameters are:
1. Direction: mainly either directed or diffuse, determined chiefly by the type of light source and where it is placed.
2. Spectrum: the color of the light, determined chiefly by the type of light source and by any filters on the source or lens. A source's spectrum is characterized by its color temperature: the temperature at which a full radiator (blackbody) has the same spectral distribution as the source.
3. Polarization: specularly reflected light is polarized, while diffusely reflected light is not; a polarizing filter in front of the lens can remove specular reflections.
4. Intensity: insufficient intensity lowers image contrast, while excessive intensity wastes power and requires heat dissipation.
5. Uniformity: a basic requirement of machine vision systems, although any source's intensity falls off with distance and angle.
The main optical properties of an object include:
1. Reflection: mainly of two types, specular (Fresnel) reflection and diffuse reflection.
2. Transmission (optical density): depends on the material composition and thickness of the object.
3. Refraction: found mainly in transparent materials.
4. Color: the spectral distribution of the transmitted or reflected light energy.
5. Texture: can be enhanced or suppressed by the lighting.
6. Height: directed lighting enhances height information, while diffuse lighting suppresses it.
7. Surface orientation: directed lighting enhances surface-orientation information, while diffuse lighting suppresses it.
3-D vision inspection systems often require a high-quality line light source, with demanding requirements on the source's output power, line width, depth of field, and other parameters.
The Lasiris™ diode lasers from StockerYale (USA) use a lens to fan the laser beam into a line, achieving a fairly uniform power distribution, as shown in the figure:
They can be applied in position alignment, contour measurement, semiconductor circuit inspection, industrial inspection, and many other fields. Compared with ordinary diode lasers, they have the following advantages:
1. Very stable output wavelength and output power. The laser uses constant-power control: a PIN photodiode monitors the output light, and a feedback circuit stabilizes the output power (see the sketch after this list).
2. High reliability: electrostatic-discharge protection, overcurrent protection, and over-temperature protection.
3. Uniform power distribution.
4. A choice of output powers and beam patterns, as shown in the figure:
5. Excellent line width, depth of field, and other specifications.
6. Good linearity.
7. Little drift of the collimation angle with temperature.
8. Strict classification to the international safety standards IEC and CDRH.
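The constant-power scheme in point 1 is a standard feedback loop. A generic sketch follows (the actual Lasiris control electronics are analog and proprietary; the gains and the toy "laser" below are invented for illustration):

```python
def stabilize(target_mw, mw_per_ma=0.8, steps=50, kp=0.4, ki=0.1):
    """Toy PI loop: adjust drive current (mA) until the monitored
    output power (the PIN photodiode reading) matches the setpoint."""
    current, integral = 0.0, 0.0
    for _ in range(steps):
        power = mw_per_ma * current        # stand-in for the photodiode reading
        error = target_mw - power
        integral += error
        current += kp * error + ki * integral
    return current

print(stabilize(10.0))   # settles near 12.5 mA for this toy "laser"
```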
This laser source is thoughtfully designed down to the details of its overall construction and has a long service life; it fully meets the needs of high-end vision and inspection equipment.
---------------------------------------------------------------------------------------------------------------------------------------------
StockerYale's general distributor for China: 北京路科锐威科技有限公司
Address: Room 203A, Building E, 盈创动力, No. 1 Shangdi East Road, Haidian District, Beijing
Tel: 010-58858423 ext. 114
E-mail: benjamin.zhu@tricombj.com
Contact: Mr. Zhu
Applications of StockerYale structured-light lasers in 3-D inspection (paired with SICK IVP industrial cameras):

StockerYale (USA) was founded in 1946, and its LASIRIS series of structured-light lasers leads the field internationally.
Today its structured-light lasers come in many varieties, meeting the needs of the vast majority of 3-D machine vision systems. Their features include:
1. Uniform intensity distribution

A uniform intensity distribution improves the efficiency with which the laser beam is used and makes processing easier for the industrial camera.
2. A wide variety of beam patterns:


3. Very small minimum line width (down to 5 µm);
4. Stable, reliable performance (overvoltage protection, over-temperature protection, reverse-polarity protection, thermoelectric cooling, etc.);
5. Multiple power-control modes (power modulation, pulse modulation, etc.).
The attachment is an application case from abroad on PCB solder-paste inspection. The file is in *.ASPX format and can be opened with the Baofeng media player; please have a look.
-----------------------------------------------------------------------------
StockerYale's general distributor for China (including Hong Kong, Macao, and Taiwan): 北京路科锐威科技有限公司
Address: Room 203A, Building E, 盈创动力, No. 1 Shangdi East Road, Haidian District, Beijing
Tel: 010-58858423 ext. 114
The stuff in the posts above is great, but it's extremely expensive.