Sensors Optimized For 3D Digitization


Digital 3D imaging can benefit from advances in VLSI technology to accelerate its deployment in fields such as visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture, and complete images can be generated of visible surfaces that are rather featureless to the human eye or a video camera. Intelligent digitizers will be capable of measuring colour and 3D shape accurately and simultaneously.

Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available.

The first strategy, known as passive vision, attempts to analyze the structure of the scene under ambient light. In contrast, the second, known as active vision, structures the way in which images are formed in order to reduce ambiguity. Sensors that capitalize on active vision can resolve most of the ambiguities found with 2D imaging systems. Moreover, with laser-based approaches, the 3D information becomes relatively insensitive to background illumination and surface texture, so complete images can be generated of visible surfaces that are rather featureless to the human eye or a video camera. The task of processing 3D data is thus greatly simplified.

COLOUR 3D IMAGING TECHNOLOGY

As outlined above, two basic vision strategies are available for recovering the geometric structure of objects: passive vision and active vision.

Passive vision attempts to analyze the structure of the scene under ambient light. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. Once matching points are identified, the geometry can be computed.
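To make that last step concrete, here is a minimal sketch of how depth follows from a matched point pair in a rectified stereo setup. The focal length, baseline, and pixel coordinates below are illustrative assumptions, not values from any particular system.

```python
# Sketch: depth from a matched point pair in a rectified stereo setup.
# Assumes pinhole cameras with known focal length f (pixels) and baseline B (metres);
# all numbers are illustrative.

def depth_from_disparity(x_left: float, x_right: float,
                         f_pixels: float, baseline_m: float) -> float:
    """Return the depth Z (metres) of a point seen at column x_left in the
    left image and x_right in the right image: Z = f * B / d."""
    disparity = x_left - x_right          # pixels; larger disparity = closer point
    if disparity <= 0:
        raise ValueError("Matched point must have positive disparity")
    return f_pixels * baseline_m / disparity

# Example: a point matched at columns 412 and 396 with f = 800 px, B = 0.12 m
print(depth_from_disparity(412.0, 396.0, f_pixels=800.0, baseline_m=0.12))  # ~6.0 m
```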

Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based and triangulation-based laser range cameras are examples of active vision techniques. One digital 3D imaging system based on optical triangulation was developed and demonstrated.

The auto-synchronized scanner, depicted schematically in Figure 3.1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot over the scene, collecting the reflected laser light, and focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data yield two images in perfect registration: one with x, y, z coordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used to measure the colour (reflectance) map of the scene.

Advantage

  • Triangulation is the most precise method of 3D measurement.

Limitation

  • Increasing the accuracy requires increasing the triangulation distance (baseline). The larger the triangulation distance, the more shadows appear on the scanned object and the larger the scanning head must be (see the sketch below).
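A rough sketch of this trade-off, assuming an idealized single-spot triangulation geometry rather than the exact optics of the auto-synchronized scanner: range is recovered from the spot position on the detector, and the range uncertainty shrinks as the baseline grows, which is why accuracy and shadowing pull in opposite directions. The small-angle formulas and all numbers below are illustrative assumptions.

```python
# Sketch of simple laser triangulation, assuming an idealized geometry:
# laser and detector separated by baseline b (m), lens focal length f (m),
# spot imaged at position p (m) on the detector.  Values are illustrative.

def triangulation_range(p: float, f: float, b: float) -> float:
    """Approximate range z = f * b / p for small angles."""
    return f * b / p

def range_uncertainty(z: float, f: float, b: float, dp: float) -> float:
    """Approximate range uncertainty dz ~ z**2 / (f * b) * dp,
    showing that a larger baseline b reduces dz for the same spot noise dp."""
    return (z ** 2) / (f * b) * dp

f, dp = 0.025, 5e-6             # 25 mm lens, 5 micrometre spot-position noise
for b in (0.05, 0.20):          # short vs long triangulation baseline
    z = 1.0                     # object at 1 m
    print(f"baseline {b*100:.0f} cm -> range error ~{range_uncertainty(z, f, b, dp)*1000:.1f} mm")
```

With the assumed numbers, quadrupling the baseline cuts the range error from about 4 mm to about 1 mm, at the cost of a larger head and more occlusion.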

SENSORS FOR 3D IMAGING

The sensors used in the auto-synchronized scanner include:

Synchronization Circuit Based Upon Dual Photocells

This sensor ensures the stability and repeatability of range measurements in environments with varying temperature. Discrete implementations of the so-called synchronization circuit have posed many problems in the past. A monolithic version of an improved circuit has been built to alleviate those problems.

Laser Spot Position Measurement Sensors

As noted earlier, laser-based vision systems can acquire high-resolution 3D images that are relatively insensitive to background illumination and surface texture. Achieving this performance depends on measuring the position of the reflected laser spot accurately, which is the role of the sensors described below.

Position Sensitive Detectors

Many devices have been built or considered in the past for measuring the position of a laser spot more efficiently. One method of position detection uses a video camera to capture an image of an object electronically; image processing techniques are then used to determine the location of the object. For situations requiring the location of a light source on a plane, a position sensitive detector (PSD) offers the potential for better resolution at a lower system cost. The PSD is a precision semiconductor optical sensor that produces output currents related to the “centre of mass” of the light incident on the surface of the device. While several design variations exist for PSDs, the basic device can be described as a large-area silicon p-i-n photodetector. The detectors are available in single-axis and dual-axis models.

DUAL AXIS PSD

This particular PSD is a five-terminal device bounded by four collection surfaces; one terminal is connected to each collection surface and one provides a common return. Photocurrent generated by light falling on the active area of the PSD is collected by these four perimeter electrodes.

Figure: Dual-axis PSD

The amount of current flowing between each perimeter terminal and the common return is related to the proximity of the centre of mass of the incident light spot to each collection surface. The difference between the “up” current and the “down” current is proportional to the Y-axis position of the light spot. Similarly, the “right” current minus the “left” current gives the X-axis position. The designations “up”, “down”, “right” and “left” are arbitrary; the device may be operated in any relative spatial orientation.
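A minimal sketch of that computation, following the current differences described above. The variable names and the normalization by the current sums are assumptions made for illustration, not the detector's datasheet formulas.

```python
# Sketch: recovering the normalized light-spot position on a dual-axis PSD
# from the four perimeter-electrode currents.  Current values are illustrative.

def psd_position(i_up: float, i_down: float, i_left: float, i_right: float):
    """Return (x, y), each normalized to [-1, 1]; the sign conventions for
    up/down/right/left are arbitrary, as noted in the text."""
    y = (i_up - i_down) / (i_up + i_down)        # proportional to Y-axis spot position
    x = (i_right - i_left) / (i_right + i_left)  # proportional to X-axis spot position
    return x, y

# Example: spot slightly up and to the right of centre
print(psd_position(i_up=6.0e-6, i_down=4.0e-6, i_left=4.5e-6, i_right=5.5e-6))
# -> (0.1, 0.2)
```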

LASER SENSORS FOR 3D IMAGING

The state of the art in laser spot position sensing methods can be divided into two broad classes according to the way the spot position is sensed: continuous response position sensitive detectors (CRPSD) and discrete response position sensitive detectors (DRPSD).

Continuous Response Position Sensitive Detectors (CRPSD)

The CRPSD category includes the lateral effect photodiode.

The figure illustrates the basic structure of a p-n type single-axis lateral effect photodiode. Carriers produced by light impinging on the device are separated in the depletion region and distributed to the two sensing electrodes according to Ohm’s law. Assuming equal impedances, the electrode that is farther from the centroid of the light distribution collects the least current. The normalized position of the centroid, which is insensitive to fluctuations in the total light intensity, is given by P = (I2 − I1) / (I1 + I2).

The actual position on the detector is found by multiplying P by d/2, where d is the distance between the two sensing electrodes. A CRPSD provides the centroid of the light distribution with a very fast response (bandwidth on the order of 10 MHz). Theory predicts that a CRPSD measures the centroid more precisely than a DRPSD. By precision, we mean measurement uncertainty, which depends, among other things, on the signal-to-noise ratio and the quantization noise. In practice, precision is important but accuracy is even more important. A CRPSD is in fact a good estimator of the central location of a light distribution.
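A small sketch of the two formulas above, assuming an illustrative electrode spacing d and example currents; the function name and values are not from the article.

```python
# Sketch: single-axis lateral-effect photodiode readout, following the
# formulas in the text.  Electrode spacing and currents are illustrative.

def centroid_position(i1: float, i2: float, d_mm: float) -> float:
    """Normalized centroid P = (I2 - I1) / (I1 + I2); actual position = P * d/2.
    The ratio cancels overall intensity fluctuations, since scaling both
    currents by the same factor leaves P unchanged."""
    p = (i2 - i1) / (i1 + i2)
    return p * d_mm / 2.0

# Example: I1 = 3 uA, I2 = 5 uA on a detector with 10 mm between electrodes
print(centroid_position(3e-6, 5e-6, d_mm=10.0))   # 1.25 mm from the centre
```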

Discrete Response Position Sensitive Detectors (DRPSD)

DRPSDs, on the other hand, comprise detectors such as charge-coupled devices (CCDs) and arrays of photodiodes equipped with a multiplexer for sequential reading. They are slower because all the photo-detectors have to be read sequentially before the location of the peak of the light distribution can be measured. [1] DRPSDs are very accurate because the full light distribution is known, but they are slow. Not all photo-sensors contribute to the computation of the peak; what is required for measuring the peak of the light distribution is only a small portion of the total array. Once the pertinent light distribution is available (after windowing around an estimate of the peak location), the location of the desired peak can be computed very accurately.
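A hedged sketch of that windowing idea, using a plain Python list to stand in for the photodiode array and an intensity-weighted centroid within the window as the peak-extraction step; the window size and the sub-pixel method are assumptions, not the article's algorithm.

```python
# Sketch: peak localisation on a DRPSD (e.g. a line of CCD pixels), reading
# only a small window around a coarse estimate instead of the whole array.
# Window size and the centroid-based sub-pixel step are assumptions.

def windowed_peak(pixels: list[float], coarse_index: int, half_width: int = 8) -> float:
    """Return a sub-pixel peak position computed from a window of the array
    centred on coarse_index (e.g. the estimate delivered by a CRPSD)."""
    lo = max(0, coarse_index - half_width)
    hi = min(len(pixels), coarse_index + half_width + 1)
    window = pixels[lo:hi]
    total = sum(window)
    if total == 0:
        return float(coarse_index)
    # intensity-weighted centroid of the window, expressed in array coordinates
    return sum((lo + k) * v for k, v in enumerate(window)) / total

# Example: a synthetic spot peaking near pixel 103 in a 256-pixel array
profile = [0.0] * 256
for k, v in zip(range(100, 107), [1, 4, 9, 12, 9, 4, 1]):
    profile[k] = float(v)
print(windowed_peak(profile, coarse_index=105))   # ~103
```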

Figure: Monochrome system

Proposed Sensor Colorange Architecture

The figure below shows the proposed architecture for the colorange chip.

Figure: Colorange chip architecture

An object is illuminated by a collimated RGB laser spot, and a portion of the reflected radiation entering the system is split into four components by a diffractive optical element. The white zero-order component is directed to the DRPSD, while the RGB first-order components are directed onto three CRPSDs, which are used for colour detection. The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity. The centroid is computed on chip with the well-known current-ratio method, i.e. (I1 − I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor. The weighted centroid value is fed to a control unit that selects a subset (window) of contiguous photo-detectors on the DRPSD. That subset is located around the estimate of the centroid supplied by the CRPSDs. The best algorithms for peak extraction can then be applied to the portion of interest.
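A rough end-to-end sketch of that control flow, assuming one CRPSD channel and a 512-pixel DRPSD. The mapping from the CRPSD centroid to a pixel index, the array length, and the window width are illustrative assumptions, not details of the actual chip.

```python
# Sketch of the colorange read-out flow described above: a CRPSD supplies a
# fast coarse centroid, the control unit converts it into a window of DRPSD
# pixels, and only that window is read out for accurate peak extraction.
# All device parameters below are illustrative assumptions.

N_PIXELS = 512          # assumed DRPSD length
HALF_WINDOW = 16        # assumed window half-width in pixels

def crpsd_centroid(i1: float, i2: float) -> float:
    """Current-ratio centroid in [-1, 1], as in (I1 - I2) / (I1 + I2)."""
    return (i1 - i2) / (i1 + i2)

def select_window(centroid: float) -> tuple[int, int]:
    """Map the normalized CRPSD centroid onto the DRPSD and return the
    (first, last) pixel indices the control unit would read out."""
    centre = round((centroid + 1.0) / 2.0 * (N_PIXELS - 1))
    lo = max(0, centre - HALF_WINDOW)
    hi = min(N_PIXELS - 1, centre + HALF_WINDOW)
    return lo, hi

# Example: CRPSD currents place the spot slightly left of centre
coarse = crpsd_centroid(i1=4.2e-6, i2=5.8e-6)        # -0.16
print(select_window(coarse))                          # (199, 231)
```

The design point this illustrates is the one made in the text: the fast but coarse CRPSD estimate restricts the slow, accurate DRPSD readout to a small region of interest, so speed and accuracy are obtained together.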

APPLICATIONS

  • Intelligent digitizers capable of measuring colour and 3D shape accurately and simultaneously
  • Development of hand-held 3D cameras
  • Multiresolution, random-access laser scanners for fast search and tracking of 3D features