Open Access
30 January 2020

Use of complementary wavelength bands for laser dazzle protection
Gunnar Ritt, Bernd Eberle
Abstract

The use of complementary wavelength bands in camera systems is a long-known principle. The camera system’s spectral range is split into several spectral channels, where each channel possesses its own imaging sensor. Such an optical setup is used, for example, in high-quality three-sensor color cameras. A three-sensor camera is less vulnerable to laser dazzle than a single-sensor camera. However, the separation of the individual channels is not high enough to suppress cross talk, and thus, all three channels will suffer from laser dazzling. To solve that problem, we suggest two different optical designs in which the spectral separation of the channels is significantly increased. The first optical design is a three-channel camera system, which was already presented earlier. The second design is a two-channel camera system based on optical multiband elements, which delivers undisturbed color images even under laser dazzle.

1.

Introduction

Laser protection has gained high importance mainly due to the worldwide distribution of compact, high-performance laser pointers, which are often misused to dazzle people but also optical sensors.1 Recently, a lot of work was done on the investigation of laser eye dazzle,2–4 including the impact of windscreens on the dazzle effect.5 Regarding laser safety of the human eye, the concepts of maximum dazzle exposure and nominal ocular dazzle distance (NODD) were proposed6–8 as an extension of the established quantities maximum permissible exposure and nominal ocular hazard distance. Moreover, in order to evaluate the influence of optical nuisance, tests on human performance degradation under laser dazzle were carried out.9,10 Regarding sensors, laser dazzling was studied intensively, both experimentally and theoretically, by various groups.11–18 The measurement of laser-induced damage thresholds of imaging sensors is also an important and ongoing topic.18–20

Protection against laser dazzle faces the challenge that nowadays lasers are available at any wavelength in the visible spectral range. Classical laser protection measures like absorption or interference filters used for laser eye safety provide protection only against preselected, specific wavelengths. To overcome this limitation, current research concentrates on wavelength-independent or tunable laser protection measures, including liquid crystal Lyot filters,21 augmented reality headsets,22,23 and the use of pupil-plane phase elements.24,25 During the last years, we have focused our research on laser dazzle protection of sensor systems based on the use of a digital micromirror device in combination with wavelength multiplexing.26–28 Besides that, we became aware of a publication by Svensson et al.,29 who describe various methods for wavelength-independent laser protection measures. Among them, the concept of “complementary wavelength bands” (CWBs) drew our attention. The principle of CWBs is quite simple. As an example, Svensson et al. proposed using an infrared (IR) camera in order to protect against visible laser radiation. For instance, they recommended equipping a monochrome camera with an edge filter to attenuate the visible light and, thus, obtain an imaging device for the near-IR. Such a device cannot be dazzled by laser sources working at visible wavelengths. Since we could not find any other information on whether this concept had already been realized and investigated for laser dazzle protection in more detail, we set up and tested such a sensor.

In this publication, we present our work on sensor protection using the principle of CWBs. This includes our earlier work on a three-channel CWB sensor30,31 as well as our recent work on an improved concept of a two-channel CWB sensor.32 In Sec. 2, we explain the CWB principle in more detail, while Sec. 3 describes the system design of our CWB sensors together with theoretical calculations of their spectral transmittance. We also worked on assessing the dazzle vulnerability of our CWB sensors, which is presented in Sec. 4. The system performance is compared with other sensors, such as a standard color CMOS camera comprising a Bayer filter mosaic, a three-CMOS camera, and a hyperspectral imager. Finally, in Sec. 5, we show example images taken with the CWB sensors at a field trial.

The term CWBs as used in this publication shall not be confused with the terms “complementary colors” or “complementary wavelengths” of color science:

  • Complementary colors are usually illustrated by a color wheel that arranges colors of different hue on a circle and is used to describe the result of color mixing. A pair of complementary colors comprises a primary color (e.g., yellow, blue, or red) and a secondary color (e.g., purple, orange, or green) placed on opposite sides of the color wheel. When complementary colors are mixed, a grayscale color is produced.

  • The complementary wavelength is the wavelength opposite to the “dominant wavelength” (regarding the white point) in the CIE chromaticity diagram.

In this publication, we use the term CWBs to describe that the spectral working range of the sensor is split up into a number of nonoverlapping spectral channels, which complement each other to ideally recover the sensor’s complete spectral range. In practice, this will, of course, not be fully achieved due to the finite edge steepness of the optical filters and the residual transmittance outside the filters’ transmission bands.

2.

Complementary Wavelength Bands

In contrast to the example given by Svensson et al. (see Sec. 1), we envisioned transferring the approach of CWB to just the visible spectral region. In other words, we aimed to design an optical setup for a camera sensor working in the visible spectral range, which is protected (or at least hardened) against lasers operating in the visible (or near-IR) spectral range. To achieve this goal, the light entering the sensor setup is spectrally split into spatially separated channels by means of dichroic optical elements. Each channel contains a dedicated imaging sensor. Subsequently, the images from the different channels are fused into a single output image to reproduce the scene.

Figure 1 illustrates the working principle of a standard single-sensor color camera and that of a two-channel CWB sensor. Usually, a standard camera is dazzled when it is illuminated with intense laser radiation [see Fig. 1(a)]. In the case of the CWB sensor, monochromatic laser light is directed into the channel whose spectral transmittance band contains the laser wavelength [see Fig. 1(b)]. As a result, only one of the two channels will be dazzled, while the fused image still contains the information of the undazzled complementary channel. Some color distortion may occur in the fused image, but the structural information is kept.

Fig. 1

(a) Operating principle of a standard camera. The camera takes an image formed by the complete visible light spectrum of the scene. (b) Operating principle of a two-channel CWB sensor. A dichroic optical element splits the visible light spectrum into two channels. Two cameras take images of the separated spectral bands. The scene’s spectral information is recovered in a fused image.


The three wavelength bands realized in three-sensor color cameras (three-CCD or three-CMOS cameras), corresponding to the three primary RGB colors, are a very similar concept. Three spectral channels are used to capture dedicated images of the red, green, and blue spectral bands at the same time to generate high-resolution color images. Standard single-sensor color cameras also split the incoming light into different spectral bands by means of a Bayer filter mosaic, but at the cost of spatial resolution. However, experience has shown that all three color channels of a single-sensor color camera are easily dazzled by monochromatic laser radiation. This is caused by the weak spectral separation of the color channels, since the responsivity curves of the red, green, and blue filters of a Bayer filter mosaic overlap. Thus, green laser light, for example, can also dazzle the red and the blue color channels. In this respect, three-sensor cameras perform better, since their spectral separation is improved compared with cameras with a Bayer filter mosaic. However, the spectral separation may still be too low to protect such a camera system efficiently against laser dazzle. Consequently, the spectral separation of the different channels has to be increased in order to avoid undesirable dazzling of a complementary imaging sensor. In other words, when the spectral separation of the bands is chosen appropriately, monochromatic laser light will only jam the corresponding spectral transmission channel while the fused image still delivers scene information generated from the complementary bands.

The example of Fig. 1(b) represents only one specific layout, namely the two-channel CWB sensor; a classical three-sensor camera comprises three channels. An n-channel CWB sensor can only be dazzled completely when illuminated simultaneously with n different laser wavelengths that fit the spectral passbands of the sensor’s channels. This means that a larger number of channels makes a CWB sensor less vulnerable to dazzling with multiple laser wavelengths, but at the cost of complexity and larger dimensions.

3.

System Design

3.1.

Concept for a Three-Channel CWB Sensor

3.1.1.

Optical layout

Figure 2(a) shows the optical layout of a CWB sensor that features three spectral channels similar to traditional RGB color cameras. First, incoming light passes through a telescope formed by an external camera lens and a collimator C (f = 28 mm, f/# = 2.0). Subsequently, the light is split by two successive dichroic beam splitters DBS1 and DBS2 into three channels corresponding to blue light (400 to 500 nm), green light (500 to 600 nm), and red light (600 to 700 nm). The spectral separation of the three bands is improved by the use of an appropriate band-pass filter BPx in each channel. This is necessary to attenuate out-of-band laser radiation effectively. Finally, the light is focused on the respective (monochrome) imaging sensor Sx by a lens Lx (f = 25 mm, f/# = 1.4). A photograph of our laboratory demonstrator, built with standard optical and optomechanical components, is shown in Figs. 2(b) and 2(c). Table 1 lists the optical elements used to implement the three-channel CWB sensor.

Fig. 2

(a) Scheme of the optical layout of the three-channel CWB sensor. (b), (c) Photographs of the laboratory demonstrator (without external camera lens).


Table 1

Optical elements used to implement the three-channel CWB sensor.

Denotation | Optical element | Transmission band
Collimator C | Schneider–Kreuznach Xenoplan 2.0/28 | –
Dichroic beam splitter DBS1 | Semrock FF484-FDi01-25x36 | 492–950 nm
Dichroic beam splitter DBS2 | Semrock FF580-FDi01-25x36 | 591–950 nm
Band-pass filter BP1 | Semrock FF01-492/SP-25 | 400–480 nm
Band-pass filter BP2 | Semrock BLP01-514R-25 + Semrock FF01-612/SP-25 | 529–900 nm, 509–591 nm
Band-pass filter BP3 | Edmund Optics 84746 + Edmund Optics 84714 | 635–1650 nm, 400–685 nm
Focusing lenses L1, L2, L3 | Edmund Optics 59871 | –
Imaging sensors S1, S2, S3 | VRmagic VRmMS-12 (monochrome Aptina MT9V024 CMOS sensor) | –

The telescope (camera lens and collimator C) at the start of the optical path is not necessary to implement the CWB sensor. However, it offers an intermediate focal plane that can be used for various purposes. The original intention was to introduce a nonlinear optical limiter at this position in order to protect the imaging sensors additionally against laser damage. Furthermore, a calibration target can be placed there, for example, to verify the correct alignment of the imaging sensors.

Please note that the band-pass filters, depicted as single elements in the scheme of Fig. 2(a), may be realized in the laboratory demonstrator by a combination of a long-pass and a short-pass filter (see Table 1). Furthermore, the short-pass filter Edmund Optics 84714 used to form band-pass filter BP3 is located in front of the collimator C in our laboratory demonstrator. Thus, it acts as an IR cutoff filter for the whole system.

3.1.2.

Transmittance

To estimate the sensor’s performance, we calculated the spectral transmittance of the three channels. For that purpose, we used transmittance data of the optical elements listed in Table 1, as specified by the manufacturer; an external camera lens was not taken into account. For data not available in digital form, we digitized transmittance curves provided by the manufacturer in graphical form. Figure 3 shows the calculated transmittance for all three channels in diabatic scale.
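The calculation itself reduces to multiplying, per channel, the interpolated transmittance (or reflectance) curves of all optical elements in that channel's path. The following is a minimal sketch of this bookkeeping in Python; the file names and the two-element channel composition are placeholders rather than the actual data set of Table 1, and the reflectance approximation R = 1 − T neglects absorption.

```python
import numpy as np

# Common wavelength grid (nm) on which all element curves are compared.
wl = np.arange(400.0, 701.0, 1.0)

def load_curve(path):
    """Load a digitized transmittance curve (columns: wavelength in nm, T in 0..1)
    and interpolate it onto the common wavelength grid."""
    data = np.loadtxt(path, delimiter=",")
    return np.interp(wl, data[:, 0], data[:, 1])

# Hypothetical file names; the real curves are the manufacturer data of Table 1.
t_dbs1 = load_curve("FF484-FDi01_transmission.csv")
t_bp1 = load_curve("FF01-492SP_transmission.csv")

# Channel 1 (blue) is reflected at DBS1 and then passes BP1. The reflectance
# of the dichroic element is approximated as R = 1 - T (absorption neglected).
t_channel1 = (1.0 - t_dbs1) * t_bp1

# Optical density, used for the out-of-band suppression estimate.
od_channel1 = -np.log10(np.clip(t_channel1, 1e-12, None))
```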

Fig. 3

Calculated spectral transmittance of the three-channel CWB sensor (without external camera lens).


The calculations were performed both for unpolarized and s-/p-polarized light since the transmittance of the dichroic beamsplitters (DBSs) is polarization-dependent. The different colors (blue, green, and red) in the graphs correspond to the different channels, whereas the line styles of the curves (solid, dashed, and dotted) show the polarization state of the light (unpolarized, s-polarized, and p-polarized). Additionally, colored bands in the graphs highlight the range of values between minimum and maximum transmittance.

From the calculations, we expected out-of-band transmittance values ranging from 10⁻⁶ to 10⁻⁸ [corresponding to an optical density (OD) ranging from 6 to 8] for each channel. This would be a reasonable value to protect the imaging sensors against out-of-band laser radiation and, thus, to preserve their image information in case of a laser attack. To verify the calculations, we performed measurements, which are presented in Sec. 4.1.

3.1.3.

Image fusion

It may be advantageous if the CWB sensor’s output is only a single image shown to the operator instead of all three images acquired by the individual imaging sensors. Therefore, the three channels’ monochrome images are fused together. For the image fusion process, there is a multitude of possibilities. The simplest approach would be to calculate the mean of all three images, which results in a monochrome image. Since the design of the three-channel CWB sensor offers three spectral channels corresponding roughly to blue, green, and red light, it seems obvious to generate a colored fused image just by putting the three images into the channels of an RGB image. Both methods are illustrated in Fig. 4.

Fig. 4

Result of different image fusion methods applied to the three images (B, G, R) of the three-channel CWB sensor. M, calculation of the mean; MF, calculation of the mean with filtering of overexposed image areas; RGB, creation of an RGB image; RGBF, creation of an RGB image with filtering of overexposed image areas.


The upper row of Fig. 4 shows the three input images, as acquired by the three-channel CWB sensor. The labels B, G, and R correspond to channel 1, channel 2, and channel 3, respectively (see Fig. 2). In this example, the CWB sensor is dazzled by laser radiation with a wavelength of 656 nm; the image of channel 3 is nearly completely overexposed. The output image resulting from the simple calculation of the mean is labeled M; the color fused image is labeled RGB. We can see that in the case of laser dazzle, the partial or complete failure of one channel leads to a decrease in image contrast for both methods, although the structural image information is still there. Therefore, we decided to implement some kind of filtering prior to image fusion.

The simplest approach is to analyze the three channels for overexposed/saturated pixels. For the filtering, we apply the rule that saturated pixels (of corresponding image areas) shall be filtered out by the fusion algorithm only if they do not occur simultaneously in all three channels. We define a pixel as saturated when its pixel value is larger than the (arbitrarily chosen) threshold value of 250 for 8-bit images (maximum pixel value of 255). In that case, the pixel values of the overexposed areas are neglected in the calculation of the mean image or the generation of the color image. If corresponding image areas are overexposed in all three channels, we attribute this to a (natural) broadband light source, and the filtering process does not take place. The result of the filtering process is shown in Fig. 4 by the images labeled MF and RGBF for the calculation of the mean image and the generation of the color image, respectively.
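A minimal sketch of how this masked-mean fusion rule can be implemented is given below; the threshold of 250 follows the text, while the array layout and the helper name fuse_masked_mean are our own illustrative choices, not the published implementation.

```python
import numpy as np

def fuse_masked_mean(channels, threshold=250):
    """Fuse n monochrome 8-bit images by a saturation-filtered mean.

    Pixels above `threshold` are excluded from the mean, except where ALL
    channels are saturated; such areas are attributed to a broadband source
    and kept unfiltered.
    """
    stack = np.stack([c.astype(np.float64) for c in channels])  # shape (n, h, w)
    valid = stack <= threshold                  # unsaturated pixels per channel
    all_saturated = ~valid.any(axis=0)          # saturated in every channel
    valid[:, all_saturated] = True              # broadband case: no filtering
    fused = (stack * valid).sum(axis=0) / valid.sum(axis=0)
    return fused.astype(np.uint8)
```

For the RGBF variant, the same mask logic applies, but the filtered channel images are written into the R, G, and B planes of a color image instead of being averaged.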

We learn that this kind of filtering leads to a monochrome image with good contrast since, in this example, the fused image is mainly composed of the sensor images B and G. In the case of the color image RGBF, the filtering process also improves the contrast. However, there is a strong color distortion because the signal of channel 3 (R) is missing. Therefore, we usually operate the three-channel CWB sensor with fusion method MF. Since this is somewhat unsatisfactory, we looked for a way to implement the principle of CWBs in a sensor system that is able to deliver color output images without color distortion in the case of laser dazzle. We were able to realize this in a two-channel sensor system based on multiband optical elements. This two-channel CWB sensor is presented in Sec. 3.2.

3.2.

Concept for a Two-Channel CWB Sensor for Undisturbed Color Imaging

3.2.1.

Optical layout

We improved the optical layout of our CWB concept by utilizing multiband optical elements (band-pass filters and a DBS), which are characterized by alternating windows of high and low transmittance. Based on a multiband beamsplitter and two appropriate multiband band-pass filters, we set up a two-channel CWB sensor. The optical layout is shown in Fig. 5(a); a photograph of the two-channel CWB sensor with the external camera lens is given in Fig. 5(b). The optical elements for its realization are listed in Table 2.

Fig. 5

(a) Scheme of the optical layout of a two-channel CWB sensor and (b) photograph of the laboratory demonstrator.


Table 2

Optical elements used to implement the two-channel CWB sensor.

Denotation | Optical element | Transmission bands
Collimator C | Edmund Optics 35172 | –
Dichroic beam splitter DBS | Semrock Di03-R405/488/561/635-t1-25x36 | 426–462 nm, 503–545 nm, 582–618 nm, 663–1200 nm
Band-pass filter BP1 | Semrock FF01-390/482/563/640-25 | 370–410 nm, 473–491 nm, 559–568 nm, 633–647 nm
Band-pass filter BP2 | Semrock FF01-446/523/600/677-25 | 423–462 nm, 503–545 nm, 582–618 nm, 663–691 nm
Focusing lenses L1, L2 | Edmund Optics 35172 | –
Imaging sensors S1, S2 | VRmagic VRmMS-12 (color Aptina MT9V024 CMOS sensor) | –

Similar to the three-channel CWB sensor, the incoming light passes through a telescope formed by an external camera lens and a collimator C (f = 25 mm, f/# = 1.4). Then, the light is split by a multiband DBS into two channels. The spectral separation of the two bands is increased by the use of appropriate multiband band-pass filters BP1 and BP2. The spectral transmittance of the multiband optical elements is shown in Fig. 6. The red lines correspond to the DBS; the blue and green lines correspond to the band-pass filters for channel 1 and channel 2, respectively. The transmittance is plotted for unpolarized light (solid lines), s-polarized light (dashed lines), and p-polarized light (dotted lines).

Fig. 6

Polarization-dependent transmittance of the multiband optical elements used to implement the two-channel CWB sensor.


3.2.2.

Transmittance

Based on the transmittance curves of the optical elements shown in Fig. 6, we estimated the transmittance of the two channels. As for the three-channel CWB sensor, we used the data provided by the manufacturers or digitized graphical data if necessary. Figure 7 shows a plot of the transmittance as a function of wavelength for channel 1 and channel 2 in diabatic scale.

Fig. 7

Calculated spectral transmittance of the two-channel CWB sensor (internal optics only).


The two colors (blue and green) in the graphs distinguish the different channels. As before, the line styles of the curves (solid, dashed, and dotted) show the polarization state of the light (unpolarized, s-polarized, and p-polarized). Colored bands in the graphs highlight the range of values between minimum and maximum transmittance. For the two-channel CWB sensor, we expect out-of-band transmittance values of 10⁻⁸ (corresponding to an OD of 8) for each channel.

From the graph of Fig. 7, we can recognize the advantage of using optical multiband elements. Both channels receive light from the blue, green, and red parts of the visible spectrum. Therefore, we can use color imaging sensors for the two channels, which allows us to receive two independent color images of the same scene section. If one channel is dazzled by laser radiation, the complementary channel will still provide an undisturbed color image.

3.2.3.

Image fusion

For the two-channel CWB sensor, the image fusion process is the same as for the three-channel CWB sensor, except that RGB color images are processed instead of monochrome images. This is illustrated in Fig. 8, where the sensor system is illuminated with laser radiation of wavelength 532 nm.

Fig. 8

Result of different image fusion methods applied to the two images (Ch. 1, Ch. 2) of the two-channel CWB sensor. M, calculation of the mean; MF, calculation of the mean with filtering of overexposed image areas.


A mean image is calculated from the two acquired input images (labeled Ch. 1 and Ch. 2 in Fig. 8). The calculation of the mean image can be performed by taking the two images just as they are. This results in an output image that may have reduced contrast when one channel is dazzled by laser light (labeled M in Fig. 8). Alternatively, as explained for the three-channel CWB sensor, we can introduce filtering of overexposed image areas in advance of the mean image calculation. This leads to an output image with high contrast (labeled MF in Fig. 8).
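Assuming a fusion routine like the fuse_masked_mean sketch in Sec. 3.1.3, the extension to the two-channel color case is a straightforward per-plane application; a hypothetical usage could look like this:

```python
import numpy as np

def fuse_color(img1, img2, threshold=250):
    """Apply the masked-mean fusion of Sec. 3.1.3 plane by plane to two RGB images."""
    planes = [fuse_masked_mean([img1[..., k], img2[..., k]], threshold)
              for k in range(3)]
    return np.stack(planes, axis=-1)
```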

4.

System Performance

In Sec. 3, we presented theoretical calculations of the spectral transmittance of the individual channels of our two-channel and three-channel CWB sensors. To verify these theoretical results, we experimentally measured the channel transmittance of both sensors as a function of wavelength. Details of the measurement and the results are discussed in Sec. 4.1.

In Sec. 4.2, we introduce the quantity spectral separation SSxy of two individual channels x and y, which is basically the ratio of the channels’ signals, expressed by the difference of the OD values of the corresponding channels x and y, ODx − ODy. As we will describe below, this quantity can be measured for devices where the transmittance/OD values of the channels cannot be easily obtained. We measured the spectral separation of three commercial off-the-shelf (COTS) camera devices: a single-sensor color CMOS camera, a three-CMOS camera, and a snapshot hyperspectral imager. The spectral separation of these devices is compared with that of our CWB sensors. In Sec. 4.3, we additionally compare our CWB sensors with the COTS camera devices by using a testbed for the assessment of laser dazzle vulnerability.

4.1.

Transmittance

For the measurement of the spectral transmittance of our CWB sensors, we simply removed the imaging sensors Sx and replaced them by power meters PMx. The experimental setup is illustrated in Fig. 9 using the example of the two-channel CWB sensor. As light sources, we used a multiwavelength laser source Toptica iChrome MLE (comprising the wavelengths 488, 515, 561, and 640 nm) and a supercontinuum light source Koheras SuperK Extreme. In the case of the supercontinuum light source, narrowband radiation (FWHM 3 to 7 nm) was generated using an acousto-optical tunable filter (AOTF). For both coherent light sources, the polarization state of the light (s- or p-polarization) was controlled using a combination of a half-wave plate λ/2 (Thorlabs AHWP10M-600) and a Glan–Thompson polarizer P (Edmund Optics 47046). Since the output of the AOTF is accompanied by a broadband background, we used a set of appropriate band-pass filters BP (Thorlabs FBxxx-10, where xxx denotes the center wavelength and 10 the bandwidth in nanometers) for spectral cleaning. For reference measurements, a small fraction of the light was directed to a power meter PMref (Ophir PD300-1W) using a beam splitter BS. The transmitted part of the light was directed to the CWB sensor, which was equipped with an external camera lens (Kowa LM25NC3, f = 25 mm, f/# = 1.8). The internal imaging sensors S1 and S2 (see Fig. 5) were replaced by power meters PM1 and PM2 (both Ophir PD300R-UV).

Fig. 9

Experimental setup to measure the channel transmittance of the two-channel CWB sensor.


In the course of the measurements, we tuned the laser wavelength in small steps (for the supercontinuum light source, in steps of 10 nm between 470 and 730 nm) and recorded the power meters’ readings. We calibrated the splitting ratio of the beam splitter BS in order to calculate the respective input power Pin from the readings of the reference power meter during the measurements. Thus, the transmittance Tx of channel x could be calculated by dividing the corresponding power meter reading Px by the input power Pin: Tx = Px/Pin.
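In script form, the per-wavelength evaluation is a one-liner once the splitting ratio is calibrated; the sketch below assumes a calibration factor k defined by Pin = k · Pref (the factor name and data layout are our own):

```python
import numpy as np

def channel_transmittance(p_channel, p_ref, k_splitter):
    """Transmittance T_x = P_x / P_in, with P_in = k_splitter * P_ref."""
    p_in = k_splitter * np.asarray(p_ref)
    t = np.asarray(p_channel) / p_in
    od = -np.log10(t)   # optical density, used for the diabatic-scale plots
    return t, od
```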

The results of the measurements are shown in Figs. 10 and 11 for the three-channel CWB sensor and the two-channel CWB sensor, respectively. In both figures, the graph shows the transmittance as a function of wavelength in diabatic scale. The measurement data are shown as “thin diamonds,” where the orientation of the diamonds indicates the state of polarization. Additionally, all graphs contain the theoretical calculations in condensed form as hatched bands, where the upper and lower lines of the band depict the maximum and minimum of the calculated values for the different states of polarization (compare Figs. 3 and 7). Please note that we did not take into account the external camera lens for the graphs of Figs. 3 and 7, though we did for Figs. 10 and 11. The color of the data points or the hatched bands indicates the respective channel.

Fig. 10

Measured spectral transmittance of the three-channel CWB sensor.


Fig. 11

Measured spectral transmittance of the two-channel CWB sensor.


For the three-channel CWB sensor, we conclude:

  • The measured transmittance values are lower than the calculated values within the respective passbands of the channels.

  • For the green channel, the shoulder at wavelengths around 500 nm is not present in the measurement data.

  • In the wavelength range between 470 and 695 nm, at least one channel has a transmittance below 10⁻⁶, corresponding to an OD > 6. For the blue channel, we measured considerably lower transmittance values (corresponding to OD values of 8) for wavelengths outside the passband than expected from the theoretical calculations.

For the two-channel CWB sensor, we conclude:

  • The measured transmittance values correspond quite well to the calculated values within the respective passbands of the channels.

  • The measured out-of-band transmittance values are higher than expected. From our theoretical calculations, we estimated out-of-band values T < 10⁻⁸ (OD > 8) for both channels. For channel 1, we measured satisfying OD values of 7.5. The measured OD values for channel 2 range between 6 and 7, which is still a good result but below the calculated values.

4.2.

Spectral Separation

4.2.1.

Definition

We now introduce the quantity “spectral separation” SSxy of two individual channels x and y at a specific wavelength. We define this quantity as the absolute value of the difference of the OD values ODx and ODy of channels x and y, multiplied by a factor of 10:

Eq. (1)

SS_{xy} = 10 \cdot \left| \mathrm{OD}_x - \mathrm{OD}_y \right| = 10 \cdot \left| \log_{10} \frac{T_x}{T_y} \right|.

By using the factor 10, we will state values of spectral separation using the unit decibel (dB).

The spectral separation tells us how well two channels are spectrally separated at a specific wavelength. Let us assume that one of our CWB sensors is affected by laser radiation and that channel x is dazzled. If, at the laser wavelength in question, the spectral separation SSxy of channels x and y is quite large, it means that the OD value ODy of channel y is very different at this wavelength. In case ODy is larger than ODx, the vulnerability of channel y to laser dazzle is lower compared with channel x. As long as the laser power is not high enough to also dazzle channel y, the CWB sensor can provide image information to the user.

Conversely, a very low value of the spectral separation SSxy means that the OD values of channels x and y are very similar. Thus, laser radiation will have a similar effect on both channels. This does not necessarily mean that the threatening laser would dazzle both channels, since the OD values of both channels could be similar but very large.

Thus, the value of the spectral separation alone is not a sufficient measure of the CWB sensor’s resilience against laser dazzle. However, it gives a hint at which wavelengths the CWB sensor is hardened against laser dazzle and at which wavelengths problems may occur. The reason why we introduce this quantity is that it can also be easily measured for other devices, e.g., color cameras using an imaging sensor with Bayer filter mosaic, where the transmittance/OD may not be measurable in a simple way.

For example, in the case of a sensor with a Bayer filter mosaic, we cannot measure the transmittance of the different filters directly since we do not have access to the space behind the filters. Also, in the case of three-sensor cameras, the transmittance of the channels can only be measured if the camera is disassembled, which involves the danger of damaging the camera or, at least, misaligning the three imaging sensors. For such cameras, however, it is possible to measure the spectral separation using the signal of the imaging sensor(s) when irradiated with light, as described in the following.

The digital signal DS of a pixel of an imaging sensor can be calculated by33

Eq. (2)

DS = DS_{\mathrm{dark}} + K \eta \frac{A \lambda}{h c} E \, t_{\mathrm{exp}},
where DS_dark is the signal for zero irradiation, K is the overall system gain, η is the quantum efficiency, A is the size of a pixel, λ is the wavelength of the light, h is the Planck constant, c is the (vacuum) speed of light, E is the irradiance at the pixel, and t_exp is the exposure time.

Now, we assume that the dark signal DS_dark of an imaging sensor is negligible or that the acquired images have been corrected accordingly (dark-frame correction). Furthermore, the irradiance in channel x of a CWB sensor is proportional to its channel transmittance: E_x = T_x E_in. The digital signal of channel x can then be written as

Eq. (3)

DS_x = K \eta \frac{A \lambda}{h c} T_x E_{\mathrm{in}} \, t_{\mathrm{exp}}.

Since the different imaging sensors of a multisensor camera (three-sensor camera or CWB sensor) are usually of the same type (i.e., same quantum efficiency, etc.), the ratio of transmittances for channel x and y is given by

Eq. (4)

\frac{T_x}{T_y} = \frac{DS_x / t_{\mathrm{exp},x}}{DS_y / t_{\mathrm{exp},y}}.

Equation (4) can be used together with Eq. (1) to calculate the spectral separation of two channels of a camera sensor when values of transmittance/OD cannot be measured directly. In Eq. (4), we have allowed the exposure times t_exp,x and t_exp,y of the channels’ imaging sensors to differ in order to be able to measure higher ratios of Tx/Ty.
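A direct transcription of Eqs. (1) and (4), including the enumeration of all mutual channel pairs used in Sec. 4.2.2, might look like the following sketch (the data layout and example numbers are hypothetical):

```python
from itertools import combinations
import math

def spectral_separation(ds, t_exp):
    """Mutual spectral separation (in dB) of all channel pairs at one wavelength.

    ds:    dark-corrected mean digital signals, one value per channel
    t_exp: exposure times used for the individual channels
    Returns a dict {(x, y): SS_xy} according to Eqs. (1) and (4).
    """
    rate = [s / t for s, t in zip(ds, t_exp)]   # DS_x / t_exp,x, proportional to T_x
    return {(x, y): 10.0 * abs(math.log10(rate[x] / rate[y]))
            for x, y in combinations(range(len(rate)), 2)}

# Example: three channels at one wavelength (made-up numbers).
ss = spectral_separation(ds=[200.0, 35.0, 180.0], t_exp=[30e-6, 500e-6, 30e-6])
```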

4.2.2.

Measurements

We measured the spectral separation of three COTS camera devices (see Fig. 12) in order to compare it with our CWB sensors. The first one was a standard color CMOS camera using an imaging sensor with Bayer filter mosaic (Allied Vision Mako G-158C), the second one was a three-CMOS camera (JAI AP-1600T-PGE), and the third one was a hyperspectral imager using a 4×4 filter mosaic (Photonfocus MV1-D2048x1088-HS03-96-G2).

Fig. 12

COTS camera devices: (a) Color CMOS camera Allied Vision Mako G-158C, (b) three-CMOS camera JAI AP-1600T-PGE, and (c) hyperspectral imager Photonfocus MV1-D2048x1088-HS03-96-G2.


Some technical data regarding the imaging sensors of these cameras, as well as of our CWB sensors, are listed in Table 3. The standard CMOS camera and the three-CMOS camera were selected such that they use the same imaging sensor.

Table 3

Parameters of the imaging sensors utilized in the cameras under test.

Camera | Imaging sensor | Pixels (hor. × vert.) | Pixel size (μm) | Color/monochrome
Allied Vision Mako G-158C | Sony IMX273 | 1456 × 1088 | 3.45 | Bayer filter mosaic
JAI AP-1600T-PGE | Sony IMX273 (3×) | 1456 × 1088 | 3.45 | Monochrome
Photonfocus MV1-D2048x1088-HS03-96-G2 | IMEC SNm4x4 VIS (based on CMOSIS CMV2000) | 2048 × 1088 | 5.5 | 4×4 filter mosaic (16 passbands)
Fraunhofer IOSB three-channel CWB sensor | Aptina MT9V024 | 752 × 480 | 6.0 | Monochrome
Fraunhofer IOSB two-channel CWB sensor | Aptina MT9V024 | 752 × 480 | 6.0 | Bayer filter mosaic

Our experimental setup to measure the spectral separation is shown in Fig. 13. Since we expected rather low values of spectral separation (below about 50 dB) for the three COTS camera devices, we used an incoherent, broadband halogen light source (Thorlabs SLS201/M). A set of 27 band-pass filters BP (Thorlabs FBxxx-10) was used to generate narrowband light with (nominal) center wavelengths ranging from 470 to 730 nm in steps of 10 nm. The (nominal) FWHM of the band-pass filters was 10 nm. Behind the band-pass filter, the light was coupled into a bifurcated fiber bundle (Thorlabs BFY400LS02, one input connector and two output connectors) using a fiber coupler FC1. The first fiber output was connected to a power meter for monitoring, and the second fiber output was connected to a fiber collimator FC2 (Thorlabs RC08SMA-P01). The collimated output beam was sent through an aperture (6 mm diameter) and subsequently directed to the camera sensor. For the measurements, all camera sensors were equipped with the same camera lens (Kowa LM25NC3, f = 25 mm).

Fig. 13

Experimental setup to measure the spectral separation of the spectral channels of a camera sensor.


The procedure of the spectral separation measurement was as follows: We first chose a wavelength by inserting the respective band-pass filter. Then, the camera’s exposure time texp was chosen to produce a strong signal but no saturation. For later analysis (to estimate the digital signal DS), an image was acquired using this exposure time. In the course of the experiments, the exposure time was increased stepwise in order to generate a signal also in the other color channels. This procedure was repeated until all color channels delivered a signal at all individual measurement wavelengths.

As an example, Fig. 14 shows three camera images acquired with the three-CMOS camera at a wavelength of 510 nm. Figures 14(a)–14(c) correspond to exposure times of 30, 500, and 40,000 μs, respectively. Since the camera lens was focused at infinity, the end facet of the fiber was imaged onto the imaging sensor. Thus, the camera images showed a centrally located disk with quite homogeneous pixel values (see the line profiles in Fig. 14).

Fig. 14

Camera images acquired with the three-CMOS camera JAI AP-1600T-PGE using the experimental setup shown in Fig. 13 (λ=510  nm) for different exposure times of (a) 30  μs, (b) 500  μs, and (c) 40,000  μs. Additionally, the pixel values along the horizontal lines marked in the images by red dashed lines are shown in the graphs below.


For the analysis, we extracted the pixel values of a rectangular part located within the central disk (see the orange squares in the images of Fig. 14) and calculated the mean values for each of the n color channels for that area. Using the mean values DSx and the corresponding exposure times texp,x noted during the measurement (x ranging from 1 to n), we calculated the mutual spectral separation of the channels using Eqs. (1) and (4).

For a camera system with n spectral channels, the number of mutual ratios of channel transmittance is n(n − 1)/2. This results in one spectral separation value for the two-channel CWB sensor and in three values (red–green, red–blue, and green–blue) for three-channel RGB systems like the standard color camera, the three-CMOS camera, or our three-channel CWB sensor. The hyperspectral imager exhibits 16 passbands, which yields 120 mutual values of spectral separation.

4.2.3.

Results

We measured the spectral separation of the three COTS devices using the aforementioned experimental setup and procedure. For our CWB sensors, we used the data of the transmittance measurements (see Sec. 4.1) to calculate the spectral separation. For comparison, and in contrast to the transmittance measurements described in Sec. 4.1, we also performed a measurement on our two-channel CWB sensor in the same way as for the COTS devices, with the difference that we used an unpolarized, tunable supercontinuum light source (NKT SuperK Compact + NKT SuperK Varia) instead of the incoherent light source described in the experimental setup of Fig. 13. The halogen light source could not be applied here because the spectral purity of the band-pass filters is not good enough for cases with very high spectral separation larger than 50 dB. In Fig. 15, the spectral separation as a function of wavelength is plotted for all devices tested.

Fig. 15

Separation of the spectral channels of different cameras as a function of wavelength: (a) Color CMOS camera Allied Vision Mako G-158C using a single imaging sensor with Bayer filter mosaic. (b) Three-CMOS camera JAI AP-1600T-PGE, (c) hyperspectral imager Photonfocus MV1-D2048x1088-HS03-96-G2 using a 4×4 filter mosaic (16 passbands), (d) three-channel CWB sensor, and (e) two-channel CWB sensor.


The graphs of Figs. 15(a)–15(c) show the results for the COTS devices; the graphs of Figs. 15(d) and 15(e) show the results for our three-channel and two-channel CWB sensors. In general, the plots include the mutual spectral separations of the individual channels. Regarding the results presented in Fig. 15(c), however, we decided to plot only the maximum values of the 120 curves of spectral separation to avoid overfilling the graph. The maximum values are also shown in Figs. 15(a) and 15(b) for the single-sensor camera and the three-CMOS camera, respectively. For the plots regarding our CWB sensors, we also included the calculated values as hatched bands, just as for the plots of transmittance in Figs. 10 and 11.

The interpretation of the plots has to be done carefully. In principle, a high value of spectral separation is desirable. Thus, the maximum curve shown in the graphs of Figs. 15(a)–15(c) is the most interesting one regarding the assessment of a sensor’s vulnerability to laser dazzle. If a sensor system has a high (maximum) value of spectral separation at a specific wavelength, it means that at least one of its spectral channels has a high OD value at this wavelength and will (to a certain degree) be spared from laser dazzle. When we look at the graphs of Figs. 15(a)–15(c), we can see that the spectral separation ranges from 30 to 50 dB for the three-CMOS camera and is below 20 dB for the standard color camera and the hyperspectral imager. This means that the three-CMOS camera will be less vulnerable to information loss due to laser dazzle than the other two cameras. Values around 50 dB are quite good; we did not expect such high values for the three-CMOS camera before the measurements.

Looking at the spectral separation of our two CWB sensors, we can see that the value of spectral separation is quite high (60 to 80 dB) within the channels’ passbands. At the crossover points of the channels’ transmittance curves (see Figs. 10 and 11), the spectral separation drops to low values. However, this does not necessarily mean that the CWB sensors are vulnerable to laser dazzle at these wavelengths. For example, the spectral separation of the three-channel CWB sensor drops nearly to zero at a wavelength of around 500 nm. Looking at the transmittance plots of Fig. 10, we can see that the transmittance values of all spectral channels are very similar at these wavelengths (OD of 7 to 8), resulting in a low spectral separation. However, due to the high OD values and very low transmittance, the sensor is not vulnerable to laser dazzle at wavelengths around 500 nm. Altogether, we can conclude that our CWB sensors are much less vulnerable to laser dazzle than COTS camera devices.

4.3.

Assessment of Information Loss in the Case of Laser Dazzle

In recent work, we examined the quantitative assessment of laser protection measures for imaging sensors.34,35 The main objective of this work is to assess how protection measures influence the information content of sensor images. This applies both to the general comparison of sensor performance with and without laser protection and to the specific case of laser dazzle. A laser protection measure cannot be assessed solely by stating a value for the attenuation of laser light. For example, an opaque metal shield would be a perfect laser protection measure regarding the attenuation of laser radiation. However, an operator or sensor would not be able to see through the shield and would, therefore, lose the image information. Thus, a more comprehensive analysis of protection performance should include the amount of information loss introduced by a protection measure.

In our earlier work, we examined different possibilities, based on “triangle orientation discrimination” and the calculation of the “structural similarity” (SSIM) index. For this work, we concentrated on the SSIM index method. We will describe the method only briefly and refer the reader to the aforementioned references.

The SSIM index is a metric for measuring the loss of information of an image by comparing it with an original reference image.36 The metric is based on the assumption that the human visual system is designed to recognize structures in images, and it estimates to what extent two images exhibit the same structures. Usually, SSIM is used to assess the quality of image compression algorithms. In our case, we use SSIM to compare images taken with a sensor in dazzling and nondazzling situations. Thus, we can estimate how much image information is lost due to laser dazzle. Alternatively, in the case of protected sensors, we gain a measure to estimate how much image information can be retrieved when a particular protection measure is applied, as compared with the unprotected sensor.
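For reference, the SSIM computation itself is available in standard libraries; a minimal sketch using scikit-image (version 0.19 or later, assuming 8-bit images; the authors' exact parameter choices are not stated) is:

```python
from skimage.metrics import structural_similarity

def ssim_index(reference, test, color=False):
    """SSIM of a (possibly dazzled) test image against the undazzled reference."""
    return structural_similarity(
        reference, test,
        data_range=255,                       # 8-bit images
        channel_axis=-1 if color else None,   # per-plane SSIM, averaged, for RGB
    )
```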

4.3.1.

Measurements

For this work, we performed laser dazzle experiments with our CWB sensors as well as with the COTS devices presented in Sec. 4.2. Additionally, we performed the laser dazzle experiments with a standard monochrome CMOS camera, Allied Vision Mako G-158B, for purposes of comparison. This monochrome camera is identical to the color camera Allied Vision Mako G-158C introduced in Sec. 4.2, except that the imaging sensor is not equipped with the Bayer filter mosaic.

A sketch of the experimental setup is shown in Fig. 16(a). The sensor under test observed a white screen at a distance of 5.14 m. A test pattern was projected onto the screen using a video projector (Optoma UHD60). As test pattern, we decided to use a highly structured, fractal test pattern according to the work of Landeau37 [see Fig. 16(b)]. Such a test pattern consists of a large number of black and white squares arranged in a particular manner; details can be found in the publication of Landeau. For our measurements, we prepared the fractal test pattern in such a way that a single square of the pattern had an angular size of 0.1 deg as seen by the sensor. Sensor dazzling was performed using a multiwavelength laser source iChrome MLE from Toptica offering four different laser wavelengths (488, 515, 561, and 640 nm). A hole in the screen’s center allowed laser illumination of the sensor along its optical axis. At the position of the sensor, the laser beam diameter (1/e²) was 16 cm. The maximum irradiance at the position of the sensor was 638, 279, 606, and 401 μW/cm² at the laser wavelengths of 488, 515, 561, and 640 nm, respectively.

Fig. 16

(a) Experimental setup for the assessment of laser protection measures. (b) Image section of the fractal test pattern projected onto the screen for the measurements.


For the measurements, we set the sensor’s exposure time to a value such that the white areas of the test pattern deliver a digital signal of half the sensor’s dynamic range, e.g., a digital signal of 127 for an 8-bit sensor with a maximum pixel value of 255. Accomplishing this condition was not always possible since the video projector is based on digital light processing technology. The projector uses a single digital micromirror device and a rotating color wheel to produce color images. When the sensor’s exposure time is not tuned to the rotary frequency of the color wheel, color changes can occur in successive sensor images. For our video projector, the rotary frequency of the color wheel seems to be 120 Hz, because the sensor’s exposure time had to be set to a multiple of 8.333 ms = 1/(120 Hz) in order to acquire images without color distortion. Therefore, it was not possible to equalize the exposure conditions; the mean pixel value of the white areas ranged from 105 to 165 for the different camera devices.
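The usable exposure times are thus quantized to multiples of the color-wheel period; a small helper illustrating this constraint (assuming the 120-Hz figure above) could look like:

```python
def snap_exposure(t_desired_ms, wheel_freq_hz=120.0):
    """Round an exposure time to the nearest nonzero multiple of the color-wheel period."""
    period_ms = 1000.0 / wheel_freq_hz            # 8.333 ms for 120 Hz
    n = max(1, round(t_desired_ms / period_ms))   # use at least one full period
    return n * period_ms
```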

The dazzle experiments were set up in such a way that each sensor was illuminated with a series of different levels of irradiance while images were taken each time. This procedure was carried out for all laser wavelengths (488, 515, 561, and 640 nm). Subsequently, we analyzed the images by calculating the SSIM index for all of them. Figure 17 presents some examples of images taken with the different camera sensors at a laser wavelength of 640 nm. The camera images shown were taken in two extreme situations: at a rather low irradiance of 10 μW/cm² (measured at the entrance of the camera lens) and at the maximum irradiance of 400 μW/cm². Since the sensors have slightly different fields of view (FOVs), we additionally indicated matching image areas by green rectangles drawn into the camera images. The indicated image areas were used for further analysis, as described hereafter.

Fig. 17

Sensor images taken according to the experimental setup of Fig. 16: (a) Color CMOS camera Allied Vision Mako G 158C, (b) monochrome CMOS camera Allied Vision Mako G-158B, (c) three-CMOS camera JAI AP-1600T-PGE, (d) hyperspectral imager Photonfocus MV1-D2048x1088-HS03-96-G2, (e) three-channel CWB sensor, and (f) two-channel CWB sensor.


4.3.2.

Data analysis

To compare the SSIM results of our different camera sensors, we have to take into account that the sensors have different FOVs and different image resolutions. The calculated value of SSIM will depend on these parameters since changes in FOV and resolution cause differences in the image information. Therefore, in order to be able to compare the SSIM results of the different sensors, we had to match the FOV and the resolution of the image data before the SSIM calculations were made.

The images of all sensors had to be cropped in a first step to match the favored FOV. The matching image areas are indicated in the example images of Fig. 17 by green rectangles. In a second step, the image data were downsampled to equalize the final image resolution. In this step, the image resolution was defined by the cropped image of the three-channel sensor, which had the lowest resolution. The final size of the processed images was 581 × 365 pixels. Using the processed image data, the SSIM calculations were performed. For our CWB sensors, we analyzed the fused image, which is monochrome in the case of the three-channel CWB sensor and colored in the case of the two-channel CWB sensor (although the test chart is monochrome).
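The preprocessing can be condensed into a short pipeline; the crop coordinates below stand in for the green rectangles of Fig. 17 (and are therefore hypothetical), while the target size follows the text:

```python
import numpy as np
from skimage.transform import resize

TARGET = (365, 581)   # rows, cols: resolution of the cropped three-channel image

def match_fov_and_resolution(image, crop):
    """Crop an image to the common field of view, then downsample to TARGET.

    crop: (row0, row1, col0, col1) of the matching image area, taken in
    practice from the green rectangles of Fig. 17.
    """
    r0, r1, c0, c1 = crop
    roi = image[r0:r1, c0:c1]
    out = resize(roi, TARGET, anti_aliasing=True)  # float output in [0, 1]
    return (out * 255).astype(np.uint8)
```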

4.3.3.

Results

As an example, Fig. 18 presents detailed results obtained for the color CMOS camera Allied Vision Mako G-158C. It shows four plots of the SSIM index as a function of irradiance, corresponding to the four different laser wavelengths. For the color channels, the SSIM index was calculated separately (curves labeled R, G, and B in the plots). Additionally, the mean SSIM value over all color channels was calculated [curve labeled Mean(R,G,B) in the plots]. Furthermore, the color image was converted to a monochrome image by taking the mean of the color channels before calculating the SSIM index (curve labeled RGB→Mono in the plots). The latter two curves are quite similar but not equal. In the plots, a colored background highlights the area between these two curves.

Fig. 18

Results of the SSIM analysis of the color CMOS camera Allied Vision Mako G-158C. (a) 488 nm, (b) 515 nm, (c) 561 nm, and (d) 640 nm.


Looking at the plots of Fig. 18, we recognize the expected behavior: The SSIM curve for the color channel corresponding to the laser wavelength starts to decrease earlier with increasing irradiance than the curves of the other channels. The out-of-band channels are less affected.

Figure 19 shows an overview of the results of the SSIM analysis for all sensors under test. Since the amount of information given by plots like those of Fig. 18 would be overwhelming, we kept only the maximum SSIM curve for each wavelength in the plots of Fig. 19. We can see that both of our CWB sensors perform best in keeping the image information under laser dazzle, followed by the three-CMOS camera. The results for the three-CMOS camera are quite good, especially for the red laser wavelength of 640 nm, since the spectral separation of the three-CMOS camera peaks at this wavelength (see Fig. 15). The hyperspectral imager performs better than the single-sensor color and monochrome cameras, except for the laser wavelength of 640 nm.

Fig. 19

Results of the SSIM analysis for (a) the color CMOS camera Allied Vision Mako G-158C, (b) the monochrome CMOS camera Allied Vision Mako G-158B, (c) the three-CMOS camera JAI AP-1600T-PGE, (d) the hyperspectral imager Photonfocus MV1-D2048x1088-HS03-96-G2, (e) the three-channel CWB sensor, and (f) the two-channel CWB sensor.


5.

Field Trial

Both of our CWB sensors were tested during a field trial of the NATO research task group SET-249 at the Bundeswehr Technical Center for Protective and Special Technologies (WTD 52) in Oberjettenberg. We will not go into the details of the measurement campaign here but refer to a dedicated publication.38 Briefly, the sensors were placed on a cable car and were illuminated with the light of laser sources located at a ground station. The distance between the cable car and the ground station was around 660 m. We used two coherent light sources to dazzle the CWB sensors: a multiwavelength laser source Toptica iChrome MLE and a supercontinuum light source Koheras SuperK Extreme (the same as used for the measurement of the spectral transmittance of the CWB sensors, see Sec. 4.1). At the field trial, the supercontinuum light source was used without the AOTF and thus delivered broadband light. For reasons of laser safety, the emission spectrum was limited to the visible spectral range below 700 nm.

Here, we present only a selection of images taken with the CWB sensors at the field trial, namely those acquired when the sensors were irradiated by two laser wavelengths simultaneously or by the supercontinuum light source. Such multiwavelength or broadband illumination represents the worst-case scenario for a CWB sensor: to dazzle an n-channel CWB sensor, n different laser wavelengths tuned to the passbands of the spectral channels are necessary, or a broadband light source covering all passbands. Thus, the results of the field trial presented here reveal the limits of the CWB approach.

Figures 20 and 21 show images taken during the field trial with the three-channel CWB sensor and the two-channel CWB sensor, respectively. The column labels indicate the wavelengths used for illumination and the maximum possible irradiance at the position of the sensor. Please note that these maximum irradiance values stated in the figures were calculated taking into account the laser output power, the beam divergence, and the distance between laser source and sensor. No real-time monitoring of the irradiance took place during the measurements. This means that the irradiance at the time of image acquisition may be well below the stated maximum value due to atmospheric turbulence, movement of the cable car and/or laser source, and pointing errors.

Fig. 20

Images taken with the three-channel CWB sensor during a field trial.


Fig. 21

Images taken with the two-channel CWB sensor during a field trial.


In Fig. 20, the location of the laser source can be seen as a small bright dot in the images without laser dazzle. In the top left image of Fig. 20, this dot is marked by a red arrow.

For the three-channel CWB sensor (see Fig. 20), there is very little laser dazzle in the case of illumination with two different laser wavelengths, which is not unexpected. Depending on the chosen wavelengths, only one channel was affected, since the laser wavelength 488 nm is out-of-band for the three-channel CWB sensor. The structural image information is kept in the fused image. The same holds for the supercontinuum light source, because channel 1 (transmitting blue light) is barely affected by the broadband light. The reason for this can be found in the spectral power distribution of the supercontinuum light source. Its emission spectrum is plotted in Fig. 22 for power levels of 10%, 30%, and 60% of the maximum output power. We can see from the plot that the spectral flux is negligible for wavelengths below 470 nm.

Fig. 22

Emission spectrum of the supercontinuum light source Koheras SuperK Extreme for different power levels of 10%, 30%, and 60% of the maximum output power.


Regarding the two-channel CWB sensor (see Fig. 21), we can see that multiple wavelengths can be a problem when they are directed to both channels, for example, the combinations 488 nm/515 nm and 515 nm/640 nm. The wavelength combination 488 nm/640 nm did not pose a problem since the light of both wavelengths is directed to channel 1 only. Unsurprisingly, the supercontinuum light source can jam the two-channel CWB sensor.

Furthermore, we can learn from the image data that the fusion algorithm has some potential for improvement. For example, when we look at the fused images of the two-channel CWB sensor for the two-wavelength combinations, we can see that the sole image of channel 2 seems to be a better choice for the observer than the fused image.

6.

Conclusions

We presented two different sensor concepts, both based on the principle of CWBs, as a means of protection against laser threats, mainly against dazzling but also against damage to a certain degree. Unlike classical protection measures, which work only against a few predetermined wavelengths, the CWB concept offers the advantage of being less susceptible to agile laser threats, i.e., it is largely independent of the threatening wavelength.

Using appropriate optical elements (band-pass filters and dichroic beam splitters), the spectral out-of-band channel transmittance of our CWB sensors corresponds to an OD of 8, providing low vulnerability of the sensor output to monochromatic laser radiation in the respective channel. The spectral separation of the individual channels of our CWB sensors is higher than that of standard COTS devices like color cameras with Bayer filter mosaic, three-sensor cameras, or hyperspectral imagers. Relative to the other COTS cameras, the examined three-CMOS camera showed astonishingly good results regarding laser dazzle protection.

In contrast to the three-channel CWB sensor, which delivers three independent monochrome images, the two-channel CWB sensor delivers two color images. If the laser dazzle is caused by a single wavelength, only one of the sensor’s channels will be jammed. In this case, the two-channel CWB sensor still delivers undisturbed color images, whereas the three-channel CWB sensor delivers only color-distorted information. However, since the image quality is only reduced by the failure of one channel, the output of the system will still allow an observer to fulfill the task. Only a multiple-wavelength dazzler or broadband light can jam such a sensor. Nevertheless, concerning handheld high-power laser pointers, the concept offers a low-cost and simple approach for sensor dazzle protection/hardening since it relies on readily available standard optical elements. Even when several laser pointers are used at the same time, the probability of hitting the CWB sensor with multiple laser wavelengths simultaneously at the same image area within the sensor’s FOV is rather low. Thus, a CWB sensor may be capable of delivering an output image that keeps structural image information even in such a scenario.

Although all tests of our CWB sensors were performed with continuous-wave laser radiation, the approach also works for pulsed laser sources. Since pulsed lasers pose a higher threat regarding laser damage of the imaging sensors, we designed our CWB sensors with an intermediate focal plane before the channels are split, which allows the use of an additional protection measure, such as a nonlinear optical power limiter, to counter damage.

Our future work will concentrate on improving the image fusion algorithm and on minor changes to the sensor layout, for example, the use of imaging sensors offering a higher resolution than the current ones.

References

1. B. Eberle, D. Forster and G. Ritt, "Visible laser dazzle," Proc. SPIE 9989, 99890J (2016). https://doi.org/10.1117/12.2241041

2. C. A. Williamson, "Simple computer visualization of laser eye dazzle," J. Laser Appl. 28, 012003 (2016). https://doi.org/10.2351/1.4932620

3. C. A. Williamson et al., "Measuring the contribution of atmospheric scatter to laser eye dazzle," Appl. Opt. 54, 7567–7574 (2015). https://doi.org/10.1364/AO.54.007567

4. J. M. P. Coelho, J. Freitas and C. A. Williamson, "Optical eye simulator for laser dazzle events," Appl. Opt. 55, 2240–2251 (2016). https://doi.org/10.1364/AO.55.002240

5. C. A. Williamson et al., "Impact of windscreen scatter on laser eye dazzle," Opt. Express 26, 27033–27057 (2018). https://doi.org/10.1364/OE.26.027033

6. C. A. Williamson and L. N. McLin, "Nominal ocular dazzle distance (NODD)," Appl. Opt. 54, 1564–1572 (2015). https://doi.org/10.1364/AO.54.001564

7. C. A. Williamson et al., "Wavelength and ambient luminance dependence of laser eye dazzle," Appl. Opt. 56, 8135–8147 (2017). https://doi.org/10.1364/AO.56.008135

8. C. A. Williamson and L. N. McLin, "Determination of a laser eye dazzle safety framework," J. Laser Appl. 30, 032010 (2018). https://doi.org/10.2351/1.5029384

9. O. Steinvall et al., "Laser dazzling impacts on car driver performance," Proc. SPIE 8898, 88980H (2013). https://doi.org/10.1117/12.2028505

10. M. Vandewal et al., "Evaluation of laser dazzling induced task performance degradation," Proc. SPIE 10797, 107970E (2018). https://doi.org/10.1117/12.2325245

11. R. (H.) M. A. Schleijpen et al., "Laser dazzling of focal plane array cameras," Proc. SPIE 6543, 65431B (2007). https://doi.org/10.1117/12.718602

12. R. (H.) M. A. Schleijpen et al., "Laser dazzling of focal plane array cameras," Proc. SPIE 6738, 67380O (2007). https://doi.org/10.1117/12.747009

13. K. W. Benoist and R. (H.) M. A. Schleijpen, "Modeling of the over-exposed pixel area of CCD cameras caused by laser dazzling," Proc. SPIE 9251, 92510H (2014). https://doi.org/10.1117/12.2066305

14. A. Durécu et al., "Assessment of laser-dazzling effects on TV-cameras by means of pattern recognition algorithms," Proc. SPIE 6738, 67380J (2007). https://doi.org/10.1117/12.737264

15. A. Durécu, O. Vasseur and P. Bourdon, "Quantitative assessment of laser-dazzling effects on a CCD-camera through pattern-recognition algorithms performance measurements," Proc. SPIE 7483, 74830N (2009). https://doi.org/10.1117/12.833975

16. C. N. Santos et al., "Visible and near-infrared laser dazzling of CCD and CMOS cameras," Proc. SPIE 10797, 107970S (2018). https://doi.org/10.1117/12.2325631

17. T. Özbilgin and A. Yeniay, "Laser dazzling analysis of camera sensors," Proc. SPIE 10797, 107970Q (2018). https://doi.org/10.1117/12.2325393

18. G. D. Lewis et al., "In-band low-power laser dazzle and pixel damage of an uncooled LWIR thermal imager," Proc. SPIE 10797, 107970F (2018). https://doi.org/10.1117/12.2325261

19. C. Burgess et al., "Modelled and experimental laser-induced sensor damage thresholds to continuous wave infrared sources," Proc. SPIE 10797, 107970T (2018). https://doi.org/10.1117/12.2326401

20. B. Schwarz et al., "Laser-induced damage threshold of camera sensors and micro-optoelectromechanical systems," Opt. Eng. 56, 034108 (2017). https://doi.org/10.1117/1.OE.56.3.034108

21. E. I. L. Jull and H. F. Gleeson, "Tunable and switchable liquid crystal laser protection system," Appl. Opt. 56, 8061–8066 (2017). https://doi.org/10.1364/AO.56.008061

22. F. Quercioli, "Beyond laser safety glasses: augmented reality in optics laboratories," Appl. Opt. 56, 1148–1150 (2017). https://doi.org/10.1364/AO.56.001148

23. F. Quercioli, "Augmented reality in laser laboratories," Opt. Laser Technol. 101, 25–29 (2018). https://doi.org/10.1016/j.optlastec.2017.10.033

24. J. H. Wirth, A. T. Watnik and G. A. Swartzlander, "Experimental observations of a laser suppression imaging system using pupil-plane phase elements," Appl. Opt. 56, 9205–9211 (2017). https://doi.org/10.1364/AO.56.009205

25. G. J. Ruane, A. T. Watnik and G. A. Swartzlander, "Reducing the risk of laser damage in a focal plane array using linear pupil-plane phase elements," Appl. Opt. 54, 210–218 (2015). https://doi.org/10.1364/AO.54.000210

26. G. Ritt and B. Eberle, "Automatic laser glare suppression in electro-optical sensors," Sensors 15, 792–802 (2015). https://doi.org/10.3390/s150100792

27. G. Ritt and B. Eberle, "Automatic suppression of intense monochromatic light in electro-optical sensors," Sensors 12, 14113–14128 (2012). https://doi.org/10.3390/s121014113

28. G. Ritt and B. Eberle, "Electro-optical sensor with spatial and spectral filtering capability," Appl. Opt. 50, 3847–3853 (2011). https://doi.org/10.1364/AO.50.003847

29. S. Svensson et al., "Countering laser pointer threats to road safety," Proc. SPIE 6402, 640207 (2006). https://doi.org/10.1117/12.689057

30. G. Ritt, B. Schwarz and B. Eberle, "Preventing image information loss of imaging sensors in case of laser dazzle," Proc. SPIE 10797, 107970R (2018). https://doi.org/10.1117/12.2325307

31. G. Ritt, B. Schwarz and B. Eberle, "Preventing image information loss of imaging sensors in case of laser dazzle," Opt. Eng. 58, 013109 (2019). https://doi.org/10.1117/1.OE.58.1.013109

32. G. Ritt and B. Eberle, "Use of complementary wavelength bands for laser dazzle protection," Proc. SPIE 11161, 1116109 (2019). https://doi.org/10.1117/12.2533080

33. EMVA Standard 1288, Standard for Characterization of Image Sensors and Cameras, Release 3.1, Barcelona, Spain (2010).

34. G. Ritt and B. Eberle, "Evaluation of protection measures against laser dazzling for imaging sensors," Opt. Eng. 56, 033108 (2017). https://doi.org/10.1117/1.OE.56.3.033108

35. G. Ritt et al., "Protection performance evaluation regarding imaging sensors hardened against laser dazzling," Opt. Eng. 54, 053106 (2015). https://doi.org/10.1117/1.OE.54.5.053106

36. Z. Wang et al., "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13, 600–612 (2004). https://doi.org/10.1109/TIP.2003.819861

37. S. Landeau, "Evaluation of super-resolution imager with binary fractal test target," Proc. SPIE 9249, 924909 (2014). https://doi.org/10.1117/12.2067499

38. B. Eberle et al., "NATO SET-249 joint measurement campaign on laser dazzle effects in airborne scenarios," Proc. SPIE 11161, 111610C (2019). https://doi.org/10.1117/12.2533744

Biography

Gunnar Ritt is a research associate at Fraunhofer IOSB, Ettlingen, Germany. He received his diploma and PhD degrees in physics from the University of Tübingen, Germany, in 1999 and 2007, respectively. His main research focus is on laser protection.

Bernd Eberle is a senior scientist at Fraunhofer IOSB in Ettlingen, Germany, where he is the head of the optical countermeasure and laser protection group. He received his diploma degree in physics from the University of Konstanz in 1983 and his PhD in physics from the same university in 1987. His research activities comprise laser technology, laser spectroscopy, nonlinear optics, femtosecond optics, and optical countermeasures, including protection against laser radiation and imaging laser sensors.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Gunnar Ritt and Bernd Eberle "Use of complementary wavelength bands for laser dazzle protection," Optical Engineering 59(1), 015106 (30 January 2020). https://doi.org/10.1117/1.OE.59.1.015106
Received: 22 November 2019; Accepted: 14 January 2020; Published: 30 January 2020