Image shift due to atmospheric refraction: prediction by numerical weather modeling and machine learning
Abstract

We develop and study two approaches for predicting optical refraction effects in the lower atmosphere. Refraction can cause apparent displacement or distortion of targets when viewed by imaging systems or produce steering when propagating laser beams. Low-cost, time-lapse camera systems were deployed at two locations in New Mexico to measure image displacements of mountain ridge targets due to atmospheric refraction as a function of time. Measurements for selected days were compared with image displacement predictions provided by (1) a ray-tracing evaluation of numerical weather prediction data and (2) a machine learning algorithm with measured meteorological values as inputs. Both modeling approaches are described, and their target displacement predictions were found to be consistent with the field imagery in overall amplitude and phase. However, short-time variations in the experimental results were not captured by the predictions; sampling limitations and uncaptured localized events were contributing factors.

1. Introduction

The Earth’s atmosphere includes several phenomena that affect the propagation of light. While scattering and absorption by clouds, fogs, and aerosols primarily affect the intensity of the radiation received, the atmosphere can also affect the spatial resolution properties and the propagation trajectory of light. For example, atmospheric turbulence causes image shimmering and blurring and is stochastic in nature with fluctuations over short timescales (e.g., milliseconds). Another phenomenon is atmospheric refraction, where refractive index gradients can steer or bend light rays. The index gradients are associated with changes in air density, which for optical wavelengths is primarily a function of air temperature gradients. Atmospheric refraction tends to cause more deterministic, larger-scale effects than turbulence, and the effects can persist from minutes to hours.1–4 The interest here is refraction in the lower atmosphere, which can cause apparent displacement or distortion of objects when viewed by imaging systems or produce steering when propagating laser beams.

For several years, we have been developing a low-cost, mobile camera system to study atmospheric refraction at New Mexico State University. One system was recently deployed at White Sands Missile Range (WSMR), New Mexico (NM), and a second system was set up at the Jornada Experimental Range (JER), near Las Cruces, NM. Both systems collect time-lapse images of distant natural targets, such as mountain ridges. A time-lapse system was previously used in Las Cruces, New Mexico, with a building as a target to study diurnal image displacement due to refraction.5 A similar system in Dayton, Ohio, was used by Basu et al.4 to investigate the temporal variations of the refractive index gradient. Time-lapse imagery has also been used to investigate the apparent stretch and compression of objects due to atmospheric refraction lensing effects6 and the approach has also been applied to the estimation of turbulence strength.7

The prediction of atmospheric refraction effects can be advantageous for many terrestrial optical applications where prior knowledge of the light’s trajectory can improve the speed and accuracy of pointing and tracking functions. The goal of this paper is to develop and evaluate two different methods, numerical weather prediction (NWP) and machine learning (ML), for predicting image displacement due to atmospheric refraction. NWP is an attractive approach for our application as it is deeply rooted in physics. However, it is computationally expensive, and the results are sensitive to the initial conditions and the terrain representation. An alternative, more empirical tactic is to apply an ML algorithm to build a predictive model based on local meteorological data. In this paper, we describe our application of NWP and ML methods to image displacement due to refraction and compare the results with time-lapse camera measurements.

2. Methods

2.1. Time-Lapse Image Collection and Processing

During January and February of 2018, we collected image data with a time-lapse camera located at WSMR that was pointed generally north at a natural desert landscape and a mountain range (Oscura Mountains) on the horizon. Another camera was set up at JER and was pointed west to image a mountain range (Doña Ana Mountains) and a desert valley. This system began collecting images in May 2018 and is still operating. The mountain ridgelines observed were at distances of about 20 km for JER and over 100 km for WSMR.

The battery-powered camera systems are easily transportable and consist of a weatherproof case on a tripod that contains a Nikon D5200 camera operated in a time-lapse mode. A zoom lens is set at its maximum focal length of 400 mm for the WSMR measurements and at 300 mm for the JER observations. The camera is typically programmed to collect images at 5-min intervals with a fixed f-number of 5.6 and automatic shutter speed. Example frames of the mountain targets and valleys for the WSMR and JER experiments are shown in Fig. 1. The rectangles indicate areas in the images that were cropped and used for the refraction analysis in this paper.

Fig. 1 Example time-lapse image frames for (a) WSMR experiment and (b) JER experiment. Rectangles indicate the mountain ridge areas of interest that are used for the analysis in this paper.

Local weather has a significant effect on the vertical temperature gradients that are primarily responsible for the atmospheric refraction effects. The weather variables of interest in our study include temperature, humidity, pressure, and solar radiation. For the WSMR experiment, online meteorological data were downloaded from a weather station near the target mountain. A Davis Vantage Vue weather station next to the camera was used for the JER experiment. These measurements are interpolated in time to align with the time-lapse image frames.

The Kanade–Lucas–Tomasi point-tracking algorithm8,9 is implemented to measure the apparent motion of the mountain ridges in the images.10 The general steps of the data-processing approach are depicted in Fig. 2. An area containing the far-field target in each frame is cropped and the N “best” features in the subframe are determined by a threshold setting. The results are stored in a feature list in descending order of “goodness.” Then, the algorithm tracks these features in consecutive frames. A near-field object reference close to the camera is also selected and point-tracking of this feature is applied in the analysis to remove shifts in the far-field images that are due to camera platform motion. Figure 3(a) shows some selected points associated with the cropped image of the mountain peak in the JER experiment, and Fig. 3(b) shows the positions of these points in the next consecutive frame. The point-tracking algorithm on average selects the same points in each frame. The vertical positions of the multiple points are averaged to give a measurement of the ridge position.

Fig. 2 Image processing block diagram.

Fig. 3 Selected tracking points for the JER mountain peak target: (a) initial frame and (b) the following frame.

After point-tracking, the near-field average pixel coordinates are subtracted from the far-field coordinates frame by frame to obtain the apparent position of the far-field target. Displacement of the target’s apparent position from frame to frame is attributed to changes in atmospheric refraction. The most significant shifts are found to occur in the vertical direction.
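As a concrete illustration of this pipeline, the following Python sketch selects and tracks features with OpenCV and removes platform motion via the near-field reference. The crop windows and frame variables are hypothetical placeholders, not values from our experiments.

```python
import cv2
import numpy as np

def track_vertical_shift(frame_a, frame_b, crop, max_corners=50):
    """Mean vertical displacement (pixels) of 'good' features inside a
    cropped region, tracked from frame_a to frame_b."""
    y0, y1, x0, x1 = crop
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)[y0:y1, x0:x1]
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)[y0:y1, x0:x1]

    # Select the N "best" corner features in the subframe (quality threshold).
    pts_a = cv2.goodFeaturesToTrack(gray_a, maxCorners=max_corners,
                                    qualityLevel=0.01, minDistance=5)
    # Track the features into the consecutive frame (pyramidal Lucas-Kanade).
    pts_b, status, _ = cv2.calcOpticalFlowPyrLK(gray_a, gray_b, pts_a, None)

    ok = status.ravel() == 1                # keep successfully tracked points
    dy = pts_b[ok, 0, 1] - pts_a[ok, 0, 1]  # per-point vertical motion
    return float(np.mean(dy))

# Subtracting the near-field (reference) shift from the far-field shift
# removes camera platform motion, leaving the refraction-induced displacement.
# far_crop and near_crop are hypothetical (row0, row1, col0, col1) windows.
# shift = (track_vertical_shift(f0, f1, far_crop)
#          - track_vertical_shift(f0, f1, near_crop))
```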

2.2. Numerical Weather Prediction and Ray Tracing

NWP is a discipline in which governing equations and parameterizations that describe fluid flow and other physical processes are applied to current (or previous) weather conditions to provide a future forecast. For our purposes, the model results can be used to predict the vertical structure of the refractive index in the atmosphere. However, an extension of the established models is required to provide higher spatial resolution along our paths of interest.11,12 In this section, we describe our approach for using refractive index gradient data generated by NWP and the application of ray tracing to determine the corresponding image shifts. The results of the approach are compared with time-lapse measurements in Sec. 3. The numerical weather model [the Weather Research and Forecasting (WRF) model] uses initial and boundary conditions from the ERA5 global-scale reanalysis data set, along with topographical effects, to generate the refractive gradient data for a particular location and time range corresponding to our field measurements.12 Figure 4 illustrates the refractive index gradient [dn(h)/dh] model results for the WSMR experiment time-lapse imaging path on the morning of February 5, 2018, where n(h) is the vertical profile of the refractive index as a function of height h. The gradient values are presented as a function of altitude relative to mean sea level and of distance along the propagation path. In Fig. 4, the camera site is on the left and the mountain ridge is on the right.

Fig. 4 Example refractive index gradient results from NWP for the WSMR experiment on February 5, 2018, at 7:50 am MST. The white area is the ground. The time-lapse camera is located in the basin on the left and a peak in the Oscura Mountains is on the right. Example ray-tracing paths are shown between the target position on the peak and the area near the camera.

Figure 5 shows the NWP model result for the JER experiment on the morning of July 18, 2018. The camera site is on the left and the mountain ridge is to the right. The target corresponding to this result is the rectangular area indicated at the lower right side of the mountain ridge in Fig. 1(b). In fact, the actual mountain peak in Fig. 1(b) is a narrow ridge that is not adequately resolved by the current NWP model spatial resolution (∼1 km); thus, NWP results are not available for the mountain peak. However, the model result shown in Fig. 5 allows us to examine the shift of a lower portion of the mountain ridge at about 1400 m in elevation, where the viewing path is nearly horizontal across the basin.

Fig. 5 Example refractive index gradient results from NWP for the JER experiment on July 18, 2018, at 7:10 am MDT. Example ray paths are shown between the camera (left) and the target feature (right), which is a lower part of the mountain ridge.

Ray tracing through the gradient profiles is used to determine the image displacements predicted by the model. Ray-tracing techniques are often applied for refraction analysis over near-ground horizontal paths assuming diffraction effects are not significant.3,13 The NWP data essentially consist of “blocks” of constant refractive index gradient, as indicated in Figs. 4 and 5. Rather than using a conventional linear ray-tracing algorithm that requires subsampling the blocks to provide accurate trajectories, we apply a second-order ray tracer14 where the linear ray transfer equation is expanded with a quadratic correction term to model the curved ray trajectory within each block. This approach requires only one tracing step for each data block and is significantly faster than a linear ray trace approach with subsampling. A summary of this method is now presented.

Consider a two-dimensional form of the eikonal equation describing the ray trajectory in an inhomogeneous medium:15

Eq. (1)

\frac{d^2 h}{dx^2} = \frac{1}{n(h)} \frac{dn(h)}{dh},
where h is the height and x is the horizontal distance. This expression assumes horizontal paraxial propagation, and the refractive index gradient is vertical. Each block in the NWP data has a constant vertical gradient value indicated by \kappa = dn/dh and, assuming n(h) \approx 1, the solution to Eq. (1) for the ray vertical position within a block can be given as

Eq. (2)

h(x) = \frac{\kappa x^2}{2} + \theta_0 x + h_0,
where \theta_0 and h_0 are the initial ray angle and height, respectively. Taking the derivative of Eq. (2) with respect to x gives the ray trajectory angle as a function of distance,

Eq. (3)

\theta(x) = \kappa x + \theta_0.

With respect to the block boundaries in the NWP index gradient data, Eqs. (2) and (3) are used in succession to transfer the ray height between one boundary and the next and then provide the bending of the ray angle for the next block. Iteratively, the equations become:

Eq. (4)

h_j = \frac{\kappa_{j-1} \Delta x^2}{2} + \theta_{j-1} \Delta x + h_{j-1},
and

Eq. (5)

\theta_j = \kappa_{j-1} \Delta x + \theta_{j-1},
where \Delta x is the distance between adjacent blocks (∼1 km for our NWP data) and j is an index that identifies the different blocks.

Example ray trajectories generated by the tracing approach are illustrated in Figs. 4 and 5. Rays (200 rays for WSMR and 100 rays for JER) are launched from the image target (mountain ridge elevation of ∼2200 m for WSMR and the basin edge elevation of ∼1415 m for JER) over a range of initial angles (−0.1 to −8 mrad for WSMR and −1 to +1 mrad for JER). The ray trajectories are traced through the model gradients until they reach the ground near the camera. The specific ray that strikes the ground at the camera location is identified and a line is backprojected at the ray arrival angle. The height of this line at the target plane indicates the apparent target position as seen by the camera. The apparent positions are calculated for successive model frames and the relative shifts are determined. For the results presented here, NWP results were computed at 10-min intervals, and the ray-tracing procedure was applied. We note that it is also possible to trace the rays from the camera position toward the target position.
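For illustration, a minimal Python sketch of the block-wise second-order trace of Eqs. (4) and (5) is given below. The gradient lookup kappa(x, h), standing in for the gridded NWP dn/dh field, and the launch parameters are assumptions for the example.

```python
import numpy as np

def trace_ray(h0, theta0, x_total, dx, kappa):
    """Second-order block ray trace. h0 (m) and theta0 (rad) are the launch
    height and paraxial angle; kappa(x, h) returns the local dn/dh (1/m).
    Returns the ray height and angle at each block boundary."""
    n_blocks = int(round(x_total / dx))
    h = np.empty(n_blocks + 1)
    th = np.empty(n_blocks + 1)
    h[0], th[0] = h0, theta0
    for j in range(1, n_blocks + 1):
        k = kappa((j - 1) * dx, h[j - 1])                   # gradient of block j-1
        h[j] = 0.5 * k * dx**2 + th[j - 1] * dx + h[j - 1]  # Eq. (4)
        th[j] = k * dx + th[j - 1]                          # Eq. (5)
    return h, th

# Launch a fan of rays from the target toward the camera (WSMR-like numbers);
# the ray that lands at the camera height is then backprojected at its
# arrival angle to locate the apparent target position.
# for theta0 in np.linspace(-0.1e-3, -8e-3, 200):
#     h, th = trace_ray(2200.0, theta0, 100e3, 1e3, kappa)
```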

2.3. Machine Learning Predictions

In this section, we describe an ML approach to predict image displacement due to atmospheric refraction based on a set of measured meteorological values. The input variables we use for prediction are temperature (T), humidity (H), pressure (P), and solar radiation (S). The predicted output is the image displacement (ŷ) due to refraction. Other available meteorological data, such as wind speed, were also applied as test inputs to the ML model, but we found that these alternative parameters had little influence on the prediction results. The weather station for the JER experiment provides measurements of these variables at 15-min intervals. In addition, we utilize other local measurements available online at hourly intervals. Prior to input to the algorithm, the measurement values are normalized to the range (0, 1) by dividing each value by the maximum value observed. Our ML prediction process follows the conventional approach of splitting the image displacement and meteorological data into three sets: training, validation, and testing. Prediction performance is then evaluated by comparing the ML results with the testing data set.
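A minimal sketch of this data preparation, assuming the samples are already time-aligned and stored in NumPy arrays (the split fractions are illustrative, not the ones used in our experiments):

```python
import numpy as np

def prepare(X, y, train_frac=0.6, val_frac=0.2):
    """X: (N, 4) array of [T, H, P, S] samples; y: (N,) image displacements.
    Normalizes each column by its maximum and splits chronologically."""
    Xn = X / X.max(axis=0)          # scale each variable by its observed maximum
    n_tr = int(train_frac * len(y))
    n_va = int(val_frac * len(y))
    train = (Xn[:n_tr], y[:n_tr])
    val = (Xn[n_tr:n_tr + n_va], y[n_tr:n_tr + n_va])
    test = (Xn[n_tr + n_va:], y[n_tr + n_va:])
    return train, val, test
```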

The ML approach is based on linear regression and we assumed a model of the form:

Eq. (6)

\hat{y}(T,H,P,S;\mathbf{w}) = w_1 + w_2 T + w_3 T^2 + w_4 H + w_5 H^2 + w_6 P + w_7 P^2 + w_8 S + w_9 S^2 + w_{10} T H + w_{11} T P + w_{12} T S + w_{13} H P + w_{14} H S + w_{15} P S,
where \mathbf{w} = [w_1 \cdots w_{15}]^{\mathrm{T}} are the coefficient weights, and linear, square, and pairwise products of the meteorological parameters are used as nonlinear kernel functions. The choice of kernel functions was based on trial-and-error experimentation. The coefficient values \mathbf{w} are determined by fitting Eq. (6) to the training data by minimizing an error function that measures the misfit between \hat{y} and the measured values \mathbf{y} = [y_1 \cdots y_N]^{\mathrm{T}} as a function of \mathbf{w}. Our choice of error function is the regularized squared error:16,17

Eq. (7)

E(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \left[ \hat{y}(T_n, H_n, P_n, S_n; \mathbf{w}) - y_n \right]^2 + \frac{\lambda}{2} \|\mathbf{w}\|^2.
The term (\lambda/2)\|\mathbf{w}\|^2 is a penalty (regularization) term to avoid overfitting, where the parameter \lambda is an input to the model that governs the relative importance of the regularization term compared with the squared error term.

Given N data points (T_n, H_n, P_n, S_n; y_n), the coefficients \mathbf{w} that minimize the cost function in Eq. (7) are obtained in closed form by differentiating E(\mathbf{w}) with respect to \mathbf{w}, setting the result to zero, and solving for \mathbf{w}. This produces the following well-known result:17

Eq. (8)

\mathbf{w} = \left( \lambda \mathbf{I} + \boldsymbol{\Phi}^{\mathrm{H}} \boldsymbol{\Phi} \right)^{-1} \boldsymbol{\Phi}^{\mathrm{H}} \mathbf{y},
where \mathbf{I} is the identity matrix and

Eq. (9)

\boldsymbol{\Phi} = \begin{pmatrix}
1 & T_1 & T_1^2 & H_1 & H_1^2 & P_1 & P_1^2 & S_1 & S_1^2 & T_1 H_1 & T_1 P_1 & T_1 S_1 & H_1 P_1 & H_1 S_1 & P_1 S_1 \\
1 & T_2 & T_2^2 & H_2 & H_2^2 & P_2 & P_2^2 & S_2 & S_2^2 & T_2 H_2 & T_2 P_2 & T_2 S_2 & H_2 P_2 & H_2 S_2 & P_2 S_2 \\
\vdots & & & & & & & & & & & & & & \vdots \\
1 & T_n & T_n^2 & H_n & H_n^2 & P_n & P_n^2 & S_n & S_n^2 & T_n H_n & T_n P_n & T_n S_n & H_n P_n & H_n S_n & P_n S_n \\
\vdots & & & & & & & & & & & & & & \vdots \\
1 & T_N & T_N^2 & H_N & H_N^2 & P_N & P_N^2 & S_N & S_N^2 & T_N H_N & T_N P_N & T_N S_N & H_N P_N & H_N S_N & P_N S_N
\end{pmatrix}.
As shown in Eq. (9), the input observations are arranged as row vectors. The algorithm steps involve constructing the matrix \boldsymbol{\Phi}, obtaining the vector \mathbf{y}, and applying Eq. (8) to compute the weights \mathbf{w}.
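The following Python sketch implements these steps for real-valued data (for which the Hermitian transpose in Eq. (8) reduces to the ordinary transpose); the array names are illustrative.

```python
import numpy as np

def design_matrix(T, H, P, S):
    """Phi of Eq. (9): constant, linear, square, and pairwise-product terms."""
    cols = [np.ones_like(T),
            T, T**2, H, H**2, P, P**2, S, S**2,
            T * H, T * P, T * S, H * P, H * S, P * S]
    return np.column_stack(cols)           # shape (N, 15)

def fit_weights(Phi, y, lam):
    """Closed-form ridge solution of Eq. (8), real-valued case."""
    A = lam * np.eye(Phi.shape[1]) + Phi.T @ Phi
    return np.linalg.solve(A, Phi.T @ y)   # w = (lam*I + Phi'Phi)^-1 Phi'y

def predict(Phi, w):
    """Evaluate y_hat of Eq. (6) for each row of Phi."""
    return Phi @ w
```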

Referring to Eq. (7), in order to generate a predictive model that generalizes well for new data input, the selection of the value of λ needs to strike a balance between overfitting and underfitting of the training data. A larger value of λ tends toward a “simpler” fit but there is more likelihood of underfitting the data where some of the dominant trends are not captured. On the other hand, a low value of λ provides a more “complex” fit but there is more likelihood of overfitting the data where specific noise events are captured as trends. We use a tuning approach to determine the value of λ where a search over a range of values is performed. For each λ trial, the model is first fit to the training data set. This model realization is next applied to the validation data, and finally the mean squared error (MSE) between the model and measured validation data displacement result is calculated. The value of λ that provides the lowest MSE is selected and used for the next step of prediction comparisons using the testing data.
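A sketch of this tuning loop, reusing fit_weights from the previous listing (the search grid is illustrative):

```python
import numpy as np

def tune_lambda(Phi_tr, y_tr, Phi_va, y_va, grid=None):
    """Fit on the training set for each trial lambda, score on the validation
    set, and return the value with the lowest validation MSE."""
    if grid is None:
        grid = np.logspace(-3, 1.6, 50)     # roughly spans a (0, 40) search range
    best_lam, best_mse = grid[0], np.inf
    for lam in grid:
        w = fit_weights(Phi_tr, y_tr, lam)  # model fit to training data
        mse = np.mean((Phi_va @ w - y_va) ** 2)
        if mse < best_mse:
            best_lam, best_mse = lam, mse
    return best_lam
```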

Finally, we mention that for the JER experiment, the model form in Eq. (6) was used but we added a fifth binary variable (D) that takes either a value of 1 if the sky is “clear” or 0 if it is “cloudy.” The value of D is determined from a visual analysis of the sky conditions in the time-lapse imagery. If the sky appears to be cloudy in more than 50% of the frames during the daytime hours, then D is set to 0, otherwise it is set to 1. Although this is an unrefined measurement and a simple binary parameter, we found the approach improved the accuracy of the model for varying sky conditions.

3. Results

NWP results were computed for 10-min intervals and the ray-tracing procedure was applied to determine the target shifts. The shift results were interpolated in time to align with the time-lapse image frames. Figure 6 shows comparative results for the WSMR experiment on February 5, 2018, as a function of time-of-day where the red curve is the apparent shift (in radians) of the mountain ridge as derived from ray tracing the NWP data. The black curve is the predicted shift by the ML algorithm where, prior to the date shown, we trained the algorithm on 4 days of displacement and meteorological data (500 data points) and tuned the result on 2 days of validation data. The search range for λ was (0, 40) and, for these results, λ was found to be 2. The blue curve is the shift measured in the actual camera frames. Measurements are only available for the daytime hours as the ridge is not clearly distinguishable by the camera at night.

Fig. 6 Apparent shift of the mountain peak as a function of local time for the WSMR experiment on February 5, 2018: time-lapse image measurement (blue), NWP + ray-tracing prediction (red), and the ML model prediction (black).

The general downward drift throughout the daytime hours, as shown in Fig. 6, is an effect we commonly observe in clear weather; it corresponds to a slow reduction in the average refractive index gradient of the atmosphere along our line of sight. The NWP ray trace and the ML model results agree well in amplitude and phase with the image shift results from the camera measurements, although there are differences in the short-time variations. By manually evaluating some of the time-lapse images, we verified that the point-tracking results appear to be accurate, so we believe the short-time variations are due to the atmosphere. These excursions probably represent turbulent fluctuations in the refractive index that are not completely captured by the NWP. The ML model cannot predict these short-time variations because of a sampling mismatch: the time intervals for the meteorological data (1 h) are much longer than the time-lapse intervals (5 min). The meteorological measurements were also not collected directly in the imaging path, which could contribute to further differences between the measurements and the ML results.

Figure 7 presents example shift results for the lower part of the ridge in the JER experiment on July 18, 2018. Details of the data-processing approach are the same as described for the WSMR results (Fig. 6). For this result, the training and validation data sets consisted of 2 days each and λ was found to be 0.01. Like the WSMR results, the NWP ray trace and ML predictions in this case demonstrate the same overall amplitude and phase behavior as the time-lapse measurements, but the short time variations are not captured.

Fig. 7 Apparent shift of the lower mountain ridge as a function of local time for the JER experiment on July 18, 2018: time-lapse image measurement (blue), NWP + ray-tracing prediction (red), and the ML model prediction (black).

Figure 8 shows the measurement and ML prediction results for the mountain peak in the JER experiment [Fig. 1(b)] over a period of 6 days from January 20, 2019 to January 25, 2019. As discussed in Sec. 2.2, NWP results are not available for this peak target. The 4-day training set in this case included different weather conditions (sunny, very cloudy, and mixtures of sunny and cloudy conditions). The value of λ was found to be 5. The weather was sunny and clear for all the days shown except for January 22 when the sky was cloudy. The results show that the ML model prediction for different weather conditions and over different path angles is consistent with the general trends of the measurements. We note that the cloudy-day result shows less of the daily downward progression of the peak position. Note that the overall shifts for these JER mountain peak results are significantly smaller than the results for WSMR (Fig. 6). This is likely because the WSMR ridgeline is much farther from the camera (>100 km) than the ridgeline for JER (∼20 km).

Fig. 8 Apparent shift of the mountain peak as a function of local time for the JER experiment for January 20 to 25, 2019, daytime: time-lapse image measurement (blue) and the ML model prediction (black).

We end this section with a few comments about the range of values found for the regularization parameter λ. Generally, we expect a smaller value of λ (less regularization) to be found when the short-time variation amplitudes are relatively smaller than the mean trend deviations within the training and validation data. However, the value of λ selected through our tuning approach appears to be sensitive to other factors such as the shape of the mean trend curve. Also, because we trained and validated the model on relatively small data sets, even adding a few days of data or including a few unusual shift results can affect the λ value. It is important to note that although the tuning process identifies a particular λ value, we found through additional testing that there is a range of values that typically produces nearly the same prediction result. For example, ML results similar to those shown in Fig. 8 can be produced with λ values ranging from 0.001 to about 10.

4. Conclusions

We found that NWP along with a ray-tracing technique can be used for prediction of image displacement effects due to atmospheric refraction. The ray-tracing approach that we applied to determine the effect of the gradients produced by the model was straightforward to implement and provided credible results. The model results of target displacement were consistent with field imagery in amplitude and overall trend. However, the model could not predict some of the short-time variations in the field measurements, which may be due to localized events that are not completely captured by the numerical model grid or simulation process. As an alternative to NWP, we explored the use of an ML algorithm to build a predictive model based on meteorological data collected near the camera location. We found that our ML model was successful in making predictions over different weather conditions. Similar to the NWP result, the ML model prediction could not follow the short-time excursions of the field results. In this case, the slow sample rate of the meteorological data compared to the time-lapse image frame rate is an aggravating factor. We are now working to determine if the ML approach can be extended to encompass different seasons as well as different weather conditions. We are also investigating our ability to apply image point-tracking and ML prediction to detect geometrical distortions of the target image rather than just the apparent target shift.

Acknowledgments

Earlier results of this work appeared in the conference proceedings paper: W. Al-Younis, C. Nevarez, D. Voelz, S. Sandoval, and S. Basu, “Predicting atmospheric refraction with weather modeling and machine learning,” Proc. SPIE 11133, Laser Communication and Propagation through the Atmosphere and Oceans VIII, 11133-13 (13 August 2019). This work was supported by the Directed Energy Joint Transition Office, Award No. N00014-17-1-2535. We thank Daniel Short and Ivan Dragulin for collecting the White Sands Missile Range time-lapse data. We also acknowledge the New Mexico State University TAR Capstone group who constructed and operated the Jornada time-lapse system.

References

1. A. T. Young, “Air mass and refraction,” Appl. Opt. 33(6), 1108–1110 (1994). https://doi.org/10.1364/AO.33.001108

2. L. H. Auer and E. M. Standish, “Astronomical refraction: computation for all zenith angles,” Astron. J. 119(5), 2472–2474 (2000). https://doi.org/10.1086/301325

3. P. Valtr and P. Pechac, “Tropospheric refraction modeling using ray-tracing and parabolic equation,” Radioengineering 14(4), 98–104 (2005).

4. S. Basu et al., “Estimation of temporal variations in path-averaged atmospheric refractive index gradient from time-lapse imagery,” Opt. Eng. 55(9), 090503 (2016). https://doi.org/10.1117/1.OE.55.9.090503

5. D. Voelz et al., “Low cost digital photography approach to monitoring optical bending and guiding in the atmosphere,” in IEEE Aerosp. Conf. Proc., 1–7 (2015). https://doi.org/10.1109/AERO.2015.7119059

6. D. Short et al., “Atmospheric refraction: applied image analysis and experimental example for index profile with curvature,” Proc. SPIE 9833, 98330D (2016). https://doi.org/10.1117/12.2224294

7. J. E. McCrae, S. R. Bose-Pillai, and S. T. Fiorino, “Estimation of turbulence from time-lapse imagery,” Opt. Eng. 56(7), 071504 (2017). https://doi.org/10.1117/1.OE.56.7.071504

8. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. 7th Int. Joint Conf. Artif. Intell., 674–679 (1981).

9. C. Tomasi and T. Kanade, “Detection and tracking of point features,” Technical Report CMU-CS-91-132, Carnegie Mellon University (1991).

10. W. Al-Younis, C. Nevarez, and D. Voelz, “Time-lapse imaging for studying atmospheric refraction: measurements with natural targets,” in IEEE Aerosp. Conf. Proc., 1–7 (2019). https://doi.org/10.1109/AERO.2019.8742012

11. S. Basu, “Simulating an extreme over-the-horizon optical propagation event over Lake Michigan using a coupled mesoscale modeling and ray tracing framework,” Opt. Eng. 56(7), 071505 (2017). https://doi.org/10.1117/1.OE.56.7.071505

12. C. G. Nunalee et al., “Mapping optical ray trajectories through island wake vortices,” Meteorol. Atmos. Phys. 127(3), 355–368 (2015). https://doi.org/10.1007/s00703-015-0366-4

13. M. Grabner and V. Kvicera, “Atmospheric refraction and propagation in lower troposphere,” in Electromagnetic Waves, pp. 139–156, InTech, London (2011).

14. D. J. Short, “Refraction in the lower troposphere: higher order image distortion effects due to refractive profile curvature,” (2016).

15. B. E. Saleh and M. C. Teich, Fundamentals of Photonics, 2nd ed., Wiley, Hoboken, New Jersey (2007).

16. A. E. Hoerl and R. W. Kennard, “Ridge regression: biased estimation for nonorthogonal problems,” Technometrics 12(1), 55–67 (1970). https://doi.org/10.1080/00401706.1970.10488634

17. C. M. Bishop, Pattern Recognition and Machine Learning, Springer Science+Business Media, New York (2006).

Biography

Wardeh Al-Younis is a PhD student in the Klipsch School of Electrical and Computer Engineering at New Mexico State University (NMSU). Her research interests include laser beam propagation through atmospheric refractivity, image processing, and remote sensing. She earned her BS degree in telecommunication engineering from Yarmouk University, Jordan in 2012. She received her MS degree in wireless communication engineering in 2016 from Yarmouk University, Jordan.

Christina Nevarez is a graduate student at the Klipsch School of Electrical and Computer Engineering at NMSU. Her research interests include laser beam propagation through atmospheric refractivity. She earned her BS degree in atmospheric physics from New Mexico Institute of Mining and Technology in 2014.

Mohammad Abdullah-Al-Mamun is a research assistant at the Klipsch School of Electrical and Computer Engineering at NMSU. He earned his BS and MS degrees in physics from the University of Dhaka, Bangladesh. He also earned an MS degree in physics from NMSU in 2018. His research interests include laser propagation through the atmosphere, atmospheric turbulence, adaptive optics, and optical system design. He is a student member of SPIE.

Steven Sandoval received his BS and MS degrees in electrical engineering from NMSU in 2007 and 2010, respectively, and his PhD in electrical engineering from Arizona State University in 2016. Currently, he is at the Klipsch School of Electrical and Computer Engineering at NMSU. His research interests include audio and speech processing, time-frequency analysis, and machine learning.

Sukanta Basu is an associate professor at Delft University of Technology. He received his PhD in civil engineering from the University of Minnesota in 2004. His current research interests include atmospheric optics, turbulence modeling, and NWP. His research has been disseminated through more than 50 peer-reviewed publications in the fields of fluid mechanics, meteorology, nonlinear physics, atmospheric optics, and engineering. He is a member of SPIE.

David Voelz is a professor at the Klipsch School of Electrical and Computer Engineering at NMSU. His research interests include optical propagation through turbulence, spectral and polarization sensing, imaging theory, astronomical instrumentation development, and laser communications. He earned his PhD in electrical engineering from the University of Illinois in 1987. He is a fellow of SPIE and OSA.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Wardeh Al-Younis, Christina Nevarez, Mohammad Abdullah-Al-Mamun, Steven Sandoval, Sukanta Basu, and David Voelz "Image shift due to atmospheric refraction: prediction by numerical weather modeling and machine learning," Optical Engineering 59(8), 081803 (10 March 2020). https://doi.org/10.1117/1.OE.59.8.081803
Received: 3 December 2019; Accepted: 18 February 2020; Published: 10 March 2020
Keywords: Atmospheric modeling; Refraction; Cameras; Data modeling; Machine learning; Refractive index; Atmospheric propagation
