Dr. John N. Sanders-Reed
Owner at Image & Video Exploitation Technologies, LLC
SPIE Involvement:
Conference Chair | Author | Editor | Instructor
Area of Expertise:
Enhanced Vision, Degraded Visual Environments, Tracking Systems, Videogrammetry
Profile Summary

Dr. Sanders-Reed is widely recognized for his work in advanced vision and Distributed Aperture Sensor (DAS) systems, Degraded Visual Environment (DVE) sensing and phenomenology, and photogrammetry and video motion analysis. He has also worked extensively in the areas of faint target detection and multi-sensor, multi-target detection and tracking. He is the developer of the Visual Fusion videogrammetry motion analysis software package. In addition to over 25 peer-reviewed or conference papers, 6 patents, and chairing an annual advanced vision system conference since 2014, he is co-author of the "Photonics Rules of Thumb" optical handbook and the soon-to-be-released "Principles of Vision Enabled Autonomous Flight".

Dr. Sanders-Reed holds a PhD in physics from Case Western Reserve University in Cleveland and an MBA from Northeastern University's High-Tech MBA program. He is a Fellow of SPIE. He is currently the owner of Image and Video Exploitation Technologies (IVET) LLC and developer of the Visual Fusion motion analysis software package. He has worked at Picker X-ray (medical imaging), MIT Lincoln Laboratory, and SVS Inc. (a start-up eventually bought by Boeing), and was a Technical Fellow with The Boeing Company and Chief Technologist in Boeing Research & Technology (BR&T) until his retirement in 2020. Since 2002 he has also taught a one-day course on motion analysis at MIT as part of a larger high-speed imaging course.

Publications (22)

Proceedings Article | 12 April 2021 | Presentation
Proc. SPIE. 11759, Virtual, Augmented, and Mixed Reality (XR) Technology for Multi-Domain Operations II

SPIE Press Book | 10 June 2020

SPIE Journal Paper | 3 April 2019
OE Vol. 58 Issue 05
KEYWORDS: Visualization, Driver's vision enhancers, Visibility through fog, Visibility, Sensors, Aerospace engineering, Fiber optic gyroscopes, Situational awareness sensors, Data integration, Augmented reality

Proceedings Article | 2 May 2018 | Presentation + Paper
Proc. SPIE. 10642, Degraded Environments: Sensing, Processing, and Display 2018
KEYWORDS: Signal attenuation, Visibility through fog, Visibility, Long wavelength infrared, Atmospheric modeling, Visible radiation, Mid-IR, Mass attenuation coefficient

Proceedings Article | 22 May 2015 | Paper
Proc. SPIE. 9467, Micro- and Nanotechnology Sensors, Systems, and Applications VII
KEYWORDS: Extremely high frequency, Terahertz radiation, Sensors, LIDAR, Radar, Long wavelength infrared, Imaging systems, Signal attenuation, Particles, Fiber optic gyroscopes

Proceedings Volume Editor (7)

Conference Committee Involvement (8)
Virtual, Augmented, and Mixed Reality (XR) Technology for Multi-Domain Operations II
12 April 2021 | Online Only, Florida, United States
Situation Awareness in Degraded Environments 2020
27 April 2020 | Online Only, California, United States
Situation Awareness in Degraded Environments 2019
16 April 2019 | Baltimore, Maryland, United States
Situation Awareness in Degraded Environments 2018
17 April 2018 | Orlando, Florida, United States
Degraded Environments: Sensing, Processing, and Display 2017
11 April 2017 | Anaheim, California, United States
Course Instructor
SC536: Image Based Motion Analysis
Image-based motion analysis is a key technology that can be used to analyze missile flight performance, aircraft stores separation, aircraft safety and crashworthiness, ejection seat dynamics, artillery and small-arms projectile performance and scoring, and explosive projectile distribution and velocity. Other applications include biological motion analysis and automotive crash-test analysis. Imagery may be visible (standard or high speed), IR, or digitized film. This course describes techniques for extracting quantitative information from a time sequence of imagery. The primary focus is on position versus time of multiple objects, but intensity, separation distances, velocities, shape, angle of attack, and other features can be determined as well. The course covers basic single-camera motion analysis, multiple-camera three-dimensional motion analysis, rigid-body six-degree-of-freedom (6DOF) analysis, and the use of non-image data (such as mount pointing data) to provide additional information during analysis. The course also provides a basic understanding of the image-formation sequence, from target radiance to image pixels, in order to show how various effects in the image-formation process may affect the final motion analysis results, and how those effects can be compensated for during analysis.
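To illustrate the multiple-camera three-dimensional motion analysis the course covers, the following is a minimal sketch (not course material) of linear DLT triangulation: recovering a 3D point from matched pixel coordinates in two calibrated views. The camera matrices and point values here are invented for the demo; per-frame repetition of this step yields position versus time.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the point in each image.
    Each observation contributes two rows of (u*P[2] - P[0]).X = 0;
    the solution is the null vector of the stacked 4x4 system.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # right singular vector, smallest singular value
    return X[:3] / X[3]        # dehomogenize to metric coordinates

# Demo: two synthetic pinhole cameras with a 2 m baseline observe a
# point at (1, 2, 10) m; we project it, then triangulate it back.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # camera 1 at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-2.0], [0], [0]])])    # camera 2 offset
X_true = np.array([1.0, 2.0, 10.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # ≈ (1, 2, 10)
```

With noisy real measurements the same linear system is solved in a least-squares sense, and residuals indicate calibration or matching quality.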
