Dr. John N. Sanders-Reed
SPIE Involvement:
Editor | Author | Instructor
Area of Expertise:
Enhanced Vision, Degraded Visual Environments, Tracking Systems, Videogrammetry
Profile Summary

Dr. Sanders-Reed is widely recognized for his work in advanced vision and Distributed Aperture Sensor (DAS) systems, Degraded Visual Environment (DVE) sensing and phenomenology, and photogrammetry and video motion analysis. He has also worked extensively in the areas of faint target detection and multi-sensor, multi-target detection and tracking. He is the developer of the Visual Fusion videogrammetry motion analysis software package. In addition to over 25 peer-reviewed or conference papers, 6 patents, and chairing an annual advanced vision system conference since 2014, he is also co-author of the "Photonics Rules of Thumb" optical handbook and the soon-to-be-released "Principles of Vision Enabled Autonomous Flight".

Dr. Sanders-Reed holds a PhD in physics from Case Western Reserve University in Cleveland and an MBA from the Northeastern University High-Tech MBA program. He is a Fellow of SPIE. He is currently the owner of Image and Video Exploitation Technologies (IVET) LLC and the developer of the Visual Fusion motion analysis software package. He has worked at Picker X-ray (medical imaging), MIT Lincoln Laboratory, and SVS Inc. (a start-up company eventually bought by Boeing), and was a Technical Fellow with The Boeing Company and Chief Technologist in Boeing Research & Technology (BR&T) until his retirement in 2020. Since 2002 he has also taught a one-day course on motion analysis at MIT as part of a larger high-speed imaging course.

Publications (23)

SPIE Press Book | 25 August 2021
KEYWORDS: Sensors, LIDAR, Long wavelength infrared, Radar, Cameras, Global Positioning System, Imaging systems, Target detection, Mid-IR, Content addressable memory

Proceedings Article | 12 April 2021 Presentation
Proceedings Volume 11759, 1175902 (2021) https://doi.org/10.1117/12.2597582

SPIE Press Book | 10 June 2020
KEYWORDS: Sensors, Mirrors, Infrared radiation, Telescopes, Eye, Signal to noise ratio, Glasses, Optical components, Reflectivity, Target detection

SPIE Journal Paper | 3 April 2019 Open Access
OE, Vol. 58, Issue 05, 051801 (April 2019) https://doi.org/10.1117/1.OE.58.5.051801
KEYWORDS: Visualization, Driver's vision enhancers, Visibility through fog, Visibility, Sensors, Aerospace engineering, Fiber optic gyroscopes, Situational awareness sensors, Data integration, Augmented reality

Proceedings Article | 2 May 2018 Presentation + Paper
John Sanders-Reed, Stephen Fenley
Proceedings Volume 10642, 106420S (2018) https://doi.org/10.1117/12.2305008
KEYWORDS: Signal attenuation, Visibility through fog, Visibility, Long wavelength infrared, Atmospheric modeling, Visible radiation, Mid-IR, Mass attenuation coefficient

Showing 5 of 23 publications
Proceedings Volume Editor (9)

Conference Committee Involvement (9)
Virtual, Augmented, and Mixed Reality (XR) Technology for Multi-Domain Operations III
4 April 2022 | Orlando, Florida, United States
Virtual, Augmented, and Mixed Reality (XR) Technology for Multi-Domain Operations II
12 April 2021 | Online Only, Florida, United States
Situation Awareness in Degraded Environments 2020
27 April 2020 | Online Only, California, United States
Situation Awareness in Degraded Environments 2019
16 April 2019 | Baltimore, Maryland, United States
Situation Awareness in Degraded Environments 2018
17 April 2018 | Orlando, Florida, United States
Showing 5 of 9 Conference Committees
Course Instructor
SC1312: Principles of Vision Enabled Autonomous Flight
This course addresses three basic questions related to vision-enabled autonomous flight:
1. Why are vision systems fundamental and critical to vision-enabled autonomous flight?
2. What are the vision system tasks required for autonomous flight?
3. How can those tasks be approached?

Autonomous vehicle operations depend on developing a temporally evolving world model of the vehicle environment, including items such as terrain, fixed and moving obstacles, other aircraft, landing sites and taxiways, and ownship location. Much of this information can be obtained from sources such as digital terrain elevation databases, cultural feature databases, maps, air traffic control, ADS-B, and GPS. However, this information can be incomplete due to database errors and omissions; GPS failure, jamming, or spoofing; new construction or temporary closures not reflected in current databases; moving objects such as ground vehicles, personnel, and wildlife; and equipment failures such as ADS-B transponders. Vision systems provide up-to-the-moment input to the world model and form a critical component of the safety case for autonomous operations, including both fixed-wing and vertical-lift aircraft.

This course addresses the role of vision systems in autonomous operations and discusses the critical tasks required of a vision system, including taxi, take-off, enroute navigation, detect and avoid, and landing, as well as formation flight or approach and docking at a terminal or with other vehicles. These tasks are then analyzed to develop field of view, resolution, latency, and other sensing requirements, and to understand when one sensor can be used for multiple applications. Along the way, the student is introduced to the various airspace classifications, landing visibility categories and decision height criteria, and typical runway dimensions. Following this analysis of basic functional requirements, the course provides an overview of sensors and phenomenology from the visible through the infrared, extending into the radar bands, including both passive and active systems. Human visual system performance is discussed as a comparison benchmark. System architectures are discussed, including distributed aperture sensor systems and multi-use sensors. Finally, various algorithms for extracting information from sensor data are examined, including moving target detection for detect and avoid, shape from motion, multi-sensor triangulation, model-based pose estimation, wire and cable detection, and geo-location techniques.
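As a rough illustration of the kind of sensing-requirement analysis described above (resolution and field of view for detect and avoid), the short Python sketch below estimates the per-pixel angular resolution (IFOV) needed to place a chosen number of pixels across a target at a given range, and the resulting pixel count across a search field of view. The target size, detection range, pixels-on-target criterion, and field of view are illustrative assumptions, not values taken from the course.

import math

def required_ifov(target_size_m, detection_range_m, pixels_on_target):
    # Angular extent of the target (small-angle approximation), divided by
    # the number of pixels wanted across it, gives the per-pixel IFOV (rad).
    target_angle_rad = target_size_m / detection_range_m
    return target_angle_rad / pixels_on_target

def pixels_across_fov(fov_deg, ifov_rad):
    # Pixels needed along one axis to cover the field of view at that IFOV.
    return math.ceil(math.radians(fov_deg) / ifov_rad)

# Illustrative assumptions (not from the course): a small aircraft with a
# ~10 m wingspan, detected at 5 nautical miles with at least 4 pixels across
# the wingspan, searched over a 30-degree field of view.
ifov = required_ifov(target_size_m=10.0, detection_range_m=5 * 1852.0,
                     pixels_on_target=4)
print(f"Required IFOV: {ifov * 1e6:.0f} microradians per pixel")
print(f"Pixels across a 30-degree field of view: {pixels_across_fov(30.0, ifov)}")

With these assumed numbers the calculation yields roughly a 270-microradian IFOV and on the order of 2000 pixels per axis; tightening the pixels-on-target criterion or extending the range drives the pixel count (or the number of apertures) up accordingly.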
SC536: Image Based Motion Analysis
Image-based motion analysis is a key technology that can be used to analyze missile flight performance, aircraft stores separation, aircraft safety and crashworthiness, ejection seat dynamics, artillery and small-arms projectile performance and scoring, and explosive projectile distribution and velocity. Other applications include biological motion analysis and automotive crash test analysis. Imagery may be visible (standard or high speed), IR, or digitized film. This course describes techniques for extracting quantitative information from a time sequence of imagery. The primary focus is on position versus time of multiple objects, but intensity, separation distances, velocities, shape, angle of attack, and other features can be determined as well. The course covers basic single-camera motion analysis, multiple-camera three-dimensional motion analysis, rigid-body six-degree-of-freedom (6-DOF) analysis, and the use of non-image data (such as mount pointing data) to provide additional information during analysis. The course also provides a basic understanding of the image formation sequence from target radiance to image pixels, in order to understand how various effects in the image formation process may affect the final motion analysis results and how these effects can be compensated for during analysis.
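As a minimal illustration of the multiple-camera three-dimensional analysis mentioned above, the Python sketch below triangulates a 3D target position from two viewing rays (camera position plus pointing direction toward the target) using the midpoint of their closest approach. The camera geometry and function names are hypothetical, and this is a generic textbook formulation, not the method used in Visual Fusion or in the course.

import numpy as np

def triangulate_two_rays(origin1, dir1, origin2, dir2):
    # Each ray: a camera position and a pointing vector toward the target
    # (e.g., derived from pixel location, camera calibration, and mount
    # pointing data). Returns the midpoint of the shortest segment
    # connecting the two rays, a least-squares estimate of the 3D point.
    d1 = dir1 / np.linalg.norm(dir1)
    d2 = dir2 / np.linalg.norm(dir2)
    w0 = origin1 - origin2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("Rays are parallel; the geometry is degenerate.")
    s = (b * e - c * d) / denom  # distance along ray 1 to closest approach
    t = (a * e - b * d) / denom  # distance along ray 2 to closest approach
    return 0.5 * ((origin1 + s * d1) + (origin2 + t * d2))

# Hypothetical geometry: two cameras 100 m apart, both sighting a target
# near (50, 200, 10) m. With real data the two rays rarely intersect
# exactly, which is why the closest-approach midpoint is used.
target = np.array([50.0, 200.0, 10.0])
cam1, cam2 = np.array([0.0, 0.0, 0.0]), np.array([100.0, 0.0, 0.0])
print(triangulate_two_rays(cam1, target - cam1, cam2, target - cam2))

The same closest-approach idea extends to more than two cameras by stacking the ray constraints into a single least-squares solve.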