We recently developed a deep learning method that can determine the critical peak stress of a material from scanning electron microscope (SEM) images of the material's crystals. However, it has been somewhat unclear what image features the network relies on when making its prediction. It is common in computer vision to employ an explainable-AI saliency map to indicate which parts of an image are important to the network's decision, and one can usually deduce the important features by examining these salient locations. However, SEM images of crystals are more abstract to the human observer than natural photographs, so it is not easy to tell which features matter at the most salient locations. To solve this, we developed a method that maps features from important locations in SEM images to non-abstract textures that are easier to interpret.
Hohlraums at the National Ignition Facility (NIF) convert laser energy into X-ray energy to compress and implode a fusion capsule, producing fusion. The Static X-ray Imager (SXI) diagnostic collects time-integrated images of hohlraum wall X-ray illumination patterns viewed through the laser entrance hole (LEH). NIF image processing algorithms calculate the size and location of the LEH opening from the SXI images. The images come from different experimental categories and camera setups and occasionally do not contain applicable or usable information. Unexpected experimental noise can also occur, in which case the affected images should be removed rather than run through the processing algorithms. Current approaches to identifying these images are manual and case-by-case, which can be prohibitively time-consuming. In addition, the diagnostic image data can be sparse (missing segments or pieces), which may lead to false analysis results. There exists, however, an abundant variety of image examples in the NIF database. Convolutional Neural Networks (CNNs) have been shown to work well with this type of data and under these conditions. The objective of this work was to apply transfer learning, fine-tuning a pre-trained CNN on a relatively small-scale dataset (~1500 images) to determine which instances contain useful image data. Experimental results show that CNNs can readily identify useful image data while filtering out undesirable images. The CNN filter is currently being used in production at the NIF.
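As a rough illustration of the transfer-learning step described above, the sketch below fine-tunes a pre-trained CNN as a binary "useful vs. unusable image" filter. The dataset path, class layout, backbone choice, and hyperparameters are illustrative assumptions, not the NIF production configuration.

```python
# Minimal transfer-learning sketch: freeze an ImageNet-pretrained backbone and
# train a new two-class head on a small labeled image set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for the pre-trained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: sxi_images/train/{useful,unusable}/*.png
train_set = datasets.ImageFolder("sxi_images/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():                   # freeze the pre-trained features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new two-class head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                         # a small dataset needs few epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```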
Two machine-learning methods were evaluated to help automate the quality control process for mitigating damage sites on laser optics. The mitigation is a cone-like structure etched into locations on large optics that have been chipped by the high-fluence (energy per unit area) laser light. Sometimes the repair leaves a difficult-to-detect remnant of the damage that must be addressed before the optic can be placed back on the beam line. We would like to detect these remnants automatically. We evaluate deep learning (convolutional neural networks whose features are learned automatically from large stores of labeled data, such as ImageNet) and find that it outperforms ensembles of decision trees using custom-built features in finding these subtle, rare, incomplete repairs of damage. We also implemented an unsupervised method for helping operators visualize where the network has spotted problems. This is done by projecting the credit for the result backwards onto the input image, which shows the regions most responsible for the network's decision. It can also be used to help understand the black-box decisions the network is making and potentially improve the training process.
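One simple form of the "project credit backwards onto the input" visualization is a plain gradient saliency map, sketched below; the paper's exact back-projection method may differ. The `model` and `image` arguments are assumed to be a trained classifier and a preprocessed input tensor of shape (1, 3, H, W).

```python
# Gradient saliency: how strongly each input pixel influences the class score.
import torch

def gradient_saliency(model, image, target_class):
    """Return an (H, W) map of |d score / d pixel| for `target_class`."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    # Max over color channels gives one importance value per pixel.
    return image.grad.detach().abs().max(dim=1)[0].squeeze(0)
```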
LIDAR devices for on-vehicle use need a wide field of view and good fidelity. For instance, a LIDAR for avoidance of landing collisions by a helicopter needs to see a wide field of view and show reasonable detail of the area; the same is true for an online LIDAR scanning device placed on an automobile. In this paper, we describe a LIDAR system with full color and enhanced resolution that has an effective vertical scanning range of 60 degrees with a central 20-degree fovea. The extended range with fovea is achieved by using two standard Velodyne 32-HDL LIDARs placed head to head and counter-rotating. The HDL LIDARs each scan 40 degrees vertically and a full 360 degrees horizontally, with an outdoor effective range of 100 meters. Positioned head to head, they overlap by 20 degrees, creating a double-density fovea. The LIDAR returns from the two Velodyne sensors do not natively contain color, so a Point Grey LadyBug panoramic camera is used to gather color data of the scene. In the first stage of our system, the two LIDAR point clouds and the LadyBug video are fused in real time at a frame rate of 10 Hz. A second stage intelligently interpolates the point cloud, increasing its resolution by approximately four times while maintaining accuracy with respect to the 3D scene. By using GPGPU programming, we can compute this at 10 Hz. Our backfilling interpolation method works by first computing local linear approximations from the perspective of the LIDAR depth map. Color features from the image are used to select the point cloud support points that are best, within a local group, for building the local linear approximations. This makes the colored point cloud more detailed while maintaining fidelity to the 3D scene. Our system also makes objects appearing in the PanDAR display easier for a human operator to recognize.
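The sketch below illustrates the backfilling idea in the simplest possible form: upsample a sparse depth map by fitting a local plane to nearby support points, preferring points whose image color is similar to the pixel being filled. It is an illustrative reconstruction of the described approach, not the paper's GPU implementation; all parameter values are assumptions.

```python
# Color-guided local linear (plane) backfill of a sparse depth map.
import numpy as np

def backfill_pixel(y, x, depth, color, radius=4, color_sigma=20.0):
    """Estimate depth at (y, x) from valid neighbors within `radius`.
    `depth` is HxW with 0 marking missing returns; `color` is HxWx3."""
    h, w = depth.shape
    ys, xs = np.mgrid[max(0, y - radius):min(h, y + radius + 1),
                      max(0, x - radius):min(w, x + radius + 1)]
    ys, xs = ys.ravel(), xs.ravel()
    valid = depth[ys, xs] > 0
    ys, xs = ys[valid], xs[valid]
    if len(ys) < 3:
        return 0.0                                   # not enough support points
    # Color-based weights: support points that look like the target pixel count more.
    diff = color[ys, xs].astype(float) - color[y, x].astype(float)
    wts = np.exp(-np.sum(diff ** 2, axis=1) / (2 * color_sigma ** 2))
    # Weighted least-squares plane z = a*y + b*x + c over the support points.
    A = np.stack([ys, xs, np.ones_like(ys)], axis=1).astype(float)
    coeff, *_ = np.linalg.lstsq(wts[:, None] * A, wts * depth[ys, xs], rcond=None)
    return float(coeff @ np.array([y, x, 1.0]))
```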
KEYWORDS: Image segmentation, 3D modeling, Space operations, System on a chip, Image classification, Atomic force microscopy, Cameras, Associative arrays, Visualization, Space telescopes
3D reconstruction of objects via Shape from Motion (SFM) has made great strides recently. Using images from a variety of poses, objects can be reconstructed in 3D without knowing the camera pose a priori. Feature points matched across images can then be bundled together to create large-scale scene reconstructions automatically. A shortcoming of current SFM reconstruction methods is in dealing with specular or flat, low-feature surfaces: the inability of SFM to handle these surfaces creates holes in a 3D reconstruction. This can cause problems when the reconstruction is used for proximity detection and collision avoidance by a space vehicle working around another space vehicle. As such, we would like the automatic ability to recognize when a hole in a 3D reconstruction is in fact not a hole, but a place where reconstruction has failed. Once we know about such a location, methods can be used either to fill in that region more vigorously or to instruct a space vehicle to proceed with more caution around that area. Detecting such areas in Earth-orbiting objects is non-trivial since we need to parse out complex vehicle features from complex Earth features, particularly when the observing vehicle is above the target vehicle. To do this, we have created a Space Object Classifier and Segmenter (SOCS) hole finder. The general principle is to classify image features into three categories (Earth, man-made, space), cluster the classified regions into probabilistic regions, and then segment them out. Our categorization method augments a state-of-the-art bag-of-visual-words method for object categorization. It works by first extracting PHOW (dense SIFT-like) features over an image and quantizing them via a KD-tree. The quantization results are binned into histograms and classified by the PEGASOS support vector machine solver. This gives the probability that a patch in the image corresponds to one of three categories: Earth, man-made, or space, where man-made refers to artificial objects in space. To categorize a whole image, a common sliding-window protocol is used. We utilized 90 high-resolution images from space shuttle servicing missions of the International Space Station, extracted 9000 128x128 patches from the images, and hand-sorted them into the three categories. We then trained our categorizer on a subset of 6000 patches; testing on the 3000 remaining patches yielded 96.8% accuracy. This is sufficient because detection returns a probabilistic score (e.g., the probability of man-made), and detections can then be spatially pooled to smooth out statistical blips. Spatial pooling can be done by creating a three-channel image in which each channel holds the probability of one of the three classes at that location. The probability image can then be segmented, or co-segmented with the visible image, using a classical segmentation method such as mean shift. This yields contiguous regions of classified image. Holes are detected when SFM does not fill in a region segmented as man-made. Results are shown of the SOCS implementation finding and segmenting man-made objects in pictures containing space vehicles very different from the training set, such as Skylab, the Hubble Space Telescope, or the Death Star.
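The following is an illustrative sketch of the bag-of-visual-words patch classifier described above: dense local descriptors are quantized against a visual vocabulary with a KD-tree, binned into a histogram, and classified with a linear SVM (a stand-in for the PEGASOS solver). Descriptor extraction itself (PHOW) is assumed to be provided elsewhere, and the vocabulary size and class encoding are assumptions.

```python
# Bag-of-visual-words patch classification sketch.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

N_WORDS = 300  # vocabulary size (illustrative)

def build_vocabulary(descriptors):
    """Cluster a large sample of local descriptors into visual words."""
    km = MiniBatchKMeans(n_clusters=N_WORDS, random_state=0).fit(descriptors)
    return cKDTree(km.cluster_centers_)

def bovw_histogram(vocab_tree, descriptors):
    """Quantize a patch's descriptors and bin the word indices into a histogram."""
    _, words = vocab_tree.query(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(N_WORDS + 1))
    return hist / max(hist.sum(), 1)

# Training (hypothetical data): `train_descs` is a list of per-patch descriptor
# arrays and `train_labels` holds 0=Earth, 1=Man-Made, 2=Space.
# vocab = build_vocabulary(np.vstack(train_descs))
# X = np.array([bovw_histogram(vocab, d) for d in train_descs])
# clf = LinearSVC().fit(X, train_labels)
```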
Passive millimeter wavelength (PMMW) video holds great promise, given its ability to see targets and obstacles through fog, smoke, and rain. However, current imagers produce undesirable complex noise. This can come as a mixture of fast shot (snowlike) noise and a slower-forming circular fixed pattern. Shot noise can be removed by a simple gain style filter. However, this can produce blurring of objects in the scene. To alleviate this, we measure the amount of Bayesian surprise in videos. Bayesian surprise measures feature change in time that is abrupt but cannot be accounted for as shot noise. Surprise is used to attenuate the shot noise filter in locations of high surprise. Since high Bayesian surprise in videos is very salient to observers, this reduces blurring, particularly in places where people visually attend. Fixed pattern noise is removed after the shot noise using a combination of non-uniformity correction and mean image wavelet transformation. The combination allows for online removal of time-varying fixed pattern noise, even when background motion may be absent. It also allows for online adaptation to differing intensities of fixed pattern noise. We also discuss a method for sharpening frames using deconvolution. The fixed pattern and shot noise filters are all efficient, which allows real time video processing of PMMW video. We show several examples of PMMW video with complex noise that is much cleaner as a result of the noise removal. Processed video clearly shows cars, houses, trees, and utility poles at 20 frames per second.
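Below is a minimal sketch of surprise-gated temporal filtering for shot noise, assuming a running per-pixel Gaussian model of the video. Where Bayesian surprise (the KL divergence between the pixel model before and after seeing the new frame) is high, the temporal smoothing is attenuated so genuine, salient changes are not blurred away. The constants and the exact gating function are illustrative, not the paper's values.

```python
# Surprise-gated exponential smoothing for shot-noise suppression.
import numpy as np

class SurpriseGatedFilter:
    def __init__(self, shape, update_rate=0.1, surprise_gain=2.0):
        self.mu = np.zeros(shape)          # running pixel mean
        self.var = np.ones(shape)          # running pixel variance
        self.rate = update_rate
        self.gain = surprise_gain
        self.smoothed = np.zeros(shape)

    def step(self, frame):
        frame = frame.astype(float)
        # Posterior model after folding in the new frame.
        mu_new = (1 - self.rate) * self.mu + self.rate * frame
        var_new = (1 - self.rate) * self.var + self.rate * (frame - self.mu) ** 2
        var_new = np.maximum(var_new, 1e-6)
        # Bayesian surprise: KL( N(mu_new, var_new) || N(mu, var) ) per pixel.
        surprise = (np.log(np.sqrt(self.var / var_new))
                    + (var_new + (mu_new - self.mu) ** 2) / (2 * self.var) - 0.5)
        self.mu, self.var = mu_new, var_new
        # Smoothing weight rises toward 1 where surprise is high, so surprising
        # (salient) regions follow the raw frame instead of being blurred.
        alpha = self.rate + (1 - self.rate) * (1 - np.exp(-self.gain * surprise))
        self.smoothed = (1 - alpha) * self.smoothed + alpha * frame
        return self.smoothed
```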
An aerial multiple-camera tracking paradigm needs not only to spot unknown targets and track them, but also to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system, which is designed to spot unknown targets, track them, segment the useful features, and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find targets in motion even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features, which can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several simple techniques: a histogram, a spatiogram, and a single Gaussian model. These are tested by simulating a very large number of target losses in six videos, over an interval of 1000 frames each, from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints: how long a fingerprint remains good when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us whether a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance, we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model, compared with the null hypothesis of less than 20%. Additionally, the performance of fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to viewpoint and object size, it is expected to perform as well in a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
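As a sketch of the simplest fingerprint above, the single Gaussian model summarizes the target's pixel features by a mean vector and covariance matrix; reacquisition candidates are then compared with a symmetric Gaussian distance. The feature choice and the Bhattacharyya distance used here are illustrative assumptions rather than the paper's exact matching rule.

```python
# Single Gaussian fingerprint and a distance for reacquisition matching.
import numpy as np

def gaussian_fingerprint(features):
    """`features` is an (N, D) array of per-pixel features from the target segment."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, cov

def bhattacharyya_distance(fp_a, fp_b):
    """Distance between two Gaussian fingerprints; smaller means more similar."""
    mu_a, cov_a = fp_a
    mu_b, cov_b = fp_b
    cov = 0.5 * (cov_a + cov_b)
    diff = mu_a - mu_b
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov)
                         / np.sqrt(np.linalg.det(cov_a) * np.linalg.det(cov_b)))
    return term1 + term2

# Reacquisition: keep the stored fingerprint and accept the candidate segment
# with the smallest distance, subject to a threshold tuned on held-out tracks.
```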
Passive millimeter wavelength (PMMW) video holds great promise given its ability to see targets and obstacles through fog, smoke, and rain. However, current imagers produce undesirable complex noise, which can come as a mixture of fast shot (snow-like) noise and a slower-forming circular fixed pattern. Shot noise can be removed by a simple gain-style filter, but this can blur objects in the scene. To alleviate this, we measure the amount of Bayesian surprise in videos. Bayesian surprise is feature change in time which is abrupt but cannot be accounted for as shot noise. Surprise is used to attenuate the shot noise filter in locations of high surprise. Since high Bayesian surprise in videos is very salient to observers, this reduces blurring, particularly in places where people visually attend. Fixed pattern noise is removed after the shot noise using a combination of non-uniformity correction (NUC) and Eigen Image Wavelet Transformation. The combination allows for online removal of time-varying fixed pattern noise even when background motion may be absent, and for online adaptation to differing intensities of fixed pattern noise. The fixed pattern and shot noise filters are all efficient, allowing real-time processing of PMMW video. We show several examples of PMMW video with complex noise that is much cleaner as a result of the noise removal. Processed video clearly shows cars, houses, trees, and utility poles at 20 frames per second.
Super resolution image reconstruction allows images in a video sequence to be enhanced beyond the original pixel resolution of the imager. Difficulty arises when foreground objects move differently from the background; a common example is a car in motion in a video. Given the common occurrence of such situations, super resolution reconstruction becomes non-trivial. One method for dealing with this is to segment out foreground objects and quantify their pixel motion separately. First we estimate local pixel motion using a standard block motion algorithm common to MPEG encoding. This is then combined with the image itself into a five-dimensional mean-shift (kernel density estimation) image segmentation over mixed motion and color image features. This results in a tight segmentation of objects in terms of both motion and visible image features. The next step is to combine segments into a single master object: statistically common motion and proximity are used to merge segments into master objects. To account for inconsistencies that can arise when tracking objects, we compute statistics over the object and fit them with a generalized linear model. Using the Kullback-Leibler divergence, we have a metric for the goodness of the track for an object between frames.
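The block-motion step mentioned above can be sketched as a brute-force search: for each block in the current frame, find the displacement in the previous frame with the minimum sum of absolute differences (SAD), as in MPEG-style motion estimation. Block size and search range below are illustrative.

```python
# Exhaustive-search block motion estimation (SAD criterion).
import numpy as np

def block_motion(prev, curr, block=8, search=7):
    """Return per-block (dy, dx) motion vectors from `prev` to `curr` (grayscale)."""
    h, w = curr.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            cur_blk = curr[y:y + block, x:x + block].astype(int)
            best, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    py, px = y + dy, x + dx
                    if py < 0 or px < 0 or py + block > h or px + block > w:
                        continue
                    ref = prev[py:py + block, px:px + block].astype(int)
                    sad = np.abs(cur_blk - ref).sum()
                    if sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            motion[by, bx] = (best_dy, best_dx)
    return motion
```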
Many vision research projects involve a sensor or camera doing one thing and doing it well. Fewer research projects have involved a sensor trying to satisfy simultaneous and conflicting tasks. Satisfying a task involves pointing the sensor in the direction demanded by the task. We seek ways to mitigate and select between competing tasks and also, if possible, to merge the tasks so that they can be simultaneously achieved by the sensor; these two approaches are task-selection and task-merging, respectively. This would make a simple pan-tilt camera a very powerful instrument. We built a simple testbed to implement our task-selection and task-merging schemes. We use a digital camera as our sensor, attached to pan and tilt servos capable of pointing it in different directions. We use three different types of tasks for our research: target tracking, surveillance coverage, and initiative. Target tracking is the task of following a target with a known set of features. Surveillance coverage is the task of ensuring that all areas of the space are routinely scanned by the sensor. Initiative is the task of focusing on new things of potential interest should they appear in the course of other activities. Given these heterogeneous task descriptions, we achieve task-selection by assigning priority functions to each task and letting the camera select among the tasks to service. To achieve task-merging, we introduce a concept called "task maps" that represent the regions of space the tasks wish to attend with the sensor. We then merge the task maps and select a region to attend that will satisfy multiple tasks at the same time if possible.
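An illustrative sketch of the "task map" idea follows: each task produces a map of how much it wants the sensor pointed at each pan/tilt cell, the maps are merged, and the camera attends the cell that best serves multiple tasks at once. The map resolution and the weighted-sum merge are assumptions.

```python
# Task-map merging and direction selection sketch.
import numpy as np

def merge_task_maps(task_maps, weights=None):
    """`task_maps` is a list of (PAN, TILT) arrays of per-task desirability."""
    if weights is None:
        weights = np.ones(len(task_maps))
    stacked = np.stack(task_maps)                 # (n_tasks, PAN, TILT)
    return (weights[:, None, None] * stacked).sum(axis=0)

def select_direction(merged):
    """Pick the pan/tilt cell with the highest merged desirability."""
    return np.unravel_index(np.argmax(merged), merged.shape)

# Example: a tracking map peaked at the target plus a coverage map that grows
# with time-since-last-viewed; the selected cell trades off both tasks.
```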
We are developing a distributed system for tracking people and objects in complex scenes and environments using biologically based algorithms. An important component of such a system is its ability to track targets from multiple cameras at multiple viewpoints. As such, our system must be able to extract and analyze the features of targets in a manner that is sufficiently viewpoint-invariant, so that the cameras can share information about targets for purposes such as tracking. Since biological organisms are able to describe targets to one another from very different visual perspectives, it is hoped that by discovering the mechanisms by which they understand objects, such abilities can be imparted to a system of distributed agents with many camera viewpoints. Our current methodology draws from work on saliency and center-surround competition among visual components, which allows real-time location of targets without the need for prior information about a target's visual features. For instance, gestalt principles of color opponency, continuity, and motion form a basis for locating targets in a logical manner. From this, targets can be located and tracked relatively reliably for short periods. Features can then be extracted from salient targets, allowing a signature to be stored which describes the basic visual features of a target. This signature can then be used to share target information with other cameras at other viewpoints, or may be used to create the prior information needed for other types of trackers. Here we discuss such a system which, without the need for prior target feature information, extracts salient features from a scene, binds them, and uses the bound features as a set for understanding trackable objects.
Many sensor systems such as security cameras and satellite photography are faced with the problem of where they should point their sensors at any given time. With directional control of the sensor, the amount of space available to cover far exceeds the field-of-view of the sensor. Given a task domain and a set of constraints, we seek coverage strategies that achieve effective area coverage of the environment. We develop metrics that measure the quality of the strategies and give a basis for comparison. In addition, we explore what it means for an area to be "covered" and how that is affected by the domain, the sensor constraints, and the algorithms. We built a testbed in which we implement and run various sensor coverage strategies and take measurements on their performance. We modeled the domain of a camera mounted on pan and tilt servos with appropriate constraints and time delays on movement. Next, we built several coverage strategies for selecting where the camera should look at any given time based on concepts such as force-mass systems, scripted movements, and the time since an area was last viewed. Finally, we describe several metrics with which we can compare the effectiveness of different coverage strategies. These metrics are based on such things as how well the whole space is covered, how relevant the covered areas are to the domain, how much time is spent acquiring data, how much time is wasted while moving the servos, and how well the strategies detect new objects moving through space.
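One of the coverage metrics discussed above can be sketched as follows: track when each cell of the pan/tilt space was last viewed and report staleness, plus the fraction of time spent moving the servos rather than acquiring data. Cell granularity and the metric definitions are illustrative assumptions.

```python
# Simple coverage-quality bookkeeping for a pan/tilt sensor.
import numpy as np

class CoverageMetrics:
    def __init__(self, n_pan, n_tilt):
        self.last_seen = np.full((n_pan, n_tilt), -np.inf)
        self.move_time = 0.0
        self.dwell_time = 0.0

    def record_view(self, pan_cell, tilt_cell, t, dwell):
        self.last_seen[pan_cell, tilt_cell] = t
        self.dwell_time += dwell

    def record_move(self, duration):
        self.move_time += duration

    def staleness(self, t):
        """Mean time since each cell was last viewed (infinite until every cell has been seen)."""
        return float(np.mean(t - self.last_seen))

    def duty_cycle(self):
        """Fraction of time spent acquiring data rather than slewing the servos."""
        total = self.move_time + self.dwell_time
        return self.dwell_time / total if total > 0 else 0.0
```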
We discuss a toolkit for use in scene understanding where prior information about targets is not necessarily available. As such, we give it a notion of connectivity so that it can classify features in an image for the purpose of tracking and identification. The tool, VFAT (Visual Feature Analysis Tool), is designed to work in real time in an intelligent multi-agent room. It is built around a modular design and includes several fast vision processes. The first components discussed are for feature selection using visual saliency and Monte Carlo selection. Features that have been selected from an image are then mixed into useful and more complex features. All the features are then reduced in dimension and contrasted using a combination of Independent Component Analysis and Principal Component Analysis (ICA/PCA). Once this has been done, we classify features using a custom non-parametric classifier (NPclassify) that does not require hard parameters such as class size or number of classes, so that VFAT can create classes without stringent priors about class structure. These classes are then generalized using Gaussian regions, which allows easier storage of class properties and computation of the probability of a class match. To speed up the creation of Gaussian regions, we use a system of rotations instead of the traditional pseudo-inverse method. In addition to discussing the structure of VFAT, we discuss training of the current system, which is relatively easy to perform. ICA/PCA is trained by giving VFAT a large number of random images; the ICA/PCA matrix is computed from features extracted by VFAT. The non-parametric classifier NPclassify is trained by presenting it with images of objects and having it decide how many objects it thinks it sees. The difference between the number of objects it sees and the number it is supposed to see is used as the error term, allowing VFAT to learn to classify based upon the experimenter's subjective idea of good classification.
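The ICA/PCA reduction stage can be sketched with scikit-learn's PCA and FastICA as stand-ins for the paper's implementation: raw feature vectors are whitened and reduced before being handed to the clustering step. The number of components is an illustrative assumption.

```python
# ICA/PCA feature reduction sketch.
from sklearn.decomposition import PCA, FastICA

def reduce_features(features, n_components=8):
    """`features` is an (N, D) array of raw per-location feature vectors."""
    pca = PCA(n_components=n_components, whiten=True)
    reduced = pca.fit_transform(features)          # decorrelate and reduce
    ica = FastICA(n_components=n_components, random_state=0)
    return ica.fit_transform(reduced)              # push toward independence
```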
One of the important components of a multi-sensor “intelligent” room, which can observe, track, and react to its occupants, is a multi-camera system. This system involves the development of algorithms that enable a set of cameras to communicate and cooperate with each other effectively so that they can monitor the events happening in the room. To achieve this, the cameras typically must first build a map of their relative locations. In this paper, we discuss a novel RF-based technique for estimating distances between cameras. The proposed RF algorithm can estimate distances with relatively good accuracy even in the presence of random noise.
We have developed a method for clustering features into objects by taking features including intensity, orientations, and colors from the most salient points in an image, as determined by our biologically motivated saliency program. We can train a program to cluster these features by supplying as training input only the number of objects that should appear in an image. We do this with a clustering technique that links nodes in a minimum spanning tree using not only distance but a density metric as well. We can then form classes over objects, or object segmentations, in a novel validation set by training over a set of seven soft and hard parameters. We also discuss the uses of such a flexible method in landmark-based navigation, since a robot using it may better generalize over features and objects.
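A hedged sketch of clustering by cutting a minimum spanning tree follows: build an MST over the feature points, then cut edges that are long relative to the overall edge-length statistics, a simple stand-in for the distance-plus-density criterion described above. The thresholding rule is an assumption.

```python
# Cluster feature points by pruning long edges of a minimum spanning tree.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import cdist

def mst_cluster(points, cut_factor=2.0):
    """Cluster (N, D) feature points; returns an integer label per point."""
    dists = cdist(points, points)
    mst = minimum_spanning_tree(dists).toarray()
    edges = mst[mst > 0]
    # Cut edges much longer than the typical MST edge (density-style criterion).
    threshold = edges.mean() + cut_factor * edges.std()
    mst[mst > threshold] = 0
    _, labels = connected_components(mst, directed=False)
    return labels
```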
Utilizing off-the-shelf, low-cost parts, we have constructed a robot that is small, light, powerful, and relatively inexpensive (< $3900). The system is constructed around the Beowulf concept of linking multiple discrete computing units into a single cooperative system. The goal of this project is to demonstrate a new robotics platform with sufficient computing resources to run biologically-inspired vision algorithms in real time. This is accomplished by connecting two dual-CPU embedded PC motherboards using fast gigabit Ethernet. The motherboards contain integrated FireWire, USB, and serial connections to handle camera, servomotor, GPS, and other miscellaneous inputs/outputs. The computing systems are mounted on a servomechanism-controlled, off-the-shelf “Off Road” RC car. Using the high-performance characteristics of the car, the robot can attain relatively high speeds outdoors. The robot is used as a test platform for biologically-inspired as well as traditional robotic algorithms in outdoor navigation and exploration activities. Leader following using multi-blob tracking and segmentation, and navigation using statistical information and decision inference from image spectral information, are discussed. The design of the robot is open-source and is constructed in a manner that enhances ease of replication. This is done to facilitate construction and development of mobile robots at research institutions where large financial resources may not be readily available, as well as to put robots into the hands of hobbyists and help lead to the next stage in the evolution of robotics: a home hobby robot with potential real-world applications.
In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system in which 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate visual system, which allows the animal to perform real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent use the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to the excitation and suppression documented in electrophysiology, psychophysics, and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera: a camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan to implement additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty and the activity of the tracked object in relation to sensitive features of the environment.
Visual line following in mobile robotics can be made more complex when objects are placed on or around the line being followed. An algorithm is presented that suggests a manner in which a good line track can be discriminated from a bad line track using the expected size of the line. The mobile robot in this case can measure the width of the line. It calculates a mean width as it moves and maintains a fixed-size set of samples, which enables it to adapt to changing conditions. If a measurement falls outside what the robot expects, it treats the measurement as undependable and can take measures to deal with what it believes to be erroneous data. Techniques for dealing with erroneous data include attempting to look around the obstacle or making an educated guess as to where the line should be. The system discussed has the advantage of not needing any extra equipment to discover whether an obstacle is corrupting its measurements; instead, the robot determines whether data is good or bad based upon what it expects to find.
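A minimal sketch of the described check follows: keep a rolling window of recent line-width measurements and flag a new measurement as unreliable if it falls too far from the window mean. The window size and the threshold in standard deviations are illustrative choices.

```python
# Rolling-window outlier check on measured line width.
from collections import deque
import statistics

class LineWidthMonitor:
    def __init__(self, window=30, n_sigma=3.0):
        self.samples = deque(maxlen=window)   # fixed-size sample window
        self.n_sigma = n_sigma

    def is_reliable(self, width):
        """Return True if `width` agrees with recent measurements."""
        if len(self.samples) < 5:             # not enough history yet: accept
            self.samples.append(width)
            return True
        mean = statistics.mean(self.samples)
        stdev = statistics.stdev(self.samples)
        ok = abs(width - mean) <= self.n_sigma * max(stdev, 1e-6)
        if ok:
            self.samples.append(width)        # only trusted samples update the model
        return ok
```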
A method is discussed describing how different types of omni-directional fisheye lenses can be calibrated for use in robotic vision. The technique allows full calibration and correction of x,y pixel coordinates while taking only two uncalibrated and one calibrated measurement, each found from the observed x,y coordinates of a calibration target. Any fisheye lens that has a roughly spherical shape can have its distortion corrected with this technique. Two measurements are taken to discover the edges and centroid of the lens; these can be taken automatically by the computer and do not require any knowledge of the lens or the location of the calibration target. A third measurement is then taken to discover the degree of spherical distortion. This is done by comparing the expected measurement to the measurement obtained and then fitting a curve that describes the degree of distortion. Once the degree of distortion is known and a simple curve has been fitted to the distortion shape, the equation of that distortion and the simple dimensions of the lens are plugged into an equation that remains the same for all types of lenses. The technique has the advantage of needing only one calibrated measurement to discover the type of lens being used.
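A hedged sketch of the third calibration step follows: compare observed target radii to their expected (undistorted) radii and fit a simple polynomial that maps the distorted radius back to the corrected one. The polynomial degree and the use of numpy.polyfit are assumptions for illustration; the paper's curve form may differ.

```python
# Radial distortion curve fit and per-pixel correction.
import numpy as np

def fit_distortion_curve(r_observed, r_expected, degree=3):
    """Fit r_corrected = f(r_observed) from calibration-target measurements,
    with radii measured from the lens center found in the earlier steps."""
    return np.polyfit(r_observed, r_expected, degree)

def undistort_point(x, y, cx, cy, coeffs):
    """Correct one (x, y) pixel given lens center (cx, cy) and the fitted curve."""
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    if r == 0:
        return x, y
    scale = np.polyval(coeffs, r) / r
    return cx + dx * scale, cy + dy * scale
```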
An intelligent robot is a remarkably useful combination of a manipulator, sensors, and controls. The current use of these machines in outer space, medicine, hazardous materials, defense applications, and industry is being pursued with vigor but little funding. In factory automation, such robotic machines can improve productivity, increase product quality, and improve competitiveness. The computer and the robot have both been developed in recent times. The intelligent robot combines both technologies and requires a thorough understanding and knowledge of mechatronics. In honor of the new millennium, this paper presents a discussion of futuristic trends and predictions. However, in keeping with technical tradition, a new technique for 'Follow the Leader' is also presented in the hope of it becoming a new, useful, and non-obvious technique.
A single rotating sonar element is used with a restricted angle of sweep to obtain readings and develop a range map for the unobstructed path of an autonomous guided vehicle (AGV). A Polaroid ultrasound transducer element is mounted on a micromotor with encoder feedback. The motion of this motor is controlled using a Galil DMC 1000 motion control board. The encoder is interfaced with the DMC 1000 board using an intermediate IMC 1100 break-out board. By adjusting the parameters of the Polaroid element, it is possible to obtain range readings at known angles with respect to the center of the robot. The readings are mapped to obtain a range map of the unobstructed path in front of the robot. The idea can be extended to 360-degree mapping by changing the assembly-level programming on the Galil motion control board. Such a system would be compact and reliable over a range of environments and AGV applications.
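An illustrative sketch of turning sweep readings into a range map: each sonar return at a known sweep angle becomes an (x, y) point in the robot frame, and the set of points outlines the unobstructed path ahead. Units and the angle convention are assumptions.

```python
# Convert a restricted sonar sweep into robot-frame range-map points.
import math

def sweep_to_points(readings):
    """`readings` is a list of (angle_deg, range_m) pairs measured from the
    robot center; returns (x, y) points with x forward and y to the left."""
    points = []
    for angle_deg, rng in readings:
        theta = math.radians(angle_deg)
        points.append((rng * math.cos(theta), rng * math.sin(theta)))
    return points

# Example: a restricted sweep of -45..45 degrees in 5-degree steps gives a
# coarse outline of free space directly ahead of the AGV.
```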
KEYWORDS: Neural networks, Mobile robots, Control systems, Sensors, Robot vision, Space robots, Evolutionary algorithms, Detection and tracking algorithms, Environmental sensing, Computing systems
The purpose of this paper is to present a new approach to path planning for a mobile robot in static outdoor environments. A simple sensor model is developed for fast acquisition of environment information. The obstacle avoidance system is based on a micro-controller interfaced with multiple ultrasonic transducers and a rotating motor. Using sonar readings and environment knowledge, a local map based on a weight evaluation function is built for robot path planning. The path planner finds the local optimal path using the A* search algorithm. The robot is trained to learn a goal-directed task under adequate supervision. The simulation experiments show that a robot utilizing our neural network scheme can learn obstacle-avoidance tasks in workspaces of a certain geometric complexity. The results show that the proposed algorithm can be efficiently implemented in an outdoor environment.
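Below is a minimal A* sketch over the kind of local weighted grid map described above: cells hold traversal costs built from sonar readings, and the planner finds the lowest-cost path to a local goal. The grid representation and cost model are illustrative assumptions.

```python
# A* search over a weighted grid of traversable cells.
import heapq

def astar(grid, start, goal):
    """`grid` is a dict {(x, y): cost} of traversable cells; returns a path list."""
    def h(a, b):
        # Manhattan-distance heuristic (admissible when cell costs are >= 1).
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        x, y = node
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in grid:
                continue
            ng = g + grid[nb]          # accumulate cell traversal cost
            if ng < best_g.get(nb, float("inf")):
                best_g[nb] = ng
                heapq.heappush(open_set, (ng + h(nb, goal), ng, nb, path + [nb]))
    return None                        # no path found
```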