The Johns Hopkins University MultiModal Action (JHUMMA) dataset contains a set of twenty-one actions recorded with four sensor systems in three different modalities. The data were collected with a data acquisition system comprising three independent active sonar devices operating at three different frequencies and a Microsoft Kinect sensor that provides both RGB and depth data. We have developed algorithms for human action recognition from active acoustics and provide baseline recognition performance benchmarks.
The advanced imagers team at JHU APL and ECE has been advocating and developing a new class of sensor systems
that address key system-level performance bottlenecks but are sufficiently flexible to allow optimization of associated
cost and size, weight, and power (SWaP) for different applications and missions. A primary component of this approach
is the innovative system-on-chip architecture: Flexible Readout and Integration Sensors (FRIS). This paper reports on
the development and testing of a prototype based on the FRIS concept, including its architecture and a summary of test
results to date relevant to the hostile fire detection challenge. For this application, the prototype demonstrates the
potential of the concept to yield the smallest-SWaP, lowest-cost imaging solution with a low false-alarm rate. In
addition, a specific solution based on the visible band is proposed. Similar performance and SWaP gains are expected for
other wavebands, such as SWIR, MWIR, and LWIR, and for other applications, such as persistent surveillance for critical
infrastructure, border control, and unattended sensors.
We present a bio-inspired system-on-chip focal plane readout architecture which, at the system level, relies on an
event-based sampling scheme where only pixels within a programmable range of photon flux rates are output.
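As a rough software sketch (not the FRIS circuit itself; the window limits and the toy frame below are invented for illustration), the flux-windowed, event-based readout amounts to suppressing every pixel outside the programmed range:

```python
import numpy as np

def event_readout(frame, flux_min, flux_max):
    """Return (row, col, value) triples only for pixels whose photon-flux
    estimate falls inside the programmable [flux_min, flux_max] window;
    all other pixels are suppressed, reducing output bandwidth."""
    rows, cols = np.where((frame >= flux_min) & (frame <= flux_max))
    return list(zip(rows.tolist(), cols.tolist(), frame[rows, cols].tolist()))

frame = np.array([[10, 250, 40],
                  [300, 55,  5],
                  [70, 120, 900]])
events = event_readout(frame, flux_min=30, flux_max=200)
# only the in-window pixels are reported; saturated and dark pixels are dropped
```

Only the event list leaves the array, which is where the bandwidth saving comes from.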
At the pixel level, a one-bit oversampled analog-to-digital converter together with a decimator allows for the
quantization of signals up to 26 bits. Furthermore, digital non-uniformity correction of both gain and offset
errors is applied at the pixel level prior to readout. We report test results for a prototype array fabricated in a
standard 90nm CMOS process. Tests performed at room and cryogenic temperatures demonstrate the capability
to operate at a temporal noise ratio as low as 1.5, an electron well capacity over 100 Ge-, and an ADC LSB down
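A behavioral sketch of the pixel-level conversion described above, assuming a first-order modulator (the loop order and all parameter values here are illustrative, not taken from the prototype):

```python
def sigma_delta_adc(x, osr):
    """One-bit oversampled ADC: a first-order sigma-delta loop produces a
    bitstream whose mean tracks the input x in [0, 1]; a simple averaging
    decimator then recovers a multi-bit value from osr one-bit samples."""
    integrator, prev_bit, ones = 0.0, 0, 0
    for _ in range(osr):
        integrator += x - prev_bit           # feedback of the 1-bit DAC
        prev_bit = 1 if integrator >= 0.5 else 0
        ones += prev_bit
    return ones / osr                        # decimation by averaging

def nuc(raw, gain, offset):
    """Per-pixel digital non-uniformity correction applied before readout."""
    return gain * raw + offset

estimate = sigma_delta_adc(0.3, osr=4096)    # converges toward the input 0.3
```

Raising the oversampling ratio directly raises the effective quantization depth, which is how a one-bit converter can reach many-bit resolution.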
The desire for persistent, long-term, covert surveillance places severe constraints on the power consumption of a sensor node. To achieve the desired endurance while minimizing the size of the node, it is imperative to use application-specific integrated circuits (ASICs) that deliver the required performance with maximal power efficiency while minimizing the communication bandwidth needed. This paper reviews our ongoing effort to integrate several micropower devices for low-power wake-up detection, blind source separation and localization, and pattern classification, and to demonstrate the utility of the system in relevant surveillance applications. The capabilities of each module are presented in detail along with performance statistics measured during recent experiments.
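A minimal sketch of one such module: an energy-based wake-up detector that keeps the node asleep until the short-term signal energy rises well above an adaptive noise-floor estimate. The frame length, threshold, and adaptation rate are illustrative; the actual ASIC algorithms are not specified here.

```python
import numpy as np

def wakeup(samples, frame_len=256, threshold=4.0):
    """Micropower-style wake-up trigger: compare each frame's mean-square
    energy against a slowly adapting noise-floor estimate; fire when the
    ratio exceeds `threshold`."""
    noise_floor, triggers = None, []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        energy = float(np.mean(samples[start:start + frame_len] ** 2))
        if noise_floor is None:
            noise_floor = energy              # first frame seeds the floor
        triggers.append(energy > threshold * noise_floor)
        noise_floor = 0.95 * noise_floor + 0.05 * energy  # slow adaptation
    return triggers
```

Because only a mean-square accumulation and one comparison run per frame, this kind of front end can stay powered while the heavier separation and classification stages sleep.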
The function of a large number of MEMS and NEMS devices relies critically on the transduction method employed to convert the mechanical displacement into an electrical signal. Optical transduction techniques have distinct advantages over more traditional capacitive and piezoelectric transduction methods. Optical interferometers can
provide much higher sensitivity, by about three orders of magnitude, but are difficult to integrate with standard MEMS and microelectronics processing. In this paper, we present a scalable architecture based on silicon-on-sapphire (SOS) CMOS1 for building an interferometric optical detection system. This new detection system is currently
being applied to sense the motion of a resonating MEMS device, but can be used to detect the motion of any object with which the system is packaged. In the current hybrid approach, the SOS CMOS device is packaged with both vertical cavity surface emitting lasers (VCSELs) and MEMS devices. The optical transparency of the sapphire substrate, together with the ultra-thin silicon PIN photodiodes available in this SOS process, allows for the design of both a Michelson-type and a Fabry-Perot-type interferometer. The detectors, signal processing electronics, and VCSEL drivers are built on the SOS CMOS for a complete system. We present experimental data demonstrating interferometric detection of a vibrating device.
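For intuition, the idealized Michelson fringe relating detector intensity to mirror (here, MEMS) displacement can be written in a few lines; the 850 nm wavelength is a typical VCSEL value chosen for illustration, not a parameter from this paper:

```python
import math

def michelson_intensity(displacement_nm, wavelength_nm=850.0, i0=1.0):
    """Idealized Michelson fringe: detector intensity versus displacement
    of one arm. Moving the reflector by lambda/4 changes the round-trip
    path by lambda/2, one full bright-to-dark swing."""
    phase = 4.0 * math.pi * displacement_nm / wavelength_nm
    return 0.5 * i0 * (1.0 + math.cos(phase))

# zero displacement: constructive interference (bright)
# lambda/4 displacement: destructive interference (dark)
```

The steep slope of the fringe around the quarter-intensity points is what gives interferometric readout its sensitivity advantage over capacitive sensing.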
The U.S. Army Night Vision and Electronic Sensors Directorate (NVESD) currently has efforts, both internal and external, to develop advanced readout integrated circuits (ROICs) with on-chip processing capabilities. We have funded Raytheon Infrared Operations through a Dual Use Science and Technology program to develop and fabricate an advanced ROIC with processing features including non-uniformity correction, extended charge handling, motion detection, and edge enhancement. This advanced ROIC has been demonstrated through the successful development of the 'Adaptive Infrared Sensors' (AIRS) camera. Discussions of the circuit concepts and architecture of the AIRS ROIC/FPA, as well as simulation results and test results of the camera, are presented. Our internal investigations have resulted in an advanced readout design capable of real-time spatial and temporal filtering to perform edge detection, edge enhancement, motion detection, and motion enhancement. The details of the circuit design, simulation results, and test data are presented.
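A software analogue of the spatial and temporal operations named above (the kernel and threshold are standard textbook choices, not the AIRS circuit parameters):

```python
import numpy as np

def laplacian_edges(frame):
    """Spatial filter: a 4-neighbor Laplacian kernel highlights edges,
    producing zero output over uniform regions."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    padded = np.pad(frame.astype(float), 1, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out

def motion(prev_frame, frame, threshold=10.0):
    """Temporal filter: frame differencing flags pixels that changed."""
    return np.abs(frame.astype(float) - prev_frame.astype(float)) > threshold
```

Both operations use only a pixel's immediate neighbors (in space or in time), which is what makes them natural candidates for implementation directly on the readout circuit.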
The retina transduces all visual information that reaches the brain. From an engineering point of view, its function is to reduce the bandwidth required to transmit images to the brain by rejecting irrelevant information. Indeed, the retina is primarily sensitive to temporal and spatial changes in the image, and not to the absolute level of illumination. This preprocessing greatly reduces the size of the optic nerve and makes higher-level processing more effective. However, any process that discards information must necessarily create ambiguities. That is, two different stimuli may produce the same response; one stimulus thus creates an illusion of the other. Vision researchers have discovered many illusions. Any model that seeks to account for the behavior of the eye-brain system must explain this large phenomenological database in a unified and biologically plausible fashion. Grossberg has proposed a model that succeeds in the first respect; that is, he provides a unified mechanistic explanation for optical illusions1. Grossberg succeeds where others have failed because his model takes into account interactions between the processes that control perception of form and appearance. As it turns out, these interacting processes offset each other's complementary inadequacies, producing emergent properties that cannot be explained by focusing on any one process alone. Grossberg's model has three interacting processes. The first process enhances discontinuities (edges) in the image and, at the same time, discounts the illuminant. This process is implemented using on-cells with lateral inhibitory connections whose outputs resemble those of the retinal bipolar cells. The second process does the actual edge detection. It is realized by three hierarchical layers of cells. The third process smooths variations in brightness using a syncytium of cells between which signals diffuse freely.
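A subtractive 1-D caricature of the first process can make its effect concrete. Grossberg's actual on-cells use shunting (divisive) inhibition; this simplified version only illustrates how lateral inhibition flattens uniform regions and enhances edges:

```python
import numpy as np

def on_cell_response(luminance, surround=2):
    """1-D center-surround sketch: each on-cell is excited by its own pixel
    and inhibited by the local mean, so uniform illumination is discounted
    and discontinuities (edges) are enhanced."""
    x = np.asarray(luminance, dtype=float)
    pad = np.pad(x, surround, mode="edge")
    local_mean = np.array([pad[i:i + 2 * surround + 1].mean()
                           for i in range(len(x))])
    return x - local_mean

# a step edge: the response is zero over the uniform regions
# and swings negative/positive on either side of the edge
step = [1.0] * 6 + [3.0] * 6
```

Because only differences from the local mean survive, a uniform change in overall illumination produces no output at all, which is the sense in which the illuminant is discounted.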
Afferent inputs produced by the first process (on-cells) are averaged by the third process (syncytium) within boundaries generated by the second process (edge detection) to generate the final brightness percept. In Grossberg's work, results of computer simulations demonstrating the performance of this model for various 1-D and 2-D images are presented. The model gives the correct brightness percepts for several classic illusions, such as brightness constancy, brightness contrast, the Craik-O'Brien-Cornsweet effect, the Koffka-Benussi ring, evenly and unevenly illuminated Mondrians, and more recent illusions such as the Kanizsa-Minguzzi anomalous brightness differentiation. That a simple mechanistic model can explain all these illusions away should not be surprising; they are produced by a single (highly evolved) underlying biological structure. (*Now in the CNS program, California Institute of Technology.) This paper describes a physical model2 which implements the above mechanisms using two resistive networks (grids). The first network forms a spatial average of the input luminance signals, mimicking the retinal horizontal cells. The second network implements the syncytium using nonlinear conductances. The current in these conductances saturates when the voltage across them becomes large, automatically segmenting the image. In the retina, this mechanism is probably mediated by the gap junctions. Our model extends Mahowald and Mead's biologically inspired silicon retina2 to include inner-plexiform processing. It is simple and robust, having only three levels and six parameters (which are actual conductances and currents) compared to six levels and over twenty parameters for Grossberg's model. We have simulated our model on a computer (about 400 lines of C code) and used it to duplicate the results1 using images with up to 40 x 40 pixels. Brightness percepts produced by the model for various illusions will be presented.
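The segmenting behavior of the second network can be sketched as a 1-D resistive line whose inter-node currents saturate; a tanh stands in for the nonlinear conductance, and every constant below is illustrative rather than taken from the chip:

```python
import numpy as np

def syncytium(inputs, g=1.0, i_sat=0.5, leak=1.0, iters=2000, dt=0.05):
    """1-D resistive network with saturating junctions: node voltages relax
    toward their inputs while neighbors exchange a current that saturates at
    i_sat. Small voltage differences are smoothed; large ones (edges) pass
    no extra current, so the image segments itself."""
    u = np.array(inputs, dtype=float)   # afferent inputs (fixed)
    v = u.copy()                        # node voltages (relax over time)
    for _ in range(iters):
        # current flowing into each node from its right neighbor
        d_right = np.tanh(g * (np.roll(v, -1) - v) / i_sat) * i_sat
        d_right[-1] = 0.0               # open boundary on the right
        d_left = -np.roll(d_right, 1)   # equal and opposite, from the left
        d_left[0] = 0.0                 # open boundary on the left
        v = v + dt * (leak * (u - v) + d_right + d_left)
    return v
```

Run on a signal with a small ripple and one large step, the ripple diffuses away while the step survives, because the saturated junction passes at most i_sat no matter how large the voltage across it grows.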
Since the model has a simple and regular structure, requiring only nearest-neighbor connections, it can be efficiently implemented in analog VLSI. It should be possible to realize a 200 x 200 pixel retina in a state-of-the-art CMOS process. Of course, the silicon retina will operate in real time; its dynamic properties could be compared with available neurophysiological data. This paper is organized as follows: The new model is presented in the next section (Section 2). In Section 3, we describe the software implementation. Results from the simulations are presented in Section 4. In Section 5, we argue that the syncytium is realized by the amacrine cells in the inner-plexiform layer of the retina and show that the model's predictions are consistent with results from motion experiments. Our concluding remarks are in Section 6.