Army NVESD MS/Human Signature Exploitation Ft. Belvoir, VA 22060 703-704-0532

• Fellow of the American Institute for Medical and Biological Engineering (2004) for breast cancer passive spectrogram diagnoses.

• Fellow of the IEEE (1997) for bi-sensor fusion.

• Foreign Academician, Russian Academy of Nonlinear Sciences (1999), for unsupervised learning.

• Fellow of the Optical Society of America (1996) for adaptive wavelets.

• Fellow of SPIE, the International Society for Optical Engineering (since 1995), for neural nets.

• Fellow of INNS (2010), as a founding governor and former president of INNS.

Dr. Szu has been a champion of brain-style computing for two decades. A founder, former president, and current governor of the International Neural Network Society (INNS), he received the INNS D. Gabor Award in 1997 “for outstanding contribution to neural network applications in information sciences and pioneer implementations of fast simulated annealing search,” and the Eduardo R. Caianiello Award in 1999 from the Italian Academy for “elucidating and implementing a chaotic neural net as a dynamic realization for fuzzy logic membership function.” Recently, he contributed to the unsupervised learning theory of the thermodynamic free energy of sensory pairs for fusion. For this contribution, Dr. Szu was elected a foreign academician of the Russian Academy of Nonlinear Sciences in 1999, for unsupervised learning based on a homeostasis-constant cybernetic brain temperature. Recently, SPIE awarded him the Nanoengineering Award and the Biomedical Wellness Engineering Award.
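The “fast simulated annealing” cited in the Gabor Award uses heavy-tailed Cauchy jumps with a fast cooling schedule T_k = T_0/(1+k). A minimal illustrative sketch follows; the test function, step count, and temperature are hypothetical choices for the demo, not taken from the original papers:

```python
import math
import random

def fast_annealing(f, x0, t0=1.0, steps=2000, seed=0):
    """Fast simulated annealing sketch with a Cauchy visiting distribution.

    The fast schedule T_k = T0 / (1 + k), combined with heavy-tailed Cauchy
    jumps, still permits occasional long hops that escape local minima.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        t = t0 / (1.0 + k)                      # fast (Cauchy) cooling schedule
        # Cauchy-distributed jump: inverse-CDF sampling via tan().
        step = t * math.tan(math.pi * (rng.random() - 0.5))
        x_new = x + step
        f_new = f(x_new)
        # Metropolis acceptance at temperature t.
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / max(t, 1e-12)):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Rastrigin-like 1-D test function with many local minima; global minimum at x = 0.
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
x_best, f_best = fast_annealing(f, x0=4.5)
```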

Besides 300 publications, a dozen patents, and numerous books and journals, Dr. Szu has taught students “how to be creative in interdisciplinary sciences,” following Uhlenbeck’s Royal Dutch tradition, and has guided a dozen PhD students. His practice of creativity is itemized as follows:

• Initiate Biomedical Wellness Engineering for the quality of life of aging societies.

• Promote Nano-Robot for high-yield Nanoengineering based on Nanosciences and Na


**Publications** (222)


The S³V System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system and making it suitable for remote battlefield, tactical, and civilian applications, including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S³V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S³V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

*S = k_B Log W* turns out to be the Rosetta Stone for translating the hieroglyphics of microwave sensing into an optical display. The LHS is the molecular entropy S, measuring the degree of uniformity scattering off the sensing cross sections. The RHS is the inverse relationship (equation) predicting the Planck radiation spectral distribution parameterized by the Kelvin temperature T. Use is made of the energy conservation law of the heat capacity of the Reservoir (RV): the change TΔS = −ΔE equals the internal energy change of the black box (bb) subsystem. Moreover, irreversible thermodynamics requires ΔS > 0 for collision mixing toward the larger total uniformity of heat death, asserted by Boltzmann, which derived the so-called Maxwell-Boltzmann canonical probability. Given the zero-boundary-condition black box, Planck solved for discrete standing-wave eigenstates (equation). Together with the canonical partition function (equation), an ensemble average of all possible internal energies yielded the celebrated Planck radiation spectrum (equation), where the density of states is (equation). In summary, given the multispectral sensing data (equation), we applied the Lagrange Constraint Neural Network (LCNN) to solve the Blind Sources Separation (BSS) for a set of equivalent bb target temperatures. From the measured specific values, slopes, and shapes we can fit a set of Kelvin temperatures T for each bb target. As a result, we could apply analytic continuation for each entropy source along the temperature-unique Planck spectral curves toward an RGB color-temperature display for any sensing probing frequency.
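The temperature fitting described above can be illustrated with a toy two-band inversion of the Planck law: measure the radiance ratio in two bands and solve for the Kelvin temperature. The band choices and the bisection solver below are assumptions for illustration, not the LCNN/BSS machinery of the abstract:

```python
import math

H = 6.62607015e-34   # Planck constant (J*s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck(lam, t):
    """Planck spectral radiance B(lambda, T) in W * sr^-1 * m^-3."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * t))

def fit_temperature(r_measured, lam1, lam2, t_lo=100.0, t_hi=2000.0):
    """Invert the two-band radiance ratio B(lam1,T)/B(lam2,T) for T.

    For lam1 < lam2 the ratio grows monotonically with T, so a simple
    bisection recovers the equivalent blackbody temperature.
    """
    for _ in range(200):
        t_mid = 0.5 * (t_lo + t_hi)
        if planck(lam1, t_mid) / planck(lam2, t_mid) < r_measured:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

# MWIR 4 um vs LWIR 10 um bands; a body-temperature target at 310 K.
lam1, lam2 = 4e-6, 10e-6
ratio = planck(lam1, 310.0) / planck(lam2, 310.0)
t_est = fit_temperature(ratio, lam1, lam2)
```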

*h* for the specific case where the fractions of elements in each of two states are the same (*x₁ = x₂ = 0.5*). An example application of this method would be for EEG interpretation in Brain-Computer Interfaces (BCIs), especially in the frontier of invariant biometrics based on distinctive and invariant individual responses to stimuli containing an image of a person with whom there is a strong affiliative response (e.g., to a person’s grandmother). This measure is obtained by mapping EEG observed configuration variables (*z₁, z₂, z₃* for next-nearest neighbor triplets) to *h* using the analytic function giving *h* in terms of these variables at equilibrium. This mapping results in a small phase-space region of resulting *h* values, which characterizes local pattern distributions in the source data. The 1-D vector with equal fractions of units in each of the two states can be obtained using the method for transforming natural images into a binarized equi-probability ensemble (Saremi & Sejnowski, 2014; Stephens et al., 2013). An intrinsically 2-D data configuration can be mapped to 1-D using the 1-D Peano-Hilbert space-filling curve, which has demonstrated a 20 dB lower baseline compared with other approaches (cf. SPIE ICA etc. by Hsu & Szu, 2014). This CVM-based method has multiple potential applications; one near-term application is optimizing classification of the EEG signals from a COTS 1-D BCI baseball hat. This can result in a convenient 3-D lab-tethered EEG, configured as a 1-D CVM equiprobable binary vector, potentially useful for Smartphone wireless display. Longer-range applications include interpreting neural assembly activations via high-density implanted soft, cellular-scale electrodes.
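The Peano-Hilbert mapping mentioned above can be sketched with the classic bit-manipulation construction of the Hilbert curve (shown in 2-D for brevity, though the text uses the 3-D case). Consecutive arc-length positions land on adjacent grid cells, which is the locality property the method relies on:

```python
def hilbert_d2xy(order, d):
    """Map arc length d on a 2^order x 2^order Hilbert curve to (x, y).

    Classic bit-twiddling construction: peel off two bits of d per level,
    rotate the quadrant, and accumulate the offsets.
    """
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

n = 4  # a 16 x 16 grid, 256 arc-length positions
points = [hilbert_d2xy(n, d) for d in range(1 << (2 * n))]
```

Walking the curve visits every cell exactly once, and each step moves to a Manhattan-distance-1 neighbor, so nearby arc lengths stay nearby in 2-D.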

(i) the *Gerchberg-Saxton-Hayes-Papoulis (GSHP)* scheme and (ii) the Compressive Sensing scheme of *Candes-Romberg-Donoho-Tao (CRDT)*. The following two lessons were learned: (i) the mechanism is based on Gibbs overshooting of a step discontinuity; (ii) it relocates the sparsely sampled zeros at missing pixel locations, a la the inner-product conformal mapping property between the spatial and spatial-frequency domains.
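A minimal sketch of the GSHP-style alternating-projection idea: repeatedly enforce the known samples in the signal domain and the known band limit in the frequency domain. The synthetic band-limited signal, band support, and gap location below are all made up for illustration:

```python
import numpy as np

def gerchberg_papoulis(samples, known, band, iters=500):
    """Band-limited extrapolation a la Gerchberg/Papoulis.

    Alternately (a) project onto band-limited signals and (b) re-impose
    the measured samples; the missing samples converge to the unique
    band-limited completion when one exists.
    """
    x = np.where(known, samples, 0.0)
    for _ in range(iters):
        spec = np.fft.fft(x)
        spec[~band] = 0.0                 # (a) band-limit projection
        x = np.fft.ifft(spec).real
        x[known] = samples[known]         # (b) data-consistency projection
    return x

n = 64
freqs = np.fft.fftfreq(n)
band = np.abs(freqs) <= 4 / n             # low-pass support: |k| <= 4 bins
t = np.arange(n)
truth = np.cos(2 * np.pi * 2 * t / n) + 0.5 * np.sin(2 * np.pi * 3 * t / n)
known = np.ones(n, dtype=bool)
known[30:34] = False                      # four missing "pixels"
recovered = gerchberg_papoulis(truth, known, band)
```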

Affordable, harmless, and administrative (AHA) metabolic biomarker (MBM) infrared (IR) imaging is proposed for homecare cancer screening. It may save hundreds of thousands of women’s and thousands of men’s lives every year from breast cancer and melanoma. The goal is to increase the specificity of AHA MBM IR imagery to reduce the false alarm rate (FAR). The patient’s hands are immersed in icy cold water, about 11°C, for 30 seconds. We then compare two IR images, taken before and after the cold stimulus; the difference reveals an enhanced signal-to-noise ratio (SNR) at tumorigenesis sites, since contraction of capillaries under cold challenge is natural to healthy capillaries, except those newly built during angiogenesis (Folkman, Nature 1995). Concomitant with the genome and the phenome (molecular signaling by phosphor-mediated protein causing inflammation by platelet activating factor (PAF)) that transform cells from benign to malignant is the amplification of nitric oxide (NO) syntheses, a short-lived reactive oxygen species (ROS) that dilates regional blood vessels, superseding normal autonomic nervous system regulation. A rapidly growing tumor site might implicate accumulation of ROS, for which NO can rapidly stretch the capillary bed system, usually having a thinning muscular lining, known as Neo-Angiogenesis (NA), that could behave like a Leaky In-situ Faucet Effect (LIFE) in response to cold challenge. To emphasize the state-of-the-art knowledge of NA, we mention in passing the first-generation anti-capillary-growth drug, Avastin by Genentech, an antibody protein injected for metastasis, while the second-generation drugs, Sorafenib by Bayer (2001) and Sutent by Pfizer (2000), both target molecular signaling loci to block receptor-associated tyrosine kinase induced protein phosphorylation in order to reverse the angiogenesis. Differentiating benign from malignant in a straightforward manner is required to achieve the wellness protocol, yet would become prohibitively expensive and impossible to follow through: for example, given a probability of detection (PD) of about 0.1% over an unspecified number of years (e.g., the menopause years for breast cancer), one might need a hundred thousand volunteers. We suggested a Time Reversal Invariant Paradigm (TRIP) (a private communication with the Vatican) for gathering equivalent cancer symptom imagery from the recovery histories of dozens of patients. We further mixed in a few percent of recovered/non-sick cases as negative controls. Creating virtual images and running videos of these, frame by frame, in two directions (forward and backward in time) resulted in identical Receiver Operation Characteristics (ROC), namely PD versus FAR within the standard deviation, for both the computer Aided Target Recognition (AiTR) algorithm and the human radiological experts, even though the physiology could be entirely different. Such a TRIP would hold for any memory-less instantaneous imagery device (IR, ultrasound, X-rays, MRI excluding magnetic hysteresis memory). In summary, such an affordable, harmless, and administrative neo-angiogenesis metabolic biomarker (AHA NA MBM) can help monitor the transition from benign to malignant states of high-risk home-alone seniors and also monitor the progress of their treatment at home. Therefore, a Smartphone equipped with a day camera having IR spectral filtering for contact self-imaging, called a joystick, when augmented with AHA NA MBM, may be suited for HAS homecare.

For N = 10^{2~3}, known as functional f-EEG, the daily monitoring requires two areas of focus. Area #1: quantify the neuronal information flow under arbitrary daily stimuli-response sources. Approach to #1: (i) We have asserted that the sources contained in the EEG signals may be discovered by an unsupervised learning neural network called blind sources separation (BSS) of independent entropy components, based on the irreversible Boltzmann cellular thermodynamics (ΔS > 0), where the entropy is a degree of uniformity. What is the entropy? Loosely speaking, sand on the beach is more uniform, at a higher entropy value, than the rocks composing a mountain; the internal binding energy tells paleontologists of the existence of information. To a politician, a landslide voting result has only the winning information but more entropy, while a non-uniform voting distribution record has more information. For the human’s effortless brain at constant temperature, we can solve for the minimum of the Helmholtz free energy (H = E − TS) by computing BSS, and then the pairwise-entropy source correlation function. (ii) Although the entropy itself is not the information per se, the concurrence of the entropy sources is the information flow, as a functional EEG, sketched in this 2nd BOD report. Area #2: applying EEG bio-feedback will improve collective decision making (TBD). Approach to #2: We introduce novel performance quality metrics, in terms of the throughput rate of faster (Δt) and more accurate (ΔA) decision making, which apply to individual as well as team brain dynamics. Following Nobel Laureate Daniel Kahneman’s book “Thinking, Fast and Slow,” through brainwave biofeedback we can first identify an individual’s “anchored cognitive bias sources.” This is done in order to remove the biases by means of individually tailored pre-processing. Then the training effectiveness can be maximized by the collective product Δt · ΔA. For Area #1, we compute a spatiotemporally windowed EEG in vitro average using adaptive time-window sampling. The sampling rate depends on the type of neuronal responses, which is what we seek. The averaged traditional EEG measurements are further improved by BSS decomposition into a finer stimulus-response source mixing matrix [A] having finer and faster spatial grids with rapid temporal updates. Then, the functional EEG is the second-order covariance matrix defined as the electrode-pair fluctuation correlation function C(s, s′) of independent thermodynamic source components. (1) We define a 1-D space-filling curve as a spiral curve without origin. This pattern is historically known as the Peano-Hilbert arc length a. By taking the most significant bits of the Cartesian product a ≡ O(x · y · z), it represents the arc length numerically with values that map the 3-D neighborhood proximity into a 1-D neighborhood arc-length representation. (2) The 1-D Fourier coefficient spectrum has no spurious high-frequency contents, which typically arise from lexicographical (zig-zag scanning) discontinuity [Hsu & Szu, “Peano-Hilbert curve,” SPIE 2014]. A simple Fourier spectrum histogram fits nicely with the Compressive Sensing CRDT mathematics.
(3) A stationary power spectral density is a reasonable approximation of EEG responses in striate layers in resonance feedback loops capable of producing a 100,000-neuron collective Impulse Response Function (IRF). The striate brain layer architecture represents an ensemble <IRF>, e.g. at V1-V4 of Brodmann areas 17-19 of the Cortex, i.e. the stationary Wiener-Khinchin-Einstein theorem. Goal #1, functional EEG: after taking the 1-D space-filling curve, we compute the ensemble-averaged 1-D Power Spectral Density (PSD) and then make use of the inverse FFT to generate the f-EEG. (ii) Goal #2, individual wellness baseline (IWB): we need novel change detection, so we derive the ubiquitous fat-tail distributions of healthy-brain PSD in outdoor environments (Signal = 310 K (37°C); Noise = 300 K (27°C); SNR = 310/300; 300 K = (1/40) eV). A departure from the IWB might imply stress, fever, a sports injury, an unexpected fall, or numerous midnight excursions which may signal an onset of dementia in a Home Alone Senior (HAS), discovered by telemedicine care-giver networks. Aging global villagers need mental healthcare devices that are affordable, harmless, administrable (AHA) and user-friendly, situated in a clothing article such as a baseball hat and able to interface with pervasive Smartphones in the daily environment.
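The stationary Wiener-Khinchin step above (PSD as the Fourier transform of the autocorrelation) can be checked numerically on a synthetic EEG-like trace; the 10 Hz “alpha” tone, sampling rate, and noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
fs = 128.0                                  # sampling rate (Hz)
t = np.arange(n) / fs
# Synthetic EEG-like trace: a 10 Hz tone buried in noise.
x = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(n)
x = x - x.mean()

# Direct PSD: squared magnitude of the signal's DFT.
spec = np.fft.fft(x)
psd_direct = (spec * spec.conj()).real

# Wiener-Khinchin: the same PSD as the DFT of the circular autocorrelation.
acf = np.array([np.dot(x, np.roll(x, k)) for k in range(n)])
psd_from_acf = np.fft.fft(acf).real
```

The two spectra agree to numerical precision, and the peak lands at the 10 Hz bin (bin 20 for n = 256 at 128 Hz).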

*Augmented-LDA (A-LDA)* – a synergistic approach to LDA which combines traditional supervised, rule-based Machine Learning (ML) strategies to iteratively uncover hidden sources in large data, the artificial neural network (ANN) Unsupervised Learning (USL) at the minimum Helmholtz free energy for isothermal dynamic equilibrium strategies, and the economic intuitions required to handle problems encountered when interpreting large amounts of financial or economic data. To make the ANN USL framework applicable to economics, we define the temperature, entropy, and energy concepts in economics from the non-equilibrium molecular thermodynamics of the Boltzmann viewpoint, as well as defining an information geometry on which the ANN can operate using USL to reduce information saturation. An exemplar of such a system representation is given for firm-industry equilibrium. We demonstrate the traditional ML methodology in the economics context and leverage firm financial data to explore a frontier concept known as behavioral heterogeneity. Behavioral heterogeneity on the firm level can be imagined as a firm’s interactions with different types of economic entities over time. These interactions could impose varying degrees of institutional constraints on a firm’s business behavior. We specifically look at behavioral heterogeneity for firms operating with the label of ‘Going-Concern’ and firms labeled according to the institutional influence they may be experiencing, such as constraints on firm hiring/spending while in a Bankruptcy or a Merger procedure. Uncovering invariant features, or *behavioral data metrics*, from observable firm data in an economy can greatly benefit the FED, World Bank, etc. We find that the ML/LDA communities can benefit from economic intuitions just as much as economists can benefit from generic data exploration tools. The future of successful economic data understanding, modeling, simulation, and visualization can be amplified by new A-LDA models and approaches for new and analogous models of economic system dynamics. The potential benefits of improved economic data analysis and real-time decision aid tools are numerous for researchers, analysts, and federal agencies, who all deal with increasingly large amounts of complex data to support their decision making.

(i) A *data-rich* approach suffering the curse of dimensionality and (ii) an *equation-rich* approach suffering computing power and turnaround time. We suggest a third approach, which we call (iii) compressive M&S (CM&S), because the basic Minimum Free-Helmholtz Energy (MFE) facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. CM&S based on MFE can generalize LCNN to 2nd order as a nonlinear augmented LCNN. For example, during the sunset, we can avoid the reddish bias of sunlight illumination due to long-range Rayleigh scattering over the horizon: with CM&S we can use a night-vision camera instead of a day camera. We decomposed the long-wave infrared (LWIR) band with filters into 2 vector components (8~10 μm and 10~12 μm) and used LCNN to find, pixel by pixel, the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then, we up-shifted consistently, according to the de-mixed source map, to a sub-micron RGB color image. Moreover, the night-vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, suffering less blur owing to dusty-smoke scattering and enjoying apparent smoothness of surface reflectivity of man-made objects under the Rayleigh resolution. One loses three orders of magnitude in the spatial Rayleigh resolution, but gains two orders of magnitude in the reflectivity, and gains another two orders in propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, CM&S can reduce unnecessary measurements and their associated cost and computing, in the sense of super-saving CS: measure one and get one’s neighborhood free.
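The two-band LWIR de-mixing step can be sketched as a per-pixel linear unmixing. The 2×2 mixing matrix below is hypothetical, and a direct linear solve stands in for the LCNN iteration described in the text (which must additionally estimate the mixing blindly):

```python
import numpy as np

# Hypothetical per-band responses of two equivalent blackbody sources in the
# 8-10 um and 10-12 um sub-bands (rows = bands, columns = sources).
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def demix(pixel, a=A):
    """Per-pixel source estimate s solving x = A s (2 bands, 2 sources),
    assuming the mixing matrix is known."""
    return np.linalg.solve(a, pixel)

# A 2 x H x W two-band image cube built from known source maps, for a round trip.
rng = np.random.default_rng(1)
s_true = rng.uniform(0.0, 1.0, size=(2, 4, 4))        # two source maps
x = np.einsum('bs,shw->bhw', A, s_true)               # band mixtures
s_est = np.einsum('sb,bhw->shw', np.linalg.inv(A), x) # whole-image de-mix
```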

M(t) = K(t) Log N(t).

[Φ_s], rather than the traditional purely random compressive sensing (CS) matrix [Φ].

*Principal Component Analysis* (PCA) is an optimal method for approximating a set of vectors or images, and has been used in image processing and computer vision for a number of tasks, including face and object recognition. Its computational complexity and batch calculation nature have limited its applications. Here we discuss two different effective solutions to sequentially calculate the principal bases, in terms of the eigenvectors with respective eigenvalues, using the covariance (or a covariance estimate); this is faster in typical applications and is especially advantageous for image sequences. The principal component basis calculation is processed with much lower delay and allows for dynamic updating of image databases.
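A minimal sketch of such a sequential principal-basis calculation, using a Welford-style running covariance update (one of several possible sequential schemes, not necessarily either of the two discussed in the paper):

```python
import numpy as np

class SequentialPCA:
    """Running-covariance PCA: fold in one vector (e.g. a flattened image
    frame) at a time, then read off principal bases on demand, avoiding
    batch recomputation for image sequences."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))   # accumulated outer products of deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        # Welford-style rank-1 covariance update.
        self.m2 += np.outer(delta, x - self.mean)

    def components(self, k):
        cov = self.m2 / max(self.n - 1, 1)
        vals, vecs = np.linalg.eigh(cov)
        order = np.argsort(vals)[::-1][:k]       # largest eigenvalues first
        return vals[order], vecs[:, order]

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
pca = SequentialPCA(5)
for row in data:
    pca.update(row)
vals, vecs = pca.components(2)
```

The running estimate matches the batch sample covariance exactly, so the sequential bases agree with batch PCA while each update costs only one rank-1 outer product.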

Helmholtz free energy H = U − T₀S ≥ 0, where T₀ = 37°C, the Boltzmann entropy S = k_B ln(W), and U is the unknown internal energy to be computed.

The pupil diameter varies from *8 mm to 2 mm*, and constantly oscillates with a *1/2*-second periodicity. Pupil dilation and contraction cause the iris texture to undergo nonlinear deformation with discrete components and minutia features; thus, iris recognition must be scale invariant under the pupil dynamics. We propose the Mandelbrot fractal dimension count of minutia iris details, at different intensity thresholds, in dilation-invariant wedge-boxes formed at specific angular sizes but spatially varying over four *90°* quadrants due to cellular growth under gravity. Despite the concentric dynamics, we have sought an invariant fractal dimensionality in the circular direction and discovered a non-isotropic effect, departing from the simple Richardson fractal law. Furthermore, we choose an optimum Rayleigh criterion λ/*D* matching the robust fine resolution scale for the given lens aperture *D* and the illumination wavelength λ, for potential application from a distance with the help of comprehensive biometrics including the iris.
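The Mandelbrot fractal dimension count can be illustrated with plain box counting on a synthetic curve; a densely sampled straight segment should come out with dimension near 1. The scales and sampling density are arbitrary choices for the demo, not the wedge-box scheme of the abstract:

```python
import numpy as np

def box_count_dimension(points, scales):
    """Box-counting dimension estimate: count occupied boxes N(s) at each
    scale s and fit log N(s) ~ -D log s."""
    counts = []
    for s in scales:
        boxes = {tuple(np.floor(p / s).astype(int)) for p in points}
        counts.append(len(boxes))
    slope = np.polyfit(np.log(scales), np.log(counts), 1)[0]
    return -slope

# A straight segment sampled densely enough that no box is skipped.
t = np.linspace(0.0, 1.0, 20000)
line = np.stack([t, 0.5 * t], axis=1)
scales = np.array([0.1, 0.05, 0.025, 0.0125])
d = box_count_dimension(line, scales)
```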

*X* collected from the frequency of user accesses, citations weighted by other sites' popularities, and modified by financial sponsorship in a proprietary manner. The indexing that determines the information to be retrieved by the public should be made transparently accountable in at least two ways. One shall balance the inbound linkages pointing at a specific i-th site, called the popularity (see paper for equation), with the outbound linkages (see paper for equation), called the risk factor, before the release of new information, as an environmental impact analysis. The relationship between these two factors cannot be assumed equivalent (undirected), as it is in many mainstream Graph Theory (GT) models.
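The popularity/risk balance of inbound and outbound linkages can be sketched with a standard PageRank power iteration, where each page's score is fed by its inbound links and diluted by each linker's outbound degree. The tiny three-page web below is hypothetical:

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank. adj[i, j] = 1 means page i links out to j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling pages spread uniformly.
    trans = np.where(out_deg[:, None] > 0,
                     adj / np.maximum(out_deg, 1)[:, None],
                     1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * trans.T @ rank
    return rank

# Tiny directed web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
r = pagerank(adj)
```

Page 2, with two inbound links, ranks highest; page 1, fed only half of page 0's weight, ranks lowest, illustrating how outbound degree dilutes the influence a page passes on.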

10^{-9} meters, located at the mesoscopic transition phase, which admits both classical mechanics (CM) and quantum mechanics (QM) descriptions, bridging ten orders of magnitude of phenomena between the microscopic world of a single atom at 10^{-10} meters and the macroscopic world at meters. However, QM principles aid the understanding of any unusual property at the nanotech level. The other major difference between nano-photonics and other forms of optics is that the nano-scale is not very 'hands-on': for the most part, we will not be able to see the components with our naked eyes, but will be required to use nanotech imaging tools, as follows:

H^{+} ISFET and H^{+} EGFET sensor array. The BSS extracts the concentration of individual ions using independent component analysis (ICA). The parameters of the ISFET and EGFET sensors serve as a priori knowledge that helps solve the BSS problem. Using wireless transceivers, the ISFET/EGFET modules are realized as wireless sensor nodes. The integration of WSN technology into our electronic tongue system with BSS capability makes distant multi-ion measurement viable for environment and water quality monitoring.
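The ICA-based BSS step can be sketched in miniature: whiten two synthetic sensor mixtures, then scan for the rotation that maximizes non-Gaussianity. This brute-force angle scan is a stand-in for a proper ICA algorithm, and the "ion concentration" sources and mixing matrix are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two independent non-Gaussian sources (hypothetical concentration signals).
s = np.stack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[0.8, 0.3], [0.2, 0.9]])        # unknown sensor cross-sensitivity
x = A @ s                                     # observed mixtures

# Whiten the mixtures to identity covariance.
x = x - x.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(x @ x.T / n)
z = np.diag(vals ** -0.5) @ vecs.T @ x

def kurt(u):
    """Excess kurtosis, a simple non-Gaussianity contrast."""
    return np.mean(u ** 4) - 3 * np.mean(u ** 2) ** 2

# After whitening, BSS reduces to finding a rotation; scan for the angle
# maximizing |kurtosis|.
best = max((abs(kurt(np.cos(a) * z[0] + np.sin(a) * z[1])), a)
           for a in np.linspace(0, np.pi, 360))
angle = best[1]
w = np.array([[np.cos(angle), np.sin(angle)],
              [-np.sin(angle), np.cos(angle)]])
s_est = w @ z
```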

Using a *T cell mitogen* called lectin protein from the jack-bean Canavalia ensiformis, Concanavalin A (Con_A), with dual activities, *cytotoxicity* and *immunomodulation*, we have shown a therapeutic effect on hepatoma. Injection of Con_A can eradicate the established malignant tumor, because Con_A can induce tumor-cell autophagic, cell-programmed death, as well as activate the effector T cells. In this paper, this is combined with absorption exceeding the Carbon NanoTube (CNT) band-gap (ε_bg ~ 1/CNT diameter) with an active short-wave near-infrared (SWIR) source (1.2~1.5 micron wavelengths), to which animal skin happens to be translucent, similar to that used in hospital fingertip-clamped pulse oximetry. Once the Con_A-CNT is guided to hepatoma cells, it is bonded and internalized into the mitochondria (MC) compartment, the cellular energy factory. Con_A has the higher specificity for tumor cells, useful for targeting, because of the abnormal glycosylation on tumor cells. When CNTs hitchhike with Con_A, they can act together like a laser-detonable chemical missile, surgically targeting the tumor cells precisely by Con_A guidance. We switch on the SWIR laser when the Con_A-CNT conjugated complex has been bonded and internalized to the MC of malignant cells and has already commenced cellular programmed death. Thus, it might appear to casual readers that we have initiated an overkill, chemically drugged autophagy followed by physical laser ablation, but what if we can eradicate hepatoma totally, with no blueprint left behind inadvertently in case of a partial failure. We conclude that using the Con_A-CNT conjugated complex targeting specifically the malignant tumor cells is a novel targeted-laser-radiation therapy for tumors in mice.

² and (ii) zero total area under the undulated wave amplitudes. If the radiated environment is linear (the natural scene), then the received signal also satisfies the admissibility condition.

QE_CNT ~ 5%. Our CNT was made of a semiconductor at NIR wavelength with E_BG = 1.107 eV, which can absorb any photon whose wavelength λ ≤ λ_NIR = 1.11 μm. This E_BG is chosen due to the cutoff of Pb-Crown glass, which happens to put us in the same portion of the solar energy spectrum as a silicon p-n junction SVC. Nevertheless, exceeding the Shockley and Queisser efficiency limit of 30% might be due to the fact that we have a much more compact one-dimensional building block, a CNT of tiny diameter 0.66 nm. It allows us to construct a 3D structure, called a volume pixel, "voxel," in a much more efficient spiraling-staircase fashion to capture the solar spectral energy spread naturally by a simple focusing lens without occlusion. For real-estate-premium applications, in Space or Ocean, we designed a voxel housing a stack of 16 CNT steps spiraling 22° each, like a firehouse staircase, occupying a height of 16 × d_CNT = 16 × 0.66 nm = 10.56 nm and covering over 360°. The total SVC had a size of 2×2 meters², consisting of a 100×100 lenslet array. Each lens was made of Pb-Crown glass, an inexpensive simple spherical lens having a diameter of D_lens = 2 cm and F# = 0.7. It can focus the sunlight a million times stronger into the smallest possible focal spot size, λ_Yellow = 0.635 μm < λ_Max photons < λ_Red = 0.73 μm, where the largest number of solar photons, 68%, lies according to the Planck radiation spectrum at 6000 K and the Lord Rayleigh diffraction limit. The solar panel individually seals such an array of 3D cavities of SVC, enjoying theoretically from the UV 12% (wasted in passing through), the visible 68%, to the infrared 20%, at a total of 16 × 5% ~ 80% total QE_CNT per cell. The solar panel is made of light-weight carbon composite, tolerating about a 20% inactive fill factor and 10% dead pixels.
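The claimed focal concentration can be sanity-checked with the Rayleigh diffraction limit, using the F# = 0.7 and 2 cm lens figures from the text; the 0.7 μm wavelength is an assumed representative value near the solar photon-count peak:

```python
def rayleigh_spot_diameter(wavelength, f_number):
    """Diffraction-limited (Airy) focal spot diameter: d = 2.44 * lambda * F#."""
    return 2.44 * wavelength * f_number

def concentration_gain(lens_diameter, spot_diameter):
    """Irradiance concentration ~ ratio of lens area to focal spot area."""
    return (lens_diameter / spot_diameter) ** 2

lam = 0.7e-6          # assumed representative wavelength (m)
d_lens = 2e-2         # 2 cm lenslet from the text
spot = rayleigh_spot_diameter(lam, 0.7)
gain = concentration_gain(d_lens, spot)
```

With these numbers the diffraction-limited spot is on the order of a micron, and the area ratio exceeds a million, consistent with the "million times stronger" focusing claim.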

(*see paper for equation*) where the query must be phrased in terms of the union of an imprecise or partial set of 6 W's, denoted by the union of lower-case w's. The upper-case W's are the archival storage of a primer tree. A simplified humanistic representation may be called the 6W space (who, what, where, when, why, how), also referred to as the Newspaper geometry. Mapping the 6W to the 3W (World Wide Web) is becoming relatively easier. It may thus become efficient and robust to rapidly dig for knowledge through the set operations of union (writing) and intersection (reading), upon the design of a 6W query search engine matched efficiently by 6W vector index databases. In fact, the Newspaper 6-D geometry may be reduced further by PCA (Principal Component Analysis) eigenvector mathematics and mapped into a 2-D causality space comprised of the causes (What, How, Why) and the effects (Where, When, and Who). If this hypothesis of brain strategy were true, one must then develop a 6W query language to support a 6W-ordered set storage of linkage pointers in high-D space. In other words, one can easily map the basic 1st-generation Google Web, 1-D statistical PageRanking databases, to a nested 6W tree, where each branch of sub-6W stems from the prime 6W tree, using a system of automated text mining assisted by syntactic semantics to discern the properties of the 6W for that query. Goehl et al. demonstrated previously that this is doable, but one may need more tools to support the knowledge extraction and automated feature reduction. In this paper, we set out to demonstrate lossless down-sampling using the 2nd-generation wavelet transform, the so-called "1-D Cartesian lifting processing of Sweldens" adopted by JPEG 2000. "The loss of statistics, if any (including PageRanking and 1-D lifting), is the loss of geometry insights," such as in 2-D vector time series (video), whose 1-D lifting Cartesian product will lose the diagonal-change insights.
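The 1-D lifting of Sweldens mentioned above can be shown with the Haar case: split the signal into evens and odds, predict the details, update the coarse running averages, and invert losslessly. This is a minimal sketch, not the JPEG 2000 filter bank:

```python
def haar_lift(signal):
    """One level of the lifting (Sweldens) Haar transform:
    split -> predict (detail) -> update (coarse pair averages)."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]         # predict step
    coarse = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return coarse, detail

def haar_unlift(coarse, detail):
    """Exact inverse: undo the update, then the predict, then interleave."""
    even = [c - d / 2 for c, d in zip(coarse, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

x = [5.0, 7.0, 3.0, 1.0, 2.0, 6.0, 4.0, 4.0]
c, d = haar_lift(x)
```

The coarse channel holds the pairwise averages and the detail channel the pairwise differences; reversing the two lifting steps reconstructs the input exactly, which is the lossless property the text appeals to.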

10^{7} cells in a breast cancer by the time it can be observed by X-ray mammogram. By contrast, the passive IR spectrogram proposed by Szu et al. was shown to be promising in detecting breast cancer several months ahead of the mammogram. With the energy readings from two IR cameras, one middle-wavelength IR (MIR, 3-5 μm) and one long-wavelength IR (LIR, 8-12 μm), the IR spectrogram may be computed by using the blind source separation (BSS) algorithms developed by Szu et al., which reveal the probability of being a cancer point on the breast surface. Two important tasks are involved in computing the IR spectrogram. One is an accurate estimate of the ground-state energy in the Helmholtz free energy, H = E − T₀S. The other is a correct pair-up of the points on the MIR and LIR images for a better estimation of the IR spectrogram. To minimize the probability of making an erroneous estimate of the ground-state energy inherent in the deterministic neighborhood-based BSS algorithm, a spatiotemporal approach is proposed in this paper. It takes into account not only the neighborhood information but also the temporal information in determining the probability of being a cancer point. Furthermore, a new sub-pixel super-resolution registration algorithm incorporating a third energy dimension is proposed to establish better correspondences between the points in the MIR and LIR images. A phantom study has confirmed that sub-pixel registration can be achieved by the proposed registration method. A human subject study further shows that breast cancer may be detected by the proposed spatiotemporal approach via cross-referencing the IR spectrograms computed from multiple pairs of MIR and LIR images taken at different times.

^{TM}, and correlated with various transcutaneous and invasive electrophysiological measurements.

*F = ma* for each interacting unit, because the problem is mathematically intractable. Instead, one computes the partition function for the collection of interacting units and predicts statistical behavior from the partition function. Statistical mechanics was united with Bayesian inference by Jaynes [4]. As a continuation, Shannon [7] demonstrated that the partition-function assignment of probabilities via the interaction Hamiltonian is the solution to the Bayesian assignment of probabilities (based on the maximum entropy method with known means and standard deviations). Once this technique has been applied to a variety of problems and a solution obtained, one can, of course, solve the inverse problem to determine what interaction model gives rise to a given probability assignment [1], [8]. The use of statistical mechanics allows one to draw general inferences about any complex system, including networks [5], by defining "energy", "heat capacity", "temperature", and other thermodynamic characteristics of most complex systems based on the common standard of the Helmholtz free energy. Principe has noted that the aspect of entropy used in reasoning with uncertainty may not be the most appropriate entropy for learning mechanisms [6]. Instead he has explored using Renyi entropy and derived a form of information learning dynamics that has some promising features [2]. To fully realize the potential of a more generalized entropy for the three aspects of survival, we suggest some connections to the free energy and learning. We also connect some aspects of sensing to probability distributions that suggest why certain search strategies perform better than others. In making these connections, we suggest that a fundamental connection waits to be discovered between inference, learning, and the manner in which sensing mechanisms perform.
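The partition-function assignment of probabilities can be made concrete with a toy system. The energies and temperature below are illustrative values only; the check at the end verifies the exact thermodynamic identity F = <E> - T S = -T ln Z for the Boltzmann distribution.

```python
import numpy as np

# Toy system: four states with assumed energies E_i at temperature T.
E = np.array([0.0, 1.0, 2.0, 3.0])
T = 1.5

Z = np.sum(np.exp(-E / T))          # partition function
p = np.exp(-E / T) / Z              # maximum-entropy (Boltzmann) assignment
mean_E = np.sum(p * E)              # average energy <E>
entropy = -np.sum(p * np.log(p))    # thermodynamic entropy S
free_energy = mean_E - T * entropy  # Helmholtz free energy F = <E> - T S

# For the Boltzmann distribution, F = -T ln Z holds exactly.
print(np.isclose(free_energy, -T * np.log(Z)))  # True
```

This identity is why the Helmholtz free energy serves as the "common standard" mentioned above: every thermodynamic quantity of the toy system follows from Z.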

μm) and long wavelength IR (8-12 μm) IR imaging cameras equipped with smart subpixel automatic target detection algorithms. According to physics, the radiation of high/low temperature bodies will shift toward the shorter/longer IR wavelength band. Thus, the measured vector data **x** per pixel can be used to invert the matrix-vector equation x = As pixel-by-pixel independently, known as single-pixel blind sources separation (BSS). We impose the universal constraint of equilibrium physics governing the blackbody Planck radiation distribution, i.e., the minimum Helmholtz free energy, H = E - T_o S. To stabilize the solution of the Lagrange constrained neural network (LCNN) proposed by Szu et al., we incorporate the second-order approximation of the free energy, which corresponds to the second-order constraint in the method of multipliers. For the subpixel target, we assume the constant ground state energy E_o can be determined from the normal neighborhood tissue, and then the excited state can be computed by means of a Taylor series expansion in terms of the pixel I/O data. We propose an adaptive method to determine the neighborhood to find the free energy locally. The proposed methods enhance both the sensitivity and the accuracy of traditional breast cancer diagnosis techniques. The approach can be used as a first-line supplement to traditional mammography to reduce unwanted X-rays during chemotherapy recovery. More importantly, the single-pixel BSS method renders information on the tumor stage and tumor degree during the recovery process, which is not available using the popular independent component analysis (ICA) techniques.
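The single-pixel free-energy selection rule above can be sketched as a direct search: pick the source fractions s on the simplex that minimize H = E - T0*S, with E the misfit of x = As and S the Shannon entropy. Everything numeric here (the 2x2 mixing matrix, the true fractions, T0) is an assumed illustration, not the paper's calibrated physics.

```python
import numpy as np

# Assumed 2-band, 2-source pixel model: x = A s, s on the simplex.
A = np.array([[0.9, 0.3],
              [0.2, 0.8]])
s_true = np.array([0.7, 0.3])
x = A @ s_true
T0 = 0.001  # small homeostatic "temperature" constant (assumed)

# Exhaustive search over the simplex for the minimum Helmholtz
# free energy H = E - T0*S.
best_H, best_s = np.inf, None
for s1 in np.linspace(1e-6, 1 - 1e-6, 2001):
    s = np.array([s1, 1.0 - s1])
    E = np.sum((x - A @ s) ** 2)   # data misfit energy
    S = -np.sum(s * np.log(s))     # Shannon entropy of the fractions
    H = E - T0 * S
    if H < best_H:
        best_H, best_s = H, s
print(best_s)  # recovers approximately s_true
```

The entropy term breaks ties among near-exact fits and slightly regularizes the inversion toward balanced fractions, which is the role the minimum free energy plays per pixel.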

^{nd} Gen Discrete Wavelet Transform (DWT) of Sweldens to the Next Generation (NG) Digital Wavelet Transform (DWT), preserving the statistical salient features. The lossless NG DWT accomplishes the data compression of "wellness baseline profiles (WBP)" of the aging population at home. For a medical monitoring system on the home front, we translate the military experience to dual use for veterans and civilians alike, with the following three requirements: (i) Data Compression: the necessary down-sampling reduces the immense amount of data of an individual WBP, from hours to days to weeks, for primary caretakers in terms of moments, e.g. mean value, variance, etc., without the artifacts caused by arbitrary FFT windowing. (ii) Lossless: our new NG_DWT must preserve the original data sets. (iii) Phase Transition: NG_DWT must capture the critical phase transition from wellness toward sickness with simultaneous display of local statistical moments. According to the Nyquist sampling theory, assuming a band-limited wellness physiology, we must sample the WBP at least twice per day since it changes diurnally and seasonally. Since NG_DWT, like the 2nd Gen, is lossless, we can reconstruct the original time series for the physicians' second look. This technique of NG_DWT can also help stock-market day-traders monitor the volatility of multiple portfolios without artificial horizon artifacts.
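The lossless property of a 2nd-Gen lifting transform is easy to demonstrate with the simplest case, an integer Haar lifting step (split/predict/update with integer rounding), which is exactly invertible. The toy "wellness" series below is made up; this is a sketch of the lifting principle, not the paper's NG_DWT.

```python
# Integer Haar lifting: exactly invertible, hence lossless.
def haar_lift(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]            # predict step
    approx = [e + (d >> 1) for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_unlift(approx, detail):
    even = [a - (d >> 1) for a, d in zip(approx, detail)]  # undo update
    odd = [d + e for e, d in zip(even, detail)]            # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

wbp = [98, 99, 97, 101, 120, 135, 133, 130]  # toy wellness time series
a, d = haar_lift(wbp)
assert haar_unlift(a, d) == wbp  # perfect (lossless) reconstruction
```

The `approx` channel is the down-sampled summary a caretaker would track, while keeping `detail` guarantees the physician's "second look" can reconstruct the original samples bit-for-bit.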

*X*). The spectral ratio will be independent of the depth and imaging environment. Similarly, we will take paired saliva samples (X_1, X_2) six times daily, around three meals, inside the upper jaw; their dynamics appears as a delayed mirror image of the blood glucose level. For this we must design a portable lab "system on chip (SOC)" with micro-fluidic paired channels for the chemical reactions. According to the same biochemical principle of spontaneity, we apply the identical algorithm to determine both the ratio of hidden malignant and benign heat sources (s_1, s_2) and the blood glucose and other sources (s_2, s_1) leaking into the saliva. This is possible because of the Gibbs isothermal spontaneous process, in which the Helmholtz free energy must be minimized for the spontaneous thermal radiation from an unknown mixing of malignant and benign sources, or for the diffusion mixing of glucose s_2* and other sources s_1*. We have derived a general formula relating the two equilibrium values, before and after, in order to design our devices. Daily tracking of the spectrogram ratio and saliva glucose levels is nevertheless needed for a reliable prediction of individual malignant angiogenesis and blood glucose level in real time.

_{BG} ~ 1/d, tuned at the few-nanometer diameter d for the mid wave. To ascertain the noise contribution, in this paper we provided a simple derivation of the frequency-dependent Einstein transport coefficient D(k) = PSD(k), based on the Kubo-Green (KG) formula, which is convenient for accommodating experimental data. We conjectured a concave shape of convergence 1/k^α at the α = -2 power law at optical frequency, against the overly simple 1-D noise model of about 1/2 k_B T, and the ubiquitous power law 1/k^α where α = 1 gave a convex shape of divergence. Our formula is based on the Cauchy distribution [1 + (kd)^2]^{-1}, derived from the Fourier Transform of the correlation of the charge-carrier wave function being scattered against lattice phonons spreading over the tubular surface of diameter d, similar to the Lorentzian line shape in molecular spectra, exp(-|x|/d). According to the band gap formula of SWNT, a narrower tube of SWNT, working similarly to a Field Effect Transistor (FET), can be tuned at higher optical frequencies revealing finer details of the lattice spacings a and b. Experimental determination of our proposed multiple-scale response formula remains to be confirmed.
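The Fourier-pair claim above, that an exponential correlation exp(-|x|/d) yields a Cauchy/Lorentzian spectrum ~ [1 + (kd)^2]^{-1}, can be checked numerically. The grid spacing and diameter below are assumed values chosen only so the discrete transform approximates the continuous one well.

```python
import numpy as np

# Numerical check: FT of exp(-|x|/d) equals 2d / (1 + (k d)^2).
d = 2.0
dx = 0.01
x = np.arange(-4000, 4000) * dx            # wide grid so tails are negligible
corr = np.exp(-np.abs(x) / d)              # exponential correlation (Lorentzian pair)
psd = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr))).real * dx
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(x), d=dx))
analytic = 2 * d / (1 + (k * d) ** 2)      # Cauchy/Lorentzian shape
rel_err = np.max(np.abs(psd - analytic)) / analytic.max()
print(rel_err < 0.01)  # True
```

The `ifftshift`/`fftshift` bookkeeping aligns x = 0 with the FFT origin, so the transform of the even, real correlation comes out real and matches the closed form.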

_{8~12μm} / I_{3~5μm}). This procedure proved to be effective in DCIS using the satellite-grade IR spectrum cameras for an early developmental symptom of the "angiogenesis" effect. Thus, we propose to augment the annual hospital checkup, or biannual colonoscopy, with an inexpensive non-imaging IR-Flexi-scope intensity measurement device which could be used regularly at a household residence without the need for doctoral expertise or a database system. The only required component would be a smart PC, used to compute the degree of thermal activity through the IR spectral ratio; it would also keep track of the records and send them to the medical center for tele-diagnosis. For the purpose of household screening, we propose to have two integrated passive IR probes of dual-IR-color spectrum inserted into the body via an IR fiber-optic device. In order to extract the percentage of malignancy, based on the ratio of dual-color IR measurements, the key enabler is the unsupervised learning algorithm in the sense of the Duda & Hart unlabelled-data classifier without lookup-table exemplars. This learning methodology belongs to the Natural Intelligence (NI) of the human brain, which can effortlessly reduce the redundancy of paired inputs and thereby enhance the Signal-to-Noise Ratio (SNR) beyond any single sensor for salient feature extraction. Thus, we can go beyond a closed-database AI expert system and tailor to the individual ground truth without the biases of prior knowledge.

H = E - T_o S. In the case of the point breast cancer, we can assume the constant ground state energy E_o to be normalized by the benign neighborhood tissue, and then the excited state can be computed by means of a Taylor series expansion in terms of the pixel I/O data. We can augment the X-ray mammogram technique with passive IR imaging to reduce unwanted X-rays during chemotherapy recovery. When the sequence is animated into a movie and the recovery dynamics is played backward in time, the movie demonstrates the cameras' potential for early detection without suffering the PD = 0.1 search uncertainty. In summary, we applied two satellite-grade dual-color IR imaging cameras and an advanced military automatic target recognition (ATR) spectrum fusion algorithm at the middle wavelength IR (3-5 μm) and long wavelength IR (8-12 μm), which are capable of screening malignant tumors, as shown by the time-reversed animated movie experiments. In contrast, traditional thermal breast scanning/imaging, known for decades as thermograms, was IR spectrum-blind, limited to a single night-vision camera, and its necessary cool-down waiting period before taking a second look for change detection suffers too many environmental and personnel variabilities.

**(m-UAV)** for surveillance and communication (Szu et al., SPIE Proc. V 5439, pp. 183-197, April 12, 2004). In this paper, we wish to plan and execute the next challenge: a team of m-UAVs. The minimum unit for a robust chain-saw communication must have the connectivity of five second-nearest-neighbor members with a sliding, arbitrary center. The team members require an **authenticity check (AC)** among a unit of five, in order to carry out **jittering mosaic image processing (JMIP)** on-board every m-UAV without gimbals. The JMIP does not use any NSA security protocol ("cardinal rule: no-man, no-NSA codec"). Besides the team flight dynamics (Szu et al., "Nanotech applied to aerospace and aeronautics: swarming," AIAA 2005-6933, Sept 26-29, 2005), several new modules (**AOA, AAM, DSK, AC, FPGA**) are designed, and the **JMIP** must develop its own control, command and communication system, safeguarded by the authenticity and privacy checks presented in this paper. We propose a **Nonlinear Invertible (deck of cards) Shuffler (NIS) algorithm**, which has a Feistel structure similar to the Data Encryption Standard (DES) developed by Feistel et al. at IBM in the 1970s; but DES is modified here by a set of chaotic **dynamical shuffler keys (DSK)**, as re-computable lookup tables generated by every on-board Chaotic Neural Network (CNN). The initializations of the CNN are periodically provided by the private version of RSA from the ground control to the team members, to avoid any inadvertent failure of a broken chain among m-UAVs. Efficient utilization of communication bandwidth is necessary for a constantly moving and jittering m-UAV platform; e.g., the wireless LAN protocol wastes bandwidth due to a constant need for hand-shaking procedures (as demonstrated by NRL; though sensible for PCs and 3rd gen. mobile phones). Thus, the chaotic **DSK** must be embedded in a fault-tolerant Neural Network Associative Memory for the error-resilient concealment of re-sent mosaic image chips. However, the RSA public and private keys, chaos typing and initial value are given on set or sent to each m-UAV so that each platform knows only its private key. **AC** among the 5 team members is possible using a reverse **RSA** protocol: a hashed image chip is coded by the sender's private key, which nobody else knows, and sent to its neighbors; the receiver can check the content by using the sender's public key and comparing the decrypted result with on-board image chips. We discover a fundamental problem of the digital chaos approach in a finite state machine, for which a fallacy test of a discrete version is needed for a finite number of bits, as James Yorke advocated early on. Thus, our proposed chaotic **NIS** for bit-stream protection becomes desirable to further mix the digital CNN outputs. The fault tolerance and the parallelism of an Artificial Neural Network Associative Memory are necessary attributes for the neighborhood-smoothness image restoration. The associated computational cost of O(N^2) is deemed worthwhile, because the chaotic version CNN of N-D can further provide privacy for the lost image chip (N = 8x8) re-sent at the request of its neighbors, and the result outperforms a simple 1-D logistic map. We gave a preliminary design of low-end FPGA firmware, for which computing everything on board seemed possible.
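The Feistel-with-chaotic-keys idea can be sketched in a few lines: a toy Feistel network whose round keys are drawn from a logistic map, standing in for the paper's DSK lookup tables. This is an illustrative toy, not a vetted cipher and not the actual NIS design; the round function, map parameter, and seed are assumptions.

```python
# Toy Feistel structure with round keys from a chaotic logistic map
# (an illustrative stand-in for the dynamical shuffler key, DSK).
def logistic_keys(x0, rounds):
    keys, x = [], x0
    for _ in range(rounds):
        x = 3.99 * x * (1.0 - x)           # chaotic logistic map
        keys.append(int(x * 0xFFFF) & 0xFFFF)
    return keys

def feistel(block32, keys, decrypt=False):
    left, right = block32 >> 16, block32 & 0xFFFF
    for k in (reversed(keys) if decrypt else keys):
        # toy round function: mix the half-block with the round key
        f = ((right * 31 + k) ^ (right >> 3)) & 0xFFFF
        left, right = right, left ^ f
    # final swap makes decryption the mirror image of encryption
    return (right << 16) | left

keys = logistic_keys(x0=0.612, rounds=8)
msg = 0xDEADBEEF
ct = feistel(msg, keys)
pt = feistel(ct, keys, decrypt=True)
assert pt == msg  # Feistel structure is invertible for any round function
```

The point the abstract makes survives in the toy: the Feistel ladder is invertible regardless of the round function, so the chaotic key schedule can be swapped or re-computed on board without redesigning the cipher structure.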

^{TM} by Xi and Szu, 2004. However, to apply it we must re-design a new bio-NanoRobot consisting of two parts: (a) multiple resolution analysis (MRA) using AI to control a dual-resolution vision system, the soft-contact-vision AFM co-registered with an on-contact high-resolution imaging; and (b) two cantilever arms capable of holding and enucleating a cell. The calibration and automation are controlled by AI Case-Based Reasoning (CBR) together with an AI Blackboard (BB) of the taxonomy, necessary for integrating different tools' tolerances and resolutions at the same location. Moreover, by keeping the biological sample in one place while a set of tools rotates upon it, similar to a set of microscope lenses, we can avoid non-real-time re-imaging and inadvertent contamination. By imposing an electrical field, we can take advantage of the structural differences between smooth nuclear membranes, which induce Van der Waals forces, and the random cytoplasm. (ii) The re-programming of transplanted cells to the ground state is unclear and usually relies on electrochemical means tested systematically in a modified 3D Caltech micro-fluidics. (iii) Our real-time MRA video-manipulator can elucidate mitosis's tread-mill assembly mechanism in the developmental course of pluripotent stem cell differentiation into specialized tissue cell engineering. Such a combined bio-NanoRobot and micro-fluidic massively parallel assembly-line approach might not only replace the aspirating pipette with self-enucleating Drosophila embryonic eggs, but also genetically reproduce a large number of cloned embryonic eggs repeatedly for various re-programming hypotheses.

attached to every readable-writable **Smart Card (SC)**: passport ID, medical patient ID, biometric ID, driver licenses, book ID, library ID, etc. These avalanche phenomena may be due to 3rd Gen phones seeking much more versatile and inexpensive interfaces than the line-of-sight bar-code optical scan. Despite the popularity of RFID, the lack of **Authenticity, Privacy and Security (APS)** protection has somewhat restricted the widespread commercial, financial, medical, legal, and military applications. The conventional APS approach can obfuscate a private passkey K of the SC with the tag number T or the reader number R or both, i.e. only T*K or R*K or both will appear on them, where * denotes an **invertible** operation, e.g. EXOR, but not limited to it. Then only the authentic owner, knowing all, can invert the operation, e.g. EXOR*EXOR = I, to find K. However, such an encryption could be easily compromised by a hacker searching exhaustively by comparison against frequently used words. Nevertheless, knowing the biological wetware lesson of the power of paired sensors and the Radar hardware counter-measure history, we can counter the counter-measure DRFM: instead of using one RFID tag per SC, we follow Nature in adopting two ears/tags, e.g. each one holding a portion of the ID, or simply two different IDs readable only by different modes of the interrogating reader, followed by a brain-like central processor in terms of nonlinear invertible shufflers mixing the two IDs' bits. We prefer such a combined hardware-software hybrid approach because the phase space of a single RFID is too limited for any meaningful encryption approach. Furthermore, a useful biological lesson is not to put all eggs in one basket: *"if you don't get it all, you can't hack it"*. According to Radar physics, we can choose the amplitude, the frequency, the phase, the polarization, and two radiation energy supply principles, capacitance coupling (~6 m) and inductance coupling (<1 m), to code the pair of tags differently. A casual skimmer equipped with a single-mode reader cannot read all. We consider near-field and mid-field applications in this paper. The near-field is at check-out counters or the conveyor-belt inventory involving sensitive and invariant data. The mid-field search & rescue involves not only item/person identification, but also geo-location. If more RF power becomes cheaper and portable for longer propagation distances in the near future, then triangulation with a pair of secured readers, located at known geo-locations, could interrogate and identify items/persons and their locations in a GPS-blind environment.
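The XOR obfuscation and the two-tag key split described above both reduce to a few bit operations. The key and tag values below are made-up examples; the assertions verify that EXOR is an involution (applying it twice is the identity) and that neither share of a split key reveals the whole.

```python
import secrets

# Conventional APS obfuscation: only T*K appears on the tag (values assumed).
K = 0b10110110  # private passkey
T = 0b01011100  # tag number
stored = T ^ K

# The authentic owner, knowing T, inverts the operation: EXOR*EXOR = I.
recovered = stored ^ T
assert recovered == K

# The "two ears/tags" remedy: split K into two shares so a single-mode
# skimmer reading one tag alone learns nothing definitive about K.
share1 = secrets.randbits(8)
share2 = K ^ share1
assert share1 ^ share2 == K  # both tags together recover the key
```

Because `share1` is uniformly random, `share2` is also uniformly distributed regardless of K, which is the formal sense of "if you don't get it all, you can't hack it."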

jitter effect, in which jitter is defined as sub-pixel or small-amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been solved by various optimization techniques, including Wiener approximation, maximum

*a-posteriori* probability (MAP), etc. However, these algorithms normally assume a spatially invariant blur model, which is not the case with jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR), with its natural generalization of unsupervised artificial neural network (ANN) learning, to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the capability of the human visual system, in which an agreement between the pair of eyes indicates "signal"; otherwise, jitter noise. Using this non-statistical method, for each single pixel, a deterministic blind sources separation (BSS) process can then be carried out independently, based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization. In our previous work, a parallel-structured independent component analysis (ICA) algorithm was implemented on both a Field Programmable Gate Array (FPGA) and an Application-Specific Integrated Circuit (ASIC) using standard-height cells. ICA is an algorithm that can solve BSS problems by carrying out all-order statistical, decorrelation-based transforms, under the assumption that neighborhood pixels share the same but unknown mixing matrix **A**. In this paper, we continue our investigation of the design challenges of firmware approaches to smart algorithms. We think two levels of parallelization can be explored: pixel-based parallelization, and parallelization of the restoration algorithm performed at each pixel. This paper focuses on the latter, and we use ICA as an example to explain the design and implementation methods. It is well known that the capacity constraints of a single FPGA have limited the implementation of many complex algorithms, including ICA. Using the reconfigurability of the FPGA, we show in this paper how to manipulate the FPGA-based system to provide extra computing power for the parallelized ICA algorithm with limited FPGA resources. The synthesis aimed at the Pilchard re-configurable FPGA platform is reported. The Pilchard board is embedded with a single Xilinx VIRTEX 1000E FPGA and transfers data directly to the CPU on the 64-bit memory bus at a maximum frequency of 133 MHz. Both the feasibility/performance evaluations and the experimental results validate the effectiveness and practicality of this synthesis, which can be extended to the spatially variant jitter restoration for micro-UAV deployment.

_{o}S, including the Wiener (l.m.s. E) and ICA (Max S) as special cases). The "unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing for a single pixel in the real-world cases of remote sensing, early tumor detection and SARS. The indeterminacy of the multiple solutions of the inverse problem is regulated, or selected, by means of the absolute minimum of the isothermal free energy as the ground truth of the local equilibrium condition at the single-pixel footprint.

_{s} resident in the image scene is estimated using a Neyman-Pearson hypothesis-testing-based eigen-thresholding method. Next, an automatic searching algorithm is applied to find the most distinct AFIs using the divergence as the criterion, where the threshold is adjusted until the number of selected AFIs equals the n_{s} estimated in the first stage. The experimental results using AVIRIS data show the efficiency of the proposed post-processing technique in distinct AFI selection.

_{0} is the temperature and S is the entropy. The free energy represents the dynamic balance of an open information system with constraints defined by the data vector. The solution was found through the Lagrange Constraint Neural Network algorithm for computing the unknown source vector, an exhaustive search to find the unknown nonlinearity parameters, and a Cauchy Machine for seeking the de-mixing matrix at the global minimum of H for each pixel. We demonstrate the algorithm's capability to recover images from a synthetic noise-free nonlinear mixture of two images. The capability of the Cauchy Machine to find the global minimum of a golf-hole type of landscape had hitherto never been demonstrated in higher dimensions with much less computational complexity than an exhaustive search algorithm.
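The reason a Cauchy Machine can tackle "golf-hole" landscapes comes down to tail probabilities: a Cauchy visiting distribution keeps a fat tail even at low temperature, so distant narrow minima remain reachable under fast cooling (T_k ~ 1/k), whereas Gaussian (Boltzmann) jumps die off. The numbers below are illustrative, not from the paper.

```python
import math

# Probability of a jump longer than r under each visiting distribution.
def cauchy_tail(r, T):
    # Exact: P(|X| > r) for a Cauchy distribution with scale T.
    return 1.0 - (2.0 / math.pi) * math.atan(r / T)

def gauss_tail_bound(r, T):
    # Chernoff-style upper bound on P(|X| > r) for a Gaussian with std T.
    return 2.0 * math.exp(-r * r / (2.0 * T * T))

T = 0.01                          # a late-stage (cold) temperature, assumed
print(cauchy_tail(10.0, T))       # ~6e-4: long hops still happen
print(gauss_tail_bound(10.0, T))  # underflows to 0: long hops never happen
```

This gap is exactly why fast simulated annealing can cool as 1/k while Boltzmann annealing must cool only as 1/log k to stay ergodic.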

^{2}) numerical complexity associated with the solution of the inverse problem required in the classical Lagrangian formulation. The trivial equal-probability solution with a uniformly distributed class vector s is avoided by introducing an additional set of inequality constraints. The unknown spectral reflectance matrix A is estimated blindly in non-parameterized form by minimizing an LMS energy function. We apply the Riemannian metric to the gradient learning to reproduce the biological Hebbian rule in terms of a full-rank vector outer-product formula, and demonstrate faster convergence than the standard Euclidean gradient. Since the proposed Fast Lagrangian method has O(N) numerical complexity, we have achieved a real-time hyperspectral remote sensing capability as the platform moves, samples and processes. An FPGA firmware implementation of the massively pixel-parallel algorithm has been filed for patent.

_{0} - 1)(Σ_i s_i - 1) using the vector Lagrange multiplier λ and the a-priori Shannon Entropy f(S) = -Σ_i s_i log s_i as the contrast function of an unknown number of independent sources s_i. Szu et al. first solved, in 1997, the general Blind Source Separation (BSS) problem for a spatial-temporal varying mixing matrix in real-world remote sensing, where a large pixel footprint implies a mixing matrix [A(x,t)] necessarily filled with diurnal and seasonal variations. Because the ground truth is difficult to ascertain in remote sensing, we have illustrated in this paper each step of the LCNN algorithm for the simulated spatial-temporal varying BSS in speech and music audio mixing. We review and compare LCNN with the other popular a-posteriori Maximum Entropy methodologies, defined by the ANN weight matrix [W] and sigmoid σ post-processing H(Y = σ([W]X)), by Bell-Sejnowski, Amari and Oja (BSAO), called Independent Component Analysis (ICA). Both are mirror-symmetric MaxEnt methodologies and work for a constant unknown mixing matrix [A], but the major difference is whether the ensemble average is taken over neighborhood pixel data X's in BSAO or over the a-priori source variables S in LCNN; this dictates which method works for a spatial-temporal varying [A(x,t)] that would not allow the neighborhood pixel average. We expected the success of sharper de-mixing by the LCNN method in terms of a controlled ground-truth experiment in the simulation of a variant mixture of two pieces of music of similar kurtosis (15 seconds composed of Saint-Saens' Swan and Rachmaninov's cello concerto).

^{8} sensor excitations is understandable from the computer vision viewpoint toward sparse edge maps. It was only recently derived using a truly unsupervised learning paradigm of artificial neural networks (ANN). In fact, the biological vision, Hubel-Wiesel edge maps, is reproduced by seeking the underlying independent component analyses (ICA) among 10^{2} image samples by maximizing the ANN output entropy, ∂H(V)/∂[W] = ∂[W]/∂t. When a pair of newborn eyes or ears meets the bustling and hustling world without supervision, they seek ICA by comparing 2 sensory measurements (x_1(t), x_2(t))^T ≡ X(t). Assuming a linear and instantaneous mixture model of the external world, X(t) = [A]S(t), where both the mixing matrix [A] ≡ [a_1, a_2] of ICA vectors and the source percentages (s_1(t), s_2(t))^T ≡ S(t) are unknown, we seek the independent sources <S(t)S^T(t)> ≈ [I], where the approximate sign indicates that higher-order statistics (HOS) may not be trivial. Without a teacher, the ANN weight matrix [W] ≡ [w_1, w_2] adjusts the outputs V(t) = tanh([W]X(t)) ≈ [W]X(t) until there are no desired outputs except the (Gaussian) 'garbage' (neither YES '1' nor NO '-1' but the linear may-be range, 'origin 0'), defined by the Gaussian covariance <V(t)V(t)^T>_G = [I] = [W][A]<S(t)S^T(t)>[A]^T[W]^T. Thus, the ANN obtains [W][A] ≈ [I] without an explicit teacher, and discovers the internal knowledge representation [W] as the inverse of the external world matrix, [A]^{-1}. To unify the ICA, PCA, ANN & HOS theories since 1991 (advanced by Jutten & Herault, Comon, Oja, Bell-Sejnowski, Amari-Cichocki, Cardoso), the LYAPUNOV function L(v_1,...,v_n, w_1,...,w_n) = E(v_1,...,v_n) - H(w_1,...,w_n) is constructed as the HELMHOLTZ free energy to prove both convergences of supervised energy E and unsupervised entropy H learning. Consequently, rather than using the faithful but dumb computer, 'GARBAGE-IN, GARBAGE-OUT,' the smarter neurocomputer will be equipped with an unsupervised learning that extracts 'RAW INFO-IN, (until) GARBAGE-OUT' for sensory knowledge acquisition, enhancing Machine IQ. We must go beyond the LMS error energy and apply HOS to ANN. We begin with the Auto-Regression (AR), which extrapolates from the past X(t) to the future u_i(t+1) = w_i^T X(t) by varying the weight vector to minimize the LMS error energy E = <[x(t+1) - u_i(t+1)]^2>; the fixed point ∂E/∂w_i = 0 results in an exact Toeplitz matrix inversion under a stationary covariance assumption. We generalize AR by a nonlinear output v_i(t+1) = tanh(w_i^T X(t)) within E = <[x(t+1) - v_i(t+1)]^2>, and the gradient descent ∂E/∂w_i = -∂w_i/∂t. Further generalization is possible because a specific image/speech has a specific histogram whose gray-scale statistics depart from those of a Gaussian random variable and can be measured by the fourth-order cumulant, the Kurtosis K(v_i) = <v_i^4> - 3<v_i^2>^2 (K ≥ 0 super-G for speeches, K ≤ 0 sub-G for images). Thus, the stationary value at ∂K/∂w_i = ±4 ∂w_i/∂t can de-mix unknown mixtures of noisy images/speeches without a teacher. These stationary statistics may be implemented in parallel using the factorized pdf code ρ(v_1, v_2) = ρ(v_1)ρ(v_2), occurring at a maximal entropy algorithm improved by the natural gradient of Amari. Real-world applications are given in Part II (Wavelet Appl. VI, SPIE Proc. Vol. 3723), such as remote sensing subpixel composition, speech segmentation by means of ICA de-hyphenation, and cable TV bandwidth enhancement by simultaneously mixing sport and movie entertainment events.
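The kurtosis sign convention used above, K(v) = <v^4> - 3<v^2>^2 with K < 0 for sub-Gaussian (image-like) and K > 0 for super-Gaussian (speech-like) statistics, is easy to verify on synthetic samples. The distributions and sample size below are assumed stand-ins for real images and speech.

```python
import numpy as np

def kurtosis(v):
    """Fourth-order cumulant K(v) = <v^4> - 3 <v^2>^2 of centered data."""
    v = v - v.mean()
    return np.mean(v ** 4) - 3.0 * np.mean(v ** 2) ** 2

rng = np.random.default_rng(1)
n = 200_000
k_uniform = kurtosis(rng.uniform(-1, 1, n))  # sub-Gaussian, K < 0
k_laplace = kurtosis(rng.laplace(0, 1, n))   # super-Gaussian, K > 0
print(k_uniform < 0, k_laplace > 0)          # True True
```

Analytically, the uniform case gives K = 1/5 - 3(1/3)^2 ≈ -0.13 and the unit-scale Laplacian gives K = 24 - 3·4 = 12, which is why the sign alone suffices as a de-mixing contrast.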

_{ij} for the i-th object and the j-th band is either only partially known or difficult to measure in outer space.

^{-1}[(ds/dt)/(-d^2 s/dt^2)] = tan^{-1}[((dδ(t/a)/dt) ⊗ s)/((d^2 δ(t/a)/dt^2) ⊗ s)], applied to real-world data of the Paraguay river levels.

^{-3}. No additional attempt is made to perform decompression restoration.

*H*) and low-pass (*L*) filter bank coefficients via the quadrature mirror filter (QMF), a digital subband lossless coding. A linear combination of two special cases of the complete biorthogonal normalized (Cbi-ON) QMF [L(z), H(z), L^+(z), H^+(z)], called the α-bank and the β-bank, becomes a hybrid aα + bβ-bank (for any real positive constants a and b) that is still admissible, meaning Cbi-ON and lossless. Finally, the power of the AWT is its implementation by means of wavelet chips and neurochips, in which each node is a daughter wavelet similar to a radial basis function using dyadic affine scaling.
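The admissibility (lossless) property of a QMF pair can be demonstrated in the simplest orthonormal case, the Haar bank, where the synthesis with the transposed filters restores the input exactly. The signal values are arbitrary; this Haar sketch illustrates the perfect-reconstruction condition, not the specific α/β hybrid banks of the paper.

```python
import numpy as np

# Orthonormal Haar QMF pair: high-pass is the quadrature mirror of low-pass.
L = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass
H = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 9.0])  # assumed signal
pairs = x.reshape(-1, 2)
approx = pairs @ L    # subband outputs after downsampling by 2
detail = pairs @ H

# Synthesis with the transposed (orthonormal) filters is lossless.
rec = np.stack([approx * L[0] + detail * H[0],
                approx * L[1] + detail * H[1]], axis=1).ravel()
assert np.allclose(rec, x)  # perfect reconstruction
```

Orthonormality (L·L = H·H = 1, L·H = 0) is what makes analysis and synthesis transposes of each other, which is the Haar instance of the Cbi-ON admissibility condition.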

^{*} with respect to the origin. When the filter (or the wavelet) is antisymmetric with respect to the origin, however, the filter coefficients converge on the order of 1/n, producing poor spatial-domain localization. We show that the optimal axis of antisymmetry for the filter is located at the half-sample point on either side of the origin. We also present a scheme to adjust the degree of spatial filtering to balance the two conflicting factors of suppressing noise and preserving edges.