A VLSI-implementable, massively parallel, digital, stochastic neural network architecture with on-chip learning is described that can be applied to video/image compression. Color information in images is commonly re-encoded to reduce the data needed to transmit high-resolution video over a lower-bandwidth communication system. A neural network is well suited to such a compression scheme because it can easily be trained to map a set of patterns from a k-dimensional space to a one-dimensional space. The training algorithm is implemented as a cross-correlation between previously calculated weights and global errors. These simple calculations are performed by each processing unit using information available in its local memory, so no transfer of information between neurons is necessary. This allows all of the weights to be updated synchronously, exploiting the inherent parallelism of the neural network and making the design an excellent candidate for VLSI implementation as a SIMD architecture. Incorporating the learning portion on-chip yields an appreciable reduction in computing time. The stochastic nature of the learning algorithm, combined with a simulated annealing process, allows it to converge to a global minimum. The architecture is synthesized from an HDL description using powerful design and analysis tools that let the compiler evaluate performance trade-offs and fault coverage before the final design is sent for fabrication.
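The abstract does not give the exact update rule, so the following is only a minimal sketch of the general idea it describes: stochastic, global-error-driven training with an annealed temperature, where each weight is updated from globally broadcast information (the global error and a temperature) plus purely local state, so no neuron-to-neuron communication is needed. The function names, the Metropolis-style acceptance test, and all parameter values are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible sketch


def global_error(weights, inputs, targets):
    """Mean squared error of a single-layer map from k-D inputs to a 1-D output.

    This error is the only quantity that would need to be broadcast to all
    processing units; everything else in the update is local.
    """
    outputs = inputs @ weights
    return float(np.mean((outputs - targets) ** 2))


def stochastic_train(inputs, targets, steps=200, t0=0.5, decay=0.98):
    """Illustrative annealed stochastic search (not the paper's exact rule).

    All k weights are perturbed and updated synchronously, mirroring the
    SIMD-style parallel update described in the abstract.
    """
    k = inputs.shape[1]
    w = rng.normal(size=k)
    e_cur = global_error(w, inputs, targets)
    best_w, best_e = w.copy(), e_cur
    t = t0
    for _ in range(steps):
        # Each unit perturbs its weight locally; the perturbation is scaled
        # by an annealed temperature so the moves shrink over time.
        trial = w + t * rng.normal(size=k)
        e = global_error(trial, inputs, targets)
        # Accept improvements; occasionally accept worse moves while the
        # temperature is high, which is what lets the search escape local
        # minima and approach a global one.
        if e < e_cur or rng.random() < np.exp((e_cur - e) / max(t, 1e-9)):
            w, e_cur = trial, e
            if e < best_e:
                best_w, best_e = trial.copy(), e
        t *= decay  # simulated-annealing cooling schedule
    return best_w, best_e
```

Because every unit draws its own perturbation and applies the same acceptance decision, the inner loop maps naturally onto a SIMD array: one instruction stream, per-unit local weight memory, and a single broadcast error value per step.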