With recent technical advances, graphics processing units (GPUs) have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications in high-performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression that uses adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling introduces data dependencies among adjacent pixels, and run-length coding must be performed sequentially. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the Compute Unified Device Architecture (CUDA) programming technology. We use a block-parallel strategy, together with CUDA techniques such as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks of 64×64 pixels each, we obtain the best GPU performance, a 26.3x speedup over the original CPU code.
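The parallel prefix sum mentioned above is what turns per-block compressed lengths into write offsets when assembling the final bitstream. A minimal sketch, in Python rather than CUDA, of the work-efficient (Blelloch-style) exclusive scan a GPU implementation would run in parallel; the sequential inner loops stand in for parallel kernel steps, and the block sizes are made-up illustration values:

```python
def exclusive_scan(values):
    """Work-efficient (Blelloch) exclusive prefix sum over a power-of-two
    length list. On the GPU each inner-loop iteration would be executed by
    one thread; here they run sequentially for illustration."""
    a = values[:]
    n = len(a)
    # Up-sweep: build partial sums in a binary-tree pattern.
    d = 1
    while d < n:
        for i in range(0, n, 2 * d):
            a[i + 2 * d - 1] += a[i + d - 1]
        d *= 2
    # Down-sweep: clear the root, then distribute sums back down.
    a[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(0, n, 2 * d):
            t = a[i + d - 1]
            a[i + d - 1] = a[i + 2 * d - 1]
            a[i + 2 * d - 1] += t
        d //= 2
    return a

# Hypothetical per-block compressed sizes in bytes; block i then writes its
# bitstream at offsets[i] in the concatenated output buffer.
sizes = [13, 7, 22, 5]
print(exclusive_scan(sizes))  # [0, 13, 20, 42]
```

Because every block's offset depends only on the sizes before it, the scan removes the sequential concatenation bottleneck: all blocks can copy their output simultaneously.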
The popularity of Graphics Processing Units (GPUs) opens a new avenue for general-purpose
computation including the acceleration of algorithms. Massively parallel computations using GPUs
have been applied in various fields by researchers. Arithmetic coding (AC) is widely used in lossless
data compression and shows better compression efficiency than the well-known Huffman coding.
However, AC possesses much higher computational complexity due to frequent multiplication and
branching operations. In this paper, we implement a block-parallel arithmetic encoder on an NVIDIA
GPU using the Compute Unified Device Architecture (CUDA) programming model. The
source data sequence is divided into small blocks. Each CUDA thread processes one data block so that
data blocks can be encoded in parallel. By exploiting the GPU computational power, a significant
speedup is achieved. We show that the GPU-based AC speedup result depends on data distribution and
size. We observe that the GPU speedup increases with higher compression ratios, because a higher
compression ratio corresponds to smaller compressed output, which reduces both the bit-stream
concatenation time and the device-to-host transfer time. Applied to selected test images from the
USC-SIPI image database, we obtain speedups ranging from 26x to 42x, with compression ratios
ranging from 1.4 to 2.7.
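The block-partitioning idea above can be sketched in a few lines. In this illustration Python's zlib stands in for the paper's arithmetic coder, and a thread pool mimics the one-CUDA-thread-per-block mapping; the block size is arbitrary:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def encode_blocks(data: bytes, block_size: int) -> list[bytes]:
    """Split the source into fixed-size blocks and encode each one
    independently, mirroring the one-thread-per-block GPU mapping.
    zlib.compress is an illustrative stand-in for the arithmetic coder."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(zlib.compress, blocks))

def decode_blocks(encoded: list[bytes]) -> bytes:
    """Decode the independently coded blocks and concatenate the results."""
    return b"".join(zlib.decompress(b) for b in encoded)

data = b"hyperspectral " * 1024
encoded = encode_blocks(data, block_size=1024)
assert decode_blocks(encoded) == data
```

Because each block carries its own coder state, no synchronization is needed between blocks; the trade-off is that per-block adaptive statistics reset at every block boundary, which is why compression ratio depends on block size.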
Hyperspectral images have very large data volumes and are highly correlated across neighboring bands; it is therefore
necessary to achieve efficient compression while keeping the encoding complexity low. In this paper, we propose a
method based on both embedded block partitioning and lossless adaptive-distributed arithmetic coding (LADAC).
Combined with a three-dimensional wavelet transform and the SW-SPECK algorithm, LADAC is applied according to the
correlation between adjacent bit-planes. Experimental results show that our proposed algorithm outperforms
3D-SPECK; furthermore, our method does not need to take inter-band prediction or transforms into account, so its
complexity is relatively low.
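As one concrete building block: a three-dimensional wavelet transform is typically realized by applying a 1-D transform along rows, columns, and spectral bands in turn. A minimal sketch of one level of the lossless integer-lifting Haar transform (the S-transform variant) in Python; this is an illustration of the principle, not the authors' filter bank:

```python
def haar_forward(x):
    """One level of the lossless (integer) Haar/S-transform on a 1-D signal
    of even length. Returns (approximation, detail) coefficient lists.
    A 3-D transform applies this along each of the three dimensions."""
    s, d = [], []
    for a, b in zip(x[0::2], x[1::2]):
        diff = a - b               # high-pass (detail) coefficient
        d.append(diff)
        s.append(b + (diff >> 1))  # low-pass (approximation) coefficient
    return s, d

def haar_inverse(s, d):
    """Exact inverse of haar_forward: reconstruction is lossless because
    the lifting steps use only integer arithmetic."""
    x = []
    for lo, diff in zip(s, d):
        b = lo - (diff >> 1)
        x.extend([b + diff, b])
    return x

signal = [5, 3, 2, 8, 7, 7, 0, 1]
approx, detail = haar_forward(signal)
assert haar_inverse(approx, detail) == signal
```

The `>> 1` performs floor division, which is what makes the pair of lifting steps exactly invertible over the integers and hence suitable for lossless coding.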
We present an adaptive algorithm that finds the best block-matching results in a computationally constrained and varied environment. The conventional diamond search algorithm, though faster than most known algorithms, is not very robust for sequences with scene variations or significant global motion. To solve this issue, rather than relying on a single fast motion estimation algorithm, we devise a more adaptive selection among fast motion estimation algorithms. Our adaptive selection approach for fast block search (ASFBS) algorithm uses a diamond search and two new subalgorithms: a cross-three-step search algorithm for sequences with large motion and an advanced cross-diamond search algorithm for sequences with small motion. The proposed ASFBS adapts based on the length of the motion vector, the number of search points, and the matching criteria of the neighboring block. Experimental results show that ASFBS is much more robust; it is faster than other popular fast block-matching algorithms, while yielding smaller distortion.
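The diamond search at the core of ASFBS alternates a large diamond search pattern (LDSP) until the best match sits at the pattern centre, then refines once with a small diamond search pattern (SDSP). A minimal Python sketch under stated assumptions (SAD matching criterion, frames as nested lists; the frame setup is a made-up toy example, not the paper's test data):

```python
def sad(cur, ref, ax, ay, bx, by, n):
    """Sum of absolute differences between the n x n block at (ax, ay) in
    `cur` and the n x n block at (bx, by) in `ref`."""
    return sum(abs(cur[ay + j][ax + i] - ref[by + j][bx + i])
               for j in range(n) for i in range(n))

# Large and small diamond search patterns (centre included).
LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0),
        (-1, -1), (1, -1), (-1, 1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def diamond_search(cur, ref, bx, by, n):
    """Classic diamond search: repeat the LDSP until the best point is the
    centre, then refine once with the SDSP. Returns the motion vector."""
    cx, cy = bx, by
    while True:
        _, dx, dy = min((sad(cur, ref, bx, by, cx + dx, cy + dy, n), dx, dy)
                        for dx, dy in LDSP
                        if 0 <= cx + dx <= len(ref[0]) - n
                        and 0 <= cy + dy <= len(ref) - n)
        if (dx, dy) == (0, 0):
            break
        cx, cy = cx + dx, cy + dy
    _, dx, dy = min((sad(cur, ref, bx, by, cx + dx, cy + dy, n), dx, dy)
                    for dx, dy in SDSP
                    if 0 <= cx + dx <= len(ref[0]) - n
                    and 0 <= cy + dy <= len(ref) - n)
    return cx + dx - bx, cy + dy - by

# Toy frames: the 4x4 block at (4, 4) in `cur` matches `ref` at (6, 4),
# i.e. the true motion vector is (2, 0).
ref = [[x * x + 2 * y * y for x in range(16)] for y in range(16)]
cur = [row[:] for row in ref]
for j in range(4):
    for i in range(4):
        cur[4 + j][4 + i] = ref[4 + j][6 + i]
print(diamond_search(cur, ref, 4, 4, 4))  # (2, 0)
```

The LDSP's coarse step makes the search fast but greedy, which is exactly why it degrades under large or erratic motion; that weakness motivates the adaptive switching the abstract describes.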
In this paper, we propose a method based on both a 3D wavelet transform and low-density parity-check (LDPC) codes to
compress hyperspectral images within the framework of distributed source coding (DSC). The new approach,
which combines DSC with the 3D wavelet transform, makes it possible to keep the complexity at the
encoder low while achieving efficient compression of hyperspectral images. Experimental results for
hyperspectral image coding show that the new method performs better than 3D-SPIHT and outperforms
2D-SPIHT and JPEG2000.
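The DSC principle behind the low-complexity encoder can be illustrated with a far smaller code than the LDPC codes used in the paper: the encoder transmits only the syndrome of the source word, and the decoder recovers the word from correlated side information. A sketch with a (7,4) Hamming code (illustrative only; the actual system uses LDPC codes over much longer blocks):

```python
# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# representation of i + 1, so a single-bit error at position i yields the
# syndrome bin(i + 1).
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(bits):
    """s = H * bits over GF(2)."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def dsc_encode(x):
    """Encoder sends only the 3-bit syndrome of the 7-bit source word,
    instead of the word itself: low complexity, high compression."""
    return syndrome(x)

def dsc_decode(s, y):
    """Decoder has side information y differing from x in at most one bit;
    the syndrome difference identifies the differing position."""
    diff = [a ^ b for a, b in zip(s, syndrome(y))]
    pos = diff[0] * 4 + diff[1] * 2 + diff[2]  # 0 means y == x
    x_hat = y[:]
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat

# x is the source word; y is correlated side information (one bit flipped).
x = [1, 0, 1, 1, 0, 0, 1]
y = [1, 0, 0, 1, 0, 0, 1]
assert dsc_decode(dsc_encode(x), y) == x
```

The compression comes from sending 3 bits instead of 7; correct decoding relies entirely on the inter-band correlation that guarantees x and y are close, which is why the encoder itself can remain simple.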
Variable-length coding (VLC) is widely used in video coding to improve compression efficiency. However, because it suffers from loss of synchronization, a VLC bit stream is much more sensitive to random errors than a fixed-length coding (FLC) bit stream. Error-resilient entropy coding (EREC) is an effective tool for combating random errors in a VLC bit stream. Due to its intrinsic error-propagation property, when EREC is applied to a video bit stream, blocks placed later become much more likely to be lost. We propose a simple method to further improve the error robustness of a video bit stream by interleaving the transform coefficients of blocks so that low-frequency information is always placed ahead of high-frequency information. Thus, the more significant low-frequency information is less likely to be lost. Experimental results demonstrate the superiority of the proposed method. In addition, block interleaving can also be applied to a data-partitioned video bit stream with ease.
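The proposed coefficient interleaving can be sketched directly: coefficient k of every block is placed before coefficient k + 1 of any block, so all DC and low-frequency terms lead the stream. A minimal Python illustration (the block contents are arbitrary placeholders, assumed already in zigzag order):

```python
def interleave(blocks):
    """Reorder coefficients across blocks so that coefficient k of every
    block precedes coefficient k+1 of any block: all DC terms first, then
    the next zigzag-ordered coefficient of each block, and so on."""
    return [blk[k] for k in range(len(blocks[0])) for blk in blocks]

def deinterleave(stream, n_blocks):
    """Invert interleave(), recovering the per-block coefficient lists."""
    n_coeff = len(stream) // n_blocks
    return [[stream[k * n_blocks + b] for k in range(n_coeff)]
            for b in range(n_blocks)]

# Two toy blocks of three zigzag-ordered coefficients each (DC first).
blocks = [[10, 3, 1], [12, -2, 0]]
print(interleave(blocks))  # [10, 12, 3, -2, 1, 0]
```

Under EREC's error-propagation pattern, losses concentrate at the tail of the stream; after interleaving, that tail holds only high-frequency coefficients of every block rather than entire late-placed blocks.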