This PDF file contains the front matter associated with SPIE Proceedings Volume 6683, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
We present a lossless compressor for multispectral images that combines two classical tools: wavelets and neural
networks. Because of their huge dimensions, images are split into small blocks, and the wavelet transform that maps
integers to integers is applied to each block, and each band, to decorrelate it. To further increase the
compression rates achieved by the wavelet transform, coefficients in the two finest scales are predicted by means
of neural networks, which use causal information (i.e., coefficients already coded) to obtain nonlinear estimates. In
this work, we add coefficients from other spectral bands to the prediction, besides those coefficients of the same
band that lie in a causal neighbourhood. The differences are then coded with a context-based arithmetic coder.
Several options regarding the initialization, training, and architecture of the neural networks are analyzed.
Comparison results with other lossless compressors, in terms of coding time and achieved bitrates, are given.
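The integer-to-integer wavelet transform referred to above can be illustrated with its simplest member, the S-transform (an integer Haar wavelet); this is a generic sketch of the lossless round-trip property, not the authors' actual filter bank:

```python
def s_transform(x):
    """Forward integer Haar (S-) transform of an even-length list of ints."""
    s = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]  # integer averages (low-pass)
    d = [a - b for a, b in zip(x[::2], x[1::2])]         # differences (high-pass)
    return s, d

def inverse_s_transform(s, d):
    """Exact inverse: the floor divisions cancel, so the integers are recovered losslessly."""
    x = []
    for si, di in zip(s, d):
        a = si + ((di + 1) >> 1)
        x.extend([a, a - di])
    return x

pixels = [103, 101, 98, 120, 119, 118, 5, 200]
low, high = s_transform(pixels)
assert inverse_s_transform(low, high) == pixels  # lossless round trip
```

The low-pass outputs can be transformed again to build a multiscale decomposition, with the finest-scale coefficients handed to the predictor.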
In 2005, the Consultative Committee for Space Data Systems (CCSDS) approved a new Recommendation (CCSDS 122.0-B-1) for Image Data Compression. This Recommendation defines a coding system for image-data compression applicable
to digital data from payload instruments, specifying both a file syntax to allow the transmission of the data in multiple
packets and techniques to control the compression ratio. In this paper we propose a new file syntax that provides scalability
by quality, spatial location, resolution and component. The main advantages of the proposed file syntax for the
Recommendation are: 1) the definition of multiple progression orders, which adds flexibility in transmission
scenarios, and 2) support for the extraction and decoding of specific windows of interest without needing to decode the
complete code-stream. This enables the use of the Recommendation in interactive transmission scenarios.
One of the challenges in developing a hyperspectral satellite is the extremely high data rate: the huge data
volume generated on board exceeds the downlink capacity and may quickly exhaust the onboard storage
capacity. To deal with this challenge, the Canadian Space Agency (CSA) has been developing data compression
technologies for satellite imagery for many years, including compression techniques for operational use.
Recently, two near-lossless data compression techniques for hyperspectral imagery have been developed and
implemented in hardware. The CSA is considering a near-lossless data compressor for use on board a hyperspectral
satellite in order to reduce the requirement for onboard storage and to better match the available downlink capacity.
This invited paper reviews the research and development of satellite data compression for hyperspectral imagery at
CSA, briefly summarizes the two near-lossless compression techniques, addresses the application-based assessment of
the impact of lossy or near-lossless data compression on Earth observation applications, and provides the up-to-date
status of the hardware implementation of the onboard data compression technologies.
Future high-resolution instruments planned by CNES for space remote sensing missions will lead to higher bit rates
because of the increase in resolution, dynamic range, and number of spectral channels for multispectral (up to 16 bands)
and hyperspectral (hundreds of bands) imagery. Lossy data compression is therefore needed, with ever-higher
compression ratio goals and low-complexity algorithms. For optimum compression performance on such data, algorithms must
exploit both spectral and spatial correlation. In the case of multispectral images, CNES studies (in cooperation with
Thales Alenia Space, hereafter TAS) have led to an algorithm that uses a fixed transform to decorrelate the spectral
bands; the CCSDS codec then compresses each decorrelated band using a suitable multispectral rate allocation procedure.
This low-complexity decorrelator is suited to on-board hardware implementation and is under development. In the case
of hyperspectral images, CNES studies (in cooperation with TAS/TeSA/ONERA) have led to a full wavelet compression
system followed by zerotree coding methods adapted to this decomposition. We are investigating other preprocessors,
such as Independent Component Analysis, which could be used in both approaches. CNES also participates in the new
CCSDS Multispectral and Hyperspectral Data Compression Working Group.
The NASA Geostationary Imaging Fourier Transform Spectrometer (GIFTS) represents a revolutionary step in remote
sensing of Earth's atmosphere that will demonstrate the technology and measurement concepts for future NOAA
geostationary operational environmental satellites (GOES). GIFTS consists of a 128 x 128 large focal plane array (LFPA)
imaging FTS with spectral coverage from 685 to 1130 cm^-1 and 1650 to 2250 cm^-1. GIFTS was selected for flight
demonstration on NASA's New Millennium Program (NMP) Earth Observing 3 (EO-3) Satellite Mission. GIFTS
provides full-disk global coverage within an hour at moderate spectral resolution (e.g., 1.2 cm^-1) as well as
regional sounding of atmospheric temperature and absorbing-gas profiles at high spectral resolution (e.g., 0.6 cm^-1).
Given the unprecedented data volume produced by GIFTS, lossless data compression is critical for the overall success of
the GIFTS experiment where the data is to be disseminated to the user community in real-time and archived for scientific
studies and climate assessment. In this paper we study the lossless compression of GIFTS data collected
as part of the calibration and ground-based tests conducted in 2006. The standard compression methods
JPEG-2000, JPEG-LS, and CCSDS IDC 9/7M & 5/3 are investigated as compression benchmarks. The bias-adjusted
reordering (BAR) preprocessing scheme is also investigated to improve their performance on GIFTS data compression.
Multispectral imaging is becoming an increasingly important tool for monitoring the Earth and its environment
from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements
of a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial
and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission
and archiving. Examples of multispectral sensors we consider include the NASA 36-band MODIS imager, the
Meteosat Second Generation 12-band SEVIRI imager, the GOES-R series 16-band ABI imager, the current-generation
GOES 5-band imager, and Japan's 5-band MTSAT imager. Conventional lossless compression algorithms
neither reach satisfactory compression ratios nor come near the upper limits for lossless compression
of imager data as estimated from the Shannon entropy. We introduce a new lossless compression algorithm
developed for the NOAA-NESDIS satellite-based Earth science multispectral imagers. The algorithm is based
on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform
encoder. Our results are evaluated by comparison with current satellite compression algorithms such as the new
CCSDS standard compression algorithm and JPEG2000. The algorithm as presented has been designed to
work with NOAA's scientific data and so is purely lossless, but lossy modes can be supported. The compression
algorithm also structures the data in a way that makes it easy to incorporate robust error correction using FEC
coding methods such as TPC and LDPC for satellite use. This research was funded by NOAA-NESDIS for its Earth-observing
satellite program and NOAA goals.
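The spectral-prediction stage described above can be sketched as an ordinary least-squares affine prediction of one band from a neighboring band, leaving only small residuals to entropy-code; this is illustrative, since the abstract does not specify the algorithm's actual predictor:

```python
def ls_predict(ref, target):
    """Least-squares affine prediction of one band from another; returns integer residuals."""
    n = len(ref)
    mx, my = sum(ref) / n, sum(target) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(ref, target))
    var = sum((x - mx) ** 2 for x in ref)
    a = cov / var if var else 0.0      # gain
    b = my - a * mx                    # offset
    return [y - round(a * x + b) for x, y in zip(ref, target)]

band1 = [10, 20, 30, 40, 50, 60]
band2 = [22, 41, 63, 79, 102, 121]    # strongly correlated with band1
residuals = ls_predict(band1, band2)
# Residuals are far smaller in magnitude than the raw band, so they code cheaply.
assert max(abs(r) for r in residuals) < max(band2)
```

A lossless codec would transmit the predictor parameters (a, b) plus the residuals, from which the target band is reconstructed exactly.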
Radar data are routinely transmitted in real time from the conterminous United States (CONUS) radar sites and placed on
the Internet for incorporation into nowcasting, hydrology, and modeling applications in near real time. Radar data are also
archived for on-demand retrieval by the National Climatic Data Center (NCDC).
Data compression is used operationally to reduce bandwidth and storage requirements, and several compression
techniques are used operationally on radar data. Custom compression techniques have been devised for radar data that
outperform generic techniques, but the radar operations groups ultimately use off-the-shelf solutions.
The underlying ideas behind compressibility are useful beyond reducing the amount of data for transmission and
archival. The compressibility of radar data has been found useful for devising quality-control algorithms, especially for the
detection and removal of test patterns.
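The compressibility-based quality-control idea can be sketched with a generic compressor: artificial test patterns compress far better than noisy weather echoes, so an unusually high compression ratio flags a suspect scan. The data and threshold below are illustrative, not the operational algorithm:

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Uncompressed size over zlib-compressed size: higher means more structured data."""
    return len(data) / len(zlib.compress(data, 9))

random.seed(0)
weather_like = bytes(random.randrange(256) for _ in range(4096))  # noisy echo field
test_pattern = bytes((i // 64) % 256 for i in range(4096))        # smooth synthetic ramp

# A scan with a suspiciously high ratio is likely a test pattern, not weather.
assert compression_ratio(test_pattern) > compression_ratio(weather_like)
```

In practice the flagging threshold would be calibrated against the ratios observed on known-good scans.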
Independent component analysis (ICA) is well known for its success in blind source separation. It features a decorrelation
capability beyond second-order moments. Recently, ICA has been used in lossy compression for target detection in
hyperspectral imagery, where the loss of unimportant features does not affect the detection result. Ultraspectral
sounder data, however, are better compressed losslessly for the ill-posed retrieval of geophysical parameters. In this
paper we explore the use of ICA in the lossless compression of ultraspectral sounder data. The compression results show
that ICA compares favorably with JPEG2000, JPEG-LS, and CCSDS IDC 5/3 on the ten standard AIRS granules dataset.
GEONETCast Americas is a regional contribution to a developing, global, near-real-time, environmental data
dissemination system in support of the Global Earth Observation System of Systems. It will be a contribution from the
United States National Oceanic and Atmospheric Administration whose goal is to enable enhanced dissemination,
application, and exploitation of environmental data and products for the diverse societal benefits defined by the Group
on Earth Observations, including agriculture, energy, health, climate, weather, disaster mitigation, biodiversity, water
resources, and ecosystems. GEONETCast Americas will serve North, Central, and South America beginning in late
2007, using inexpensive satellite receiver stations based on Digital Video Broadcast standards, and will link with
similar regional environmental data dissemination systems deployed around the world.
The International Telecommunication Union (ITU), a United Nations (UN) agency, is the agency that,
under an international treaty, sets radio spectrum usage regulations among member nations. Within the
United States of America (USA), the organization that sets regulations, coordinates an application for use,
and provides authorization for federal government/agency use of the radio frequency (RF) spectrum is the
National Telecommunications and Information Administration (NTIA). In this regard, the NTIA defines
which RF spectrum is available for federal government use in the USA, and how it is to be used. The
NTIA is a component of the United States (U.S.) Department of Commerce of the federal government.
The significance of ITU regulations is that ITU approval is required for U.S. federal government/agency
permission to use the RF spectrum outside of U.S. boundaries. All member nations have signed a treaty
to do so. U.S. federal regulations for federal use of the RF spectrum are found in the Manual of
Regulations and Procedures for Federal Radio Frequency Management, and extracts of the manual are
found in what is known as the Table of Frequency Allocations. Nonfederal government and private sector
use of the RF spectrum within the U.S. is regulated by the Federal Communications Commission (FCC).
There is a need to control "unwanted emissions" (defined to include out-of-band emissions, which are
those immediately adjacent to the necessary and allocated bandwidth, plus spurious emissions) to
preclude interference to all other authorized users. This paper discusses the causes, effects, and
mitigation of unwanted RF emissions to systems in adjacent spectra.
Digital modulations are widely used in today's satellite communications. Commercial communications
sector standards are covered for the most part worldwide by Digital Video Broadcast - Satellite (DVB-S)
and digital satellite news gathering (DSNG) evolutions and the second generation of DVB-S (DVB-S2)
standard, developed by the European Telecommunications Standards Institute (ETSI). In the USA, the
Advanced Television Systems Committee (ATSC) has adopted Europe's DVB-S and DVB-S2 standards
for satellite digital transmission. With today's digital modulations, RF spectral side lobes can extend out
many times the modulating frequency on either side of the carrier at excessive power levels unless
filtered. Higher-order digital modulations include quadrature phase shift keying (QPSK), 8 PSK (8-ary
phase shift keying), 16 APSK (also called 12-4 APSK (amplitude phase shift keying)), and 16 QAM
(quadrature amplitude modulation); they are key for higher spectrum efficiency to enable higher data rate
transmissions in limited available bandwidths. Nonlinear high-power amplifiers (HPAs) can regenerate
frequency spectral side lobes on input-filtered digital modulations. The paper discusses technologies and
techniques for controlling these spectral side lobes, such as the use of square root raised cosine (SRRC)
filtering before or during the modulation process, HPA output power back-off (OPBO), and RF filters after
the HPA. Spectral mask specifications are a common method used by the NTIA and ITU to define spectral
occupancy power limits; they are intended to reduce interference among RF spectrum users by limiting
excessive radiation at frequencies beyond the regulatory allocated bandwidth.
The focus here is on the communication systems of U.S. government satellites used for space
research, space operations, Earth exploration satellite services (EESS), meteorological satellite services
(METSATS), and other government services. The 8025 to 8400 megahertz (MHz) X band illustrates the
"unwanted emissions" issue: 8025 to 8400 MHz abuts the 8400 to 8450 MHz band allocated by the NTIA
and ITU to space research for space-to-Earth transmissions, such as the reception of very weak Deep
Space Network signals.
The views and ideas expressed in this paper are those of the authors and do not necessarily reflect
those of The Aerospace Corporation or The National Oceanic and Atmospheric Administration (NOAA)
and its National Environmental Satellite Service (NESDIS).
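The square-root raised cosine filtering discussed above can be illustrated by generating SRRC taps directly from the textbook impulse response; the roll-off, oversampling, and span values below are arbitrary examples, not any flight filter's parameters:

```python
import math

def srrc_taps(beta, sps, span):
    """SRRC taps: roll-off beta, sps samples per symbol, span symbols on each side."""
    taps = []
    for k in range(-span * sps, span * sps + 1):
        t = k / sps  # time in symbol periods
        if t == 0.0:
            h = 1.0 + beta * (4.0 / math.pi - 1.0)
        elif abs(abs(t) - 1.0 / (4.0 * beta)) < 1e-12:
            # removable singularity of the closed-form expression
            h = (beta / math.sqrt(2.0)) * (
                (1.0 + 2.0 / math.pi) * math.sin(math.pi / (4.0 * beta))
                + (1.0 - 2.0 / math.pi) * math.cos(math.pi / (4.0 * beta)))
        else:
            num = (math.sin(math.pi * t * (1.0 - beta))
                   + 4.0 * beta * t * math.cos(math.pi * t * (1.0 + beta)))
            den = math.pi * t * (1.0 - (4.0 * beta * t) ** 2)
            h = num / den
        taps.append(h)
    return taps

taps = srrc_taps(beta=0.35, sps=8, span=6)
mid = len(taps) // 2
assert taps == taps[::-1]      # symmetric (linear-phase) impulse response
assert taps[mid] == max(taps)  # peak at t = 0
```

Applying these taps at the modulator and again at the receiver yields an overall raised-cosine response, confining the transmitted spectrum while preserving zero inter-symbol interference at the symbol instants.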
A theoretical investigation of the performance of optical code division multiple access (OCDMA) for
compressed video transmission is presented. OCDMA has many advantages over a typical synchronous protocol such as
time division multiple access (TDMA). Pulsed-laser transmission of multichannel digital video can be done using various
techniques, depending on whether the multichannel data are to be synchronous or asynchronous. A typical form of
asynchronous digital operation is wavelength division multiplexing (WDM), in which the digital data of each video
source are assigned a specific and separate wavelength. Sophisticated hardware, such as accurate wavelength control of
all lasers and tunable narrow-band optical filters at the receivers, is required in this case. A major disadvantage of
CDMA is the reduction in per-channel data rate (relative to the speeds available in the laser itself) that results from the
insertion of code addressing. Hence, optical CDMA is meaningful for video transmission when individual
channel video bit rates can be significantly reduced, which can be done by compressing the video data. In our work, the
standard JPEG is implemented for video image compression, where a compression ratio of about 60% is obtained without
noticeable image degradation. Compared with other existing techniques, the JPEG standard achieves a higher compression
ratio with a high S/N ratio. We demonstrate the auto- and cross-correlation properties of the codes, and we show
the implementation of bipolar Walsh coding in an OCDMA system and its use in the transmission of images and video.
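The bipolar Walsh codes used for spreading are rows of a Hadamard matrix; a short sketch showing their ideal correlation properties (order and length chosen only for illustration):

```python
def walsh_codes(order):
    """Rows of a 2**order Hadamard matrix as bipolar (+1/-1) spreading codes."""
    h = [[1]]
    for _ in range(order):
        # Sylvester construction: H_{2n} = [[H, H], [H, -H]]
        h = [row + row for row in h] + [row + [-c for c in row] for row in h]
    return h

codes = walsh_codes(3)  # eight codes of length 8
dot = lambda a, b: sum(x * y for x, y in zip(a, b))

for i, a in enumerate(codes):
    for j, b in enumerate(codes):
        # autocorrelation peaks at the code length; cross-correlation is zero
        assert dot(a, b) == (len(a) if i == j else 0)
```

The zero cross-correlation is what lets synchronized users share the channel without mutual interference; real optical links must also account for the unipolar nature of intensity modulation.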
The Geostationary Imaging Fourier Transform Spectrometer (GIFTS), as part of NASA's New Millennium Program,
is an advanced instrument to provide high-temporal-resolution measurements of atmospheric temperature and water
vapor, which will greatly facilitate the detection of rapid atmospheric changes associated with destructive weather events,
including tornadoes, severe thunderstorms, flash floods, and hurricanes. The Committee on Earth Science and
Applications from Space under the National Academy of Sciences recommended that NASA and NOAA complete the
fabrication, testing, and space qualification of the GIFTS instrument and that they support the international effort to
launch GIFTS by 2008. Lossless data compression is critical for the overall success of the GIFTS experiment, or any
other very high data rate experiment where the data is to be disseminated to the user community in real-time and
archived for scientific studies and climate assessment. In general, lossless data compression is needed for high data rate
hyperspectral sounding instruments such as GIFTS for (1) transmitting the data down to the ground within the bandwidth
capabilities of the satellite transmitter and ground station receiving system, (2) compressing the data at the ground
station for distribution to the user community (as is traditionally performed with GOES data via satellite rebroadcast),
and (3) archival of the data without loss of any information content so that it can be used in scientific studies and climate
assessment for many years after the date of the measurements. In this paper we study the lossless compression of GIFTS
data collected as part of the calibration and ground-based tests conducted in 2006. Predictive
partitioned vector quantization (PPVQ) is investigated for higher lossless compression performance. PPVQ consists of
linear prediction, channel partitioning, and vector quantization. It yields an average compression ratio of 4.65 on the
GIFTS test data, significantly outperforming standard compression methods such as JPEG-2000, JPEG-LS, and
CCSDS IDC 9/7M & 5/3.
This paper reports a comparative study of lossless compression algorithms for MODIS data. MODIS, the
Moderate Resolution Imaging Spectroradiometer, is a 36-band visible and IR multispectral imager aboard the
Terra and Aqua satellites, with spatial resolution ranging from 0.25 to 1 kilometer and spectral coverage
ranging from 0.405-0.420 to 4.482-4.549 microns. MODIS data rates are 10.6 Mbps (peak daytime) and 6.1
Mbps (orbital average). Faced with such an enormous volume of data from a current-generation imager, this study
provides a comparison of current compression algorithms as a baseline for future work. The Hierarchical Data
Format (HDF) is the standard format selected for data archiving and distribution within the Earth Observing System
Data and Information System (EOSDIS). This system currently handles over one terabyte of data daily, and the
volume continues to increase over time. With the data volume of satellite Earth science multispectral imagers
growing, it becomes increasingly important to evaluate which compression algorithms are most appropriate
for data management in transmission and archiving. This comparative compression study uses a wide range of
standard implementations of the leading lossless compression algorithms; examples include image compression
algorithms such as PNG and JPEG2000, and widely used file compression formats such as BZIP2 and 7z. The
study also includes a comparison with the most recent recommended compression standard of the Consultative
Committee for Space Data Systems (CCSDS).
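A comparison of this kind can be reproduced in miniature with the general-purpose compressors in the Python standard library (zlib backs PNG's DEFLATE, bz2 backs BZIP2, lzma backs 7z); the data below are a smooth synthetic stand-in, not MODIS granules:

```python
import bz2
import lzma
import math
import zlib

# Smooth synthetic "band": slowly varying values, loosely like calibrated radiances.
band = bytes(100 + round(20 * math.sin(i / 40)) for i in range(1 << 16))

for name, compress in [("zlib", lambda d: zlib.compress(d, 9)),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    ratio = len(band) / len(compress(band))
    print(f"{name}: {ratio:.1f}x")
    assert ratio > 1.0  # smooth data compresses under every general-purpose codec
```

On real imager data the ranking between codecs depends heavily on noise level and band-to-band correlation, which is exactly what such a benchmark measures.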
In an error-prone environment, the compression of ultraspectral sounder data is vulnerable to error propagation.
Tunstall coding is a variable-to-fixed-length code that compresses data by mapping a variable number of source
symbols to a fixed-length codeword. It avoids the resynchronization difficulty encountered in fixed-to-variable-length
codes such as Huffman coding and arithmetic coding. This paper explores the use of Tunstall coding in
reducing the error propagation for ultraspectral sounder data compression. The results show that our Tunstall approach
has a favorable compression ratio compared with JPEG-2000, 3D SPIHT, JPEG-LS, CALIC, and CCSDS IDC 5/3. It
also has less error propagation than JPEG-2000.
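Tunstall's construction repeatedly expands the most probable parse word until the dictionary fills the fixed codeword space. A small sketch under an assumed memoryless source (the paper's source model for sounder data will differ):

```python
import heapq

def tunstall_dictionary(probs, codeword_bits):
    """Build a variable-to-fixed Tunstall parsing dictionary of at most 2**bits words."""
    max_words = 2 ** codeword_bits
    n = len(probs)
    heap = [(-p, sym) for sym, p in probs.items()]  # max-heap on word probability
    heapq.heapify(heap)
    # Expanding one leaf removes it and adds n children: net growth of n - 1 words.
    while len(heap) + (n - 1) <= max_words:
        neg_p, word = heapq.heappop(heap)
        for sym, p in probs.items():
            heapq.heappush(heap, (neg_p * p, word + sym))
    words = sorted(w for _, w in heap)
    return {w: i for i, w in enumerate(words)}  # source word -> fixed-length index

book = tunstall_dictionary({"a": 0.7, "b": 0.3}, codeword_bits=3)
assert len(book) == 8  # every codeword is exactly 3 bits
```

Because the words are the leaves of a complete tree, they form a prefix-free set: any source string parses greedily, and a corrupted fixed-length codeword damages only its own parse word rather than desynchronizing the stream.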
The minimum-redundancy prefix-free code problem is to determine an array l = {l_1, ..., l_n} of n integer codeword
lengths, given an array f = {f_1, ..., f_n} of n symbol occurrence frequencies, such that the Kraft-McMillan inequality
2^(-l_1) + ... + 2^(-l_n) <= 1 holds and the total number of coded bits f_1*l_1 + ... + f_n*l_n is minimized. Previous
minimum-redundancy prefix-free coding based on Huffman's greedy algorithm solves this problem in O(n) time if the
input array f is sorted, but in O(n log n) time if f is unsorted. In this paper a fast algorithm is proposed to solve this
problem in linear time when f is unsorted. It is suitable for real-time applications in satellite communication and
consumer electronics. We also develop its VLSI architecture, which consists of four modules: the frequency table
builder, the codeword length table builder, the codeword table builder, and the input-to-codeword mapper.
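The sorted case mentioned above is handled by the classic two-queue variant of Huffman's algorithm: merged weights emerge in nondecreasing order, so no heap is needed. A sketch follows (this is the known sorted-input method, not the paper's algorithm for unsorted input):

```python
from collections import deque

def huffman_lengths(freqs):
    """Optimal codeword lengths for frequencies given in nondecreasing order."""
    n = len(freqs)
    if n < 2:
        return [0] * n
    lengths = [0] * n
    leaves = deque((f, [i]) for i, f in enumerate(freqs))
    merged = deque()  # internal-node weights arrive already sorted

    def pop_min():
        if not merged or (leaves and leaves[0][0] <= merged[0][0]):
            return leaves.popleft()
        return merged.popleft()

    while len(leaves) + len(merged) > 1:
        f1, ids1 = pop_min()
        f2, ids2 = pop_min()
        for i in ids1 + ids2:  # every symbol under the new node gains one bit
            lengths[i] += 1
        merged.append((f1 + f2, ids1 + ids2))
    return lengths

l = huffman_lengths([1, 1, 2, 4])
assert l == [3, 3, 2, 1]
assert sum(2 ** -x for x in l) <= 1.0  # Kraft-McMillan inequality holds
```

Tracking explicit symbol-id lists keeps the sketch short but is not O(n) overall; a production version would propagate depths through parent pointers instead.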
Remote Sensing Data Archiving, Management, and Distribution
NASA, NOAA, and USGS collections of Earth science data are large and federated and have active user
communities. Our experience raises five categories of issues for long-term archival:
*Organization of the data in the collections is not well-described by text-based categorization
principles
*Metadata organization for these data is not well-described by Dublin Core and needs attention to data
access and data use patterns
*Long-term archival requires risk management approaches to dealing with the unique threats to
knowledge preservation specific to digital information
*Long-term archival requires careful attention to archival cost management
*Professional data stewards for these collections may require special training.
This paper suggests three mechanisms for improving the quality of long-term archival:
*Using a maturity model to assess the readiness of data for accession, for preservation, and for future
data usefulness
*Developing a risk management strategy for systematically dealing with threats of data loss
*Developing a life-cycle cost model for continuously evolving the collections and the data centers that
house them.
In this paper, a novel fine granularity scalability (FGS) scheme is presented: fine granularity scalable video coding
based on EBCOT (FGSBE). The proposed coding scheme not only provides improved coding efficiency compared
with PFGS but also prevents error accumulation and eliminates the block effect. In the FGSBE scheme, to improve
coding efficiency, the DCT coefficients of the enhancement layer are rearranged and transformed appropriately to
prepare them for EBCOT coding. Because the whole enhancement layer is rearranged, the block effect is greatly
reduced. Our experimental results show that the FGSBE scheme can improve video quality by up to 0.5 dB over
the FGS scheme and 0.1 dB over simplified PFGS in average PSNR.
Despite tremendous efforts to avoid them, stripes are a re-occurring problem for many remote imaging sensors.
Much work has focused on suppressing or eliminating them in order to recover accurate observed radiances.
Beyond the obvious need to eliminate stripes to obtain accurate scientific measurements, stripes can also significantly
impact the performance of compression algorithms. Many compression algorithms are based on linear
representations of image space or assume the data to be relatively smooth. In contrast, stripes produce
nonlinearities in the data as well as sharp discontinuities, which make it seem necessary to describe the images
with many parameters. Yet the sources and nature of the stripes are often not well known; they may come from
specific irregularities in the sensors. If the a priori construction of the sensor is accounted for, and the stripes
are statistically modeled, it is possible to transmit the stripe parameters separately along with de-striped images.
The de-striped images have image statistics whose assumptions are much closer to those for which standard
compression algorithms are optimized. As an example, we show this yields a significant boost in the performance
of these algorithms when applied to the de-striped MODIS images.
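The gain from separating stripe parameters can be demonstrated with a toy image whose rows carry additive offsets: removing the offsets and sending them as side information shrinks the compressed size. zlib stands in for the standard compressors, and the additive-per-row stripe model is a deliberate simplification of real MODIS stripe statistics:

```python
import zlib

ROWS, COLS = 64, 256
base_row = bytes((j // 4) % 200 for j in range(COLS))  # scene content, identical rows
offsets = [(17 * i) % 23 for i in range(ROWS)]         # per-detector stripe biases

striped = b"".join(bytes((b + offsets[i]) % 256 for b in base_row) for i in range(ROWS))
destriped = base_row * ROWS                            # stripes modeled and removed

size_striped = len(zlib.compress(striped, 9))
size_destriped = len(zlib.compress(destriped, 9)) + len(bytes(offsets))  # + side info
assert size_destriped < size_striped  # de-striping pays even after sending parameters
```

The de-striped image matches the smooth-data assumptions the compressor is optimized for, while the stripe parameters cost only a few bytes per row.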
Region-of-interest (ROI) image compression with lossless or near-lossless ROI coding compresses the regions of
interest losslessly and the remaining regions lossily. This technique both preserves high-quality image information
and maintains a high compression ratio, resolving the conflict between image quality and compression ratio. With
ROI coding, different regions of an image can be compressed with different accuracy, so that the important parts are
coded at higher quality than the rest of the image. In shooting-range test projects, a great many target images must
be stored, yet the tester is interested only in the target region and not in the background. Accordingly, this paper
proposes an ROI compression method based on the target image. Experimental results show that the method greatly
reduces image data storage while preserving the target information.
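A minimal sketch of the lossless-ROI/lossy-background split. The mask, the quantization step, and the function name are illustrative assumptions; the paper's actual codec is not specified here:

```python
import numpy as np

def roi_quantize(image, mask, step=16):
    """Lossless ROI, lossy background.

    Pixels where mask is True (the target region) are kept bit-exact;
    background pixels are quantized to multiples of `step`, which
    shrinks the symbol alphabet so a downstream entropy coder can
    reach a much higher compression ratio on the background.
    """
    coded = image.copy()
    bg = ~mask
    coded[bg] = (image[bg] // step) * step
    return coded
```

The quantized background has far fewer distinct values than the original, while every target pixel survives unchanged, which is exactly the quality/ratio trade the abstract describes.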
It is well known that, at the same reconstruction quality, the coding efficiency of the H.264 video standard is about
50% higher than that of H.263. However, this gain is achieved through rate-distortion optimization (RDO) and
multi-mode motion estimation, which greatly increase the computational complexity of the encoder and slow down
video compression. Therefore, when estimating the motion mode of a video sequence, if the best prediction mode can
be decided as early as possible, and unnecessary mode searches or costly computations can be avoided, motion
estimation can be accelerated. Following this idea, this paper proposes a fast inter-frame mode selection and motion
estimation algorithm. Based on the texture features and local motion characteristics of the current macroblock (MB),
a set of effective candidate prediction modes is selected. This selection effectively reduces the search time for
subsequent prediction modes, noticeably lowering the computational complexity of the encoder and improving the
speed of motion estimation.
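The texture-and-motion pruning idea can be sketched as a simple rule. The thresholds and the specific pruning criteria below are hypothetical stand-ins for the paper's selection rules; only the H.264 partition names are standard:

```python
def candidate_inter_modes(mb_variance, mb_sad, var_thresh=100.0, sad_thresh=512):
    """Select candidate inter prediction modes for one macroblock.

    Smooth, nearly static MBs are usually coded well by SKIP/16x16,
    so smaller partitions are tried only when the block shows high
    texture (variance) or strong local motion (SAD against the
    reference frame), shrinking the RDO search space.
    """
    modes = ["SKIP", "16x16"]
    if mb_variance > var_thresh or mb_sad > sad_thresh:
        modes += ["16x8", "8x16"]
    if mb_variance > var_thresh and mb_sad > sad_thresh:
        modes += ["8x8", "8x4", "4x8", "4x4"]
    return modes
```

The encoder then runs full RDO only over the returned subset instead of all eight partitions, which is where the speed-up comes from.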
It is well known that the intra-frame prediction algorithm of the H.264 video coding standard makes full use of the
spatial correlation within an image, so coding efficiency is improved. However, in H.264 both predicted frames
(P-frames and B-frames) and intra frames (I-frames) perform prediction on every macroblock, using the surrounding
pixels, so coding efficiency is improved at the cost of higher encoder complexity. For real-time video
communication, a new algorithm is needed to reduce the computational complexity of intra-frame prediction and
speed up encoding. For these reasons, this paper proposes a modified fast intra-frame prediction mode decision
algorithm. It uses the directional information of the luminance and chrominance patterns to select candidate modes;
constraining the edge direction restricts the number of candidate modes, greatly reducing the number that must be
evaluated and thus the amount of computation. The proposed algorithm also adopts an early-termination strategy for
4×4 block mode selection to choose the most likely mode. Experimental results confirm that the proposed algorithm
markedly reduces the computational complexity of mode selection, making it suitable for real-time video
communication.
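The edge-direction restriction can be illustrated with a small sketch. The gradient test and the ratio threshold are our assumptions; only the mode numbering follows the standard H.264 4×4 luma convention (0 = vertical, 1 = horizontal, 2 = DC):

```python
import numpy as np

def candidate_intra4_modes(block, ratio=2.0):
    """Prune 4x4 intra prediction modes by dominant edge direction.

    Vertical prediction (mode 0) copies the row above, so it suits
    blocks whose columns are nearly uniform (little variation down
    each column); horizontal prediction (mode 1) suits uniform rows.
    When neither direction clearly dominates, keep all three basic
    modes and let full RDO decide.
    """
    b = np.asarray(block, dtype=np.float64)
    var_down = np.abs(np.diff(b, axis=0)).sum()    # change down columns
    var_across = np.abs(np.diff(b, axis=1)).sum()  # change along rows
    if var_down * ratio < var_across:
        return [0, 2]        # vertical prediction + DC
    if var_across * ratio < var_down:
        return [1, 2]        # horizontal prediction + DC
    return [0, 1, 2]         # ambiguous: keep DC and both directions
```

Restricting the candidate set from all nine 4×4 modes to two or three in the common directional cases is what cuts the mode-selection cost.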