Satellite images are important sources of information for meteorologists who need to predict rapid weather changes, such as storms, in the present and the near future (nowcasting). Traditional numerical weather forecasts cannot be used for this purpose because they are computed with a time lag of several hours, which means that the most recent weather changes are not taken into account.
This paper presents a method for computing synthetic satellite images from simulated forecast files. The cloud information in numerical forecast data sets is far more useful when it can be visualized in a well-known representation such as the satellite image.
The proposed method uses artificial neural networks to construct a model trained on data from numerical forecasts and on classified satellite data captured at the same points in time. The cloud cover parameters in the forecast data set are tied to the cloud classification in the satellite image through a point-to-point representation. The results show that this is a useful method for computing synthetic satellite images. The level of detail in the resulting images is lower than in a real satellite image, but high enough to convey the principal features of the cloud cover.
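The point-to-point idea above can be sketched in a few lines: at every grid point, forecast cloud-cover parameters are mapped to a satellite-derived cloud class. The paper uses an artificial neural network; the sketch below substitutes the simplest such model, a single softmax layer trained by gradient descent on hypothetical, synthetic data (the feature set and class labels are illustrative assumptions, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_point(cls):
    """Synthetic forecast features (low/mid/high cloud fraction) for one grid point."""
    centers = {0: [0.1, 0.1, 0.1],   # clear
               1: [0.8, 0.2, 0.1],   # low cloud
               2: [0.1, 0.2, 0.9]}   # high cloud
    return np.clip(centers[cls] + 0.1 * rng.standard_normal(3), 0, 1)

# Training pairs: forecast features <-> satellite cloud class at the same time.
y = rng.integers(0, 3, size=600)
X = np.array([make_point(c) for c in y])

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((3, 3)); b = np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(2000):                     # plain gradient descent
    p = softmax(X @ W + b)
    g = (p - onehot) / len(X)             # cross-entropy gradient
    W -= 1.0 * X.T @ g
    b -= 1.0 * g.sum(axis=0)

pred = np.argmax(softmax(X @ W + b), axis=1)
accuracy = (pred == y).mean()
```

A real network adds hidden layers, but the training pairing (forecast vector in, satellite cloud class out, point by point) is the same.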
KEYWORDS: High dynamic range imaging, Principal component analysis, Neural networks, Camera shutters, Visualization, Artificial neural networks, Data modeling, Clouds, Cameras, Neurons
The appearance of the sky has a fundamental effect on the way human beings perceive an environment. This paper presents a method for computing synthetic high-dynamic-range fisheye images from weather parameter data sets. These images can then be used in global-illumination systems (e.g. Radiance) to define the lighting conditions for an arbitrary weather state. Applications of this technology can be found in flight simulators and in architectural visualization.
The method combines artificial neural networks and principal component analysis to associate the appearance of the sky with the state of a weather parameter vector. A model is trained with examples of sky images and weather data from a period of seven months. This model is then used to generate artificial sky images corresponding to a specific weather parameter vector. Unlike many previous methods, this approach can synthesize a sky image that varies with the current weather state. The results show that, although it is not possible to represent the cloud details, it is possible to distinguish between different weather states.
Visualizing a weather prediction data set by actually synthesizing an image of the sky is a difficult problem. In this paper we present a method for synthesizing realistic sky images from weather prediction and climate prediction data. Images of the sky are combined with a number of weather parameters (such as pressure and temperature) to train an artificial neural network (ANN) to predict the appearance of the sky from a given set of weather parameters. Hourly measurements from a period of eight months are used. Principal component analysis (PCA) is used to decompose images of the sky into their eigencomponents -- the eigenskies. In this way the image information is compressed into a small number of coefficients while still preserving the main information in the image, although it also means that the fine details of the cloud cover cannot be synthesized. The PCA coefficients, together with the weather parameters measured at the same time, form a data point that is used to train the ANN. The results show that the method gives adequate results: although some discrepancies exist, the main appearance is correct. It is possible to distinguish between different types of weather; a rainy day looks rainy and a sunny day looks sunny.
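The eigensky compression step can be sketched as follows: sky images are stacked as rows, the mean image is subtracted, and an SVD yields the eigenskies; each image is then represented by a handful of coefficients. Image size, component count and the synthetic "sky" data below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

n_images, n_pixels, k = 200, 16 * 16, 5

# Synthetic "sky images": a few smooth basis patterns plus a little noise.
t = np.linspace(0, 1, n_pixels)
patterns = np.stack([np.sin(2 * np.pi * (i + 1) * t) for i in range(k)])
coeffs_true = rng.standard_normal((n_images, k))
images = coeffs_true @ patterns + 0.01 * rng.standard_normal((n_images, n_pixels))

mean_sky = images.mean(axis=0)
U, S, Vt = np.linalg.svd(images - mean_sky, full_matrices=False)
eigenskies = Vt[:k]                             # first k eigenskies

coeffs = (images - mean_sky) @ eigenskies.T     # compress: k numbers per image
recon = coeffs @ eigenskies + mean_sky          # decompress

rel_err = np.linalg.norm(recon - images) / np.linalg.norm(images)
```

Each image is now k coefficients instead of n_pixels values; the ANN only has to predict those k numbers from the weather vector, which is exactly why fine cloud detail outside the leading components is lost.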
Understanding the properties of time-varying illumination spectra is important in all applications where dynamic color changes due to changes in illumination characteristics have to be analyzed or synthesized. Examples are (dynamical) color constancy and the creation of realistic animations. In this article we show how group-theoretical methods can be used to describe sequences of time-changing illumination spectra with only a few parameters. From this description we also derive a differential equation that describes the illumination changes. We illustrate the method with investigations of black-body radiation and measured sequences of daylight spectra.
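Black-body radiation is the simplest concrete example of such a low-parameter family: Planck's law determines the whole spectrum from a single parameter, the temperature. The sketch below (illustrative wavelength grid and temperatures, not data from the article) shows how a sequence of spectra is traced out by varying that one parameter, with the peak moving to shorter wavelengths as the temperature rises (Wien's displacement law).

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23    # Planck, light speed, Boltzmann (SI)

def planck(lam, T):
    """Spectral radiance of a black body at wavelength lam (m), temperature T (K)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

lam = np.linspace(300e-9, 1100e-9, 801)     # 300-1100 nm grid, 1 nm steps

def peak_wavelength(T):
    return lam[np.argmax(planck(lam, T))]

# One temperature parameter -> one spectrum; a temperature sequence -> a
# curve of spectra, which is the kind of object the group description captures.
peaks = {T: peak_wavelength(T) for T in (3000, 4500, 6500)}
```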
We present a projective geometry framework for color invariants using the Extended Dichromatic Reflection Model, in which more realistic and more complex illuminations are considered. Many assumptions made by other methods are relaxed in our framework. Specifically, some of the proposed invariants require no assumptions beyond those of the Extended Dichromatic Reflection Model itself. Placing color invariance in a projective geometry framework allows us to generate different types of invariants and to clarify the assumptions under which they are valid. Experiments are presented that illustrate the results derived within our framework.
KEYWORDS: Image retrieval, Error analysis, Databases, Information operations, Statistical analysis, RGB color model, Solar thermal energy, Content based image retrieval, 3D image processing, Image processing
Color is widely used for content-based image retrieval. In these applications the color properties of an image are characterized by the probability distribution of the colors in the image. These distributions are very often estimated with histograms, even though histograms have many drawbacks compared to other estimators such as kernel density methods.
In this paper we investigate whether kernel density estimators could give better descriptors of color images than histograms. We used these descriptors both to estimate the parameters of the underlying color distribution and in a color-based image retrieval (CBIR) application, with the MPEG7 database of 5466 color images and its 50 standard queries as the benchmark. Noisy images were also generated and fed into the CBIR application to test the robustness of the descriptors against noise. The results of our experiments show that good density estimators are not necessarily good descriptors for CBIR applications: we found that histograms perform better than kernel-based methods when used as descriptors for CBIR.
In the second part of the paper we discuss optimal values of the important parameters in the construction of these descriptors, particularly the smoothing parameter (the bandwidth) of the estimators. Our experiments show that an over-smoothed bandwidth gives better retrieval performance.
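The two descriptor types being compared can be sketched for a one-dimensional color channel: a normalized histogram and a Gaussian kernel density estimate evaluated on a grid, both usable as probability vectors under an L1 retrieval distance. Bin count, bandwidth and the synthetic "images" are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def hist_descriptor(samples, bins=32):
    """Histogram descriptor: normalized bin counts over [0, 1]."""
    h, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def kde_descriptor(samples, bins=32, bandwidth=0.05):
    """Gaussian KDE evaluated at bin centers, normalized to sum to 1."""
    grid = (np.arange(bins) + 0.5) / bins
    d = grid[:, None] - samples[None, :]
    dens = np.exp(-0.5 * (d / bandwidth) ** 2).sum(axis=1)
    return dens / dens.sum()

def l1(a, b):
    return np.abs(a - b).sum()

# Two "images" with similar color statistics and one that clearly differs.
img_a = np.clip(rng.normal(0.3, 0.1, 2000), 0, 1)
img_b = np.clip(rng.normal(0.32, 0.1, 2000), 0, 1)
img_c = np.clip(rng.normal(0.7, 0.1, 2000), 0, 1)

d_similar = l1(hist_descriptor(img_a), hist_descriptor(img_b))
d_different = l1(hist_descriptor(img_a), hist_descriptor(img_c))
```

Raising `bandwidth` smooths the KDE descriptor; the paper's finding is that, for retrieval, such over-smoothing helps even though it hurts density estimation accuracy.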
We present a framework, based on differential geometry, for computing the distance between color distributions. We investigate in more detail the case in which color distributions are described as linear combinations of a set of pre-computed basis functions. Experiments in our color-based image retrieval system, carried out on 1000 images from the Corel image database, show the advantage of our method based on the new distance measure and color descriptor.
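One standard differential-geometric distance between probability densities is the Fisher-Rao geodesic distance: mapping each density p to sqrt(p) places it on the unit sphere in L2, and the distance is the arc length 2*arccos of the Bhattacharyya coefficient. The sketch below uses this well-known construction as a hedged stand-in; it is not necessarily the exact measure or descriptor developed in the paper.

```python
import numpy as np

def fisher_rao(p, q):
    """Fisher-Rao geodesic distance between two discrete densities p and q."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()           # ensure both sum to 1
    bc = np.sqrt(p * q).sum()                 # Bhattacharyya coefficient
    return 2.0 * np.arccos(np.clip(bc, 0.0, 1.0))

uniform = np.ones(8) / 8
peaked = np.array([0.9] + [0.1 / 7] * 7)

d_self = fisher_rao(uniform, uniform)         # identical densities -> 0
d_far = fisher_rao(uniform, peaked)           # dissimilar densities -> large
```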
KEYWORDS: Databases, Color vision, Fourier transforms, Human vision and color perception, Inspection, Pattern recognition, Reflectivity, Feature extraction, Visual process modeling, Chromium
Color processing methods can be divided into methods based on human color vision and spectral-based methods. Human-vision-based methods usually describe color with three parameters that are easy to interpret, since they model familiar color perception processes. They share, however, the limitations of human color vision, such as metamerism. Spectral-based methods describe colors by their underlying spectra and thus do not involve human color perception. They are often used in industrial inspection and remote sensing. Most spectral methods employ a low-dimensional (three- to ten-dimensional) representation of the spectra obtained from an orthogonal (usually eigenvector) expansion. While the spectral methods have a solid theoretical foundation, their results are often difficult to interpret. In this paper we show that for a large family of spectra the space of eigenvector coefficients has a natural cone structure. We can therefore define a natural hyperbolic coordinate system whose coordinates are closely related to intensity, saturation and hue. The relation between the hyperbolic coordinate system and the perceptually uniform Lab color space is also shown. Defining a Fourier transform in the hyperbolic space can have applications in pattern recognition problems.
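The cone structure described above can be made concrete: for coefficient vectors (c0, c1, c2) with c0 > 0 and c0^2 > c1^2 + c2^2, a hyperbolic coordinate system (L, u, phi) can be defined whose coordinates roughly play the roles of intensity, saturation and hue. The exact parameterization below is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

def to_hyperbolic(c0, c1, c2):
    """Map a point inside the cone to (intensity, saturation, hue)-like coordinates."""
    L = np.sqrt(c0**2 - c1**2 - c2**2)      # "intensity": Lorentzian radius
    u = np.arctanh(np.hypot(c1, c2) / c0)   # "saturation": hyperbolic angle
    phi = np.arctan2(c2, c1)                # "hue": ordinary angle
    return L, u, phi

def from_hyperbolic(L, u, phi):
    """Inverse map back to eigenvector coefficients."""
    return (L * np.cosh(u),
            L * np.sinh(u) * np.cos(phi),
            L * np.sinh(u) * np.sin(phi))

# Round trip for a point inside the cone (c0^2 > c1^2 + c2^2).
c = (2.0, 0.8, -0.5)
back = from_hyperbolic(*to_hyperbolic(*c))
```

The hyperbolic angle u is unbounded, unlike an ordinary saturation in [0, 1], which is what makes a Fourier analysis on the hyperbolic space possible.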
This paper gives an overview of operator-based models that seem especially well suited to laser scanning microscopy. These methods were developed by Nazarathy et al. in a series of papers about 10 years ago, in which they demonstrated that both the operators and Gaussian beams can be represented by matrices. We implemented the operator algebra in Mathematica, and we show by some simple examples how to analyze paraxial systems. A number of empirical experiments have also been performed to verify the validity of the model.
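The matrix idea referenced above can be sketched with the standard paraxial formalism: optical elements become 2x2 ABCD ray matrices, and a Gaussian beam is tracked through them via its complex beam parameter q, transformed as q' = (Aq + B)/(Cq + D). The example below (a collimated HeNe beam focused by a thin lens; all values illustrative, not from the paper) checks that the new waist lands near the focal plane.

```python
import numpy as np

def free_space(d):
    """ABCD matrix for propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """ABCD matrix for a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def propagate_q(q, M):
    """Transform the complex beam parameter q through an ABCD matrix."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

wavelength = 633e-9                  # HeNe wavelength (illustrative)
w0 = 1e-3                            # 1 mm waist of a collimated input beam
z_R = np.pi * w0**2 / wavelength     # Rayleigh range
q0 = 1j * z_R                        # q at the input waist: q = i * z_R

f = 0.1                              # lens focal length in meters
q_after_lens = propagate_q(q0, thin_lens(f))

# The new waist lies where Re(q) = 0; for a well-collimated input beam
# (z_R >> f) this is very close to the focal plane of the lens.
waist_position = -q_after_lens.real
```

Chaining matrices (e.g. `free_space(d2) @ thin_lens(f) @ free_space(d1)`) composes whole paraxial systems, which is exactly what the operator algebra automates symbolically.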