Electro-Optical (EO) and Infrared (IR) sensors have been jointly deployed in many surveillance systems. In this
work we study the special characteristics of optical flow in IR imagery, and introduce an optical flow estimation
method using co-registered EO and IR image frames. The basic optical flow calculation is based on the combined
local and global (CLG) method (Bruhn, Weickert and Schnörr, 2002), which seeks solutions that simultaneously satisfy a locally averaged brightness constancy constraint and a global flow smoothness constraint.
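For reference, one standard statement of the 2-D CLG energy functional combines both constraints in a single minimization:

$$E_{\mathrm{CLG}}(w) = \int_{\Omega} \left( w^{\top} J_{\rho}(\nabla_3 f)\, w \;+\; \alpha\, |\nabla w|^2 \right) dx\,dy,$$

where $f$ is the image sequence, $w = (u, v, 1)^{\top}$ is the flow, $\nabla_3 f = (f_x, f_y, f_t)^{\top}$, $J_{\rho}(\nabla_3 f) = K_{\rho} * (\nabla_3 f\, \nabla_3 f^{\top})$ with $K_{\rho}$ a Gaussian of standard deviation $\rho$, and $|\nabla w|^2 = |\nabla u|^2 + |\nabla v|^2$. Setting $\rho \to 0$ recovers the Horn and Schunck functional, while $\alpha \to 0$ recovers the Lucas and Kanade approach.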
While the CLG method can be applied directly to IR image frames, the estimated optical flow fields usually manifest a high level of random motion caused by thermal noise. Furthermore, IR sensors operating at different wavelengths, e.g.
mid-wave infrared (MWIR) and long-wave infrared (LWIR), may yield inconsistent motions in optical flow
estimation. Because of the availability of both EO and IR sensors in many practical scenarios, we propose to
estimate optical flow jointly using both EO and IR image frames. This method is able to take advantage of the
complementary information offered by these two imaging modalities. The joint optical flow calculation fuses the
motion fields from EO and IR images using a cross-regularization mechanism and a non-linear flow fusion model
which aligns the estimated motions based on neighbor activities.
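The abstract does not give the fused energy in closed form; one plausible cross-regularized formulation, offered purely as an illustrative sketch, couples the two per-modality CLG energies through a penalty on the disagreement between the flows:

$$E_{\mathrm{joint}}(w_{\mathrm{EO}}, w_{\mathrm{IR}}) = E_{\mathrm{CLG}}^{\mathrm{EO}}(w_{\mathrm{EO}}) + E_{\mathrm{CLG}}^{\mathrm{IR}}(w_{\mathrm{IR}}) + \beta \int_{\Omega} \Psi\!\left( |w_{\mathrm{EO}} - w_{\mathrm{IR}}|^2 \right) dx\,dy,$$

where $\beta$ weights the cross-regularization and $\Psi$ is a non-linear function, e.g. $\Psi(s^2) = \sqrt{s^2 + \epsilon^2}$, that tolerates genuine inter-modality disagreement while pulling neighboring estimates into alignment. The weight $\beta$, the choice of $\Psi$, and this additive form are assumptions of the sketch, not the authors' exact model.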
Experiments performed on the OTCBVS dataset demonstrated that the proposed approach can effectively eliminate many unimportant motions and significantly reduce erroneous motions, such as those caused by sensor noise.
We consider the problem of placing cameras so that every point on a perimeter, which is not necessarily planar, is covered by at least one camera, while using the smallest number of cameras.
This is accomplished by aligning the edges of the cameras' fields
of view with points on the boundary under surveillance.
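As a concrete illustration of this alignment step (a minimal 2-D sketch; the function name and the planar simplification are ours, not the paper's), a camera at a given position with angular field of view fov_rad can be oriented so that one edge of its field of view passes exactly through a chosen boundary point:

```python
import math

def heading_for_edge_alignment(cam_xy, boundary_xy, fov_rad, left_edge=True):
    """Return the camera heading (radians) that places one edge of a
    field of view of width `fov_rad` exactly on `boundary_xy`.

    With `left_edge=True` the counter-clockwise FOV edge touches the
    point; otherwise the clockwise edge does.  Purely illustrative.
    """
    dx = boundary_xy[0] - cam_xy[0]
    dy = boundary_xy[1] - cam_xy[1]
    bearing = math.atan2(dy, dx)           # direction from camera to point
    half = fov_rad / 2.0
    # Rotate the optical axis so the requested edge lands on the bearing.
    return bearing - half if left_edge else bearing + half

# Example: a 60-degree camera at the origin, boundary point at (1, 1).
h = heading_for_edge_alignment((0.0, 0.0), (1.0, 1.0), math.radians(60))
print(math.degrees(h))  # optical axis at 15 deg; left FOV edge at 45 deg
```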
We also take visibility constraints into account: features such as mountains must not be allowed to come between a camera and a boundary point that would otherwise lie within the camera's field of view. We provide a general algorithm that determines optimal camera placements and orientations.
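The abstract does not spell out the optimization procedure; the following greedy set-cover sketch (our simplification, assuming a sampled boundary, a finite pool of candidate camera poses, and a caller-supplied visibility test) conveys the flavor of choosing the fewest cameras that cover every boundary point:

```python
def greedy_camera_cover(boundary_pts, candidate_poses, covers):
    """Pick a small set of camera poses covering every boundary point.

    `covers(pose, pt)` must return True when `pt` lies inside the pose's
    field of view AND no terrain feature occludes it.  Greedy set cover
    is a stand-in for the paper's algorithm; it is not guaranteed
    minimal, only a logarithmic-factor approximation.
    """
    uncovered = set(range(len(boundary_pts)))
    chosen = []
    while uncovered:
        # Pose that covers the most still-uncovered boundary points.
        best = max(candidate_poses,
                   key=lambda pose: sum(1 for i in uncovered
                                        if covers(pose, boundary_pts[i])))
        newly = {i for i in uncovered if covers(best, boundary_pts[i])}
        if not newly:
            raise ValueError("some boundary points are not coverable")
        chosen.append(best)
        uncovered -= newly
    return chosen
```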
Additionally, we consider double coverings, where every boundary point
is seen by at least two cameras, with selected boundary points
and cameras situated such that the average calibration error between adjacent cameras is minimized. We describe an iterative algorithm
that accomplishes these tasks. We also consider a joint optimization
algorithm, which strikes a balance between minimizing the calibration error and minimizing the number of cameras required to cover the boundary.
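One natural way to formalize such a trade-off (an illustrative sketch; the weight $\lambda$ and the additive form are our assumptions, not the paper's stated objective) is

$$\min_{P} \; |P| + \lambda\, \bar{e}(P) \quad \text{subject to every boundary point being covered by at least two cameras in } P,$$

where $P$ is the chosen set of camera poses, $|P|$ its cardinality, and $\bar{e}(P)$ the average calibration error between adjacent cameras.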