Progressive cerebral atrophy is a physical component of the most common forms of dementia: Alzheimer's disease, vascular dementia, Lewy body disease and fronto-temporal dementia. We propose a phenomenological simulation of atrophy in MR images that provides gold-standard data; the origin and rate of progression of atrophy can be controlled and the resultant remodelling of brain structures is known. We simulate diffuse global atrophic change by generating global volumetric change in a physically realistic biomechanical model of the human brain. Thermal loads are applied to either single or multiple tissue types within the brain to drive tissue expansion or contraction. Mechanical readjustment is modelled using finite element methods (FEM). In this preliminary work we apply these techniques to the MNI BrainWeb phantom to produce new images exhibiting global diffuse atrophy. We compare the applied atrophy with that measured from the images using an established quantitative technique. Early results are encouraging and suggest that the model can be extended and used for validation of atrophy measurement techniques and non-rigid image registration, and for understanding the effect of atrophy on brain shape.
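As a rough illustration of the thermal analogy described above, and not the paper's actual FEM implementation, the following Python sketch converts a target fractional volume change for one tissue class into the uniform thermal load that would produce it under a small-strain, isotropic linear-expansion model. The coefficient alpha, the function name and the numbers are hypothetical.

import numpy as np

# Thermal-expansion analogy: for an isotropic linear expansion coefficient
# alpha, a uniform temperature change dT gives a volumetric strain of
# approximately 3 * alpha * dT (small-strain approximation).
def thermal_load_for_volume_change(target_volume_change, alpha=1e-3):
    """Return the uniform thermal load that yields the requested fractional
    volume change (e.g. -0.01 for 1% contraction) in a single tissue class.
    alpha is an arbitrary, purely illustrative coefficient."""
    return target_volume_change / (3.0 * alpha)

# e.g. a 1% diffuse contraction applied to one tissue type
dT = thermal_load_for_volume_change(-0.01)
print(f"applied thermal load: {dT:.2f} (arbitrary units)")

In the model itself, loads of this kind drive the expansion or contraction, and the FEM solve then computes the mechanical readjustment of the surrounding tissue.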
KEYWORDS: Data modeling, Machine vision, 3D modeling, Error analysis, Statistical analysis, Statistical modeling, Image processing, Visual process modeling, Data processing, Binary data
This paper describes a design methodology for constructing machine vision systems. Central to this is the use of empirical design techniques, and in particular quantitative statistics. The approach treats the construction and evaluation of systems as a single process and is based upon what could be regarded as a set of self-evident propositions:
(1) Vision algorithms must deliver information allowing practical decisions regarding interpretation of an image.
(2) Probability is the only self-consistent computational framework for data analysis, and so must form the basis of all algorithmic analysis processes.
(3) The most effective and robust algorithms will be those that match most closely the statistical properties of the data.
(4) A statistically based algorithm which takes correct account of all available data will yield an optimal result, where "optimal" is unambiguously defined by the statistical specification of the problem (a minimal numerical illustration follows this list).
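As a toy illustration of propositions (2) and (4), and not part of the original paper: under an assumed Gaussian noise model, the maximum-likelihood fit of a line coincides with ordinary least squares, so "optimal" is fixed entirely by the statistical specification of the problem. All names and values below are illustrative.

import numpy as np
from scipy.optimize import minimize

# Synthetic data: a line corrupted by Gaussian noise of known sigma.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

def neg_log_likelihood(params, sigma=0.1):
    """Gaussian negative log-likelihood of a line fit, up to a constant."""
    a, b = params
    residuals = y - (a * x + b)
    return 0.5 * np.sum((residuals / sigma) ** 2)

ml_fit = minimize(neg_log_likelihood, x0=[0.0, 0.0]).x  # maximum likelihood
ls_fit = np.polyfit(x, y, deg=1)                         # ordinary least squares
print(ml_fit, ls_fit)  # the two estimates agree to numerical precision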
Machine vision research has not emphasized the need for (or the necessary methods of) algorithm characterization, which is unfortunate, as the subject cannot advance without a sound empirical base. In general this problem can be attributed to one of two factors: a poor understanding of the role of assumptions and statistics, and a lack of appreciation of what is to be done with the generated data.
The methodology described here focuses on identifying the statistical characteristics of the data and matching these to the assumptions of the underlying techniques. The methodology has been developed from more than a decade of vision design and testing, which has culminated in the construction of the TINA open source image analysis/machine vision system [http://www.tina-vision.net].
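A minimal sketch, assuming a Gaussian noise model, of the kind of empirical check this methodology advocates: testing whether an algorithm's residuals actually follow the distribution its derivation assumes. The function, data and threshold below are hypothetical and are not drawn from TINA.

import numpy as np
from scipy import stats

def residuals_match_gaussian(residuals, alpha=0.05):
    """Return True if a normality test does not reject the assumed
    zero-mean Gaussian noise model at significance level alpha."""
    _, p_value = stats.normaltest(residuals)
    return p_value > alpha

rng = np.random.default_rng(1)
good = rng.normal(scale=1.0, size=500)   # residuals matching the assumption
bad = rng.standard_cauchy(size=500)      # heavy-tailed residuals violating it
print(residuals_match_gaussian(good), residuals_match_gaussian(bad))

When such a check fails, the mismatch points either to a flawed noise model or to an algorithm whose assumptions do not hold for the data at hand.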