We evaluate the Pre-Whitening Matched Filter (PWMF), "Eye-Filtered" Non-Pre-Whitening (NPWE), and Sparse-Channelized Difference-of-Gaussian (SDOG) models for predictive performance, and we compare various training and testing regimens. These include "training" by using reported values from the literature, training and testing on the same set of experimental conditions, and training and testing on different sets of experimental conditions. In this latter category, we use both a leave-one-condition-out approach and a leave-one-factor-out strategy, in which all conditions with a given factor level are withheld for testing. Our approach may be considered a fixed-reader approach, since we use all available readers for both training and testing.
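The two cross-validation regimens can be sketched as condition-set splits. This is a minimal illustration, not the study's actual code; the factor names and levels are hypothetical placeholders.

```python
import itertools

# Hypothetical experimental conditions: each condition is one combination of
# factor levels. These factors are illustrative, not taken from the study.
conditions = [
    {"apodization": apod, "target_size": size}
    for apod, size in itertools.product(["none", "hann"], ["small", "large"])
]

def leave_one_condition_out(conditions, held_out):
    """Withhold a single condition for testing; train on all the others."""
    train = [c for c in conditions if c != held_out]
    return train, [held_out]

def leave_one_factor_out(conditions, factor, level):
    """Withhold every condition with the given factor level for testing."""
    train = [c for c in conditions if c[factor] != level]
    test = [c for c in conditions if c[factor] == level]
    return train, test

# Example: withhold all conditions at one apodization level.
train, test = leave_one_factor_out(conditions, "apodization", "hann")
```

Under the leave-one-factor-out split, the model is never fit on any condition sharing the withheld factor level, which probes generalization across that factor rather than interpolation within it.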
Our results show that training models improves predictive accuracy in these tasks, with predictive errors dropping by a factor of two or more in absolute deviation. However, the fitted models do not fully capture the effects of apodization and other factors in these tasks.
In the 3D tasks, the image display we use allows subjects to freely scroll through a volumetric image, and a localization response is made through a mouse-click on the image. The search region has a relatively modest size (approx. 8.8° visual angle). Localization responses are considered correct if they are close to the target center (within 6 voxels). The classification image methodology uses noise fields from the incorrect localizations to build an estimate of the weights used by the observer to perform the task. The basic idea is that incorrect localizations occur in regions of the image where the noise field matches the weighting profile, thereby eliciting a strong internal response.
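The core of the classification-image estimate is an average of noise patches centered at the incorrect localization sites. The sketch below illustrates this in 2D with synthetic data; the function name, patch size, and data are assumptions for illustration, not the study's implementation.

```python
import numpy as np

def classification_image(noise_fields, click_locs, half_width):
    """Estimate observer weights by averaging noise patches at error sites.

    noise_fields: list of 2D noise arrays, one per incorrect-localization trial
    click_locs:   list of (row, col) localization responses
    half_width:   half-size of the square patch extracted around each click
    """
    size = 2 * half_width + 1
    patches = []
    for field, (r, c) in zip(noise_fields, click_locs):
        patch = field[r - half_width : r + half_width + 1,
                      c - half_width : c + half_width + 1]
        if patch.shape == (size, size):  # skip clicks too close to the border
            patches.append(patch)
    # Incorrect localizations fall where the noise matches the observer's
    # weighting profile, so their average approximates that profile.
    return np.mean(patches, axis=0)

# Synthetic example: 20 incorrect-trial noise fields with random click sites.
rng = np.random.default_rng(0)
noise_fields = [rng.standard_normal((64, 64)) for _ in range(20)]
click_locs = [(int(rng.integers(10, 54)), int(rng.integers(10, 54)))
              for _ in range(20)]
ci = classification_image(noise_fields, click_locs, half_width=5)
```

With real data, the averaged patches would show structure resembling the observer's weighting profile; with the white-noise placeholder above, the average is simply attenuated noise.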
The efficiency results indicate differences between 2D and 3D search tasks, with lower efficiency for the large target in the 3D task. The classification images suggest that this finding can be explained by a lack of spatial integration across slices.