Scientific data files have grown steadily in size over the past decades. In the medical field, for instance,
magnetic resonance imaging and computed tomography can yield image volumes of several gigabytes.
While secondary storage (hard disks) keeps increasing in capacity and its cost per megabyte keeps falling,
primary memory (RAM) can still be a bottleneck when processing such large amounts of data. This is
a problem for image processing algorithms, which often need to keep in memory both the original image and a copy
in which to store the results. Operating systems optimize memory usage with memory paging and enhanced I/O
operations. Although image processing algorithms usually operate on the neighbourhood of a pixel, they follow
pre-determined paths through the image and might not benefit from the paging strategies offered by
the operating system, which are general purpose and unidimensional. With these principles of locality and the
pre-determined traversal paths in mind, we developed an algorithm that uses multi-threaded pre-fetching of data
to build a disk cache in memory. Using the concept of a window that slides over the data, we predict the next
block of memory to be read according to the path followed by the algorithm and asynchronously pre-fetch that
block before it is actually requested. While other out-of-core techniques reorganize the original file to
optimize reading, we work directly on the original file. We demonstrate our approach on several applications,
each with its own traversal strategy and sliding window structure.
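
As an illustration only, and not the implementation described above, the following C++ sketch shows one way such a sliding-window pre-fetcher can be organized: while the caller processes the current block, a background task reads the block that the traversal path will request next. The class name, the block granularity, and the single-block look-ahead window are assumptions made for brevity.

    // Illustrative sketch only: a background task pre-fetches the next block of a
    // file while the caller works on the current one. Block size and the
    // single-block look-ahead window are simplifying assumptions.
    #include <cstdio>
    #include <future>
    #include <mutex>
    #include <stdexcept>
    #include <string>
    #include <vector>

    class BlockPrefetcher {
    public:
        BlockPrefetcher(const std::string& path, std::size_t block_size)
            : file_(std::fopen(path.c_str(), "rb")), block_size_(block_size) {
            if (!file_) throw std::runtime_error("cannot open " + path);
        }
        ~BlockPrefetcher() { if (file_) std::fclose(file_); }

        // Return block `index`, then schedule block `next_index` asynchronously so
        // that it is (ideally) already in memory when the traversal reaches it.
        std::vector<char> get(std::size_t index, std::size_t next_index) {
            std::vector<char> block;
            if (pending_.valid() && pending_index_ == index)
                block = pending_.get();     // pre-fetch hit: data already read
            else
                block = read_block(index);  // miss: fall back to a blocking read
            pending_index_ = next_index;
            pending_ = std::async(std::launch::async,
                                  [this, next_index] { return read_block(next_index); });
            return block;
        }

    private:
        std::vector<char> read_block(std::size_t index) {
            std::lock_guard<std::mutex> lock(io_mutex_);  // serialize file access
            std::vector<char> buf(block_size_);
            // A real implementation would use 64-bit offsets (fseeko/_fseeki64)
            // for multi-gigabyte files; plain fseek keeps the sketch short.
            std::fseek(file_, static_cast<long>(index * block_size_), SEEK_SET);
            buf.resize(std::fread(buf.data(), 1, block_size_, file_));  // shrink at EOF
            return buf;
        }

        std::FILE* file_;
        std::size_t block_size_;
        std::mutex io_mutex_;
        std::future<std::vector<char>> pending_;
        std::size_t pending_index_ = 0;
    };

In this sketch, a row-major traversal would simply call get(i, i + 1) for successive block indices; other traversal strategies only change the next_index supplied by the caller, which is how the prediction step adapts to each application's path and sliding window structure.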