Utilization of human participants as "soft sensors" is becoming increasingly important for gathering information on a wide range of phenomena and activities, including natural and man-made disasters, environmental changes over time, crime prevention, and other roles of the "citizen scientist." The ubiquity of advanced mobile devices is facilitating the role of
humans as "hybrid sensor platforms", allowing them to gather data (e.g. video, still images, GPS coordinates), annotate
it based on their intuitive human understanding, and upload it using existing infrastructure and social networks.
However, this new paradigm presents many challenges related to source characterization, effective tasking, and
utilization of massive quantities of physical sensor, human-based, and hybrid hard/soft data in a manner that facilitates
decision making instead of simply amplifying information overload.
In the Joint Directors of Laboratories (JDL) data fusion process model, "level 4" fusion is a meta-process that attempts
to improve the performance of the entire fusion system through effective source utilization. While there are well-defined approaches for tasking and categorizing physical sensors, these methods fall short when applied to a hybrid group of physical sensors and human observers. Whereas physical sensor characterization can rely on statistical models of performance (e.g. accuracy, reliability, specificity) under given conditions, "soft" sensors introduce the further challenges of characterizing human performance, tasking without inducing bias, and effectively balancing the strengths and weaknesses of human and physical sensors. This paper addresses the challenges of the evolving
human-centric fusion paradigm and presents cognitive, perceptual, and other human factors that help to understand,
categorize, and augment the roles and capabilities of humans as observers in hybrid systems.
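To make the source-characterization idea concrete, the following is a minimal sketch (not from the paper) of one simple way to combine "hard" physical-sensor readings with "soft" human reports of the same quantity: each report is weighted by an assumed per-source reliability score. The `Report` and `fuse` names, and the reliability values, are illustrative assumptions, not an interface defined in this work.

```python
# Hypothetical sketch: reliability-weighted fusion of hard (physical sensor)
# and soft (human observer) reports of the same scalar quantity.

from dataclasses import dataclass

@dataclass
class Report:
    value: float        # observed value (e.g. an estimated crowd size)
    reliability: float  # assumed source reliability in [0, 1]

def fuse(reports):
    """Return the reliability-weighted average of a list of reports."""
    total_weight = sum(r.reliability for r in reports)
    if total_weight == 0:
        raise ValueError("no usable reports")
    return sum(r.value * r.reliability for r in reports) / total_weight

# One well-calibrated physical sensor alongside two human observers:
hard = Report(value=100.0, reliability=0.9)
soft1 = Report(value=120.0, reliability=0.5)
soft2 = Report(value=80.0, reliability=0.5)
print(fuse([hard, soft1, soft2]))  # -> 100.0
```

In practice, as the paper notes, obtaining those reliability weights for human observers is itself a central open problem: human performance varies with cognitive and perceptual factors in ways that simple static scores like the ones above cannot capture.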