Chen and Spence previously characterized the time-courses and categorical specificity of cross-modal semantic congruency effects for pictures and printed words. To explore whether the timing and semantic congruency of auditory cues can affect human visual processing unconsciously, we developed a Python-based audio-visual semantic congruency program to study the impact of auditory cues on the breakthrough time of printed words under the two-alternative forced choice continuous flash suppression (2AFC-CFS) paradigm. Specifically, auditory cues were presented at five stimulus onset asynchronies (SOAs: -1000 ms, -750 ms, -500 ms, -250 ms, and 0 ms) with respect to the visual targets. In addition, there were five match types between the auditory cues and printed words: congruent, incongruent, correlated, noisy, and no-sound. The results show main effects of SOA and congruency in the unconscious condition. Moreover, spoken words produced greater facilitation than naturalistic-sound auditory stimuli. When leading by 500 ms or more, spoken words induced a slowly emerging congruency effect in the congruent condition. In all cases, however, auditory stimuli sped up recognition compared with no sound. These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system.
Community search is a widely used technique in graph data mining that aims to find communities containing a given query node. While existing works have mainly focused on homogeneous information networks, most real-world networks are heterogeneous. To address this, this paper proposes a weighted k-core community search method designed for heterogeneous information networks. First, the influence of meta-path-based association weights between nodes on community search results is taken into account, and a weighted k-core community model, (k, P)-Wcore, is established, thereby improving the accuracy of community search. Subsequently, to improve search efficiency, an optimization algorithm, OptWcore, which prunes the graph-traversal search space, is designed. This algorithm effectively reduces redundant computation and the depth of path search. Finally, experiments conducted on four real-world heterogeneous information network datasets demonstrate the effectiveness and efficiency of the proposed method.
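The (k, P)-Wcore model is only described at a high level in the abstract. As a loose illustration of the weighted k-core idea it builds on, here is a minimal peeling sketch, assuming edge weights already derived from meta-path association (the function name, data layout, and threshold semantics are illustrative, not the paper's actual model or the OptWcore pruning):

```python
from collections import defaultdict

def weighted_k_core(edges, k, query):
    """Illustrative weighted k-core community search.

    edges: iterable of (u, v, w) with w a meta-path association weight
           (hypothetical input format, not the paper's).
    Peels nodes whose total incident weight falls below k, then returns
    the surviving connected component containing the query node, or
    None if the query node was peeled away.
    """
    adj = defaultdict(dict)
    for u, v, w in edges:
        adj[u][v] = w
        adj[v][u] = w
    # Iteratively remove nodes whose weighted degree is below k;
    # each removal can drop a neighbor below k, so repeat to a fixpoint.
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if sum(adj[node].values()) < k:
                for nbr in adj[node]:
                    del adj[nbr][node]
                del adj[node]
                changed = True
    if query not in adj:
        return None
    # Return the component of the query node within the remaining core.
    seen, stack = {query}, [query]
    while stack:
        for nbr in adj[stack.pop()]:
            if nbr not in seen:
                seen.add(nbr)
                stack.append(nbr)
    return seen
```

For example, with a triangle of weight-2 edges plus a weight-1 pendant edge, querying a triangle node at k=4 returns the triangle, while the pendant node is peeled away. A real implementation would additionally restrict edges to those induced by the meta-path set P, which this sketch omits.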
It is well known that animals hold special significance for humans: biologically, humans have coexisted with animals for a very long time, and this relationship dates back to our earliest ancestors. This long history also appears to have given the human visual system a more specialized processing mechanism for animals than for other targets. To test whether this mechanism is specific to animals, we performed an experiment using a two-alternative forced choice (2AFC) task. Since scene context affects object recognition, we compared animal stimuli against non-animal stimuli, both presented without backgrounds. We randomly intermixed animal and non-animal stimulus images to form the stimulus sets (50 images each), all without background. Our results showed that subjects responded faster to animal images than to non-animal images (524 ms vs. 547 ms). Subjects' accuracy was also higher for animal images than for non-animal images (96.1% vs. 91.6%).
Humans can quickly and efficiently extract information from a complex natural scene; rapid detection of animals is one such example, being both fast and accurate. Both animals and humans exhibit gender differences, and both appear in our everyday environment. We therefore used a two-alternative forced-choice (2AFC) paradigm to investigate gender differences in the detection of these two target types. In our experiment, we balanced the relevant confounding factors and applied histogram equalization to the images. We analyzed subjects' reaction times to stimuli of each target gender (male or female). We report two main findings. First, when target type (human or animal) was not considered, subjects responded faster to male targets than to female targets. Second, when target type was considered, gender differences were significant only for animal targets.