KEYWORDS: Education and training, Data modeling, Random forests, Transformers, Process modeling, Machine learning, Parallel processing, Feature selection, Detection and tracking algorithms, Performance modeling
Browser fingerprinting has been widely used as a user-tracking technique in recent years. As a long-term tracking technique, it requires not only obtaining unique browser fingerprints but also linking fingerprints from the same browser instance, because fingerprints change rapidly and frequently as browsers evolve. To link evolving browser fingerprints more efficiently, in this paper we propose a browser fingerprint linking method based on the Transformer encoder. The Transformer encoder uses an attention mechanism to focus on relevant parts of the input sequence, enabling it to capture complex connections and interactions within the data efficiently. To make the most of the Transformer encoder's parallel processing, we combine multiple fingerprint comparison vectors into a single input sequence to train the model. We conduct extensive experiments on a public dataset to evaluate the proposed model. The results show that our model outperforms several existing models, demonstrating the effectiveness of the Transformer encoder for linking browser fingerprints.
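The core operation behind the encoder's parallelism is scaled dot-product self-attention applied to all comparison vectors at once. The following is a minimal NumPy sketch of that step, not the authors' implementation: the vector dimension, sequence length, and the use of untrained identity projections (instead of learned Q/K/V weights) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of
    fingerprint-comparison vectors X with shape (seq_len, d).
    A trained Transformer encoder would apply learned Q/K/V
    projections first; they are omitted here for clarity."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)        # pairwise attention scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ X                   # context-mixed representations

# Toy batch: 4 comparison vectors of dimension 8, attended to in parallel
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = self_attention(X)
print(out.shape)  # (4, 8): one context-aware vector per input vector
```

Because every row of `scores` is computed in one matrix product, all comparison vectors in the input sequence are processed simultaneously, which is the parallelism the abstract exploits.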
Traditional rumor detection methods that focus only on text content have achieved certain results. However, with the rapid development of social platforms, posts combining images and text now account for a large share of content, and purely text-based methods cannot fully exploit image information for rumor detection. To address this scenario, a rumor detection model that integrates multi-modal features is proposed. First, text features and visual features, along with their hidden states, are extracted using pre-trained deep learning models, and preliminary fusion features are obtained by combining the text and image hidden states through an attention mechanism. Next, the text features, preliminary fusion features, and social features are concatenated, and likewise the image features, preliminary fusion features, and social features, yielding two final fusion features. These two features are fed into separate fully connected layers to produce their respective predictions. Finally, the two predictions are integrated to obtain the final detection result. Experimental results show that the proposed model is effective in detecting multimodal rumors.
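The fusion-and-prediction pipeline described above can be sketched as follows. This is an illustrative NumPy outline, not the paper's model: the feature dimensions and random classifier weights are assumptions, and the attention-based preliminary fusion is simplified to an equal-weight mix of the text and image features.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_and_predict(text_f, img_f, social_f, W_t, W_i):
    """Two-branch late fusion over pre-extracted features.
    Assumes text_f and img_f share the same dimension so they
    can be mixed; the attention step is simplified to averaging."""
    # preliminary fusion feature (attention simplified for the sketch)
    prelim = 0.5 * text_f + 0.5 * img_f
    # branch 1: text + preliminary fusion + social features
    branch_t = np.concatenate([text_f, prelim, social_f])
    # branch 2: image + preliminary fusion + social features
    branch_i = np.concatenate([img_f, prelim, social_f])
    # separate fully connected layers produce per-branch predictions
    p_t = softmax(W_t @ branch_t)
    p_i = softmax(W_i @ branch_i)
    # integrate the two predictions into the final result
    return 0.5 * (p_t + p_i)

# Toy run: 4-dim text/image features, 2-dim social features, 2 classes
rng = np.random.default_rng(1)
text_f, img_f = rng.normal(size=4), rng.normal(size=4)
social_f = rng.normal(size=2)
W_t, W_i = rng.normal(size=(2, 10)), rng.normal(size=(2, 10))
pred = fuse_and_predict(text_f, img_f, social_f, W_t, W_i)
print(pred)  # final two-class probability distribution
```

Averaging the two branch outputs is one simple way to "integrate" the predictions; the paper's actual combination rule may differ.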