
Dissertation Defense - Anu Shrestha

March 8, 2022 @ 10:00 am MST

Characterization and Mitigation of False Information on the Web

Anu Shrestha – Computer Science

Abstract:

Social media and Web sources have made information available, accessible, and shareable anytime and anywhere, nearly without friction. Because users on these platforms are both creators and consumers of information, that information may be truthful, falsified, or merely the opinion of the writer. In any case, it has the power to affect an individual's decisions, a society's beliefs and activities, and the economy of an entire country. Thus, the opportunity for public interaction, i.e., sharing information and opinions, that these platforms provide comes with great responsibility: combating the effects of false information that is ubiquitous across the Web and social media. The main goal of this dissertation is therefore to proactively combat false information through three objectives: first, analyzing why false information succeeds; second, recognizing and quantifying its impact on information systems; and third, developing novel ways of identifying false information and the actors responsible for creating and spreading it. Achieving these three objectives will enhance our understanding of false information and help mitigate this phenomenon.

Despite several studies on identifying false information and mitigating its spread, it persists on online platforms and has become a more serious problem than ever before. Considering this problem, we study people's ability to identify false information on social media, the factors they rely on when discerning false information, and their reasoning behind the decision to share it. We also compare human performance to a machine learning-based approach for detecting false information and find that people are worse at identifying false information than automated detection.
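To make the human-versus-machine comparison concrete, the following is a minimal sketch of an automated false-information detector of the kind such a comparison might use: TF-IDF features with logistic regression over a tiny hypothetical set of labeled claims. It illustrates the idea only and is not the dissertation's actual model or data.

# Minimal sketch of an automated false-information detector (not the
# dissertation's actual model): TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled claims: 1 = false information, 0 = truthful.
claims = [
    "Miracle cure eliminates the virus overnight, doctors stunned",
    "City council approves new budget for road repairs",
    "Secret memo proves the election results were fabricated",
    "Local university announces fall semester start date",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(claims, labels)

# Predicted probability that a new post is false information.
print(model.predict_proba(["Leaked report reveals shocking hidden cure"])[0][1])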
In addition, we focus on understanding who the potential victims of such false information are and on quantifying its impact on them; a simple illustration of this setting appears in the sketch below. To this end, we consider the task of quantifying the effect of shilling attacks on recommender systems. Our analysis shows that, in the presence of malicious users, recommender systems are not uniformly robust for all types of benign users. Users on opinion-based platforms fall into categories beyond a binary class depending on how they contribute, and their input can range from highly informative to noisy or even malicious. Similarly, the veracity of news articles in the news ecosystem may depend on the quality of their content, characterized by the amount of false information present (e.g., mostly true, mostly false). Therefore, we argue that existing works that identify false information and malicious users under classical binary assumptions are not sufficient.
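As a minimal sketch of a shilling (profile-injection) attack, the toy example below shows how injecting fake profiles shifts the prediction of an item-average recommender. The recommender, data, and attack size are all illustrative assumptions; the dissertation's robustness analysis across benign-user types is far more detailed.

# Toy shilling-attack simulation against an item-average recommender
# (illustrative only; not the dissertation's experimental setup).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ratings matrix: 50 benign users x 10 items, ratings 1-5.
ratings = rng.integers(1, 6, size=(50, 10)).astype(float)
target_item = 3

def predicted_rating(matrix, item):
    """Item-average predictor: mean rating of the item across all profiles."""
    return matrix[:, item].mean()

before = predicted_rating(ratings, target_item)

# Push attack: inject 20 fake profiles that give the target item the maximum
# rating and fill the remaining items with the global average rating.
fake = np.full((20, 10), ratings.mean())
fake[:, target_item] = 5.0
attacked = np.vstack([ratings, fake])

after = predicted_rating(attacked, target_item)
print(f"prediction before attack: {before:.2f}, after attack: {after:.2f}")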
To tackle these challenges, we analyze the characteristics of malicious actors and false information through the lens of behavioral attributes, temporal activity, network-based attributes, psycho-linguistic properties, and multi-modality, including associated images. We build an unsupervised deep recurrent neural network-based model to distinguish trustworthy reviewers from fraudulent and uninformative or unreliable reviewers in an opinion-based system (a simplified sketch of this idea follows). We extend this multi-class view to the problem of inferring trustworthiness degrees of entities such as social media users, news publishers, and individual news pieces in the news ecosystem using a graph neural network-based approach.
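The sketch below illustrates one way an unsupervised recurrent model can score reviewers: an LSTM autoencoder over per-review feature sequences (e.g., rating deviation, review length, posting gap), where sequences that reconstruct poorly are flagged as suspicious. The architecture, feature dimensions, and data are assumptions for illustration, not the dissertation's exact model.

# Minimal LSTM autoencoder sketch for unsupervised reviewer anomaly scoring
# (illustrative assumptions only).
import torch
import torch.nn as nn

class ReviewSeqAutoencoder(nn.Module):
    def __init__(self, n_features=4, hidden=16):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        # Encode the whole review sequence into the final hidden state.
        _, (h, _) = self.encoder(x)
        # Repeat the summary vector for each time step and decode.
        rep = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(rep)
        return self.out(dec)

# Hypothetical batch: 8 reviewers, 5 reviews each, 4 features per review.
x = torch.randn(8, 5, 4)
model = ReviewSeqAutoencoder()
recon = model(x)
# Per-reviewer reconstruction error; high values suggest anomalous behavior.
errors = ((recon - x) ** 2).mean(dim=(1, 2))
print(errors)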

Overall, this dissertation presents our in-depth analysis of malicious entities, their impact on the information ecosystem, and the models we build to accurately detect different malicious entities, such as fraudulent reviewers, fake news, and fake news spreaders, in real-world scenarios. We show that each of our methods outperforms existing state-of-the-art methods at detecting false information and malicious actors in real-world opinion-based and fact-based systems.

Committee:

Francesca Spezzano, Ph.D., Computer Science (Chair)

Edoardo Serra, Ph.D., Computer Science

Maria (Sole) Pera, Ph.D., Computer Science