Authors:
(1) Yigit Ege Bayiz, Electrical and Computer Engineering The University of Texas at Austin Austin, Texas, USA (Email: [email protected]);
(2) Ufuk Topcu, Aerospace Engineering and Engineering Mechanics The University of Texas at Austin Austin, Texas, USA (Email: [email protected]).
Abstract—The growing reliance on social media for news consumption necessitates effective countermeasures to mitigate the rapid spread of misinformation. Prebunking, a proactive method that arms users with accurate information before they come across false content, has garnered support from journalism and psychology experts. We formalize the problem of optimal prebunking as optimizing the timing of delivering accurate information, ensuring users encounter it before receiving misinformation while minimizing the disruption to user experience. Utilizing a susceptible-infected epidemiological process to model the propagation of misinformation, we frame optimal prebunking as a policy synthesis problem with safety constraints. We then propose a policy that approximates the optimal solution to a relaxed problem. The experiments show that this policy cuts the user experience cost of repeated information delivery in half, compared to delivering accurate information immediately after identifying a misinformation propagation.
Social media have become an integral part of modern communication, with more than 70% of adults in the U.S. using at least one social media service [1]. At least 60% of these users are on social networking platforms, a type of social media service on which users build social networks and communicate with others who share similar interests, opinions, or backgrounds. These platforms allow their users to easily share information and opinions.
However, the ease of information sharing on social networking platforms has also led to the rise of misinformation and fake news. Misinformation can spread faster and more broadly than accurate information, causing significant harm to public discourse. The consequences of misinformation range from confusion to substantial damage to public health [2] and political manipulation [3]. The widespread adoption of social networking platforms, and the growing reliance on them as sources of news, exacerbates this issue: significant portions of the population come to believe unsubstantiated or provably incorrect claims, and correcting these false beliefs later through debunking is difficult.
Prebunking has emerged as a promising solution for mitigating the spread of misinformation. Unlike debunking, which aims to correct false information after its spread, prebunking exposes people to factual information before they encounter false claims. The idea is to inoculate the public against misinformation, akin to a vaccine [4]. Studies indicate that this preemptive approach has the potential to reduce the spread of misinformation [5], without relying on media censorship.
The effectiveness of prebunking depends on the timing of the delivery of the factual information. For prebunking to work as intended, the user must see the factual information before seeing the related misinformation. Furthermore, its effectiveness depends on the time gap between the delivery of the factual information and the arrival of the misinformation. Research characterizing this relation has been sparse, leaving a significant gap in understanding the optimal timing for successful prebunking interventions [6]. Another consideration is the effect of prebunking on the user experience. Delivering factual information too frequently can push users to other social networking platforms that do not employ prebunking, rendering prebunking ineffective.
We formalize optimal prebunking as a mathematical optimization problem over the timing of factual information delivery. The primary objective is to ensure that users encounter the factual information before being exposed to the corresponding misinformation while minimizing any disruption to the user experience. We model the effect of factual information on user experience with a leaky bucket cost, that is, an exponentially decaying cumulative count of the prebunking-related factual messages the user receives. Minimizing this cost discourages repeated and frequent delivery of factual information, limiting the impact of prebunking on user experience.
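The leaky bucket idea above can be sketched in a few lines of code. This is a minimal illustration, not the paper's formulation: the decay rate, the binary delivery schedules, and the use of the peak bucket level as the cost are all illustrative assumptions.

```python
def leaky_bucket_cost(deliveries, decay=0.9):
    """Exponentially decaying cumulative count of factual-information
    deliveries; here the peak bucket level is reported as the cost.
    `deliveries[t]` is 1 if a factual message is delivered at step t.
    `decay` is an assumed exponential decay rate, not from the paper."""
    bucket = 0.0
    peak = 0.0
    for delivered in deliveries:
        bucket = decay * bucket + (1.0 if delivered else 0.0)  # leak, then fill
        peak = max(peak, bucket)
    return peak

# Bursty deliveries accumulate a higher bucket level than spaced-out ones,
# so the cost penalizes repeated, frequent delivery.
burst = leaky_bucket_cost([1, 1, 1, 0, 0, 0])
spaced = leaky_bucket_cost([1, 0, 0, 1, 0, 1])
```

Because the bucket drains between deliveries, spreading the same number of messages over time yields a lower cost than sending them back to back, which is exactly the behavior the formulation is meant to encourage.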
We use the susceptible-infected (SI) epidemiological model to characterize misinformation propagation and frame optimal prebunking as a policy synthesis problem with constraints to ensure the user receives the factual information before the corresponding misinformation. Building on this framework, we propose a locally optimal policy that attempts to minimize the leaky bucket cost, while ensuring that the user receives factual information prior to misinformation that is propagating over the social network.
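The SI propagation model referenced above can be sketched as a discrete-time simulation on a graph. This is a generic illustration of the SI dynamics, assuming a per-edge infection probability `beta` per step; the paper's exact process parameters are not reproduced here.

```python
import random

def simulate_si(neighbors, seed, beta=0.3, steps=10, rng=None):
    """Discrete-time susceptible-infected (SI) process on a graph.
    `neighbors` maps each node to its adjacent nodes; `seed` is the
    initially infected node; `beta` is an assumed per-edge infection
    probability per step. In the SI model there is no recovery, so
    the infected set only grows."""
    rng = rng or random.Random(0)
    infected = {seed}
    for _ in range(steps):
        newly = set()
        for u in infected:
            for v in neighbors[u]:
                if v not in infected and rng.random() < beta:
                    newly.add(v)
        infected |= newly
    return infected

# Line graph 0-1-2-3: misinformation spreads outward from node 0.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
reached = simulate_si(g, seed=0)
```

In the prebunking setting, the safety constraint amounts to delivering the factual information to a user before this process reaches them.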
Our contributions are as follows:
• We present the problem of optimally delivering pre-generated factual information for the purpose of prebunking as a policy synthesis problem.
• We analyze two baseline methods that provide guaranteed delivery of factual information before its corresponding misinformation.
• We provide a third method that approximates the optimal solution to a relaxed problem and provides the same guarantees as the baseline methods at a lower cost.
This paper is available on arxiv under CC 4.0 license.