
Center for Design | Twitter
Research on the role of User Interface design in misinformation
The Center for Design is Northeastern University’s platform for interdisciplinary design research.
As a Product Designer and HCI Researcher, I contributed early design thinking to an early subset of what eventually became X’s Community Notes (formerly known as Twitter’s Birdwatch).

Timeline
February - August 2022

Platforms
Web, Apple iOS

Tools
Figma, UserZoom, LaTeX, Apple Xcode, Google Workspace, Microsoft 365

Role
UX Research, UI Prototyping, Literature Review, User Testing, Research Writing
My significant contribution was:

HCI Research
Conducted research on content consumption, interface design, and industry approaches to mitigating misinformation, and identified gaps and opportunities for interventions.

UI Prototyping
Developed 5 prototypes incorporating visual cues (colors, symbols, percentages) to highlight misinformation in tweets using alerts, community groups, and more.

User Testing
Recruited participants, conducted user tests, collected qualitative and quantitative evaluations. Used thematic analysis to assess critical thinking patterns.

HCI Research
Goal
Design and evaluate interventions on Twitter that foster critical reflection through visual cues and community-based reporting. The project aimed to understand the impact of visual elements on users’ news consumption and the prevention of engagement with fake news.
Abstract
Social media’s role in the rapid spread of misinformation has grown exponentially. This project investigated how interface design influences the spread of misinformation by affecting users’ consumption behavior.
Visual Principles
I analyzed the role of various visual attributes (size, symbols, colors, etc.) and their impact on user behavior, concluding with design implications and recommendations for social media interface interventions.
Research Material
The literature review references spanned academic studies on human behavior, misinformation impact, cognitive reflection, public trust, algorithm monitoring, design influence, interaction psychology, and content verification practices.
Ideation
Building on research that showed how visualizing filter bubbles can increase transparency and influence consumption behavior, we explored design directions focusing on transparency, third-party fact-checking, collaboration, and article tracking. We tested 5 prototypes to evaluate Twitter users’ reactions to visual prompts and the potential impact of community-based reporting and third-party fact-checkers.

UI Prototyping
Variable 0
Baseline test with the current Twitter interface, including user-verification badges and retweet/quote retweet ratios.
Variable 1.1
Misinformation alerts submitted by individual Twitter users, displayed dynamically. Included traffic-light colored alerts: green for facts, yellow for not-entirely-accurate claims, and red for misleading claims.
Variable 1.2
Similar to Variable 1.1 but with alerts in a fixed order. Aimed at testing ease of navigation and information consumption.
Variable 2
Misinformation alerts submitted by group lists on Twitter, with variations in alert size, percentages, and visual prominence.
Variable 3
Misinformation alerts assessed by third-party fact-checkers. Compared community-based reporting with third-party verification.
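The community-reporting logic behind Variables 1.1 and 1.2 can be sketched in a short model: individual accuracy ratings are aggregated into per-level alerts, each with its traffic-light color and the percentage of raters behind it. This is an illustrative TypeScript sketch, not the actual prototype code; the names (`aggregateAlerts`, `AlertLevel`) and the aggregation rule are assumptions.

```typescript
// Accuracy levels shown as traffic-light alerts in the prototypes (assumed model).
type AlertLevel = "fact" | "not-entirely-accurate" | "misleading";

interface CommunityRating {
  level: AlertLevel; // one user's assessment of the tweet
}

interface Alert {
  level: AlertLevel;
  color: string; // traffic-light color shown on the alert
  share: number; // percentage of raters who chose this level
}

// Aggregate individual ratings into alerts, most common level first.
function aggregateAlerts(ratings: CommunityRating[]): Alert[] {
  const colors: Record<AlertLevel, string> = {
    "fact": "green",
    "not-entirely-accurate": "yellow",
    "misleading": "red",
  };
  const counts = new Map<AlertLevel, number>();
  for (const r of ratings) {
    counts.set(r.level, (counts.get(r.level) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // highest count first
    .map(([level, n]) => ({
      level,
      color: colors[level],
      share: Math.round((n / ratings.length) * 100),
    }));
}
```

For example, five ratings split three "misleading" and two "fact" would render a red alert at 60% above a green alert at 40%, matching the percentage cues tested in Variable 2.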

User Testing
User Tests
To assess the effectiveness of these variables, the Project Manager and I moderated user tests involving think-aloud protocols, surveys, and clickthrough prototypes. Participants (ages 20–30) were regular Twitter users and digital news consumers.
With an 80% success rate, participants showed a strong preference for Variable 1 and its traffic-light colored icons, which helped them reflect on content accuracy. Percentages and numbers attracted attention and sparked curiosity, and participants also preferred bold text and straightforward explanations.
Testing Analysis
Overall, participants expressed a strong desire for the transparency and credibility that Variables 1.1 and 1.2 offered through traffic-light colored icons and alerts. My assumption was that people would prefer frictionless content consumption without disruption, but I was fascinated to learn that participants were willing to trade off speed if additional labels aid reflection. This bottom-up approach would empower people to collaborate and add context to potentially misleading posts. While third-party verification (Variable 3) seemed credible, users expressed skepticism about its neutrality. Group-based alerts (Variable 2) introduced confusion and raised concerns about moderation and bias.
