

A browser extension that provides users with an easy-to-use way to assess the reliability of information on social media platforms.



My Roles and Responsibilities

I led the ideation of the product from scratch and improved its features within a user-centered design process, since it was a brand-new product concept that I brought to the global competition.


During the research phase, I was in charge of interviewing potential users and creating personas using Figma. I also contributed to the literature review and competitive analysis.

After brainstorming the concept, my teammate and I created low-fidelity prototypes and moderated user feedback sessions. We then designed the high-fidelity user interface based on the feedback.

During the evaluation phase, I moderated several usability testing sessions, and was in charge of analyzing and synthesizing the data we collected and iterating on the design based on the findings.

Team Members

Yu-Hsuan Liu, Jenny John


January 2022 - June 2022

Problem Statement

Misinformation, as a problem, does not have its origins in contemporary times. However, the current information dissemination landscape, consisting mainly of social media platforms, has transformed misinformation into one of the defining problems of the information age. This brings us to the question: what makes people believe fake news, and why is it such a hard problem to tackle effectively?


Misinformation is challenging to address through technological intervention alone, as digital fact-checking tools are often insufficient by themselves to change a person’s mind. Additionally, fact-checking tools can themselves be biased, which further erodes user trust in them. We incorporated these insights into our solution: a news credibility evaluation tool.

Product Demo


1. Domain Research

To better understand the problem space, we conducted a literature review and a competitive analysis.

Literature Review

Habits & Behaviors
across social media platforms

For our literature review, we analyzed Pew Research Center data examining news consumption habits across social media platforms in detail.


Competitive Analysis

Look at existing tools

We conducted a market study of 9 fact-checking or credibility-assessment tools to collect and compare their features. This helped us identify potential areas of improvement.


From the analysis, we learned that:

  • Most tools (6 of 9) require user input before checking the news (users must take the initiative to click a button or enter information). This may reduce people's motivation to check the facts.

  • Most tools provide quick login via Google or Facebook accounts, which saves steps for users.

  • Generally, existing tools lack transparency and explanation in their assessment algorithms.

  • There is room for improvement in designing (1) a user feedback section and (2) additional sources.

For the two tools with higher transparency, we conducted a more in-depth analysis:


2. User Interview

To further clarify our problem & concept statement, we conducted semi-structured interviews with 10 participants. They were recruited based on their indicated interest in our initial screener. 

The aim of this study was to understand:

  1. Participants' news consumption habits on social media.

  2. How often people evaluate news and what motivates them to do so.

  3. Their current news evaluation methods and the challenges they face in using them.

  4. Their expectations for a news evaluation tool.

Research Methodology 

No. of users interviewed: 10

  • Age group: 18-29 

  • 4 male and 6 female users

  • All regular consumers of news on social media platforms


  • The interviews took place in person or over a call.

  • Each interview lasted 45 minutes to 1 hour.

  • The audio was recorded and notes were taken.




Perceptions of the Problem of Misinformation

All the participants agreed that misinformation is a “crucial issue” that is “complex to solve”.

A primary challenge in solving this problem, as noted by 5 of the 10 interview participants, is that “there is just too much information online” and “information is subjective”, which can make it hard to “distinguish the truth and what is right or wrong”.

Current Strategies for Evaluating News

Google Search was mentioned as the most common method for evaluating a piece of news.


A few participants mentioned that they relied heavily on knowledge-based websites like Wikipedia and YouTube for further news evaluation.


Motivations for Evaluating Information

Motivation and interest were the two most commonly cited reasons that pushed people to critically evaluate the news they consume online.

Concerns and Expectations from a News Assessment Tool

“I would not trust any tool by itself 100%”

 “These tools themselves could be biased”

Participants raised concerns about how such a tool could be influenced by the developer’s or designer’s own biases, political agendas, or general outlook on contentious issues.

They also reiterated the need for a tool to present them with multiple viewpoints or coverages of the same issue. 

Refining our vision and revisiting our goals


Discover the challenges and barriers that users face in the adoption and sustained use of fact-checking tools.


Create a prototype of a fact-checking tool that would integrate seamlessly into the user’s news consumption environment.

3. Observational Study 

The aim of this study was to

  1. Understand what tools people use to evaluate news on social media and how they use them.

  2. Gather people's feedback and concerns about choosing and using those tools.

We expected the study results to provide the first blueprint for our prototype of the news credibility assessment tool.

Research Methodology 

No. of users interviewed: 6 

  • Age group: 18-29 

  • Both male and female users

  • All regular consumers of news on social media platforms

  • Had previously evaluated information on social media platforms.


  • The interviews took place in person or over a call.

  • Each interview lasted 30-40 minutes.

  • The audio was recorded and notes were taken.



Task 1: Evaluate an article of their choice using their preferred method for assessing news articles.


Task 2: Evaluate the same article from Task 1 using the evaluation tools we provided (The Factual or Newstrition).


Tools people use to evaluate news on social media

  1. Of the 6 participants, 5 said they use Google Search to assess news articles or find related articles, and 1 said she always looks at the sources the news cited.

  2. On the flip side, participants pointed out that the one thing they dislike about Google is the overload of information.

Factors people look for in news articles

Through our interviews, we learned what people look for when deciding whether to trust news articles. The top 5 concerns interviewees identified were:

1. Unfamiliar publication/sources

2. Unrelated photos

3. Unprofessional layouts

4. Overly-opinionated tone of an article

5. Too little content

Comparison of the tools we provided

In the previous research step, we found that The Factual and Newstrition are the two tools with higher transparency in their assessment content. Therefore, we wanted to identify the pros and cons of both their indicators and their interfaces. We anticipated that the challenges users currently face with these tools might be the same ones we would encounter in the future.


Advantage / Something we can learn from:


  • Color section design (The Factual)

  • Smooth user experience  (The Factual)

  • Simple layout (The Factual)

  • Understandable language (Newstrition)

Issues / Areas to improve:


  • Lack of accessibility  

  • Lack of transparency  

  • Lack of explainability

4. Assessment Factor Survey

During the observational study, we discovered the factors that affect people's trust in news articles; however, we wanted to dig deeper to determine the relative importance of each factor.

The survey asked participants about the indicators they most care about when evaluating news. To make the results easy to analyze, the survey consisted mostly of single-choice questions. In case we had missed an indicator, we also included an open-ended question where participants could add other indicators they felt were important. We received 38 responses in total, with a roughly even distribution of male and female respondents.
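As an illustration of the weight calculation, each factor's share can be derived by normalizing its positive-response count against the total across factors. The sketch below is a hypothetical Python example; the factor names and counts are placeholders, not the actual survey results.

```python
# Hypothetical survey tallies: number of "important" responses per factor
# (placeholder values, not the real study data).
responses = {
    "sources": 37,
    "tone": 36,
    "evidence_cited": 35,
}

total = sum(responses.values())

# Normalize each factor's positive-response count into a fractional weight.
weights = {factor: count / total for factor, count in responses.items()}

for factor, weight in weights.items():
    print(f"{factor}: {weight:.1%}")
```

The resulting weights sum to 1, so they can be used directly as the percentage contribution of each factor to an overall score.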


Result & Our Calculation


First Design

Concept Design

Concept 1 - The recommended result / score

  1. Based on our previous research, we selected the top factors: sources, tone of article, title, date of publication, evidence cited, source bias, and meaningful quotes, each of which received over 90% positive responses.

  2. Date of publication, source bias, and meaningful quotes are difficult to quantify, so we decided to show these factors as additional information.

  3. Both the title and the tone of the article can be evaluated with sentiment analysis, so we combined the two factors into an overall Article Tone Rating.

  4. The weight of each factor is determined by the weighted score we previously calculated in the result analysis of the Assessment Factor Survey.
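The steps above amount to a weighted sum: each quantifiable factor receives a 0-100 score, and the survey-derived weights determine its share of the total. A minimal sketch, assuming hypothetical weights and factor scores (the real values come from the survey analysis):

```python
# Placeholder weights: the actual values are derived from the
# Assessment Factor Survey, not the numbers shown here.
WEIGHTS = {"sources": 0.40, "article_tone": 0.35, "evidence_cited": 0.25}

def credibility_score(factor_scores: dict) -> float:
    """Combine per-factor scores (each 0-100) into a weighted 0-100 total."""
    return sum(WEIGHTS[f] * factor_scores[f] for f in WEIGHTS)

score = credibility_score(
    {"sources": 80, "article_tone": 60, "evidence_cited": 90}
)
print(round(score, 1))  # → 75.5
```

Because the weights sum to 1, the combined score stays on the same 0-100 scale as the individual factors.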


Concept 2 - User flow and the content

According to the results of the observational study, participants liked The Factual’s thumbnail and favored the simple, intuitive flow of the extension. We decided to follow a similar flow:

  1. The user sees the thumbnail which shows quick recommended results.

  2. When they hover over the thumbnail, a small information box pops up next to it, showing the three main evaluation criteria and the total score / recommended result.

  3. The user can click the view summary button to see the complete assessment content.



We created the following wireframes to make our ideas more concrete, and to make sure that the page structure, layout, information architecture, user flow, and functionality we derived make sense.


User Testing 

User Testing 1 

Purpose: Evaluate the usefulness and efficiency of our proposed design solution so as to make any changes or improvements as required.

  • Low-Medium Fidelity Prototype 

  • 5 tasks 

  • Moderated virtual usability testing

  • 6 participants

  • 30-40 mins

👍🏼 Positive Feedback

  • Smooth user flow.

  • The hover information is concise and clear.

  • The site bias indicator helps readers with different perspectives.

  • The instructions make the evaluation clear.

  • Participants liked the user feedback function.


👎🏼 Negative Feedback / Suggestions

  • 3 participants said the top section of the summary should also show the score.

  • The scores for external sources and meaningful quotes are confusing.

  • The related articles and related media should be on separate pages.

  • The scale (L, C, R) used in site bias was not clear.

  • The credibility score should not appear on the related articles page.

  • Participants expected to see the source and to search articles by keyword.

User Testing 2

Purpose: Further understand user’s preferences for the layout of information in a news credibility assessment tool and fine-tune our sample prototype accordingly.

  • High Fidelity Prototype 

  • 7 tasks, including 2 A/B testing tasks

  • Moderated virtual usability testing

  • 6 participants

  • 45-60 mins


Feedback & Design Iterations

  • Removed the "Date Posted".

  • Listed all the factors affecting the score calculation. 

  • Changed the color of the button to make it clearer for users.

  • ​Added an instruction to the site bias.

  • Relocated the user feedback function, moved it to the bottom.​

  • Changed the format of the final recommendation result.

  • ​Added the score percentage to each criteria result. ​

  • ​Renamed the "External Sources" and made the style of main criteria be consistent.

  • Redesigned "Meaningful Quote" section since it's an additional factor of assessment.

  • Removed the recommended result and added the Google source to the top section.

  • The page now only includes articles from the Google Search results.

  • The number of articles was increased from 5 to 10.

  • Users can search for a specific article using the filter.

  • Redesigned the format of the political bias and added an assessment of each related article.

  • Removed the recommended result and added the Google source to the top section.

  • The page now only includes videos from the Google Videos results.

  • The number of videos was increased from 5 to 10.

  • Users can search for a specific video using the filter.

  • Removed the political bias and added an assessment of each related video.


Final Design

InfoClinic provides users with an easy-to-use way of assessing the reliability of information. It rates content on a 0-100 scale and presents a recommended result instead of a binary credible / not credible verdict. It also displays crucial information, such as the political spectrum, for readers’ reference. Every assessment criterion was derived through thorough research.
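To illustrate how the 0-100 scale could drive the recommended result, a score can be mapped to a label and thumbnail color. The thresholds and labels below are illustrative placeholders, not the product's actual cutoffs.

```python
# Hypothetical score-to-recommendation mapping; the thresholds (75, 50)
# and labels are illustrative, not InfoClinic's real values.
def recommendation(score: float) -> tuple:
    """Map a 0-100 credibility score to (label, thumbnail color)."""
    if score >= 75:
        return ("Recommended", "green")
    if score >= 50:
        return ("Read with caution", "yellow")
    return ("Not recommended", "red")

print(recommendation(82))  # → ('Recommended', 'green')
```

A graded mapping like this supports the design goal of avoiding a binary credible / not credible verdict while still giving users an at-a-glance signal.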

1. Thumbnail and the hover box

2. Color representation for the recommended results


3. Summary Page


4. Related Articles


5. Related Videos


6. Instructions for the assessment factors


Complete User Flow Demo

My Learning

1. Design consideration for a Human-AI system

Advances in artificial intelligence create both opportunities and challenges for user interface design, so we believe Human-AI design guidelines can serve as a resource when designing applications and features. Here are the Human-AI interaction principles we incorporated into our design:


  • Explainability: We included explanations throughout the app to clarify the algorithmic process, for both the total scoring and each individual indicator. We also give users the option to read more by linking to our external website, which will contain our detailed methodology and include model cards.

  • Transparency: For transparency, the result calculation process is clearly displayed using a pie chart. We will also present a breakdown of the score for each article that is evaluated so as to ensure maximum transparency in the scoring process.

  • Fairness and Accountability: For accountability, we provide users the option to give feedback in case of an unfair assessment by the algorithm. Having this recourse to human intervention in case of an unfair decision by an algorithm will allow for more accountability and will help foster trust in our solution.

2. Attention to detail

In this project, we did deep research and many rounds of redesign. In the process, I found that even small things can greatly affect the user experience. It is therefore crucial to pay attention to detail while maintaining efficiency and producing accurate work.

Whether by creating a checklist that breaks a task into all its small parts or by setting up structured time to review our work, ensuring that no detail gets left behind helps us get the job done promptly.
