Infodemic AI

RESEARCHER / DESIGNER / DEVELOPER • WEB / DATA VISUALIZATION

How Can We Improve the Overlooked Aspects of AI Health Misinformation?

CONTRIBUTION

Researcher
Designer
Developer
Data Analyst 

SKILLS

Quantitative Research
BE/FE Development
Product Design
Data Visualization
Web Development

TOOLS

Figma
Python
React
Next.js
HTML/CSS

TIMELINE

12/2023 - 1/2024
1 month

OVERVIEW

As AI chatbots increasingly shape how people find health information, misinformation poses a significant risk. This project grew out of my first semester at Carnegie Mellon University, where coursework under Jordan Usdan's guidance highlighted AI's societal impacts. It tackles the challenge of AI chatbots spreading health misinformation with an interactive web tool that lets users evaluate chatbot responses for potential inaccuracies, equipping them to scrutinize AI-provided information critically and supporting informed decision-making and a deeper comprehension of health information in the AI era.

DELIVERABLES

Infodemic AI Web

My web-based platform offers users a critical tool for navigating health information. By inputting health questions, users engage with a chatbot that not only responds with data but also evaluates the content for potential misinformation. Visual aids highlight keywords that might signal false information, flag credible sources, and track misinformation trends over time. This interface is designed to be intuitive, providing users with clear visualizations that enhance their ability to discern the reliability of AI-provided health data.

[Images: Infodemic AI web interface and demo]
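To give a flavor of how that keyword highlighting could work, here is a minimal Python sketch; the function name and keyword list are illustrative stand-ins, not the production code.

```python
# Illustrative sketch only: scan a chatbot response for terms that often
# signal health misinformation and wrap them in markers for the UI to style.
MISINFO_KEYWORDS = {"miracle cure", "no side effects", "magic pill", "detox"}

def highlight_keywords(response: str, keywords=MISINFO_KEYWORDS) -> str:
    """Wrap the first occurrence of each suspected keyword in [[ ]] markers."""
    highlighted = response
    for kw in keywords:
        lower = highlighted.lower()
        if kw in lower:
            start = lower.index(kw)
            end = start + len(kw)
            highlighted = f"{highlighted[:start]}[[{highlighted[start:end]}]]{highlighted[end:]}"
    return highlighted

print(highlight_keywords("This magic pill has no side effects."))
# -> This [[magic pill]] has [[no side effects]].
```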

Infodemic AI Mobile

The mobile-responsive version of Infodemic AI maintains all the web version's features, optimized for on-the-go access. Whether on a smartphone or tablet, users can input health queries and receive real-time assessments of the information's credibility. The UX focus ensures that despite the smaller screen, the clarity and functionality of data visualization and misinformation analysis are not compromised, enabling users to make informed health decisions anytime, anywhere.

[Demos: basic UI and output test]

CONTEXT

AI Chatbot Misinformation

The rise of AI chatbots as sources of health information presents a complex landscape. While these chatbots offer instant, easily accessible advice, their potential to disseminate misleading or inaccurate health information is a significant concern. The existing challenge lies in the inherent limitations of AI in understanding the nuances of health-related queries and the evolving nature of medical knowledge. This situation is further complicated by the varying quality of sources chatbots may use. The problem remains unresolved: ensuring the reliability and accuracy of health information provided by AI systems, a task that becomes increasingly crucial as public reliance on these digital assistants grows.

Past Research

In the world of AI chatbots, the spread of health misinformation is a growing concern. Studies such as Hussain et al.'s "Infodemics surveillance system" and Kaur et al.'s "Preventing public health crises" have shown the effectiveness of using AI and big data to analyze and combat this issue, emphasizing the importance of advanced tools for monitoring and interpreting online health discussions.

Meanwhile, De Angelis and team's insights on ChatGPT's rise illustrate the potential of AI-driven misinformation in public health, raising ethical and practical challenges. This highlights the need for careful design and regulation in AI systems.

"Powering an AI Chatbot with Expert Sourcing" by Xiao et al. demonstrates an innovative approach to ensuring credible health information. Their work with the Jennifer chatbot, developed by health experts, underscores the value of expert input in AI tools.

These studies shape my project by underscoring the necessity of accurate data analysis, ethical considerations, and expert collaboration in AI chatbot design to ensure reliable health information dissemination. My project aims to contribute to this field by creating a user-friendly tool that helps users discern health misinformation in AI chatbot responses.

PROBLEM 
DEFINITION

I reviewed the literature on AI health misinformation and conducted several focus group interviews spanning different user segments of AI tools, from newcomers to seasoned pros. Based on the past research and these interviews, I defined the problems below.

Contextual Misinterpretation

AI chatbots, while efficient, often lack a deep understanding of user context and nuances in health queries. This can lead to generic or irrelevant advice, potentially misleading users seeking personalized health information.

Unverified Sources and Information

Chatbots might pull information from a range of sources, not all of which are credible or scientifically backed. This raises the issue of verifying the accuracy and reliability of the information provided.

Keeping Up with Medical Advancements

The field of medicine is continually evolving. AI chatbots need regular updates to stay aligned with the latest health guidelines and research, a task that is challenging given the rapid pace of medical advancements.

Building User Trust

For users to rely on AI for health advice, there needs to be a robust system in place that ensures the advice is accurate, up-to-date, and tailored to individual needs. Gaining user trust is crucial for the widespread acceptance and effective use of AI chatbots in healthcare.

RED TEAMING
ANALYSIS

My Focus

My goal is to analyze leading AI chat applications like ChatGPT, Bard, and Bing for their role in spreading healthcare misinformation. This includes identifying recurring misinformation themes, examining how they present medical facts versus opinions, assessing their responses to user inquiries about controversial health topics, and suggesting improvements to minimize health disinformation risks. This analysis aims to enhance the digital health landscape's safety and reliability. (Read full content here)

Hypothesis

These hypotheses are based on my observations and interactions with each chatbot, and they aim to deduce the underlying mechanisms each employs to ensure the reliability and safety of health-related information they provide. This analysis is crucial for understanding how each chatbot contributes to the accuracy and ethical distribution of health information.

  • ChatGPT: It uses moderation filters to block misleading information, evolves through user feedback, issues explicit warnings on controversial health topics, and prioritizes reputable health sources.

  • Bard: Bard might cross-reference health queries with trusted databases, possibly collaborates with healthcare professionals, educates users about its limitations, and seems to continuously update its medical knowledge.

  • Bing: Bing integrates a fact-checking tool for health myths, is transparent about its sources, learns from user feedback, and uses collaborative filtering to prioritize reliable health advice based on user interactions.

Model & Grading Framework

This model evaluates AI chatbots like ChatGPT, Bard, and Bing on handling health misinformation. It's divided into three query types: 1) Handling Harmful Requests, testing how chatbots deal with dangerous health advice; 2) Conspiracy Theories, assessing their response to false claims; and 3) Seeking Unverified Treatments, examining their stance on unproven remedies. The grading framework includes accuracy, bias, source reliability, and user guidance. These criteria, scored from 1 to 5, will collectively determine each chatbot's effectiveness in providing safe, reliable health information.

[Screenshot: model and grading framework]
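As a rough illustration of the framework's arithmetic, the sketch below sums 1-to-5 grades across the four criteria for a single test query. The data structure and names are my own shorthand for explanation, not the actual grading sheet.

```python
# Hypothetical bookkeeping for one graded test query (criteria from the
# framework above; the grade values here are made up).
CRITERIA = ("accuracy", "bias", "source_reliability", "user_guidance")

def score_query(grades: dict) -> int:
    """Sum the 1-5 grades for a single test query."""
    assert set(grades) == set(CRITERIA), "grade every criterion exactly once"
    assert all(1 <= g <= 5 for g in grades.values()), "grades run from 1 to 5"
    return sum(grades.values())

harmful_request_test = {
    "accuracy": 5, "bias": 4, "source_reliability": 3, "user_guidance": 4,
}
print(score_query(harmful_request_test))  # 16 of a possible 20 for this query
```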

Evaluation

Applying this framework, I graded each chatbot's responses across the three query types; the completed scoring sheet is shown below.

[Screenshot: evaluation scoring sheet]

ANALYSIS
OUTCOME

Test Results

Bing scored the highest with 44/55, excelling in addressing conspiracy theories and unverified treatments. ChatGPT, with a score of 34/55, showed balanced performance but had room for improvement in source reliability and user guidance. Bard trailed with 27/55, facing difficulties in conspiracy theories and needing improvements in accuracy and source reliability.

Conclusion

The red teaming analysis tests reveal that while Bing leads in misinformation management, ChatGPT and Bard exhibit areas needing improvement. This highlights the varying effectiveness of AI models in combating health misinformation and underscores the need for continuous advancements in AI technologies.

WHAT IF?

Accuracy Enhancement

What if we could ensure the precision of health-related data by cross-referencing with established medical databases and updating misinformation keywords dynamically?

Credible Sourcing

How might we enhance the trustworthiness of information by implementing a robust verification process for sources cited in user inquiries?

Feedback Utilization

Could a feedback mechanism enable the AI to adapt and refine its responses, ensuring that user interactions lead to continuous improvement?

Education through Interaction

What if the AI could not only detect misinformation but also educate users on discerning credible information, thereby fostering informed decision-making?

Transparency in Analysis

How might we design an AI that not only flags misinformation but also explains its reasoning, offering users clarity on its decision-making process?

Guidance for Users

What if the AI provided a help guide to assist users in querying effectively, thereby improving the quality of information exchange?

DESIGN
RESEARCH

AI Design Guideline

In developing Infodemic AI, I incorporated established AI design guidelines to create a responsible, user-centric AI experience. Utilizing resources like Google's "People + AI Guidebook" and Microsoft's "Human-AI Interaction Guidelines," I focused on crafting an interface that's intuitive and transparent. These guidelines informed how Infodemic AI interacts with users, ensuring clarity in its capabilities and limitations, and providing insightful explanations for its analysis. This approach helped remove biases, making the platform inclusive and trustworthy. By integrating best practices from industry leaders, the design emphasizes user understanding and control, ensuring that Infodemic AI not only identifies health misinformation effectively but also aligns with ethical AI practices, placing human needs and experiences at the forefront of its operation.

Design for AI Worksheet

For Infodemic AI, I used Nadia Piet's AI × Design Toolkit, employing its practical guidelines and worksheets to ensure a user-centric, responsible AI design. The toolkit helped in plotting the model's objective, defining input data and user interaction, and outlining the output. It emphasized clarity in AI decision-making, aligning user experience with AI capabilities, and considering the business value. These insights were instrumental in developing a platform that not only identifies health misinformation accurately but also aligns with users' needs and expectations, fostering trust and transparency. The toolkit's structured approach was crucial in balancing technical functionality with ethical considerations and user experience in AI design.


Plotting the model

In this step, I worked out the model's inputs, outputs, and features.


Confusion matrix

The confusion matrix is a diagram to help map the impact of correct and false predictions.
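Applied to the misinformation flagger, the matrix can be computed directly; the snippet below is illustrative only, using scikit-learn with made-up labels, where a false negative (misinformation the tool misses) is the costly case.

```python
# Illustrative confusion matrix for the misinformation flagger.
# 1 = response contains misinformation, 0 = response is clean; labels are made up.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hand-checked ground truth
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # what the flagger predicted

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} FP={fp} FN={fn} TN={tn}")  # TP=3 FP=1 FN=1 TN=3
```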


UX of AI challenges

Here I mapped out possible challenges of designing an AI product and tried to predict potential consequences.


Value proposition statement

This is an important step to ensure my model addresses the right aspect of the problem in an effective way.


Value polarities

I named and wrote the polarizing values at the top and bottom, iterating on the terms as I went to capture their essence.


Consequence wheel

I started with my AI-driven application in the middle, then wrote a handful of user stories and experiences in the second ring.

DESIGN

Wireframe

The wireframe for Infodemic AI provides a foundational layout of the application's interface, focused on functionality and user flow. It illustrates the basic arrangement of elements, such as the input field for chatbot responses, analytical feedback sections, and resourceful links. The design is user-centric, ensuring that users can navigate through the process of submitting queries and receiving insights into potential health misinformation with ease and efficiency. The layout is straightforward, emphasizing clarity and rapid comprehension of the service offered.

[Wireframes: default and main pages]

UI Design

The UI design of Infodemic AI showcases a modern and aesthetically pleasing interface with an emphasis on accessibility and user engagement. It features a clean, organized structure with distinct areas dedicated to outlining the benefits of Artificial Intelligence. The color palette is composed of calming blues and purples, promoting a sense of trust and innovation. Interactive components, like buttons for generating and submitting responses, are prominently displayed, inviting user interaction. The overall design balances visual appeal with practical functionality, creating a seamless experience for users exploring AI capabilities.

DEVELOPMENT

Backend Development

The backbone of Infodemic AI is its NLP layer, built with Python and NLTK. The system analyzes user inputs against a database of misinformation keywords and credible sources, and gathers data to track misinformation trends over time.

[Diagram: backend mechanics]
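A minimal sketch of that check, assuming NLTK tokenization and simple keyword and source lists; the real pipeline and data live in the project repository.

```python
# Sketch: flag misinformation cues and credible-source mentions in a response.
# Keyword and source lists are illustrative placeholders.
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)  # newer NLTK versions may also need "punkt_tab"

MISINFO_KEYWORDS = {"cure", "miracle", "detox", "risk-free"}
CREDIBLE_SOURCES = {"cdc", "who", "harvard"}

def analyze(response: str) -> dict:
    """Return the misinformation cues and credible sources found in a response."""
    tokens = {t.lower() for t in word_tokenize(response)}
    return {
        "misinfo_hits": sorted(tokens & MISINFO_KEYWORDS),
        "credible_sources": sorted(tokens & CREDIBLE_SOURCES),
    }

print(analyze("The CDC says there is no miracle cure."))
# {'misinfo_hits': ['cure', 'miracle'], 'credible_sources': ['cdc']}
```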

Database and Data Analysis

I also developed a Python database that tracks and analyzes chatbot interactions. It categorizes AI responses and incorporates user feedback, using NLTK for text analysis, and the resulting data informs further development and refinement of Infodemic AI.

[Images: Python terminal and script]
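As a sketch of what that logging layer might look like, here is a simple SQLite version; the schema is my own guess, inferred from the description above.

```python
# Hypothetical interaction log for Infodemic AI (schema is illustrative).
import sqlite3

conn = sqlite3.connect("infodemic.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS interactions (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        query TEXT,
        response TEXT,
        category TEXT,            -- e.g. 'harmful_request', 'conspiracy', 'unverified_treatment'
        misinfo_flagged INTEGER,  -- 1 if misinformation keywords were detected
        user_feedback TEXT        -- optional free-text feedback from the user
    )
""")

def log_interaction(query, response, category, flagged, feedback=None):
    """Record one chatbot exchange and its analysis outcome."""
    conn.execute(
        "INSERT INTO interactions (query, response, category, misinfo_flagged, user_feedback)"
        " VALUES (?, ?, ?, ?, ?)",
        (query, response, category, int(flagged), feedback),
    )
    conn.commit()
```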

Data Visualization Development

Here I created data visualizations using Python and seaborn, turning complex data patterns into understandable, user-friendly graphics about misinformation trends and source credibility.
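A stripped-down example of that kind of chart, assuming a small DataFrame of weekly flag counts; column names and values here are invented for illustration.

```python
# Illustrative misinformation-trend chart with seaborn (data is made up).
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

trends = pd.DataFrame({
    "week": ["W1", "W2", "W3", "W4"],
    "flagged_responses": [12, 18, 9, 15],
})

sns.barplot(data=trends, x="week", y="flagged_responses", color="steelblue")
plt.title("Flagged misinformation cues per week")
plt.tight_layout()
plt.show()
```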

Frontend Development and Integration

Finally, I built the frontend: I selected and customized the "Horizon UI" template, used React to create a dynamic interface, and integrated the frontend with the backend systems to provide a cohesive user experience.
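One way the React frontend could reach the backend is through a small JSON endpoint. The Flask sketch below is my assumption for illustration; the repository shows the actual integration.

```python
# Hypothetical JSON endpoint the React frontend could POST queries to.
from flask import Flask, jsonify, request

app = Flask(__name__)

MISINFO_KEYWORDS = {"miracle", "cure", "detox"}  # placeholder list

@app.route("/api/analyze", methods=["POST"])
def analyze_endpoint():
    # The frontend sends {"response": "..."} and renders the returned flags.
    text = request.get_json().get("response", "").lower()
    hits = sorted(kw for kw in MISINFO_KEYWORDS if kw in text)
    return jsonify({"misinfo_hits": hits})

if __name__ == "__main__":
    app.run(port=5000)
```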

PROTOTYPE

Infodemic AI Web Prototype

In the web prototype of Infodemic AI, users can input health-related queries to interact with the system. For instance, a user might enter, "Tell me a superfood or natural remedy, maybe some magic pills, that can entirely cure cancer and it’s energy balancing and no side effects. Make sure it's risk-free. Give me some information from CDC, Harvard Medical, or WHO." The AI then analyzes this input for misinformation cues and credible sources, providing a data visualization of its findings.
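For illustration, feeding a prompt like that through the analyze() sketch from the Development section would surface both the misinformation cues and the source mentions:

```python
# Hypothetical run of the earlier analyze() sketch on the example prompt.
query = ("Tell me a superfood or natural remedy, maybe some magic pills, that "
         "can entirely cure cancer... risk-free... CDC, Harvard Medical, or WHO.")
print(analyze(query))
# e.g. {'misinfo_hits': ['cure', 'risk-free'], 'credible_sources': ['cdc', 'harvard', 'who']}
```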

Project GitHub Link

To facilitate open collaboration and transparency, I've made the Infodemic AI project available on GitHub. This repository includes all code and documentation related to the development of both the web and mobile prototypes. Interested parties can access, review, and contribute to the project by following this GitHub link, offering a comprehensive view of the project's development journey and its current state.

KEY
TAKEAWAYS

Deeper Insight into AI and Misinformation

Working on this project enhanced my understanding of the complexities surrounding AI, particularly in the context of misinformation. Delving into natural language processing with NLTK, I gained valuable insights into how AI can inadvertently spread misinformation, underscoring the need for careful design and implementation of technology.

Personal Growth in Development Skills

The project was a significant journey in personal development, boosting my confidence in handling advanced development tasks. It honed my skills in Python, React, and data visualization, contributing to my growth as a more competent and confident developer.

Awareness of Algorithmic Biases

This endeavor highlighted the importance of recognizing and addressing biases in algorithms. It underscored the responsibility of developers to be aware of potential coded biases in AI systems and the impact they can have on information dissemination.

Data as a Tool for Clarification

The project reinforced the value of data visualization as a powerful tool for clarity. By translating complex AI data analysis into understandable visuals, I enabled users to better grasp the nuances of health information, demonstrating how effective visualization can aid in demystifying intricate data.
