It’s not about the score. It’s about the conversation.
When your family disagrees on an answer, that disagreement is the point.
The conversation that follows is where the real learning happens.
Results
#1. Clickbait & Emotions
Scenario: A mom sees a headline that says, “You Won’t BELIEVE What This Senator Just Said About Your Retirement.” She feels a rush of curiosity and almost clicks immediately. What type of bait is this headline using?
Answer: B) Information bait
Why we asked this: Research by Shin, DeFelice, and Kim (2025) identifies three distinct types of clickbait, each pulling a different psychological lever.
Information bait (the correct answer) works by exploiting psychologist George Loewenstein’s information gap theory. When we sense a gap between what we know and what we want to know, we experience genuine psychological discomfort, like a mental itch we feel compelled to scratch. Headlines like this one are deliberately engineered to manufacture that feeling so that clicking feels like the only relief.
Rage bait (option A) is more dangerous. Instead of targeting curiosity, it targets your anger and moral outrage, making you more likely to share without thinking.
Engagement bait (option C) is the most technically sneaky of all. It uses commands like “Comment YES if you agree!” to trick you into clicking platform buttons, which then trains the algorithm to flood your entire feed with that creator’s content permanently.
Question: Have you ever been the victim of clickbait? Which type of bait was it?
#2. Rage Bait & Sharing
Scenario: A teenager comes to dinner furious, saying, “Did you see this post? It says the other political party is trying to destroy everything we believe in!” He already shared it with 12 friends. Which of the following is most likely true?
Answer: C) His anger may have been deliberately triggered to boost the post’s engagement
Why we asked this: Research by Shin and colleagues (2025) found that rage bait headlines generate nearly three times more shares than non-rage bait headlines. Anger is what researchers call an approach emotion. It pushes people toward action (clicking, sharing, commenting) rather than reflection, which is exactly what publishers are counting on.
Option A is worth examining. Research consistently shows that an angry reader is more likely to share a headline without reading the full article, so option A describes the opposite of what usually happens.
Option B feels intuitive but is misleading. Strong emotion is not evidence that something is true. It is often a signal to slow down, not speed up.
Option D compounds the problem. Sharing rage-driven content rewards the algorithm and ensures more people are exposed to it, regardless of whether it is accurate.
Question: Can you think of a time when you or someone you know shared something online while angry? What happened next?
#3. Online Bots
Scenario: During a heated local election, a grandfather notices that hundreds of accounts are flooding a community Facebook group with the exact same phrases about one candidate, all posted within minutes of each other at 3 a.m. He thinks, “Wow, a lot of people must really feel strongly about this.” Is he right?
Answer: B) Not necessarily; this pattern is consistent with coordinated bot activity
Why we asked this: Researchers Alessandro Bessi and Emilio Ferrara analyzed the 2016 U.S. presidential election and found that bots accounted for approximately one-fifth of the entire online political conversation, yet they were nearly invisible to the average user.
Options A and C reflect a very human instinct. We are social creatures who look to others to understand what is normal and popular. Bots exploit that tendency directly. They do not need to convince you. They just need to make a fringe position look like a massive grassroots movement.
Option D might feel reassuring, but the research says otherwise. A study of the 2017 Catalan independence referendum in Spain found that nearly one in three accounts in the discussion behaved like a bot. The clues your grandfather noticed (identical phrases, coordinated timing, and posting through the night) are classic warning signs worth pausing on.
Question: Have you ever looked at a heated online discussion and later wondered if some of the accounts were real? What tipped you off?
#4. Spotting a Bot Account
True or False: An account that posts 80 times a day, only ever talks about one political topic, has a randomly generated username, and replies to people with generic scripted phrases is always a bot and should be reported immediately.
Answer: False
Why we asked this: None of the warning signs alone prove an account is a bot. Real people can also post frequently, have sparse profiles, or focus on one dominant interest. What research on bot detection says is that when several of these signals appear together, it is worth pausing before trusting or sharing that account’s content.
It is also worth knowing that not all bots are bad. Research on the streaming platform Twitch found that many bots there perform genuinely helpful functions like moderating chat, welcoming new users, and answering common questions. The technology itself is neutral. It is the intent behind the bot that matters.
The “report immediately” part of this statement is the real trap. Jumping to conclusions without enough evidence is itself a form of the impulsive, unverified reaction that media literacy is designed to slow down.
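The “several signals together, none alone” idea can be made concrete with a toy scoring heuristic. This is an illustrative sketch only, not a real bot detector: the signal names, weights, and thresholds below are all invented for this example.

```python
# Toy heuristic: no single warning sign proves an account is a bot,
# but several appearing together are a reason to pause.
# All names, weights, and thresholds here are invented for illustration.

SIGNALS = {
    "posts_per_day_over_50": 2,
    "single_topic_only": 1,
    "random_username": 1,
    "generic_scripted_replies": 2,
}

def caution_score(observed):
    """Sum the weights of the warning signs actually observed."""
    return sum(w for name, w in SIGNALS.items() if name in observed)

def verdict(observed):
    score = caution_score(observed)
    if score >= 4:
        return "pause before trusting or sharing"  # several signals together
    if score >= 2:
        return "look closer"
    return "no strong signal"

# One signal alone never reaches the "pause" threshold:
print(verdict({"random_username"}))          # -> no strong signal
# Several together do:
print(verdict({"posts_per_day_over_50",
               "generic_scripted_replies"})) # -> pause before trusting or sharing
```

Notice that the output is never “this is a bot,” only “slow down.” That mirrors the point of the question: the appropriate response to stacked warning signs is caution, not an immediate report.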
Question: What questions would you ask about an account before deciding whether to trust or share what it posts?
#5. Lateral Reading
Scenario: A mom finds a health website claiming a common herb cures a serious disease. The site looks professional, has an impressive “About Us” page with a doctor’s photo and credentials, and ends in “.org.” She spends 10 minutes carefully reading through the entire site. Is this the best way to evaluate whether it’s trustworthy?
Answer: C) No; carefully reading a site you do not yet know you can trust can actually make misinformation more convincing
Why we asked this: What happened here has a name in research: vertical reading. It means staying within a single source, scrolling top to bottom, and evaluating its internal features. Stanford researchers Sam Wineburg and Sarah McGrew (2019) found that even PhD historians defaulted to vertical reading and were far slower and less accurate than professional fact-checkers as a result.
Option A describes vertical reading exactly, and researchers Espina and Spracklin put the problem with it plainly: vertical reading “plays right into the intent of disinformation, to capture the reader’s deep attention and misconstrue their perspective.”
Option B is a widespread misconception. Anyone with a few dollars can register a .org domain. It carries no reliable signal of credibility.
Option D swings too far in the other direction. Blanket cynicism (treating all sources as equally untrustworthy) is actually the enemy of good information evaluation, because it leads people to make decisions based on personal identity rather than evidence.
Question: When you want to know if something online is trustworthy, what do you usually do first?
#6. Lateral Reading in Practice (SIFT)
Scenario: A 16-year-old comes home and says a viral video “proves” a major news story is a hoax. You want to check it together. What should be your first step?
Answer: A) Stop, and before doing anything else, search for what independent sources say about the account or outlet that posted it
Why we asked this: This question introduces the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims), developed by digital literacy educator Mike Caulfield. SIFT’s first move is always to stop before reacting, especially when content triggers a strong emotion.
Watching the video carefully first may feel responsible, but studying content before you know who made it is a form of vertical reading. You are evaluating the claim on its own terms, on the manipulator’s home turf.
Option B is particularly important to discuss as a family. The number of likes and shares a post has can be artificially inflated by bots and engagement bait, the very tactics covered earlier in this quiz. Popularity is not the same as accuracy.
Research by Breakstone and colleagues (2021) found that before instruction in lateral reading, only 3 out of 87 college students used it at all. After just four hours of training, 67 out of 87 did, and their assessment scores nearly doubled. This is a learnable skill, and practicing it together at home is one of the most effective ways to build it.
Question: Try it right now. Pick any website or social media account you’re not sure about and open a new tab to search what other sources say about it. What did you find?
#7. Social Media Algorithms
Scenario: A dad notices that his Facebook feed lately has been almost entirely posts that agree with his political views, and almost nothing from people who think differently. He says, “I guess most people just agree with me.” What is the most accurate explanation?
Answer: B) His feed may be showing him a filter bubble shaped by the algorithm, not an accurate picture of public opinion
Why we asked this: The filter bubble is one of the most consequential and least-discussed features of how social media platforms work. Algorithms analyze your past behavior and serve you more of what you have already engaged with, not because it is true or good for you, but because it keeps you on the platform longer.
Option A reflects exactly the illusion the filter bubble creates. A filter bubble does not feel like a bubble from the inside. It feels like reality.
Option C is a common misconception. Platforms are not neutral pipelines. They are recommendation systems uniquely shaped by each user’s individual history, which means two people in the same household can have dramatically different feeds.
Option D is worth addressing directly. Research by Munger (2022) found that younger generations are just as susceptible to algorithmically curated content as older users, particularly when it aligns with their existing social identity. Filter bubbles do not discriminate by age.
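The feedback loop behind a filter bubble can be simulated in a few lines. This is a minimal sketch under invented assumptions: the topics, starting weights, and ranking rule are toy stand-ins, and real recommendation systems are vastly more complex, but the loop has the same shape: every click teaches the ranker, and the ranker then shows more of what was clicked.

```python
import random

random.seed(0)  # deterministic for the example

# Toy engagement-driven feed. Topics and the ranking rule are invented.
TOPICS = ["politics_left", "politics_right", "sports", "cooking"]
engagement = {t: 1 for t in TOPICS}  # start with no preference

def build_feed(n=10):
    """The 'algorithm': show topics in proportion to past engagement."""
    weights = [engagement[t] for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

# Simulate a user who only ever clicks posts matching one viewpoint.
for day in range(30):
    for post in build_feed():
        if post == "politics_left":
            engagement[post] += 1  # each click teaches the ranker

share = engagement["politics_left"] / sum(engagement.values())
print(f"share of feed weight after 30 days: {share:.0%}")
```

Even though all four topics start with equal weight, a month of one-sided clicking leaves the feed overwhelmingly dominated by a single viewpoint. From inside the loop, that dominance looks like consensus.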
Question: Does anyone in your family feel like their social media feed shows mostly one kind of viewpoint? What would happen if you deliberately searched for a perspective you do not usually see?
#8. The Digital Native
True or False: Young people who have grown up using smartphones and social media since childhood are naturally better at spotting misinformation and algorithmic manipulation than older adults.
Answer: False
Why we asked this: The idea that tech-fluent young people are naturally media-literate is so widespread that researchers gave it a name: the digital native, coined by educator Marc Prensky in 2001. Prensky argued that growing up surrounded by digital technology gave young people fundamentally different cognitive styles, but he never backed the claim with empirical research.
Extensive research reviews by Neil Selwyn (2009), Margaryan and colleagues (2010), and Bullen and colleagues (2011) all reached the same conclusion: the digital native is a myth. A large-scale Stanford study found that most students could not reliably distinguish between native advertising and real news, could not identify basic signals of source bias, and accepted search engine rankings as proof of credibility.
Being fluent in an interface (knowing how to swipe, post, and filter) is not the same as being critically literate about what that interface is doing to you. Speed of use is not wisdom about use. The good news is that these skills can be explicitly taught and they work across every age group.
Question: Did anyone in the family assume the younger members already knew how to spot misinformation? Where did that assumption come from?
#9. Generational Vulnerabilities
Scenario: At a family gathering, grandma shares a political article with the caption “Everyone is saying this is true.” Your younger cousin says, “Grandma just does not understand the internet.” Who does research say is actually more vulnerable to misinformation online?
Answer: C) Both groups face different vulnerabilities, and neither is to blame
Why we asked this: Options A and B are both partially supported by research, which is exactly why C is the right answer.
Research by Moore and Hancock (2022) supports option A in part. Older adults are more likely to share false information online, not because they are less intelligent, but because they grew up in a media environment where being published conferred credibility. Those instincts do not automatically update for an internet where anyone can publish anything with the same visual polish as a major news outlet.
Research by Munger (2022) supports option B in part. Younger generations are just as susceptible to algorithmically curated outrage and emotionally driven content, especially when it aligns with their existing social identity.
Option D might feel hopeful, but research is clear that no level of general intelligence makes anyone automatically immune. These are specific, learnable skills. Neither group is at fault, and both groups benefit from learning alongside each other.
Question: Can you think of a time when a younger person and an older person in your family each caught something the other one missed online?
#10. Where Media Literacy Actually Happens
Scenario: A parent says, “I’ll leave the media literacy education to the school; they’re better equipped to teach it than I am.” According to the research, is this the best approach?
Answer: B) Partly; schools help, but research says the home is where these habits are most likely to stick
Why we asked this: Option A is not entirely wrong. Classroom instruction works, and schools play an important role. But two separate research teams (Ito and colleagues in 2013, and Rasi and colleagues in 2019) both concluded that media literacy must be understood within the social context of individuals and families, not only as an individual skill developed in isolation. The reason is simple: most media consumption does not happen at school. It happens on the couch, in the car, at the kitchen table, and in bed before sleep.
Option C sells parents short. Research on media socialization consistently identifies parents and guardians (not teachers, not peers, not influencers) as the most influential forces in shaping how young people engage with digital content.
Option D sets an impossible standard. Rasi and colleagues found that older adults who engage in media literacy learning within supportive family networks show stronger gains in digital confidence than those who learn alone. Everyone has something to bring to this conversation. You do not need to have all the answers. The habit of asking the questions together is what matters.
Question: What is one question your family could start asking together whenever a headline or post feels designed to make you feel something strong?