In the last article, we explored lateral reading as a tool for evaluating what is in front of you online. But here is a question worth sitting with before we go any further: How did that content get in front of you in the first place? The answer has less to do with what you searched for and more to do with decisions made by a small number of enormously powerful corporations, decisions driven not by what is true, accurate, or good for you, but by what keeps you engaged the longest.
Platforms Are Not Neutral
When most people think about social media, they imagine an open town square where anyone can speak and anyone can listen. The reality is more like a privately owned shopping mall. The space may feel public, but every corner of it is designed, managed, and monetized by a company whose primary obligation is to its shareholders, not its users.1 Facebook, YouTube, TikTok, and X are not in the business of informing you. They are in the business of capturing your attention and selling it to advertisers. The longer you stay on the platform, the more ads you see, and the more revenue the company generates.
This business model has a direct consequence for the information you encounter. Platforms rely on recommendation algorithms (automated systems that analyze your behavior and decide what to show you next) to maximize the time you spend on the platform.1 These algorithms do not ask whether content is accurate or healthy. They ask whether it is engaging. And as research has consistently shown, content that triggers strong emotions, particularly outrage and anxiety, is dramatically more engaging than content that is measured, nuanced, or calm.2
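If you are curious what "optimizing for engagement" looks like in practice, here is a deliberately tiny sketch in Python. It is a toy model, not any platform's real code; the post attributes and scoring weights are invented for the illustration. The point is structural: accuracy is a field the ranking function never reads.

```python
# A toy model of engagement-based ranking. This is not any platform's real
# code; the Post fields and the scoring weights are invented to illustrate
# the incentive described above.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    accuracy: float  # how well-sourced the post is (0.0 to 1.0)
    outrage: float   # how much anger it provokes (0.0 to 1.0)
    anxiety: float   # how much worry it provokes (0.0 to 1.0)

def engagement_score(post: Post) -> float:
    # Predicts time-on-platform. Note which field it never reads: accuracy.
    return 0.6 * post.outrage + 0.4 * post.anxiety

posts = [
    Post("Careful explainer, well sourced", accuracy=0.9, outrage=0.1, anxiety=0.1),
    Post("THEY don't want you to know this", accuracy=0.2, outrage=0.9, anxiety=0.7),
]

# The feed is just the posts, sorted by predicted engagement.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
```

In this sketch the well-sourced explainer always loses, not because anyone penalized it, but because nothing in the scoring function ever asks about it.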
What the Algorithms Reward
Research by Rathje, Van Bavel, and van der Linden examined engagement patterns on social media and found that content expressing hostility toward out-groups (people of a different political party, religion, or identity) consistently generated more shares, likes, and comments than content that did not.2 This was true across both political parties and across multiple platforms. The algorithm does not care about the politics. It cares about the reaction. Content that makes you angry at the other side performs better, so the algorithm serves you more of it.
Robertson and colleagues studied negativity in online news consumption and found a similar pattern: negative news content attracted more clicks and longer engagement than neutral or positive content across a wide range of topics.3 The result is a feedback loop. Users who engage with negative or outrage-driven content are shown more of it. Platforms that reward this content with wider distribution see more of it created. Creators and publishers who want reach learn quickly what the algorithm favors and produce accordingly. None of this requires a conspiracy or a deliberate decision to spread misinformation. It is simply what the incentive structure produces.
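The feedback loop is easy to see with some made-up numbers. In the sketch below, negative posts are assumed to be clicked slightly more often than neutral ones (30 percent versus 20 percent, both invented for the example), and the feed's only rule is to allocate next week's slots in proportion to this week's clicks.

```python
# A minimal sketch of the feedback loop, with invented click rates: negative
# posts get clicked 30% of the time, neutral posts 20%, and the feed hands
# out next week's slots in proportion to this week's clicks.
negative_share = 0.5  # start with an evenly split feed
CLICK_RATE = {"negative": 0.30, "neutral": 0.20}  # assumed, not measured

for week in range(1, 9):
    clicks_negative = negative_share * CLICK_RATE["negative"]
    clicks_neutral = (1 - negative_share) * CLICK_RATE["neutral"]
    # The algorithm's only rule: show more of whatever got clicked.
    negative_share = clicks_negative / (clicks_negative + clicks_neutral)
    print(f"Week {week}: {negative_share:.0%} of the feed is negative")
```

With those invented numbers, the feed drifts from an even split to more than 90 percent negative by week six. No one decided to favor negativity; a small gap in click rates, compounded weekly, did it on its own.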
Who Owns the Platforms and Why It Matters
Understanding who owns these platforms adds another layer to the picture. Social media is not a diverse ecosystem of competing voices. It is dominated by a small number of companies, and increasingly, by individual owners whose personal values, political views, and business interests shape platform-wide decisions about content moderation, algorithmic design, and what speech is permitted or amplified.1
In his research on generational media consumption and political engagement, Munger argues that the consolidation of media power into a few large platforms has fundamentally changed the relationship between citizens and information.1 When a single platform controls the information diet of hundreds of millions of people, changes to its algorithm, its moderation policies, or its advertising priorities can shift public discourse at a scale that no single newspaper, television network, or radio station ever could. The decisions made in a handful of boardrooms and server rooms now shape what billions of people believe is happening in the world.
Filter Bubbles and the Illusion of Consensus
One of the most discussed consequences of algorithmic curation is the filter bubble, a term used to describe the personalized information environment each user inhabits based on their past behavior and the platform’s predictions about what they want to see.1 If you click on conservative political content, the algorithm shows you more conservative content. If you engage with content skeptical of vaccines, you will see more of it. Over time, your feed can come to look like a world where everyone agrees with you, where your views are constantly confirmed, and where opposing perspectives are invisible or fringe.
This matters for media literacy because a filter bubble does not feel like a bubble from the inside. It feels like reality. When every source you encounter reinforces the same narrative, it becomes genuinely difficult to imagine that credible people hold different views. Feio’s review of news consumption and political participation among young people found that algorithmically curated media environments can reduce exposure to diverse perspectives and reinforce partisan identity, with measurable effects on political engagement and civic knowledge.4 The platform is not just showing you content. It is shaping your sense of what is normal, what is disputed, and who counts as a trustworthy voice.
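For readers who like to see the mechanism spelled out, here is a minimal simulation of a filter bubble forming. Everything in it is invented for the illustration: the topics, the starting weights, and the rule that each click nudges a topic's weight upward. The narrowing is the point, not the specific numbers.

```python
# Illustrative only: a click-driven profile that narrows a feed over time.
# The topics, weights, and update rule are all invented for this sketch.
import random
random.seed(1)  # fixed seed so the example is reproducible

profile = {"politics": 1.0, "science": 1.0, "sports": 1.0}

def next_post(profile):
    """Sample the next post's topic in proportion to the profile weights."""
    topics = list(profile)
    return random.choices(topics, weights=[profile[t] for t in topics])[0]

for _ in range(200):
    topic = next_post(profile)
    if topic == "politics":      # the user reliably clicks one topic...
        profile[topic] += 0.5    # ...so the profile learns to serve more of it

total = sum(profile.values())
for topic, weight in profile.items():
    print(f"{topic}: {weight / total:.0%} of the feed")
```

After 200 simulated posts, the profile serves almost nothing but the one topic the user clicked, even though the user never asked to see less of anything. From inside that feed, the narrowing is invisible.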
The Particular Challenge for Older and Younger Users
Research has found that different age groups face different vulnerabilities in algorithmically driven information environments. Moore and Hancock's work on digital media literacy and older adults found that adults over 60 are more likely to share false information online, not because they are less intelligent, but because they came of age in a media environment where publication itself conferred a degree of credibility, and those instincts do not automatically update for an internet where anyone can publish anything.5 Younger users face a different but related challenge. Munger's research found that while younger generations are often assumed to be more digitally savvy, they are just as susceptible to algorithmically curated outrage and emotionally driven content as older users, particularly when that content aligns with their existing social identities.1
Neither group is to blame for these vulnerabilities. They are the predictable result of information systems designed by engineers optimizing for engagement, not by educators optimizing for understanding. Recognizing this is the first step toward navigating it more intentionally.
What Families Can Do Together
The most important thing a family can do in response to social media's power structures is to make the invisible visible. Platforms want their algorithms to feel seamless and natural, like a window onto the world rather than a carefully constructed frame. Naming the frame means talking out loud about why certain content keeps appearing, asking whose interests a platform's design serves, and noticing when a feed seems to show only one kind of perspective. Each of these is a way of reasserting your agency as a user rather than a product.
You can also experiment with disrupting the algorithm deliberately. Search for a perspective you do not normally encounter. Follow a source you disagree with for a week and notice how the feed shifts. Try accessing news through a direct visit to a publication’s homepage rather than through social media, where the algorithm has already decided what you should see. These small acts of intentional navigation are not solutions to structural problems, but they are ways of practicing the kind of conscious, critical media engagement that this series has been building toward from the beginning.
Understanding that the information environment is designed, owned, and monetized is not a reason for despair. It is a reason for awareness. And awareness, built together across generations, is exactly what media literacy is for.
Take the Conversation Further
Want to go deeper? We have covered clickbait, online bots, lateral reading, social media power structures, and much more in the Literacy2k Podcast. Download episodes directly from our site or find us on your preferred podcast app. Whether you are listening with your family on a road trip or catching up on your own, each episode is designed to make these conversations accessible, engaging, and genuinely useful for navigating digital life in the twenty-first century.
References
1. Munger, K. (2022). Generation gap: Why the Baby Boomers still dominate American politics and culture. Columbia University Press.
2. Rathje, S., Van Bavel, J. J., & van der Linden, S. (2021). Out-group animosity drives engagement on social media. PNAS, 118(26), e2024292118. https://doi.org/10.1073/pnas.2024292118
3. Robertson, C. E., Pröllochs, N., Schwarzenegger, K., Pärnamets, P., Van Bavel, J. J., & Feuerriegel, S. (2023). Negativity drives online news consumption. Nature Human Behaviour, 7, 812-822. https://doi.org/10.1038/s41562-023-01538-4
4. Feio, G. (2023). News consumption and political participation of young people in new media: A narrative literature review. Journal of Youth Studies. https://doi.org/10.1080/13676261.2023.2171041
5. Moore, R. C., & Hancock, J. T. (2022). A digital media literacy intervention for older adults improves resilience to fake news. Scientific Reports, 12, 6008. https://doi.org/10.1038/s41598-022-08437-0