So What Exactly Is a Bot?
A social bot is a software program designed to operate an account on a social media platform. Bots can post content, reply to other users, retweet or reshare posts, and even join conversations, all automatically and at a pace no human could match.[1] On the surface, many bots look like ordinary accounts. They may have profile pictures, follower counts, and posting histories. The difference is that a person is not sitting behind them making decisions. A program is.

Bots are not new, and they are not all bad. Researchers studying the livestreaming platform Twitch found that many bots there serve genuinely helpful functions, including welcoming new users, moderating chat, answering questions, running community games, and sharing information about the streamer.[1] In those spaces, bots act more like helpful community assistants than manipulators. They communicate at a much higher rate than human users, but the community generally knows they are there and benefits from what they do.[1]

The problem is that not all bots are playing that helpful role. When bots enter political conversations, the picture changes significantly.

Bots and the 2016 U.S. Presidential Election
Researchers Alessandro Bessi and Emilio Ferrara analyzed Twitter activity surrounding the 2016 U.S. presidential election and found something striking: a large portion of the accounts joining the conversation were not human at all.[2] Using bot-detection algorithms, they found that suspected bots accounted for about one fifth of the entire online election discussion, a staggering share of a conversation that millions of real voters were also participating in.[2]

What makes this finding so significant is not just the volume, but the distortion. Bots supporting one candidate generated overwhelmingly positive content about that candidate while producing negative content about the opposing candidate, creating an artificial impression of grassroots enthusiasm that did not reflect real public opinion.[2] When a real voter scrolled through their feed and saw wave after wave of positive posts about a candidate, they had no way of knowing that a large portion of those posts were machine-generated. The appearance of popular support was, in part, manufactured.

Bots and the 2017 Catalan Independence Referendum
A second landmark study, conducted by Massimo Stella, Emilio Ferrara, and Manlio De Domenico, examined Twitter during the 2017 Catalan independence referendum in Spain. Analyzing nearly 3.6 million tweets posted by roughly 523,000 users, they found that nearly one in three accounts in the discussion behaved like a bot, and that bots generated about 23.6% of all posts during the event.[3]

But what the bots did with that activity is what should concern us most. Rather than just adding noise to the conversation, bots in this study strategically targeted the most influential human users, the accounts with the most followers and the most reach.[3] And the content they directed at those influencers was not neutral. Bots bombarded supporters of Catalan independence with violent, inflammatory, and negative messages, using hashtags associated with concepts like “fight,” “shame,” “dictatorship,” and “police violence.”[3] These negative associations came exclusively from bots, not from human users in the conversation. In other words, the bots were not just talking. They were deliberately stoking fear and conflict, and doing it strategically by targeting the people most likely to amplify those emotions to large audiences.

Bots Can Make Fringe Views Look Mainstream
One of the most powerful things a bot network can do is create the illusion of consensus. When hundreds or thousands of accounts are all pushing the same hashtag, sharing the same story, or expressing the same outrage, it can feel like an organic groundswell of public opinion. Research on the 2016 election shows that bots were active in promoting specific hashtags and creating the appearance of widespread enthusiasm, or widespread hostility, where little may have actually existed among real people.[2]

This matters because humans are social creatures. We look to others to understand what is normal, acceptable, and popular. When a message appears to have massive support, we are more likely to take it seriously, share it, or update our own views. Bots exploit that tendency. They do not need to convince you directly. They just need to make a message look popular enough that you convince yourself.

How to Spot a Bot
The good news is that bots often leave clues. Here are some signs to watch for when an account feels off:

- Posting frequency: Does the account post constantly, dozens or hundreds of times per day, at all hours, including the middle of the night? Human users take breaks. Bots generally do not.[2][3]
- Repetitive content: Does the account share the same links, phrases, or hashtags over and over? Bot networks are often coordinated, so similar content from multiple accounts at the same time is a red flag.[2]
- Narrow focus: Is the account’s entire history about one political topic or one candidate, with nothing else? Real people have varied interests. An account that exists solely to push one message deserves a closer look.[2][3]
- Account details: Does the account have a randomly generated-looking username, a default or stock profile photo, very few followers, and almost no personal posts? These are common features of automated accounts.[1]
- Interaction style: Does the account reply with generic phrases, automated-sounding messages, or content that does not quite respond to what the other person said? Bots are designed to mimic conversation, but they often do so in scripted, impersonal ways.[1]

None of these signs alone proves an account is a bot. Real people can also post frequently or have sparse profiles. But when several of these signals appear together, it is worth pausing before trusting or sharing that account’s content.

What Families Can Do Together
Recognizing bots is a skill the whole family can practice, and it gets easier the more you do it. The next time a political post shows up in your feed that feels designed to make you angry, try this: before reacting, click on the account that posted it. Scroll through its history. Ask these questions together: Does this account look like a real person? How often does it post? Does it talk about anything besides this one topic?

You can also slow down before sharing content from accounts you do not recognize. Look for independent confirmation from a trusted news source before passing along a claim that came from an unfamiliar account with high volume and little personal detail. Treating anonymous, high-activity accounts with extra caution is simply good digital hygiene, the same way you might double-check an unknown number before answering it.

It is also worth having a broader conversation with your family about what bots mean for democracy. When a large share of the political conversation online is generated by software rather than citizens, the health of that conversation is at risk. Being a media-literate person today means not just asking whether information is true, but also asking whether the voices promoting it are real. That kind of critical awareness, applied together across generations, is exactly what a more just and informed digital life looks like.

Up Next: Lateral Reading
Spotting a suspicious account is a great first step, but what do you do once you find a claim you want to verify? That is where our next article comes in. Lateral reading is a research-backed strategy that teaches you how to check the credibility of a source by stepping away from it and searching for what others say about it. It is the same technique used by professional fact-checkers, and it is something every member of your family can learn. Read on to find out how.

References
[1] Seering, J., Flores, J. P., Savage, S., & Hammer, J. (2018). The social roles of bots: Evaluating impact of bots on discussions in online communities. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), Article 157. https://doi.org/10.1145/3274426
[2] Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 U.S. presidential election online discussion. First Monday, 21(11). https://doi.org/10.5210/fm.v21i11.7090
[3] Stella, M., Ferrara, E., & De Domenico, M. (2018). Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences, 115(49), 12435-12440. https://doi.org/10.1073/pnas.1803470115