- A claim I wanted to actually test is that algorithms and social media curation play a major role in why misinformation spreads. I was skeptical because I had seen this claim repeated often without much explanation of how it actually works. At first, I thought the spread of misinformation might just come down to people not thinking critically enough, but after going through this process, I realized there is a lot more going on. So instead of assuming, I walked through a step-by-step process to verify the claim myself (per the assignment instructions, of course).
Step I: Search the Claim
I started by searching "misinformation algorithms on social media" on Google to see what came up. Right away, I saw a mix of sources discussing how platforms influence what people see. This told me the claim is definitely being discussed, but just because something shows up a lot does not mean it is true, so I needed to verify it further.

Step II: Check a Source (SIFT)
Next, I clicked into an article and focused on understanding how people actually consume information. One key idea I found was the concept of passive news consumers.
This showed that many people are not actively searching for information anymore; they are just scrolling and taking in whatever shows up. That matters because if people are not choosing their information, then platforms and algorithms are doing the choosing for them.

Step III: Lateral Reading
Instead of staying on one source, I opened a new tab and searched "does social media spread misinformation research." This step is known as lateral reading: comparing multiple sources instead of relying on just one.
What stood out to me is that several different types of sources, from academic studies to institutional sites, all pointed to similar conclusions. Many of them explained that social media platforms make it easier for misinformation to spread because of how quickly content is shared and how algorithms prioritize engagement.
Seeing this pattern across multiple sources made the claim more credible, because it was not just one article making the argument—it was being supported across different perspectives and fields.
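The engagement-first ranking these sources describe can be sketched roughly in code. This is a minimal illustration under my own assumptions, not any platform's real algorithm: the posts, the like/share counts, and the scoring weights are all invented.

```python
# Minimal sketch of engagement-based feed ranking.
# All posts, counts, and weights below are invented for illustration;
# real platform ranking systems are far more complex and not public.

posts = [
    {"title": "Careful fact-checked report", "likes": 120, "shares": 10, "accurate": True},
    {"title": "Sensational misleading claim", "likes": 900, "shares": 400, "accurate": False},
    {"title": "Routine local news update", "likes": 60, "shares": 5, "accurate": True},
]

def engagement_score(post):
    # Shares weighted more heavily than likes (assumed weights).
    return post["likes"] + 3 * post["shares"]

# The feed is ordered purely by engagement; accuracy never enters the ranking.
feed = sorted(posts, key=engagement_score, reverse=True)

for post in feed:
    print(post["title"], engagement_score(post))
```

The point of the sketch is that nothing in `engagement_score` looks at the `accurate` field, so the misleading post lands at the top of the feed simply because it gets more interaction.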

Step IV: Strong Evidence (Research Article)
To go beyond general articles, I wanted to see if there was actual research supporting this claim, so I opened a peer-reviewed article from an academic journal. The article focused on the spread of misinformation and public health.
What stood out to me is that this source was not just giving an opinion—it was based on research and data. The article explained that social media plays a significant role in spreading misinformation on a global scale, and that this can have real-world consequences, such as influencing people’s decisions and reducing trust in reliable institutions.
This step was important because it grounded the claim in credible, research-based evidence rather than general discussion. It showed that misinformation is not only spreading online but having measurable impacts beyond social media itself, and that the role of algorithms and platforms in that spread is supported by academic research, not just public opinion or media narratives.

Step V: Check Credibility Tools
Finally, I looked at how sources can actually be evaluated using tools. One example is NewsGuard's credibility ratings.
This helped me understand that not all sources online are equally reliable, and that misinformation can spread more easily when people do not check credibility. It also connects back to algorithms, because content that gets more attention can be pushed more widely even if it is not accurate.
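The basic idea behind a credibility-rating tool can be sketched as a simple threshold check. Everything here is invented: the domains, the scores, and the cutoff are my own assumptions, and this does not use NewsGuard's actual data or any real API.

```python
# Toy example of applying a credibility threshold before trusting a source.
# The domains and scores are invented; this is not NewsGuard's real data.

ratings = {
    "example-news.com": 92.5,
    "viral-rumors.net": 27.0,
    "campus-daily.org": 80.0,
}

THRESHOLD = 60.0  # assumed cutoff for "generally reliable"

def is_credible(domain):
    # Unknown domains are treated as unverified, not as credible.
    return ratings.get(domain, 0.0) >= THRESHOLD

credible_sources = [d for d in ratings if is_credible(d)]
```

Treating unknown domains as unverified rather than credible mirrors the habit the tools encourage: the burden is on the source to establish reliability, not on the reader to disprove it.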

Conclusion:
After going through this process step-by-step, I can confidently say the claim is supported, but it is more complex than I originally thought. At first, I assumed misinformation spreads mostly because people don’t think critically, but using lateral reading and actually checking sources showed me that it is also about how information is structured and delivered. Algorithms and social media platforms are constantly prioritizing content that gets attention, which means misleading information can spread just as fast, or even faster, than accurate information.
What really stood out to me is how passive most of this process is. People are not always actively searching for information—they are scrolling, and content is being pushed to them. That changes everything, because it means what we see is not always something we choose, but something selected for us. Once I realized that, it made a lot more sense why misinformation spreads so easily.
Going through this honestly shifted how I look at information online. I am not just thinking about whether something is true anymore; I am thinking about why it showed up for me in the first place, what source it came from, and whether I should trust it. Using steps like lateral reading and checking credibility tools actually makes a difference, because it forces me to slow down and verify instead of just accepting what I see.