  • Why Misinformation Feels True: How Social Media Trains Our Brains Without Us Noticing


    Before you keep scrolling… pause for a second

    Be honest—how many times have you:

    • Liked something without really reading it
    • Shared something because it “felt right”
    • Seen the same post so many times that it just seemed true

    This is not random. This is not just “people being dumb.”

    This is how social media is designed.

    And once you see it, you cannot unsee it… trust me!

    This post is designed for college students and young adults who regularly use platforms like TikTok, Instagram, and X.


    What is actually happening to you online

    Social media platforms are not neutral. They are powered by algorithms optimized for engagement, meaning their job is to keep you scrolling, liking, and interacting as long as possible! That is why one important habit is to investigate the source of what you are seeing, rather than accepting it at face value.

    The more you interact with something, the more you see it.

    Over time, this creates a feedback loop where:

    • You see similar ideas
    • Those ideas feel familiar
    • Familiar starts to feel true

    This is where misinformation starts to take hold—not because you are careless, but because the system is working exactly as designed. In this course, we learned that digital platforms are not neutral spaces—they are designed systems that shape how information is distributed, repeated, and interpreted by users. This means what you see online is not a random reflection of reality—it is a curated experience.
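
    To make that loop concrete, here is a toy sketch in Python. It is purely illustrative, not any platform's real ranking code, and the topics and numbers are invented: the pretend "algorithm" just counts which topics you engage with and shows you more of them.

    ```python
    # Toy feedback-loop simulation (not any platform's real code):
    # the "algorithm" counts which topics you engaged with and
    # ranks new posts by that count.
    import random
    from collections import Counter

    topics = ["fitness", "politics", "cooking", "rumors"]
    engagement = Counter()  # what the feed has "learned" about you

    def build_feed(candidates, k=3):
        # Rank candidate posts by how often you engaged with their topic.
        return sorted(candidates, key=lambda t: engagement[t], reverse=True)[:k]

    random.seed(1)
    engagement["rumors"] += 1  # one stray like is all it takes
    for day in range(1, 6):
        candidates = [random.choice(topics) for _ in range(20)]
        feed = build_feed(candidates)
        for topic in feed:
            engagement[topic] += 1  # you engage with what you are shown
        print(f"Day {day}: {feed}")
    ```

    Run it and one stray like snowballs into a feed full of the same topic within a few days. That is the feedback loop in miniature.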

    This is not just a theory. Internal research from companies like Meta has shown that these platforms are aware of how their systems affect users. In fact, internal findings revealed that Instagram could negatively impact mental health for some teenage users, showing how engagement-driven design can have real psychological effects.


    Your brain is wired for this: Confirmation Bias

    One of the biggest reasons misinformation spreads is something called confirmation bias. In our course, this is understood as a cognitive bias—one of the mental shortcuts (heuristics) people use to process information quickly, especially in fast-paced digital environments.

    This means you are more likely to:

    • Believe information that matches what you already think
    • Ignore or question information that challenges you

    On social media, this shows up constantly. This is amplified online because platforms reward engagement, not accuracy.

    If you already believe something—even slightly—you are more likely to:

    • Like posts that agree with it
    • Watch those videos longer
    • Engage with similar content

    [Image 2: This visualization illustrates cognitive processing through mental shortcuts, where confirmation bias influences how individuals interpret and reinforce information that aligns with existing beliefs.]

    And guess what happens next?

    The platform gives you more of it.

    So it starts to feel like:

    “Everyone is saying this… so it must be true.”

    But really, you are just seeing more of what you already leaned toward.


    Repetition = Truth (even when it is not)

    This is called the illusory truth effect. In our course, this concept is explained as a cognitive processing effect, where repeated exposure increases familiarity, and familiarity is often mistaken for truth.

    Here is what that means in simple terms:

    The more times you are exposed to something, the more true it feels.

    Not because it is accurate—but simply because it is familiar.

    Think about viral posts, trending sounds, or repeated claims.

    Even if you were unsure at first, after seeing it:

    • Once
    • Twice
    • Ten times

    It starts to feel normal. Expected. True.

    For example, a widely shared image claimed that The New York Times supported bullying unvaccinated children—but fact-checking revealed the image was digitally altered and not real.

    This is why misinformation spreads so fast—it does not need to be correct, it just needs to be repeated.

    [Images 3 & 4: Both of these show the same idea—when we see something over and over, it starts to feel true, even if it is not. That is the illusory truth effect.]

    You are not seeing “everything” — you are in an Echo Chamber

    Another major factor is something called echo chambers.

    This happens when you are mostly exposed to:

    • People who think like you
    • Content that supports your views
    • Ideas that are not challenged

    Because of algorithmic feeds, your social media is personalized to you.

    Research shows that people do not receive information from a single source, but through multiple overlapping “curated flows,” where algorithms, social networks, and personal choices all shape what content is seen.

    That means:

    • Two people can search the same topic
    • See completely different “realities.”
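
    Here is a minimal sketch of that idea in Python. The posts, users, and engagement counts are all hypothetical, and real ranking systems are vastly more complex, but it shows how the same pool of posts can turn into two different feeds:

    ```python
    # Same pool of posts, filtered through each person's own history.
    posts = ["vaccine claim", "sports recap", "diet trend",
             "election rumor", "tech review", "celebrity news"]

    # Hypothetical past engagement for two users.
    history = {
        "user_a": {"vaccine claim": 5, "diet trend": 3},
        "user_b": {"election rumor": 4, "sports recap": 2},
    }

    def personalized_feed(user, k=3):
        scores = history[user]
        return sorted(posts, key=lambda p: scores.get(p, 0), reverse=True)[:k]

    print("User A sees:", personalized_feed("user_a"))
    print("User B sees:", personalized_feed("user_b"))
    ```

    Neither user picked those feeds; the ranking picked for them. Scale that up to millions of posts and you get two different "realities."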

    Inside an echo chamber:

    • Your beliefs are constantly reinforced
    • Opposing views are filtered out
    • Misinformation becomes harder to recognize

    It does not feel like bias. It feels like the truth.


    So what is the actual problem?

    The issue is not just misinformation itself.

    The issue is how:

    • Confirmation Bias
    • Illusory Truth Effect
    • Echo Chambers
    • Algorithmic Feeds

    ALL work together!!!

    Research from our course materials supports this. A large-scale study found that even small changes in what people are exposed to online can influence what they continue to engage with over time. As exposure increases, people are more likely to seek out similar content, reinforcing their existing beliefs and shaping how they interpret information.

    This YouTube video shows how algorithms learn from your behavior and continuously recommend similar content, reinforcing what you see and making certain ideas feel more true over time.

    This creates a system where:

    • False information spreads easily
    • Repeated ideas feel true
    • Your beliefs get reinforced without you realizing it

    And the most important part?

    You think you are thinking for yourself.

    These systems are designed to prioritize content that keeps people engaged, even if that content is misleading or emotionally charged. This means that what spreads the fastest is not always what is most accurate—but what gets the strongest reaction.


    Pause and check yourself (seriously)

    Take a second and ask yourself:

    • When was the last time I checked a source before sharing?
    • Do I ever see content that challenges my beliefs?
    • Am I liking things because they are true—or because they feel right?

    This is where awareness starts.

    This approach is supported by strategies such as the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims to their original context), which emphasizes pausing to verify sources before accepting or sharing information.


    What can you actually do about it

    You do not need to become a fact-checking expert.

    You just need to SLOW DOWN your automatic reactions.

    Try this:

    • Pause before liking or sharing
    • Ask “Where did this come from?”
    • Use strategies like the SIFT method to quickly evaluate information
    • Be aware that what you see is curated—not neutral

    Even small changes CAN break the cycle.


    Once you see it… it changes everything

    Misinformation is powerful not because people are unintelligent, but because it is built on how humans naturally think.

    Social media just amplifies it.

    Now you know:

    • Why things feel true
    • Why you keep seeing the same ideas
    • How your own behavior plays a role

    And that awareness alone puts you ahead of most people scrolling right now.


    What This Helps You Do

    After reading this, you should be able to:

    • Recognize when familiarity is influencing what feels true
    • Notice how algorithms are shaping what you see
    • Pause before engaging with content automatically
    • Apply simple strategies like the SIFT method to evaluate information

    About This Project

    • Target Audience:
      Young adults and college students who actively use platforms like TikTok, Instagram, and X, and regularly engage with content through scrolling, liking, and sharing.
    • Geographic Scope:
      While misinformation is a global issue, this project focuses on social media use in the United States, where algorithm-driven platforms heavily shape information exposure.
    • Why This Format:
      A blog-style format was chosen because it allows complex ideas to be broken into smaller, easy-to-follow sections using visuals and real-world examples. This mirrors how the target audience already consumes content, making the message more engaging and easier to understand.
    • Purpose:
      This project is designed to help readers understand how misinformation works on both a psychological and algorithmic level, so they can become more aware of their own behavior online.

    Sources / Learn More

    Course Materials (ASU)

    These modules build the core concepts used throughout this post:


    Research & Real-World Sources

    These sources provide real-world evidence of how misinformation spreads and how platforms shape what you see:


    The goal is not to stop using social media—it is to stop letting it think for you.

    You do not need to be perfect online.
    You just need to be more intentional.

    Pause.

    Question what you see.
    And remember—what feels true is not always what is true.

  • Misinformation on Social Media: Why Platform Solutions Still Fall Short


    Introduction:

    At this point, it is almost impossible to scroll through social media without encountering some form of misinformation. Platforms like X (Twitter) and Facebook have developed systems that aim to address the issue, ranging from automated moderation tools to third-party fact-checking programs. While these approaches appear increasingly sophisticated, a closer look shows that misinformation is not just a content problem—it is a structural issue tied to how platforms operate. Despite their efforts, both X and Facebook rely heavily on reactive systems, allowing misinformation to spread before meaningful intervention occurs.


    X (Twitter): Controlling Reach Instead of Removing Content

    X has taken a unique approach by focusing less on removing content and more on limiting how far it spreads. One of its key tools, Safety Mode, uses automation to detect harmful or misleading behavior and temporarily restrict accounts engaging in it. This reflects an attempt to intervene earlier in harmful interactions.

    X’s broader philosophy, Freedom of Speech, Not Reach, explains that most content will not be removed but instead downranked to reduce visibility. This allows the platform to prioritize user expression while still attempting to limit the spread of harmful content.
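
    As a rough illustration of what downranking means mechanically, here is a short Python sketch. The penalty value is invented for the example, since X has not published its actual ranking formula:

    ```python
    def visibility(base_score: float, flagged: bool, penalty: float = 0.1) -> float:
        """How prominently a post is surfaced; flagged posts stay up but reach fewer feeds."""
        return base_score * (penalty if flagged else 1.0)

    print(visibility(80.0, flagged=False))  # 80.0: full reach
    print(visibility(80.0, flagged=True))   # 8.0: still visible, rarely surfaced
    ```

    The post is never taken down; its visibility score just shrinks. That is the "freedom of speech, not reach" trade-off expressed as code.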

    X also relies on Community Notes, a system that allows users to add context to potentially misleading posts. For example, Community Notes often appear under viral posts related to elections or public health claims, where users collaboratively add clarifying information or corrections. While this promotes transparency, it also means misinformation can circulate widely before corrections are visible.

    From my own experience using X, I have seen posts gain significant traction and engagement before any Community Note is added, which reinforces how quickly misinformation can spread compared to how slowly it is corrected.

    I believe that X’s strategy shows that the platform is not trying to eliminate misinformation entirely but instead focusing on managing how far it travels…


    Facebook: Structured Moderation Through Fact-Checking

    Facebook presents a more structured system for addressing misinformation through its Combating Misinformation efforts. Its approach focuses on removing harmful content, reducing the spread of false information, and providing users with context through labels and warnings.

    A key component of this system is its fact-checking process, which relies on third-party organizations to review content. For example, when a post is labeled as false, Facebook may place a warning label over the content and significantly reduce how often it appears in users’ feeds. Users attempting to share the content may also receive a notification warning them about its accuracy. Repeat offenders may face penalties that limit their ability to distribute content.
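
    To summarize that flow, here is a simplified sketch in Python. The states, numbers, and field names are all invented for illustration, since Meta's real pipeline is not public:

    ```python
    def apply_fact_check(post: dict, verdict: str) -> dict:
        """Hypothetical handling of a third-party fact-check verdict."""
        if verdict == "false":
            post["label"] = "False information"  # warning overlay on the post
            post["distribution"] *= 0.2          # invented reach reduction
            post["warn_on_share"] = True         # notify users who try to share it
        return post

    post = {"id": 1, "distribution": 1.0, "label": None, "warn_on_share": False}
    print(apply_fact_check(post, verdict="false"))
    ```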

    However, research suggests that these efforts have limitations. Studies indicate that Facebook’s platform design itself contributes to the spread of misinformation, as its algorithms prioritize engagement and interaction. There are also concerns that changes in Meta’s moderating strategies may weaken the consistency of its approach over time.

    From my own experience using Facebook in the past, and similar platforms, I have noticed misleading posts often appear repeatedly before ANY warning label is added, suggesting that the system does not act quickly enough to prevent internal exposure!


    Do These Policies Actually Work?

    While both platforms have implemented systems to address misinformation, their effectiveness is limited. Overall, these policies do not fully solve the problem—they only reduce its visibility after it has already begun spreading.

    A major issue is that both X and Facebook are designed to maximize engagement. Content that is emotional, controversial, or misleading tends to spread faster than accurate information. Research has also shown that false information spreads more rapidly and widely than accurate information online. As a result, misinformation often goes viral before moderation systems can respond. Studies also show that many users struggle to identify misinformation online, which makes platform intervention even more important.

    On X, limiting reach instead of removing content allows misinformation to remain visible and continue circulating. Community Notes depend on users correcting misinformation after it has already spread, making the system reactive rather than preventive.

    Facebook’s approach, while more structured, faces similar challenges. Fact-checking takes time, and during that time, misinformation continues to spread. Even when content is labeled or downranked, many users have already seen or shared it. While there are moderation systems in place, platforms continue to face criticism for failing to fully control the spread of misleading content.

    This would suggest to me that while these policies can reduce the impact of misinformation, they are not strong enough to prevent it from spreading in the first place.


    What Is Missing and How Platforms Can Improve

    One of the biggest gaps, in my opinion, in both platforms’ approaches is the lack of early intervention. By the time action is taken, misinformation has often already reached a very wide audience.

    Both platforms could improve transparency in how their algorithms prioritize content. Without this, users cannot fully understand why certain information appears in their feeds.

    Consistency in enforcement is another issue. Changes in policy and moderating practices can reduce effectiveness and weaken user trust.

    Platforms should also rely less on users to correct misinformation… and take more responsibility for PREVENTING it. While systems like Community Notes are helpful, they should not be the primary method of addressing misinformation.

    I think that platforms could learn from each other by combining approaches—using both automated detection and structured fact-checking systems earlier in the content lifecycle.


    Conclusion

    Despite increasingly advanced moderation strategies, X and Facebook demonstrate that misinformation is not simply a content issue but a structural problem rooted in how platforms function. As long as engagement remains the priority, misinformation will continue to spread faster than it can be controlled. Addressing this issue will require not just stronger policies, but deeper changes to how information is distributed and amplified online!

  • Why Misinformation Spreads: It’s Not Just People — It’s the System

    • A claim I wanted to actually test is that algorithms and social media curation play a major role in why misinformation spreads. I was skeptical because I had seen this claim repeated often without anyone really explaining how it works. At first, I thought it might just be people not thinking critically enough, but after going through this process, I realized there’s a lot more going on. So instead of just assuming, I walked through a step-by-step process to verify this claim myself!!! (Per assignment instructions, of course).

    Step I: Search the Claim

    I started by searching “misinformation algorithms on social media” on Google to see what came up… Right away, I saw a mix of sources talking about how platforms influence what people see! This told me the claim is definitely being discussed, but just because something shows up a lot does not mean it is true, so I needed to actually verify it further!


    Step II: Check a Source (SIFT)

    Next, I clicked into an article and focused on understanding how people actually consume information. One key idea I found was about passive news consumers.

    This showed that a lot of people are not actively searching for information anymore—they are just scrolling and taking in whatever shows up… That matters because if people are not choosing information, then platforms and algorithms are doing that for them!


    Step III: Lateral Reading

    Instead of staying on one source, I opened a new tab and searched “does social media spread misinformation research.” This step is known as lateral reading, where you compare multiple sources instead of relying on just one.

    What stood out to me is that several different types of sources—like academic research, institutional sites, and studies—were all pointing to similar conclusions. Many of them explained that social media platforms make it easier for misinformation to spread because of how quickly content is shared and how algorithms prioritize engagement.

    Seeing this pattern across multiple sources made the claim more credible, because it was not just one article making the argument—it was being supported across different perspectives and fields.


    Step IV: Strong Evidence (Research Article)

    To go beyond general articles, I wanted to see if there was actual research supporting this claim, so I opened a peer-reviewed article from an academic journal. The article focused on the spread of misinformation and public health.

    What stood out to me is that this source was not just giving an opinion—it was based on research and data. The article explained that social media plays a significant role in spreading misinformation on a global scale, and that this can have real-world consequences, such as influencing people’s decisions and reducing trust in reliable institutions.

    This step was important because it strengthened the claim using credible, research-based evidence rather than just general discussion. It showed that misinformation is not only spreading online, but that it has measurable impacts beyond social media itself.

    By including a scholarly source, I was able to confirm that the role of algorithms and social media in spreading misinformation is supported by academic research, not just public opinion or media narratives.


    Step V: Check Credibility Tools

    Finally, I looked at how we can actually evaluate sources using tools. One example is NewsGuard credibility ratings.

    This helped me understand that not all sources online are equally reliable, and that misinformation can spread more easily when people don’t check credibility. It also connects back to algorithms because content that gets more attention can be pushed more, even if it’s not accurate.


    Conclusion:

    After going through this process step-by-step, I can confidently say the claim is supported, but it is more complex than I originally thought. At first, I assumed misinformation spreads mostly because people don’t think critically, but using lateral reading and actually checking sources showed me that it is also about how information is structured and delivered. Algorithms and social media platforms are constantly prioritizing content that gets attention, which means misleading information can spread just as fast, or even faster, than accurate information.

    What really stood out to me is how passive most of this process is. People are not always actively searching for information—they are scrolling, and content is being pushed to them. That changes everything, because it means what we see is not always something we choose, but something selected for us. Once I realized that, it made a lot more sense why misinformation spreads so easily.

    Going through this honestly shifted how I look at information online. I’m not just thinking about whether something is true anymore—I’m thinking about why it showed up for me in the first place, what source it is coming from, and whether I should trust it. Using steps like lateral reading and checking credibility tools actually makes a difference, because it forces me to slow down and verify instead of just accepting what I see…

  • How Easy Is It to Manipulate Information Online? A Look at RumorGuard and Harmony Square!

    Misinformation has become one of the biggest challenges online because false or misleading information can spread quickly and influence how people think, react, and make decisions. For my blog post, I explored RumorGuard and Harmony Square to better understand how misinformation works, how it spreads, and how interactive tools can help people recognize it. Both tools showed me that misinformation is not always obvious, which is why media literacy is so important!!!

    Understanding Misinformation Through RumorGuard

    RumorGuard is a tool created by the News Literacy Project that helps people recognize and understand misinformation by breaking down real examples of viral posts and claims online. When I used the site, I noticed that it does not just say whether something is true or false. Instead, it explains why a claim is misleading by walking through specific factors like the source, the evidence, the context, and whether the information is authentic. The homepage shows current examples of misinformation, and when you click on one, it gives a clear explanation along with a “takeaway” that helps you apply what you learned to other situations! Honestly, I found this really helpful because there are times I see things online and just stop paying attention since I am not sure what is real or not. RumorGuard makes it easier to stay informed without feeling overwhelmed because it shows you how to actually break things down instead of just guessing.

    What stood out to me is that RumorGuard teaches you how to think, not just what to believe. It uses a system of credibility factors—like checking if the source is reliable, if there is actual evidence, and if the information is taken out of context—to help you evaluate information on your own. This made it feel more practical because I could see how I would use those same steps when scrolling through social media. Overall, I would say RumorGuard is effective because it turns misinformation into something you can actually break down and understand instead of just reacting to it!


    RumorGuard

    These examples from RumorGuard show how a viral claim—like the one about Iran releasing a list of U.S. target cities—is analyzed step by step. The platform starts by presenting the claim, then breaks it down by labeling it as false and explaining why it is misleading, and finally applies credibility factors like source, evidence, and context to show how the information should be evaluated!!!

    From using it myself, I can see how RumorGuard is effective because it does not just tell you something is false—it shows you how to figure that out on your own. That made it easier for me to understand how misinformation works and actually apply those same steps when I see things online!!!


    Learning How Misinformation Spreads Through Harmony Square

    From actually playing Harmony Square, I can see how effective it is in teaching participants about misinformation because it puts you in a position where you are actively creating it instead of just learning about it. For example, when I was rating posts about vaccines, the World Cup, and Bitcoin, I had to decide what looked believable and what didn’t, which made me realize how easy it is to confuse people with simple wording or strong claims. The game shows that misinformation spreads more when it sounds emotional, urgent, or controversial, even if it is not accurate.

    Harmony Square is an interactive game that teaches how misinformation spreads by putting you in the role of someone creating it. At the beginning, I was introduced as a “Chief Disinformation Officer,” which already showed that the goal of the game is to intentionally create chaos and influence people. As I played, I had to interact with posts and decide how believable they were, and later I was given choices to escalate situations, create conflict, and gain attention from others in the community.

    What stood out to me is how the game uses realistic scenarios, like posts about vaccines, the World Cup, or Bitcoin, to show how people react to information online. I noticed that when posts were more emotional, dramatic, or controversial, they were more likely to get attention and reactions. The game also showed how quickly things can turn into arguments or “flame wars” when misinformation spreads, especially when I chose to escalate situations instead of resolving them.

    From actually playing it, I learned that misinformation is not always about being completely false, but about how information is presented to influence people’s emotions and reactions. Seeing my follower count increase as I created more division made it clear how effective these tactics can be in real life. Overall, Harmony Square is effective because it lets you experience how misinformation works instead of just reading about it, which made it easier for me to understand and remember!

    What made this effective for me is that I wasn’t just reading about misinformation—I was interacting with it and seeing how quickly it can influence decisions. That made it more memorable and realistic compared to just learning definitions. Because of this, I think Harmony Square is very effective in teaching participants how misinformation works and how easily people can be influenced if they don’t stop and think critically.


    The sources below provide additional support and context for how misinformation spreads and why tools like these are important.

    https://www.politifact.com

    These sources back up what the tools showed by explaining how misinformation spreads and how people can actually check if something is real or misleading! Super cooooool!

    HAPPY EASTER SUNDAY, April 5, 2026!

  • 24-Hour Media Diary: How Misinformation Is Framed in Everyday Media

    Blog post #1: 24-Hour Media Diary
                            

    To complete this assignment, I tracked my media consumption over 24 hours to really see what I am actually taking in throughout the day. Honestly, I did not expect how much of it would bring such intense emotion… especially once I slowed down and paid attention! We have been talking about misinformation and how it shows up in everyday media, and I can already tell my mindset has shifted.

    What stood out to me through this assignment is that misinformation is not always about something being completely false. A lot of what we see can have some level of truth, but it is often incomplete, emotionally framed, or repeated in a way that shapes how we interpret it, or simply bends toward what is most comfortable to someone. That is what made me realize how easily information—accurate or not—blends into my daily routine without me even noticing.


    Media Log

    5:00 a.m:

    Wake up and immediately check my email (I know, I know… I even wake up before my alarm, EVERY DAY), then open TikTok (like clockwork at this point). Right away I’m seeing videos about Erika Kirk—specifically commentary about her being a widow who is publicly grieving while also stepping into leadership of a multi-billion-dollar company and going on tour. What stood out immediately is how polarizing the content is. Some creators frame her as strong and resilient, while others frame the exact same situation as inappropriate or suspicious.

    A much more muted tweet from Erika Kirk.

    That is where it started to click for me—this is not just information, it is framing. Both sides are using emotional language to push a narrative, but neither is really showing full context or reliable sources. It made me pause and think, am I forming an opinion based on facts, or just how it’s being presented to me? This is exactly what we have talked about in class—misinformation does not always mean something is false, but that it is shaped in a way that influences how we interpret it.

    If I wanted to verify what I’m seeing, I would need to step outside of TikTok and look at credible news sources or original reporting about her situation. I would also compare how different outlets are framing the same story, because the differences in tone alone show how easily perception can be shaped!!!


    5:25–6:00 a.m:

    Go on my usual 5K run—this has become part of my everyday routine!!! It has had a really strong impact on my mental health—especially because even when everything else feels chaotic, this routine keeps me grounded.

    6:00 a.m:

    Still scrolling TikTok. Now my feed shifts into Christian-based content about the end times, including references to texts like the Book of Enoch and connections to current world events. This is content I personally connect with—I was raised Christian, and I do believe in these teachings and the meaning behind them. When I hear these interpretations, they do not feel random to me—they feel grounded in something real, especially when they align with what I already understand and believe.

    At the same time, stepping back with what I have learned in this course, I can recognize that this type of content is still being presented as fact in a very strong and emotional way, even though it is based on interpretation and belief. That does not make it false to me, but it does mean that not everyone will view it the same way or accept it as verified information. So now I find myself thinking in two ways at once—this is something I believe to be true, but also how is this being presented, and how might it influence someone who does not share that background? That awareness is something I did not have before, and it helps me better separate personal belief from how information is communicated in media.

    Even though I personally believe in this content, I recognize that verifying it would look different than verifying news or data. It would involve understanding the historical context of the text, how different scholars or denominations interpret it, and recognizing that belief-based content is often presented as truth without the same type of evidence expected in other areas.


    8:00 a.m:

    Start seeing multiple “First Amendment audit” videos. These usually show someone standing in a public space, often wearing a face covering and holding recording equipment near entrances or buildings. I understand why people feel uncomfortable in that situation—that reaction is real—but at the same time, these videos highlight a lack of awareness about public recording laws. From what I understand, if someone is in a public space, they generally have the right to record.

    What stands out to me is how quickly people react emotionally instead of stepping back or removing themselves from the situation. Instead of recognizing their option to disengage and maintain their own privacy, people often confront the person filming, which escalates the situation. Watching this, I find myself thinking, you might feel uncomfortable, but that does not automatically mean something illegal is happening.

    At the same time, applying what I have learned in this course, I can also see how these videos are presented in a very one-sided way. The person filming is usually shown as calm and knowledgeable, while others are shown reacting emotionally or appearing uninformed. That creates a clear narrative for the viewer, even though we are likely not seeing the full context of what led up to the interaction. So while the legal aspect may be real, the way the situation is framed can still influence how we interpret who is “right,” which is where media presentation becomes important.

    To actually verify what is happening in these situations, I would need to look at the laws themselves or credible legal sources, not just rely on edited clips. Watching full, uncut footage or reading about similar cases would give a more accurate understanding than a short, emotionally charged video.


    10:00 a.m:

    Now, I am seeing health-related content—what to eat, what to avoid, what’s considered “toxic” versus “healthy.” This is one area where I actually do believe there is truth behind some of what is being said. Taking care of your body, eating well, and being intentional about what you consume matters. That part is not the issue.

    What stands out to me, though, is how extreme and one-sided the messaging becomes. A lot of these videos push the idea that you should only eat a certain way strictly for function or optimization, almost removing the idea of enjoyment entirely. It becomes less about balance and more about control—like food is only for performance, not for experience or moderation.

    Watching this, I find myself thinking, yes, your body needs proper fuel—but it also needs balance depending on your lifestyle, movement, and daily demands. Not everything can be reduced to “good” or “bad” the way these videos make it seem. That kind of framing feels misleading because it simplifies something that is actually more complex.

    From what I’ve learned in this course, this is another example of how information can be presented in a way that feels authoritative but lacks full context. There is truth in the idea of health, but the way it is delivered—without nuance, evidence, or individual variation—can easily influence people to adopt extreme views…

    To fact-check this type of information, I would need to compare it with research-based sources like medical websites or nutritional guidelines, rather than relying on influencers. A lot of these claims sound convincing, but without evidence, context, or common sense, they can easily be misleading…


    1:00 p.m:

    Scrolling again through TikTok, YouTube, and Instagram, I notice the same types of content repeating—commentary, emotional reactions, health advice. What really stood out here is how repetition starts to make things feel more believable. Even if I questioned something earlier, seeing it multiple times makes it feel more familiar and “true.”

    This directly connects to what we have learned about how misinformation spreads—not just through false claims, but through repetition and exposure. It is honestly kind of concerning to realize how subtle that effect is in real time…

    I started realizing that if someone is constantly consuming the same type of content—whether it is about lifestyle, health, politics, or even beliefs—they can begin to build their identity around information that may not even be fully accurate. It is not just about being “right” or “wrong” in one video, but how all of that content adds up over time and influences the way a person sees the world.

    That made me reflect on my own feed. It is not just random videos—it’s patterns. And those patterns can shape opinions, behaviors, and even how someone presents themselves. That is honestly a little concerning, because it shows how easily misinformation or incomplete information can go from something you watch to something you actually believe and live by.

    Verifying information here would mean not just accepting repeated claims, but actively checking whether those claims are supported by reliable sources. Just because something is seen multiple times does not make it accurate, so it would require intentional effort to confirm what is actually true.


    7:00 p.m:

    End the day back on TikTok. Same themes, same topics, just different creators, human or artificial, saying similar things. At this point, it is clear this is not random—it is the algorithm feeding me content based on what I’ve already engaged with.

    This creates a cycle where certain perspectives—especially more emotional or extreme ones—get amplified. That is what makes the content feel so convincing, even when it lacks evidence. It also reinforces polarization, because I am mostly seeing similar types of viewpoints rather than a balanced range of information.

    To break out of that cycle and verify information, I would need to intentionally seek out different perspectives and sources, rather than relying on what the algorithm shows me. Otherwise, I am only seeing one VERSION of reality.


    Evaluating Credibility

    Throughout the day, I noticed that a lot of the content I consumed was not completely false, but it could sometimes be incomplete and emotionally driven. The Erika Kirk content showed how the same situation can be framed in completely different ways depending on the narrative being pushed. The religious content blurred the line between belief and ACTUAL fact. The First Amendment videos showed how editing can shape perception, and the health content demonstrated how easily unsupported claims can be presented as truth!

    To verify these types of information, I would need to look at multiple sources, especially credible and research-based ones; we all should! This includes checking for evidence, comparing perspectives, and identifying whether the content is informational or persuasive. Truly, common sense and intuition will serve me and everyone else well.


    Reflection

    What stood out to me most from this assignment is not just how much media I consume, but how consistently it follows patterns. It is not random—I am being shown similar types of content, similar tones, and similar perspectives throughout the day.

    The biggest shift for me is realizing that misinformation is not always about something being completely false. A lot of what I saw had some level of truth to it, but it was incomplete, emotionally framed, or presented in a way that pushes a specific interpretation. That is what actually makes it convincing.

    I also became more aware of how quickly people—including myself—form reactions. Whether it is the First Amendment videos or opinion-based content, people respond emotionally first and question later, if at all. That reaction alone can shape how a situation is understood, even before all the facts are clear.

    Another thing I noticed is how repetition builds belief over time. Seeing the same type of content throughout the day makes it feel more familiar and more valid, even if nothing new is being proven. That is where I can see how someone could slowly build their identity or worldview around information that has not really been verified.

    Because of this, I do not feel like I can just passively consume media anymore. I catch myself stopping and thinking about where something is coming from, what might be missing, and how it is being presented. (I will QUICKLY skip a video when my INTUITION sounds the alarm!) ha ha ha. That awareness is probably the biggest takeaway for me—it is not about rejecting everything, but about not accepting everything at face value either!!!

  • Not Just Another Blog

    Honestly, after going through my courses at ASU, I’ve started noticing stuff in media that I never really paid attention to before. Like the way people are portrayed and how certain ideas just keep showing up over and over.

    So this is kinda where I slow down and actually think about that.

    Some posts will be for assignments, while others may connect course concepts to real-world examples, current events, and media I encounter in daily life. Either way, it’s me just trying to make sense of what I’m learning!

    Once you start seeing these patterns, you really can’t unsee them.

    It’s not meant to be perfect or anything…