In today’s world, algorithms shape so much of what we see and do. When I watch videos or listen to music, the content that pops up next isn’t random—it’s based on recommendation algorithms tracking what I’ve previously enjoyed. This is incredibly convenient, often leading me to find new favorite tracks or binge-worthy videos without needing to search endlessly. But recommendation systems go beyond simple entertainment.
Take Megan, who fell down a YouTube rabbit hole of “mommy bloggers” that eventually led her to the LDS Church. A few clicks and a few sermons later, she had requested a Book of Mormon and gotten baptized. Meanwhile, my friend Jake found himself on a different path. After he returned from a mission for the same church, YouTube began recommending him videos made by ex-members. These opened up a new perspective for him, and he eventually left the church. Both Megan and Jake described feeling “hopeful and free,” yet they reached completely opposite conclusions.
The key point is that both journeys were heavily influenced by YouTube’s algorithm. Whether you see these outcomes as good or bad depends on your perspective. For YouTube, both count as a success: more engagement, more ad views. But these outcomes also raise harder questions: Are they genuinely good? What values are being promoted through these algorithms?
Sometimes recommendation engines don’t just point us to the next song or video. They shape opinions, reinforce beliefs, and can lead us to make significant life decisions. Whether for better or worse, algorithms have a profound impact on our choices. So, as these systems get smarter and more persuasive, it’s critical to question not only how they work but also whose interests they serve.
Politics faces the same problem. Take political polarization as an example. Research shows that conservative and liberal Facebook users not only consume different types of news but also frequent vastly different sources. AI plays a key role here by ensuring that users see content from groups, pages, and accounts that resonate with their ideological stance. This process may seem innocuous, but its implications are significant.
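To make that mechanism concrete, here is a toy Python sketch, not any platform’s real system: every post and number is invented, and ideological “closeness” simply stands in for a learned engagement model. It shows how ranking by predicted engagement yields a far narrower feed than random sampling would.

```python
import random
import statistics

random.seed(0)

# Each post gets an ideological "lean" in [-1, 1]
# (say negative = liberal, positive = conservative).
posts = [random.uniform(-1, 1) for _ in range(1000)]

def personalized_feed(posts, user_lean, k=20):
    """Stand-in for a learned engagement model: posts closer to the
    user's inferred lean are predicted to get more engagement."""
    return sorted(posts, key=lambda lean: abs(lean - user_lean))[:k]

user_lean = 0.6  # a hypothetical user with a moderate-right history
engagement_feed = personalized_feed(posts, user_lean)
random_feed = random.sample(posts, 20)

# Spread (std. dev.) of leans in each feed: lower = more homogeneous.
print(f"random feed spread:       {statistics.pstdev(random_feed):.2f}")
print(f"personalized feed spread: {statistics.pstdev(engagement_feed):.2f}")
```

Even in this crude model, the personalized feed clusters tightly around the user’s existing lean, which is exactly the politically homogeneous diet described next.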
By continuously feeding users politically homogeneous content, AI contributes to what scholars call “affective polarization”—a situation where political opponents are seen not just as wrong, but as enemies. This kind of polarization fuels distrust, alienation, and, in extreme cases, even violence. For example, the algorithms that prioritized engagement during the 2020 U.S. election didn’t just show people election news—they showed highly partisan, often misleading content that reinforced pre-existing views. This, in turn, fanned the flames of distrust in the democratic process, contributing to events like the January 6th Capitol riot.
Some tech companies claim that algorithms are “neutral” because they are data-driven and simply reflect user preferences. However, algorithms are far from morally neutral. As the article points out, recommendation algorithms are editorial in nature—developers make ethical choices when deciding which objectives to prioritize and what data to include in training. For instance, when YouTube shifted from optimizing for clicks to optimizing for watch time, it inadvertently encouraged the creation of longer, more engaging videos. This shift demonstrates what can be called design determinism, where the design of a system decides what kinds of content or outcomes are promoted.
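As a hypothetical illustration of that editorial choice (the titles and numbers below are invented, not YouTube’s data or ranking code), the same candidate pool ranked under two different objectives promotes very different content:

```python
# Hypothetical candidates: (title, predicted click-through rate,
# predicted minutes watched if clicked). All values are invented.
candidates = [
    ("shocking 30-second clip",  0.12, 0.4),
    ("clickbait thumbnail short", 0.10, 0.7),
    ("45-minute video essay",    0.03, 22.0),
    ("long tutorial, part 1",    0.02, 18.0),
]

# Objective 1: maximize clicks.
by_clicks = sorted(candidates, key=lambda c: c[1], reverse=True)

# Objective 2: maximize expected watch time (CTR x minutes if clicked).
by_watch_time = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

print("top picks, click objective:     ", [c[0] for c in by_clicks[:2]])
print("top picks, watch-time objective:", [c[0] for c in by_watch_time[:2]])
```

The videos themselves never changed; only the objective did, yet the ranking now rewards long, immersive content over clickbait. That is the sense in which a developer’s choice of objective is an editorial, and therefore ethical, decision.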
In virtue ethics, philosophers stress the importance of the motivations and values behind actions. If an algorithm is designed to maximize user engagement without considering its psychological or societal impact, this raises ethical concerns. The algorithm’s aim is not the well-being of the user but rather the company’s commercial interests. Is that a moral “good”?
A really interesting blog! It really is fascinating how much algorithms can affect our lives. On one side they are convenient, helping us find new music or videos; on the other, they can shape our beliefs and decisions, like in Megan’s and Jake’s stories. And the part about political polarization really makes you think: are these algorithms helping us, or are they making things worse?
I don’t think AI is neutral. It mirrors and amplifies our own choices based on the patterns we have already shown it. But I also don’t think AI is entirely to blame... it is up to us to recognize its influence and take responsibility for our actions. Algorithms may push us in certain directions, but the final decision still rests with the individual.