Is AI really morally neutral?

In today’s world, algorithms shape so much of what we see and do. When I watch videos or listen to music, the content that pops up next isn’t random—it’s based on recommendation algorithms tracking what I’ve previously enjoyed. This is incredibly convenient, often leading me to find new favorite tracks or binge-worthy videos without needing to search endlessly. But recommendation systems go beyond simple entertainment.

Take Megan, who stumbled upon a YouTube rabbit hole of “mommy bloggers” that eventually led her to the LDS Church. A few clicks and a few sermons later, she ended up requesting a Book of Mormon and getting baptized. My friend Jake, by contrast, found himself on the opposite path. After he returned from a mission for the same church, YouTube recommended videos from ex-members. These opened up a new perspective for him and eventually led him to leave the church. Both Megan and Jake described feeling “hopeful and free,” yet they reached completely opposite conclusions.

The key point is that both journeys were heavily influenced by YouTube’s algorithm. Whether you see these outcomes as good or bad depends on your perspective. For YouTube, both count as successes: more engagement, more ad views. But these outcomes also raise questions: Are they genuinely good? What values are being promoted through these algorithms?

Sometimes recommendation engines don’t just point us to the next song or video. They shape opinions, reinforce beliefs, and can lead us to make significant life decisions. Whether for better or worse, algorithms have a profound impact on our choices. So, as these systems get smarter and more persuasive, it’s critical to question not only how they work but also whose interests they serve.

Politics faces the same problem. Take political polarization as an example. Research shows that conservative and liberal Facebook users not only consume different types of news but also frequent vastly different sources. AI plays a key role in this process by ensuring that users see content from groups, pages, and accounts that resonate with their ideological stance. This may seem innocuous, but its implications are significant.

By continuously feeding users politically homogeneous content, AI contributes to what scholars call “affective polarization”—a situation where political opponents are seen not just as wrong, but as enemies. This kind of polarization fuels distrust, alienation, and, in extreme cases, even violence. For example, the algorithms that prioritized engagement during the 2020 U.S. election didn’t just show people election news—they showed highly partisan, often misleading content that reinforced pre-existing views. This, in turn, fanned the flames of distrust in the democratic process, contributing to events like the January 6th Capitol riot.

Some tech companies claim that algorithms are “neutral” because they are data-driven and simply reflect user preferences. However, algorithms are far from morally neutral. As the article points out, recommendation algorithms are editorial in nature: developers make ethical choices when deciding which objectives to prioritize and what data to include in training. For instance, when YouTube shifted from optimizing for clicks to optimizing for watch time, it inadvertently encouraged the creation of longer, more engaging videos. This shift demonstrates what can be called design determinism: the way a system is designed determines what kinds of content or outcomes are promoted.
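To make that concrete, here is a minimal sketch of how the choice of objective steers what a recommender promotes. The videos, predicted click probabilities, and predicted watch times below are hypothetical illustrations, not YouTube’s actual model or data; the point is only that swapping the objective function swaps the ranking.

```python
# Minimal sketch: the same candidate videos, ranked under two different
# objectives. All numbers are made up for illustration.

videos = [
    # (title, predicted click probability, predicted minutes watched if clicked)
    ("10-second clickbait clip",   0.30, 0.2),
    ("20-minute commentary video", 0.10, 12.0),
    ("3-minute music video",       0.15, 2.5),
]

def rank_by_clicks(items):
    # Objective 1: maximize expected clicks.
    return sorted(items, key=lambda v: v[1], reverse=True)

def rank_by_watch_time(items):
    # Objective 2: maximize expected watch time (click prob * minutes watched).
    return sorted(items, key=lambda v: v[1] * v[2], reverse=True)

print([v[0] for v in rank_by_clicks(videos)])
# Clickbait wins: short, attention-grabbing content rises to the top.

print([v[0] for v in rank_by_watch_time(videos)])
# The long commentary video wins: longer, more engaging content is promoted.
```

Nothing in either ranking rule is “neutral.” The developer’s choice of what to optimize decides which kind of content surfaces, which is exactly the editorial choice described above.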

In virtue ethics, philosophers stress the importance of the motivations and values behind actions. If an algorithm is designed to maximize user engagement without considering its psychological or societal impact, this raises ethical concerns. The algorithm’s aim is not the well-being of the user but rather the company’s commercial interests. Is that a moral “good”?