Innovations Breaking Ethics
Humanity is inherently progressive.
As we grow older, we gain wisdom and knowledge, and this collective accumulation drives our society forward. Every era in history has produced transformative breakthroughs, cultural and technological revolutions that have touched everyone's life, directly or indirectly.
In many cases, society has recognized these innovations as the defining achievements of their time, often without much debate. In modern times, the list includes the automobile, the helicopter, and the nuclear bomb: milestones that reshaped the world.
Today, in the 21st century, the latest “top dog” is the rise of large language models (LLMs). You may know them by their commercial names—ChatGPT, DeepSeek, and many others.
The benefits of such innovations are almost hardwired into our minds: they offer immense advantages in our daily lives and will continue to do so.
When we think about these groundbreaking achievements today, we tend to view them mostly in a positive light. Cars and airplanes have transformed transportation. Nuclear power generates vast amounts of energy. Large language models make it easier than ever to access and process information.
But hold on—nuclear power for energy? When I hear the word “nuclear,” the first image that comes to mind is a mushroom cloud.
In fact, it’s not just nuclear technology—virtually every major innovation has an ethical shadow.
Take cars, for example. Governments and corporations have been accused of seizing private land to build roads, dismantling public transit systems, and aggressively promoting leaded gasoline despite early evidence of its harmful effects on children's health.
In the case of aviation, much of the airplane's early development was driven by military demand, and some of its first large-scale uses were reconnaissance and bombing in the First World War.
And as for nuclear weapons: there is hardly any need to explain why they stand as one of the most ethically fraught technologies ever created. If I had to sum it up, I'd say the United States introduced the world to nuclear weapons at Hiroshima and Nagasaki, in a way that made their devastating potential impossible to ignore.
But the less obvious, and perhaps more complicated, question is why large language models also belong on the list of ethically controversial innovations.
That is the discussion I want to explore in this blog.
When you search on Google for something simple, like how to make cold coffee, you’re shown a list of websites where people have shared their recipes. You click through, and what you see—the story behind the recipe, the steps, maybe a video—is presented exactly as the creator intended. In return for sharing their knowledge, they might earn a little money through ads, affiliate links, or simply gain recognition for their work.
That’s always been a fair, reciprocal relationship: the creator provides value, and the audience shows up to engage, appreciate, or support them.
But with the rise of large language models, this relationship is being quietly dismantled.
Now, you can ask an AI the same question, and it will give you a clear, polished answer without ever mentioning who originally created the content. The model itself doesn’t know how to make cold coffee—it learned by collecting and analyzing thousands of recipes written by real people. It effectively skips over the creators entirely, extracting their knowledge through massive web crawling.
These models depend almost entirely on the work of others, yet they offer nothing back—not credit, not traffic, not compensation.
It’s a bit like walking into a library, photocopying everyone’s cookbooks to make your own guide, and then handing out free copies without ever telling anyone where the recipes came from.
There’s a saying: the apple doesn’t fall far from the tree.
In the case of large language models, especially those developed in the United States, that old proverb holds some truth. ChatGPT has become the most popular example of this technology. Although it often strives to present balanced perspectives and to include multiple sides of an argument, you can still find subtle biases favoring the U.S., particularly when it comes to matters involving American political interests.
For instance, you may notice that actions in Iran or Vietnam are frequently described with neutral terms like “intervention,” rather than more direct words like “invasion.” Part of this may be unintentional: because the model was trained predominantly on English-language data—much of it from American media and publishing—it naturally absorbs the framing and language that reflect U.S. viewpoints.
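As a toy illustration of how that absorption works (this is not any real model, just a few made-up corpus lines and a word counter), a simple next-word predictor will reproduce whatever framing dominates its training text:

```python
# A toy illustration of corpus frequencies becoming model preferences. The
# "training snippets" below are invented for the example; the point is only
# that the majority framing wins in a frequency-based predictor.
from collections import Counter

training_snippets = [
    "the U.S. intervention in Vietnam",
    "the U.S. intervention in Vietnam",
    "the U.S. intervention in Vietnam",
    "the U.S. invasion of Vietnam",
]

def next_word_counts(prefix: str, corpus: list[str]) -> Counter:
    """Count which word follows the given prefix across the corpus."""
    counts = Counter()
    for line in corpus:
        if line.startswith(prefix):
            counts[line[len(prefix):].split()[0]] += 1
    return counts

print(next_word_counts("the U.S. ", training_snippets))
# Counter({'intervention': 3, 'invasion': 1})
```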
But some people have speculated that the bias goes deeper. They point to the sudden resignations of several high-profile figures at OpenAI, the company behind ChatGPT, even as their work was just beginning to produce "golden eggs." Among the notable departures were Ilya Sutskever, a co-founder and chief scientist who left the company in 2024, and Jan Leike, who co-led the Superalignment team focused on safety and ethical alignment of AI and resigned around the same time.
Some observers have speculated that these departures were more than ordinary organizational shifts. They suggest that internal disagreements—possibly over pressure to align the technology with U.S. strategic interests—played a role in driving away some of the early idealists behind the project.
While these claims remain speculative and are difficult to prove conclusively, they highlight a broader concern: when transformative technologies are developed under the influence of powerful governments and corporations, the question of who ultimately controls them—and to what ends—becomes impossible to ignore.
Then there is DeepSeek, a large language model developed in China. If you ask it about certain controversial issues related to China, you will often see an even stricter approach to information control.
For example, when prompted about sensitive topics—like the 1989 Tiananmen Square protests, the status of Taiwan, or the situation in Xinjiang—DeepSeek typically responds with statements like “I’m sorry, but this question is beyond my scope” or “I cannot provide an answer on this topic.”
In other cases, it does reply, but users have reported that the answers are framed in ways that align closely with official government narratives.
In other words, DeepSeek plays it safe not just by refusing certain questions outright, but also by presenting only the official perspective when it does respond.
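To show what "playing it safe" can look like mechanically, here is a purely hypothetical sketch of a keyword blocklist bolted onto a chat system. DeepSeek's actual safeguards are not public; the topic list and function names below are my own assumptions.

```python
# Purely hypothetical sketch of a topic-refusal filter. The blocklist and the
# canned refusal are illustrative assumptions, not DeepSeek's real safeguards.
BLOCKED_TOPICS = {"tiananmen", "taiwan independence", "xinjiang"}

def answer(prompt: str, model_reply: str) -> str:
    """Return the model's reply unless the prompt touches a blocked topic."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I'm sorry, but this question is beyond my scope."
    return model_reply
```

Even a filter this crude is enough to make whole subjects disappear from the conversation, which is why refusal patterns are such a visible fingerprint of a model's origin.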
This contrast highlights a crucial point: every large language model carries the imprint of the environment it was created in.
These examples show that no AI system is neutral. The way models answer—or refuse to answer—reveals whose boundaries and priorities they are ultimately built to respect.