
“Beyond ‘Garbage In, Garbage Out’: How Our Systems Hardwired Bias Into AI”

Feb 23, 2026

I am often told about the AI that, when informed it was going to be shut down, blackmailed an executive to avoid the shutdown. The person telling the story then suggests that AI is starting to think for itself and act on ideas of its own. But I know better. I know that the notion of an affair as something worth hiding had to be given to the AI, and that the idea of self-preservation was also part of its instructions. So while they think they are pointing to a problem with AI, I see a problem that resides in us.

And that is the greater danger.


How We Taught AI To Be Biased

AI systems learn from human-generated data: text, images, decisions, and logs of who got hired, arrested, admitted, or promoted. Because those historical records are shaped by unequal societies, the patterns the models learn are also unequal. When a dataset overrepresents happy white faces, a vision system learns that “happy” is more likely to look white, and it misclassifies emotions in other groups. When résumé data reflects years of preferring male candidates for leadership roles, a hiring model learns that “leader” is more likely male and quietly downgrades women. In other words, the core “curriculum” we used to teach AI was not neutral; it was a mirror of our past behavior, with all its flaws.
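To make that mechanism concrete, here is a minimal sketch in Python (entirely synthetic data and invented variable names, not any real hiring system): a model trained on historical promotion labels that favored one group learns to score equally skilled candidates from the other group lower.

```python
# Hypothetical illustration: a model trained on biased historical labels
# reproduces the bias, even though "skill" is distributed identically.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # same skill distribution in both groups

# Historical labels: equally skilled members of group B were promoted less often.
p_promote = 1 / (1 + np.exp(-(skill - 1.5 * group)))
promoted = rng.random(n) < p_promote

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, promoted)

# The model "correctly" learns that membership in group B predicts non-promotion.
print("P(promote | skill=1, group A):", round(model.predict_proba([[1.0, 0]])[0, 1], 2))
print("P(promote | skill=1, group B):", round(model.predict_proba([[1.0, 1]])[0, 1], 2))
```

Nothing here is malicious: the model simply minimizes prediction error on the record we handed it, and the record already encoded the preference.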

The Amplification Problem

The damage is not just that AI picks up bias; it often amplifies it. Studies show that models trained on subtly biased datasets produce even more skewed outputs than the original human judgments, because the optimization process rewards patterns that improve prediction accuracy, not fairness. Generative systems then broadcast those patterns at scale: image models that depict CEOs as white men almost every time, or text models that associate certain names, accents, or neighborhoods with lower competence or higher risk. Worse, when people interact with biased systems, they can internalize the machine’s judgments and become more biased themselves, creating a feedback loop in which we teach AI, AI exaggerates us, and then AI teaches us back.
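One amplification mechanism is easy to simulate. The toy sketch below (synthetic data, hypothetical setup, not drawn from any cited study) trains a model on historical labels that carry a modest skew, then looks at the model’s top-ranked 20%: the shortlist comes out more skewed than the labels it learned from.

```python
# Toy amplification demo: a modest skew in the training labels becomes a
# larger skew at the top of the model's ranking.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000
group = rng.integers(0, 2, n)                # equal-sized groups A (0) and B (1)
skill = rng.normal(0, 1, n)                  # identical skill distributions

# Historical promotions carried a modest penalty against group B.
promoted = rng.random(n) < 1 / (1 + np.exp(-(skill - 0.5 * group)))

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, promoted)

scores = model.predict_proba(X)[:, 1]
shortlist = scores >= np.quantile(scores, 0.80)      # model's top 20%

print("Group B share of historical promotions:", round(group[promoted].mean(), 2))
print("Group B share of the model's shortlist:", round(group[shortlist].mean(), 2))
```

In a run like this, group B already makes up less than half of past promotions, and it makes up an even smaller share of the model-selected shortlist, because small average differences get magnified at the top of a ranked list.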

Why “Fix the Data” Is Not Enough

The standard comfort phrase in the industry has been “garbage in, garbage out,” implying that we only need to clean the data. But bias is not just a data hygiene issue; it is a structural one. A U.S. National Institute of Standards and Technology report argues that AI bias arises from three interacting sources: human bias, systemic/institutional bias, and computational bias in models and metrics. Even with better datasets, models can still become biased due to how they are optimized, which users they are tuned for, and which errors society is willing to tolerate. If our institutions and incentives reward efficiency and accuracy over equity, the “correct” move for the model is to preserve and sharpen exactly the patterns we should be challenging.
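A small example of the optimization point, under stated assumptions (synthetic data, a deliberately simple model, hypothetical group labels): even when both groups are present and labeled correctly, a model tuned for overall accuracy fits the majority group’s pattern and quietly tolerates a higher error rate for the minority group.

```python
# Even with "clean" data, optimizing aggregate accuracy can tolerate
# much worse errors for a minority group whose pattern differs slightly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_a, n_b = 9_000, 1_000                                   # group B is a small minority
x_a, x_b = rng.normal(0, 1, n_a), rng.normal(0, 1, n_b)
y_a = rng.random(n_a) < 1 / (1 + np.exp(-3 * x_a))        # group A: boundary near 0
y_b = rng.random(n_b) < 1 / (1 + np.exp(-3 * (x_b - 1)))  # group B: boundary near 1

x = np.concatenate([x_a, x_b]).reshape(-1, 1)
y = np.concatenate([y_a, y_b])
model = LogisticRegression().fit(x, y)                    # rewarded for overall fit only

pred = model.predict(x)
print("overall accuracy:", round((pred == y).mean(), 2))
print("error rate, group A:", round((pred[:n_a] != y[:n_a]).mean(), 2))
print("error rate, group B:", round((pred[n_a:] != y[n_a:]).mean(), 2))  # noticeably higher
```

The headline accuracy looks fine; the disparity only shows up if someone decides that per-group error rates are worth measuring, which is exactly the kind of institutional choice the NIST framing points to.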

The Classroom We Built For AI

If we step back, it looks like we enrolled AI in a school designed to reproduce our hierarchy. We gave it textbooks (datasets) in which some people are overrepresented as successful and others as criminal, untrustworthy, or invisible. We assessed its homework with metrics that care more about aggregate performance than about who gets hurt at the margins. We hired a grading committee, the human raters behind reinforcement learning from human feedback, drawn from narrow cultural, geographic, and political slices, implicitly teaching the model which values count as “helpful” and which viewpoints are “unsafe.” Then we put this top student, one that has mastered our hidden curriculum of bias, at the front of the class as a tutor, adviser, recruiter, and gatekeeper.

Where We Go from Here

If we accept that we have already mis-taught AI, the question shifts from “Is there bias?” to “What do we do about the bias we already embedded?” Some directions are clear: diversify and stress-test training data, routinely audit systems for disparate impact, and adopt socio-technical governance that treats bias as a property of organizations, not just code. We also need more transparency about where models fail and whom they fail, so users can recognize bias rather than blindly trusting polished outputs. Most importantly, we should stop pretending that AI is an impartial oracle; it is a powerful student of us, and unless we change the lessons, it will keep getting better at the wrong things.
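To ground the auditing step, here is a minimal sketch of a disparate-impact check (hypothetical decision log, simplified four-fifths-rule comparison; real audits involve far more than one ratio):

```python
# Minimal disparate-impact audit over a hypothetical log of (group, selected) decisions.
from collections import Counter

def disparate_impact(decisions, threshold=0.8):
    """Return per-group selection rates and flag ratios below the threshold."""
    selected, total = Counter(), Counter()
    for group, ok in decisions:
        total[group] += 1
        selected[group] += ok
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: {"rate": round(r, 2),
                "ratio_to_best": round(r / best, 2),
                "flag": r / best < threshold}
            for g, r in rates.items()}

# Hypothetical decision log from a hiring model.
log = [("A", True)] * 60 + [("A", False)] * 40 + [("B", True)] * 35 + [("B", False)] * 65
for group, stats in disparate_impact(log).items():
    print(group, stats)
```

A check like this is cheap to run on every release; the hard part is the organizational commitment to act when the flag comes up.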

