As educational and healthcare institutions work to find positive ways to incorporate AI tools into their practices, they face an ongoing challenge: understanding how these tools actually work and in what contexts. That challenge is reflected in posts here that explore the inner workings and pitfalls of AI tools.
For example, the downsides of social media have grown and shifted over time, as the forms and structures of that medium changed under the constraints and incentives of a narrow, concentrated, oligarchic ownership structure.
The development of powerful new technologies and innovations can outpace the rate at which regulatory bodies can act and relevant institutions can adjust.
Part of the response is making sure to explore the necessary human components of any AI-supported educational or healthcare initiative, as the previous post here does.
Another important part is showing how those structural impacts occur, in the hope that policies and practices will be developed to minimize them.
We hope to provide understanding that reveals the behind-the-scenes channelling of AI’s great potential into merely serving corporate goals. As the example of social media’s development shows, the price of failed vigilance can be steep.
“Optimization” means a system repeatedly tweaks actions to improve a score (a metric). When the score is a proxy (attention, clicks, profit, quarterly growth), the system can produce harmful or soul-shaping outcomes without anyone intending evil. That’s why “it doesn’t need villains; it just needs objectives.” The objective function quietly becomes the “god” of the system, and humans adapt their inner lives to it.
Think of optimization as “score-chasing with feedback.”
So the machine optimizes what is measurable, not what is valuable.
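To make this concrete, here is a minimal sketch in Python, a toy illustration rather than any real platform’s code. A system hill-climbs on a measurable proxy (“engagement”) while an unmeasured quantity (“wellbeing”) quietly degrades. Every name and number below is invented for illustration.

```python
import random

random.seed(0)

# Toy model: each "content style" has a measurable engagement score (the proxy)
# and an unmeasured effect on wellbeing (the actual value). More provocative
# content engages more but harms more -- a deliberately simple assumption.
def engagement(provocation):
    return provocation + random.gauss(0, 0.1)   # what the system can measure

def wellbeing(provocation):
    return 1.0 - provocation ** 2               # what nobody measures

provocation = 0.1  # current "content style" knob
for step in range(50):
    # Try a small random tweak; keep it if the PROXY score improves.
    candidate = min(1.0, max(0.0, provocation + random.uniform(-0.05, 0.05)))
    if engagement(candidate) > engagement(provocation):
        provocation = candidate

print(f"final provocation level: {provocation:.2f}")
print(f"measured engagement:     {engagement(provocation):.2f}")
print(f"unmeasured wellbeing:    {wellbeing(provocation):.2f}")
# Typical result: engagement climbs toward its maximum while wellbeing falls,
# even though no line of this code "wants" to harm anyone.
```

Notice that the loop never consults `wellbeing` at all; nothing in the feedback path can correct for a quantity the system does not measure.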
Optimization stories go like this:
1. Someone sets a goal that sounds reasonable (increase growth, maximize engagement, reduce churn, raise profit).
2. Thousands of people (and algorithms) make small decisions that improve the goal.
3. Over time, the system evolves behaviors that feel predatory, manipulative, or dehumanizing.
4. And nobody had to “want” that outcome for it to happen.
It’s like erosion: you don’t need a malicious river. You just need gravity and water and time.
Incentives + feedback loops + scale = outcomes that can look like intentional cruelty.
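A second minimal sketch, again with purely hypothetical numbers, shows the arithmetic behind that equation: thousands of individually small, locally reasonable decisions, each amplified slightly by a feedback loop, compound into a transformation nobody chose.

```python
# Each of many teams (and algorithms) ships a tiny change that improves the
# target metric slightly; a feedback loop (more engagement -> more data ->
# better targeting) amplifies each gain. No single change is sinister.
metric = 1.0              # normalized engagement metric
per_change_gain = 0.002   # each decision improves the metric by 0.2% (assumed)
feedback_boost = 1.0005   # loop amplification per change (assumed, tiny)

for decision in range(3000):   # thousands of small decisions over time
    metric *= (1 + per_change_gain) * feedback_boost

print(f"metric after 3000 small decisions: {metric:.1f}x baseline")
# Roughly (1.002 * 1.0005)^3000 ≈ e^(3000 * 0.0025) ≈ e^7.5 ≈ 1800x.
# The system ends up unrecognizable, yet every individual step looked reasonable.
```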
