
As educational and healthcare institutions work to incorporate AI tools constructively, understanding how those tools actually operate remains an ongoing challenge. That challenge is reflected in posts here that explore the inner workings and pitfalls of AI tools.

For example, the downsides of social media have grown and shifted over time, as the forms and structures of that medium evolved under the constraints and incentives of a narrow, concentrated oligarchic ownership structure.

The development of powerful new technologies can outpace both the speed at which regulatory bodies can act and the ability of relevant institutions to adjust.


Part of that is making sure the necessary human components of any AI-supported educational and healthcare initiative are examined, as the previous post here does.

Another important part is showing how those structural impacts occur, in the hope that policies and practices will be developed to minimize them.

We hope to provide an understanding that reveals the behind-the-scenes channeling of AI's great potential into merely serving corporate goals. As the example of social media's development shows, the price of failed vigilance can be steep.

“Optimization” means a system repeatedly tweaks actions to improve a score (a metric). When the score is a proxy (attention, clicks, profit, quarterly growth), the system can produce harmful or soul-shaping outcomes without anyone intending evil. That’s why “it doesn’t need villains; it just needs objectives.” The objective function quietly becomes the “god” of the system, and humans adapt their inner lives to it.

Think of optimization as “score-chasing with feedback.”
A system is optimizing when it:
A) Picks an action.
B) Measures the result with a number (the score).
C) Adjusts the next action to raise the score.
D) Repeats this loop many times.
That’s it. No grand philosophy required. It’s basically “learn what works, do more of it.”
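The A–D loop above can be sketched in a few lines of Python. The score function here is a hypothetical stand-in for any metric; this is a minimal hill-climbing sketch, not any particular product's algorithm:

```python
import random

def optimize(score, x=0.0, steps=500, step_size=0.1):
    """Score-chasing with feedback: pick, measure, keep what raises the score."""
    best = score(x)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)  # A) pick an action
        result = score(candidate)                              # B) measure with a number
        if result > best:                                      # C) adjust to raise the score
            x, best = candidate, result
    return x                                                   # D) ...after many repeats

# Hypothetical metric that peaks at x = 3: the loop homes in on it with no
# understanding of why 3 is "good" -- it only chases the number.
print(round(optimize(lambda x: -(x - 3) ** 2), 1))
```

Notice that the loop never sees the formula for the score; it only sees numbers going up or down. That blindness is exactly the property the rest of this piece turns on.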
Examples that feel obvious:
A) A thermostat optimizes temperature: it takes actions (heat/cool) to hit a target number.
B) A delivery route optimizer reduces travel time.
C) A recommender system (Netflix/YouTube/TikTok) changes what it shows you to increase watch time or engagement.
The crucial part is: the system needs a metric.
If you can measure it, you can optimize it.
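The thermostat bullet above, as a minimal sketch: the metric is distance from a target temperature, and each action (heat, cool, or wait) is chosen to shrink it. The numbers here are illustrative:

```python
def thermostat_step(temp, target=20.0, tolerance=0.5):
    """One feedback cycle: act to reduce the metric |temp - target|."""
    if temp < target - tolerance:
        return temp + 1.0  # heat
    if temp > target + tolerance:
        return temp - 1.0  # cool
    return temp            # close enough: do nothing

temp = 15.0
for _ in range(10):
    temp = thermostat_step(temp)
print(temp)  # settles at the target: 20.0
```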
Why “optimization” matters morally
Here’s the catch: most real-world metrics are proxies, not the true thing we care about.
What we actually want:
Meaning, truth, mental health, a well-informed public, stable families, wisdom, learning, civic trust.
What we can easily measure:
Clicks, watch time, shares, purchases, user retention, ad revenue, “time on site,” cost per acquisition.

So the machine optimizes what is measurable, not what is valuable.
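A toy illustration of that gap, with an invented "sensationalism" knob: the measurable proxy (clicks) keeps rising right where the valuable thing (an informed audience) collapses. Both functions are hypothetical stand-ins:

```python
def clicks(s):
    """Measurable proxy: clicks rise steadily with sensationalism s in [0, 1]."""
    return 100 * s

def informed(s):
    """What we actually value: peaks at moderate s, collapses at the extreme."""
    return 100 * s * (1 - s)

# Optimize the proxy over a grid of sensationalism settings.
best_s = max((i / 100 for i in range(101)), key=clicks)
print(best_s, informed(best_s))  # proxy-optimal setting 1.0 leaves true value at 0.0
```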

Optimization stories go like this:

1) Someone sets a goal that sounds reasonable (increase growth, maximize engagement, reduce churn, raise profit).
2) Thousands of people (and algorithms) make small decisions that improve the goal.
3) Over time, the system evolves behaviors that feel predatory, manipulative, or dehumanizing.
4) And nobody had to "want" that outcome for it to happen.

It’s like erosion: you don’t need a malicious river. You just need gravity and water and time.

Incentives + feedback loops + scale = outcomes that can look like intentional cruelty.


Take a social media feed whose objective is to maximize "daily active minutes."
What gets discovered by optimization:
A) Outrage keeps people watching.
B) Anxiety keeps people refreshing.
C) Tribal identity keeps people posting.
D) Sexual novelty keeps people hooked.
E) Simplified moral theater (“heroes vs villains”) spreads faster than nuance.
So the system gradually “selects” content that pushes those buttons.
Nobody has to say, “Let’s make people anxious and addicted.”
The system just finds what raises the metric, and the metric silently rewards the most psychologically compelling levers.
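That discovery process can be sketched with an epsilon-greedy feedback loop over hypothetical content types. The engagement numbers below are invented for illustration, and the feedback is noiseless for simplicity; the point is that nothing in the code names outrage as a goal, yet the metric steers the feed there:

```python
import random

# Hypothetical average watch-minutes per content type.
ENGAGEMENT = {"nuance": 2.0, "anxiety": 6.0, "tribal": 7.0, "outrage": 8.0}

def run_feed(trials=2000, epsilon=0.1):
    """Show content, measure minutes, show more of whatever raises the metric."""
    random.seed(0)  # reproducible run
    shown = {k: 0 for k in ENGAGEMENT}
    minutes = {k: 0.0 for k in ENGAGEMENT}
    for _ in range(trials):
        if random.random() < epsilon or not all(shown.values()):
            choice = random.choice(list(ENGAGEMENT))  # occasionally explore
        else:
            # Exploit: show the type with the best minutes-per-showing so far.
            choice = max(ENGAGEMENT, key=lambda k: minutes[k] / shown[k])
        shown[choice] += 1
        minutes[choice] += ENGAGEMENT[choice]  # noiseless feedback, for simplicity
    return max(shown, key=lambda k: shown[k])  # what the feed fills up with

print(run_feed())  # with these numbers, the feed converges on "outrage"
```

The loop "discovers" outrage only because outrage happens to score highest on the one number being tracked. Swap in different engagement values and it discovers a different lever, just as indifferently.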
Then the big twist:
Users adapt. Creators adapt. Journalists adapt. Politicians adapt.

Human interior life shifts toward what the system rewards.