
We are suddenly trying to fully rationalize our experience in order to understand AI's implications, but we have no actual means of fully rationalizing human experience. (This is the essence of what B. Evans stated in a recent post.)

We don’t understand our own brain/mind duality, and we don’t understand its consciousness component either. That leaves us in a very tentative position when we try to decide how AI differs from human processes, and in what ways that difference might be seen as good or bad.

This author asked ChatGPT to use DALL-E to create an image of a universal model. It placed AI at the center!

[Image: universal model]

IOW, human civilizations have been muddling along since the earliest days with various attempts at a model that explains everything. We used myths of all sorts, and many different religions along the way. The world still offers varied ways of modeling existence; it is something humans seem to find essential, even if frequently beyond our reach.

Today we might look to science for a universal model, but physics is in a quandary over what is real at different scales, and biology still has a long way to go to explain consciousness.

We might look to divinely inspired texts. Most of those are long in the tooth, created a few millennia before our current conundrums… but more problematic is that we have no consensus on which religious model is the “true one” or the “best one”, a disagreement clearly visible today in various violent clashes.

We have different forms of government that include both ideals and pragmatism, but no actual model that works for all cases.

In the democracies of the world, we are still “making it up as we go along, hoping for the best.”

Clearly things have changed since the founding principles and MO were set down. We now have hundreds of thousands of citizens per representative, which doesn’t seem very “democratic”. That is just one of the structural problems inherited from earlier approaches, as yet unchanged to match current realities. IOW, we don’t have a fully rational system of government either.

And now we want a one-size-fits-all model to explain everything, so that we can compare AI to how humans know and to what humans know. Some would suggest models are always going to be provisional and contingent, not derived from a Platonic world of ideals; if so, no model would be “the Model” that explains life, the universe, and everything.

Having a universal model is certainly an existential need/desire, but that doesn’t mean one is available, or even just over the horizon. Nonetheless, we may still find very useful insights as we attempt to model human intelligence and machine intelligence sans “the ultimate model”.

(As always, PSA posts present the views of the author of the specific post, and not necessarily those of the organization, BOD, or other members.)