
Marshall McLuhan often talked about the changes in our “sense ratios” that occur when media change: for example, the shift from a heavy reliance on what we “heard” in preliterate stories, myths, and tribal knowledge around the campfire, to the focus on the eye making out symbols on a scroll or page with literacy. McLuhan was trying to help us understand that when media change, they also change our perceptive approach to the world.

Today, we are experiencing an increased focus on “aural reality,” or our sense of hearing, as we spend increasing amounts of time with podcasts and aural “books,” and as many webpages now offer an aural version of an article’s text, giving us a choice as to which sense ratio we prefer. This is consistent with what McLuhan predicted for electronic media, but what are we to make of our increasingly aural interface with the cloud through “digital personal assistants” such as Siri, Alexa, and the like?

To understand the forces being marshaled to pull us away from screens and push us toward voices, you have to know something about the psychology of the voice. For one thing, voices create intimacy.

We are already (mostly) adjusted to “speaking with computers” on the phone, where the context of the “conversation” is generally very limited, lacks depth, and is often annoying because our computer phone partner doesn’t really understand the context. But who are we when we are conversing with an AI cloud version of a human, one that can already sound very human indeed and can “do” many helpful and useful things for us? What does it mean to enter into what some people already “feel” is bordering on, or even equivalent to, a conversation in which one of the participants is increasingly hard to distinguish from a human?

There’s more going on here than meets the eye…to make a pun in the McLuhan vein. Yes, we are exchanging the sense of touch on a keyboard for an aural interaction, and that has implications. One of them is that speech is an extremely rich medium: there is the actual language, word choice, and syntax we are easily aware of, but there is a whole additional level of meaning in how that linguistic material is presented and “heard.” Much of the latter happens unconsciously, and it forms a deep connection between speaker and listener. What “happens” when that deep connection is with an AI entity?

Language connects people and creates a social fabric that we “live within,” even when we consciously process ideas “in our head”…we are still connected with our tribal group, our cohort of speakers of that language. Language, in other words, is one huge part of being human, and of “who we are” in a “more than just individual” reality. In essence, we know ourselves through the language community we live in. “No man is an island” applies here.

But the huge takeaway from admitting AI entities into our language cohort, which is a mutually defining relationship, is that we are now Borg. We don’t have to wait for the machines to “take over” at the “singularity” moment when machines learn how to learn and can iterate a bazillion times faster than our brains. That moment of becoming melded with digital versions of human consciousness is already with us, and who we are is now being changed by this reality.

Two articles in the November 2018 issue of the Atlantic magazine (which is owned by Steve Jobs’s widow, btw) discuss the above and present us with more relevant questions. (see part 21)