10 Altman podcast takeaways

In mid-June 2025, Altman gave a comprehensive “state of the art” interview with an OpenAI YouTuber; ChatGPT o3 analyzed it and distilled the top ten takeaways.

It only seems like there are 25 Altman clones running around expounding visions for where AI is going. It’s actually just one guy, who confesses in the interview that he is “Extremely Time Strapped”…and that his reading is often limited to Deep Research reports. What does that foreshadow for the rest of us as far as time usage?

Being “Extremely Time Strapped” doesn’t seem to slow down his “envisioning engine”…he’s got a new version coming out every few weeks now. Perhaps aided by those Deep Research reports? Here are two VR versions of Altman’s vision from the above YouTube interview. Note there are obvious errors introduced by ChatGPT o3 which proved persistent. Maybe make a game out of seeing how many errors you can spot?


Getting the spelling correct for all the text proved almost impossible, despite many iterations. The 1–10 sequence of ideas also created challenges: numbers in the sequence are missing and/or duplicated, and the total number of ideas doesn’t match ten.

Perhaps ChatGPT o5, or whatever it is called, will not have the same problems with images and text.


One question opens a whole can of worms, and might in the end more or less destroy the “very personal memory” aspect of ChatGPT products: “what about ads?”

Ads are “big stacks of cash” on the table, but they risk distorting the relationship between the user and the chatbot, which currently rests on a presumption of “safety” and some form of “just between us”. At least for OpenAI.

That approach has been a component of the ChatGPT user base mushrooming into the hundreds of millions worldwide, while other big players such as Google struggle to sign up users at anything like the same scale.

Our past experience of sharing personal information with big tech players has often proved regrettable as time went by. Corporate capitalist foundations generally channel online services into some form of exploitation of the user: algorithms that seduce continual presence on the site, and the collection of personal data and choices.

To say nothing of insidious ads placed in the experience.

Apple has worked to distinguish itself as a “safe” place for personal data, with very mixed results, but it seems to be one of the few companies actually producing more than PR on the privacy front. Apple has projected a future where AI processing happens on the device and on highly secure private Apple servers dedicated to just that task. Notably, that is still a projected future, not a realized one.

One might reasonably question if OpenAI is going to be “different” in the longer run.