Below is information about the coming AI- and chat-empowered search services that Google and Microsoft are planning. There will be a lot more to learn about this in the coming months.
One note re Axios's approach to the ongoing story: they keep saying chat AI tends to make things up. That's an interesting phrase to use, since of course the replies are "made up" through a maze of algorithms interacting with ginormous databases of mostly facts. Some interpretation is necessarily involved, as with any kind of curation; what is reliably correct and what is not are challenges always with us.
But Axios means "make things up" in the sense of fiction rather than fact. While chatbots do make errors, and those errors can be hard to spot, they don't "tend to make things up." They actually tend to supply very useful information in about a minute or less, it says here. Perhaps Axios finds this new tech a threat to its own information-curation function?
•••••••••••••••••••••••••••••••••
For example, history is a field full of facts, but different historians may cite different facts in their treatises. So how do we currently navigate that challenge? We take such facts "with a grain of salt," and we try to put them into a context that reveals some correlation to reliability.
IOW, we note that the facts cited may not be ones everyone agrees on, and we proceed on that basis. Then there's the interpretation of facts, which is also a challenge. Even crucial matters may be in dispute among historians; one sees battles on Wikipedia over "what actually happened" during crucial historical events. We also have to accept that history is incomplete, and the farther back we go, the more incomplete our "records" become.
What is true in history will not be definitively sorted out by chatbots/AI. Perhaps knowledge in the scientific realm can be more readily agreed upon, but there's still plenty of room for disputes and disagreements. One difference with science is that errors are subject to empirical evidence that accumulates over time, and that can change beliefs and theories, and perhaps facts too, in a sense.
So knowing the truth consistently and reliably is problematic, and while AI might be able to find the "best facts" as it improves, that doesn't mean we are suddenly going to fully understand the human reality we live in, or discover the truth underlying the entire universe.
•••••••••••••••••••••••••••
From Axios:
Google Chat AI and Search:
Details: Google is laying out three AI-related projects in a blog post from CEO Sundar Pichai.
- Bard, the conversational assistant based on Google’s LaMDA large language model, is starting limited external testing.
- The company is offering a preview of how it soon plans to integrate LaMDA into search results, including using the system to help offer a narrative response to queries that don’t have one clear answer.
- Google says it is developing APIs that will let others plug into its large language models, starting with LaMDA itself (a hedged sketch of what such an API call might look like follows this list).
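Pichai's post gives no technical details of those APIs, so treat the following as a loose illustration only: a minimal Python sketch of what "plugging into" a hosted large language model generally looks like, i.e., an HTTP request carrying a prompt and a few generation settings. The endpoint URL, field names, and response shape here are hypothetical placeholders, not a real Google API.

```python
import requests

# Hypothetical endpoint -- Google had not published a LaMDA API
# specification at the time of this announcement.
API_URL = "https://example.googleapis.com/v1/models/lamda:generateText"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def generate_text(prompt: str, temperature: float = 0.7, max_tokens: int = 256) -> str:
    """Send a prompt to a hosted LLM endpoint and return its reply.

    The payload shape (prompt, temperature, max_tokens) is typical of
    text-generation APIs in general; the fields Google ships may differ.
    """
    response = requests.post(
        API_URL,
        params={"key": API_KEY},
        json={
            "prompt": prompt,
            "temperature": temperature,  # higher = more varied wording
            "max_tokens": max_tokens,    # cap on the length of the reply
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # hypothetical response field

if __name__ == "__main__":
    print(generate_text("Summarize the James Webb telescope's first findings."))
```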
•••••••••••••••••••••••••••••••••••••••
Microsoft ChatGPT/Search:
Details:
- Microsoft says Bing is using OpenAI’s technology — a newer version than the one powering ChatGPT — to act as a “copilot” and has also revamped its core search engine using other AI technologies.
- The revamped search engine will be able to help answer complex queries, such as planning a detailed trip itinerary (see the sketch after this list).
- The new Bing is in a limited preview now and will be available to millions of people in the coming weeks, Microsoft said.
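Microsoft has not exposed Bing's upgraded model directly, so as a stand-in illustration of "an LLM answering a complex query," here is a short sketch using OpenAI's publicly documented completions API (the pre-1.0 openai Python package). The model choice and prompt are just examples, not how Bing itself works under the hood.

```python
import openai  # pip install "openai<1.0" -- the interface below is the pre-1.0 one

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# text-davinci-003 is used here purely as a stand-in; Bing's newer
# OpenAI model is not available through the public API.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Plan a detailed 3-day trip itinerary for Kyoto in spring: "
        "sights, food, and rough daily timing."
    ),
    max_tokens=500,   # allow room for a multi-day itinerary
    temperature=0.7,  # some creative variation in suggestions
)

print(response.choices[0].text.strip())
```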