

Study Modes: The New AI Battleground

By: Alex Sarlin


We sat down with Dave Messer (Google Guided Learning PM) and Drew Bent (Anthropic Education Lead) to go deeper on their learning mode strategies. Some quotes from our conversations are included in this article. Listen to our full conversations in upcoming episodes of the Edtech Insiders Podcast.


Over the past few months, the world’s leading AI companies have all unveiled innovations designed to transform learning. The convergence of AI and education is creating opportunities we’ve only dreamed about, and it’s happening all at once.

These are exciting—dare I say, thrilling—times for those of us who are bullish about AI innovations for teaching and learning. But it’s also head-spinning, and the ramifications for the rest of the edtech sector are still hard to parse. As these innovations roll out, three key trends are emerging:

Emerging Trends

 

Education Is a Key Big Tech Battleground

The world’s most valuable companies are treating education like the next great platform opportunity, and for good reason. Google, Microsoft, OpenAI, and Anthropic aren’t just building educational tools; they’re positioning themselves as the infrastructure for how an entire generation learns to think alongside AI.

The top AI companies are in fierce competition with one another for talent, market share, and partnerships, as well as for mindshare among the next generation of AI-skilled workers (current university and high school students). The fact that education is one of the prime battlefields on which they are fighting (alongside coding, general productivity, and agentic automation) is exciting, and for many, a bit scary. As this battle unfolds, edtech point solutions are finding themselves consistently nudged toward irrelevance as the frontier models expand their territory.

The LMS Has Become a Core Entry Point

Learning Management Systems are proving to be the golden gateway into educational institutions. Google’s deep integration of LearnLM, Gemini, and Notebooks into Google Classroom leverages its existing dominance in K-12, while OpenAI’s partnership with Instructure could instantly give ChatGPT access to millions of higher education students through Canvas. Why are LMS integrations so powerful? They combine massive scale with rich data: every assignment, discussion, and grade becomes context that could supercharge AI personalization. More importantly, they embed AI tools directly into educators’ existing workflows rather than asking them to adopt entirely new platforms.

Educator Buy-In and Training Remain Essential

The success of AI in education still hinges on one critical factor: whether teachers embrace these tools or resist them. That’s why we’re seeing simultaneous pushes from edtech companies, nonprofits, and even teachers’ unions to provide AI training and support.

 

The Current Battle: Study Modes

 

Right now, this competition is playing out most visibly in what we’re calling the “Battle of the Study Modes.” OpenAI, Anthropic, and Google have all launched dedicated learning experiences within their flagship products, and this represents a fundamental shift in how AI companies think about education.

This is a big deal for a few reasons:

Frontier Model Providers Are Directly Pursuing Education Consumers

The frontier model providers now have clear visibility into how often people come to their core applications either to explore a curiosity or to pursue formal educational outcomes (help with an assignment, passing a class, studying for an exam). As Drew Bent of Anthropic explains: “We released our Learning Mode this past spring after we saw a lot of the data from our research at the quantitative level in terms of studying over half a million conversations of Claude. We saw the need for the ability for Claude to not just answer questions, but also to ask you questions and to have Socratic dialogue and employ all the best practices that teachers and tutors do.”

Building an application layer that lets users turn LLMs into tutors, teachers, or learning companions at the click of a button is a natural response to that need.

Tools Designed for Motivated Consumer Use

While these tools are educationally oriented, they aren’t meant to be used exclusively in education settings. Do these moves have the potential to squeeze out third-party edtech companies whose core value proposition is making LLMs more educationally sound? Yes and no. Some student AI edtech use cases—creating flashcards, practice quizzing, turning class material into study guides—are directly covered by the learning modes. Any edtech company that focuses solely on this functionality now has a major new competitor.

But most use cases for educators, as well as most student use cases that involve actual implementation in educational environments—school-supported tutoring, mental health and wellness, early literacy, tools for neurodiverse and special needs learners—are still well outside the wheelhouse of these study modes.

Interestingly, the push for these tools came from students themselves. As Bent notes: “It was the specific moment I can remember the conversation we had with a focus group of college students. They were the ones who I think first even coined the term ‘learning mode.’ They were worried, and they are worried, about brain rot. That’s the way they frame it.”

What We’re Learning from Testing

At Edtech Insiders, we’ve been playing with the various study modes and noticing a few early patterns. There are some obvious strengths: they tend to strike a friendly but authoritative tone, give the user plenty of options and autonomy in guiding the conversation, use metaphors liberally to illustrate concepts, shift into practice quizzing and Socratic questioning at the drop of a hat, and avoid simply handing the user answers.

Dave Messer from Google’s Guided Learning team emphasizes this philosophy: “Learning isn’t just about information. The simple example is: you could either have an encyclopedia or you could have a conversation with a helpful tutor. For Gemini, it’s not done when you just get the information. It’s done when you’ve engaged and you’ve really developed an understanding.”

On the flip side, they are far too quick to respond before asking the ‘why’ behind a question, they often ask the user more than one question at a time, they are not yet as adept at incorporating visuals and videos into answers as they should (and surely will) be, and they all still talk too darn much! In our light testing, a short question typically draws a response of at least 150-200 words, which at a typical speaking pace of roughly 150 words per minute would take more than a full minute to say aloud. They act like a brainy know-it-all grad student who is so instantly excited about any topic that she has to hold herself back from launching into a long speech.

Here are a few comparisons of OpenAI ChatGPT’s Study Mode (GPT-5), Claude Learning Project (Sonnet 4), and Gemini Guided Learning (Flash 2.5 Pro) responding to the same prompt:

[Screenshots: responses from OpenAI ChatGPT’s Study Mode (GPT-5), Claude Learning Project (Sonnet 4), and Gemini Guided Learning (Flash 2.5 Pro) to the same prompt]

Final Prediction: Multimodal Learning Could Be the Next Frontier

 

The next evolution of this study mode battle will likely be won by whoever masters multimodal learning experiences. Right now, text-based conversations dominate these platforms, but that’s already changing.

Google is leading the charge with rich multimedia integration. As Dave Messer describes the approach: “We want to manage cognitive load. You can see actually in Guided Learning, we are including images and YouTube videos; these are ways in which, instead of just reading text, which could be cognitively taxing, by surfacing images sometimes that is the most helpful thing.” This isn’t just about AI-generated content—Google is incorporating YouTube videos, licensed educational images, and visual aids that mimic what great teachers do naturally.

OpenAI’s GPT-5 brings updated multimodal capabilities that can ingest and process text, images, audio, and video as inputs, interpret charts and diagrams, and create contextually relevant visuals. The question is whether they’ll integrate Sora’s video generation or other advanced capabilities to create truly immersive learning experiences.

Anthropic has focused more on voice interactions and coding capabilities through Claude Code, but even these tools demonstrate the potential for interactive, multimedia learning through games and simulations.

The companies that can move beyond text-heavy conversations to create rich, visual, interactive learning environments—complete with video, voice, and even VR/XR capabilities—will likely capture the next wave of student engagement. The study mode battle is just getting started, and we can’t wait to see what comes next.