AI has various limitations that can be a challenge to overcome. It’s still very expensive for AI companies to train their biggest and baddest models, and providing model access and compute time on their servers adds further cost.
Currently OpenAI has instituted “infinite memory” for “Plus Users” and above, which allows the model to draw on your previous chats when composing responses. That’s a powerful “specializing and tailoring” capability: each user’s history with the AI becomes part of the present conversation, and it builds up over time.
When discussing the capabilities of a particular LLM, one may hear about how many tokens its context window holds, and other variables, but one doesn’t often hear about the challenge of the size of the Sandbox.
This variable is aptly named, as it’s more or less the set-aside environment of compute, storage, and execution time that the model can use to do particular work on behalf of answering your query. This becomes especially important now that some models use tools, such as the PowerPoint query in the previous post.
The response in that PowerPoint process was severely limited by the size of the sandbox once I drilled down further into the details I wanted the end result to contain, such as design elements, quizzes, and so on.
When I queried, the response would say, “the sandbox timed out.” IOW, when the process became sufficiently sizable, the OpenAI model ChatGPT o3 just stopped in its tracks. Often the model would respond with workarounds that involved breaking the document and process up into smaller pieces to be assembled back together after the fact.
Given the limitations of PowerPoint software, this was a problematic solution at best, and often a non-starter. Alternatively, ChatGPT o3 wanted me to run Python code on my computer to process the problematic part of the document…to more or less offload the work to me and my computer.
However, many of us don’t do Python, and running Python on my Mac would require installing a number of “helping apps” and then knowing what to do with them. Accomplishing that seems unlikely given the unfamiliarity involved, or to quote the vernacular: “Not Happening.”
So we’re hoping Sandboxes get bigger for times when the LLM needs to work with a large file like PowerPoint can produce. Supposedly one can’t get a bigger Sandbox by buying a more expensive OpenAI subscription…it’s the same across all subscription tiers. For now, anyway.