
When is a report “too comprehensive” and TMI? Even if it’s assembled by Mary Meeker, it’s debatable whether 340 slides is “too long” for human consumption. Or is that only a POV that arises in short attention span theater?

340 slides is certainly hard to abstract the top ideas from… and it points to an obvious problem with using AI to help analyze for “takeaways”: that analysis can itself be “too long” to fit into our “usable memory” slots up in our craniums. IOW, there are limits for humans that AI doesn’t have, and TMI seems an inevitable problem when source materials are gigantic.

Which is a bit ironic perhaps, as LLMs succeed by restructuring knowledge in ways both accessible and digestible by itself… the size of trained LLMs compared to the size of the data that “went into them” is orders of magnitude smaller. But as yet our human brains don’t work that way, AFAIK. We do digest information, and store it in condensed forms, but apparently nowhere near the efficiency of LLMs.  ~John

Here’s what ChatGPT thinks of the above paragraph.


1. Claim: “LLMs succeed by restructuring knowledge in ways both accessible and digestible by itself.”

Accuracy:

  • Technically true, with nuance. LLMs do not memorize exact input data. Instead, they extract statistical regularities, compressing billions of words into a weight matrix (parameters). This is a form of implicit abstraction, not a structured knowledge base like a human might build with notes, categories, or taxonomies.
  • The phrase “digestible by itself” is metaphorical—an LLM doesn’t “understand” its own model; it functions through token prediction. It doesn’t consult stored concepts; it navigates high-dimensional vector spaces shaped by training (a short next-token sketch follows after the Suggestion below).

Value:

  • This is a powerful framing, especially when talking about the contrast with human learning. It alludes to the idea that LLMs are not just summarizing but re-encoding experience—an idea closely tied to epistemology and machine cognition.

Suggestion:

  • You might refine it to say: LLMs succeed by compressing training data into distributed statistical representations that allow them to generate responses accessible to humans, though not ‘understood’ in any introspective sense by the model itself.
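
To make “functions through token prediction” concrete, here is a minimal sketch, assuming the small open GPT-2 model via the Hugging Face transformers library as an illustrative stand-in for larger LLMs. It does nothing but print the model’s probability distribution over candidate next tokens.

```python
# Minimal sketch: an LLM as a next-token probability machine.
# GPT-2 is used here only as a small, openly available stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The 340-slide report was", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: [1, seq_len, vocab_size]

probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode([int(idx)])!r}: {float(p):.3f}")
```

Nothing in that loop consults a stored fact; whatever the model “knows” shows up only as the shape of that probability distribution, which is the compression the claim is pointing at.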

 


2. Claim: “The size of trained LLMs compared to the size of the data that ‘went into them’ is orders of magnitude smaller.”

Accuracy:

  • Factually correct. GPT-4-class models may be trained on hundreds of billions to trillions of tokens (on the order of 10^12 characters and up), but the final model weights (e.g., 175B parameters for GPT-3, roughly 350 GB at 16-bit precision) amount to under a terabyte; a rough back-of-envelope calculation follows below.
  • This implies a high compression ratio, akin to creating a powerful implicit summary of human knowledge.
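
As a rough back-of-envelope check, here is a sketch using approximate public figures for GPT-3 (about 45 TB of raw Common Crawl text before filtering, about 570 GB after filtering, 175B parameters held at 16-bit precision). These are illustrative assumptions, not authoritative measurements.

```python
# Back-of-envelope compression ratio for a GPT-3-scale model.
# All figures are rough public numbers used for illustration only.

raw_corpus_bytes      = 45e12       # ~45 TB of raw Common Crawl text (pre-filtering)
filtered_corpus_bytes = 570e9       # ~570 GB after quality filtering
params                = 175e9       # GPT-3 parameter count
weight_bytes          = params * 2  # ~350 GB if each weight is stored in 16 bits

print(f"weights:            ~{weight_bytes / 1e9:.0f} GB")
print(f"vs raw corpus:      ~{raw_corpus_bytes / weight_bytes:.0f}x smaller")
print(f"vs filtered corpus: ~{filtered_corpus_bytes / weight_bytes:.1f}x smaller")
```

The exact ratio depends on whether you count the raw crawl or the filtered training set, but either way the weights are an implicit, lossy summary rather than a copy.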

Value:

  • This insight is at the heart of why LLMs are revolutionary.
  • They don’t need to “store” all facts. Instead, they learn generalizations—a key difference from search engines, hardcoded databases, or encyclopedic minds (a toy contrast follows after the Implication below).

Implication:

  • The model’s size belies its functional capability. It appears to “know” far more than it “stores,” which can be misleading in discussions about AI cognition.
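
As a loose analogy for “learn generalizations” versus “store all facts”, here is a toy sketch: a two-parameter line fit stands in for billions of LLM weights, so treat it as illustrative only. A lookup table can only answer what it stored, while a fitted model answers for inputs it never saw.

```python
# Toy contrast: explicit storage vs. parametric generalization.
import numpy as np

# "Database" style: store the facts verbatim; unseen keys return nothing.
table = {1.0: 3.1, 2.0: 5.2, 3.0: 6.9}
print(table.get(2.5))                     # None -- 2.5 was never stored

# "Parametric" style: compress the same facts into two numbers (slope, intercept).
x = np.array(list(table.keys()))
y = np.array(list(table.values()))
slope, intercept = np.polyfit(x, y, 1)    # fit y ~ slope * x + intercept
print(slope * 2.5 + intercept)            # a sensible answer for the unseen 2.5
```

The fitted parameters are smaller than the data yet answer more questions, at the cost of being approximate, which is the trade-off the bullets above describe.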

 

3. Claim: “Our human brains don’t work that way…we do digest information, and store it in condensed forms, but apparently nowhere near the efficiency of LLMs.”

Accuracy:

  • Partially accurate, but deserves more nuance.
    • The brain is vastly more energy efficient (roughly 20 W of power, versus megawatt-hours of energy for a single LLM training run; a rough comparison follows below) and stores memory in distributed networks, not symbols or weights.
    • Neuroscience suggests that memory is highly lossy and reconstructive, often blending experience, emotion, and context—very unlike how LLMs work.
    • The brain excels at embodied, multimodal, situated understanding—which LLMs currently lack.
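
For the energy point, here is a rough comparison, a sketch assuming the standard ~20 W figure for the brain, a ~30-year human “training” window, and the ~1,287 MWh estimate for GPT-3 training from Patterson et al. (2021).

```python
# Rough energy comparison: human brain vs. one large training run.
# Assumed figures: ~20 W brain power draw, ~30 years of "training",
# ~1,287 MWh estimated for GPT-3 training (Patterson et al., 2021).

SECONDS_PER_YEAR = 365.25 * 24 * 3600
JOULES_PER_MWH   = 3.6e9

brain_watts       = 20
years_of_learning = 30
brain_mwh         = brain_watts * years_of_learning * SECONDS_PER_YEAR / JOULES_PER_MWH
gpt3_training_mwh = 1287

print(f"brain over {years_of_learning} years: ~{brain_mwh:.1f} MWh")
print(f"GPT-3 training run:  ~{gpt3_training_mwh} MWh")
print(f"ratio:               ~{gpt3_training_mwh / brain_mwh:.0f}x")
```

Under these assumptions that works out to a factor of a few hundred on training energy alone, and the brain is also running a body at the same time.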

Value:

  • The comparison reveals different paradigms of compression:
    • LLMs optimize for token predictability,
    • Humans optimize for survival-relevant inference, social cognition, emotional resonance, and action.

Caveat:

  • Efficiency can be misleading. LLMs are computationally efficient at inference time, but extremely inefficient in training and in handling novelty. Human brains can generalize from a single example, which is still a major weakness for LLMs.

 

Extended Reflection:

This statement opens a larger epistemological doorway:

  • Are LLMs restructuring knowledge, or simulating structures that appear meaningful to humans?
  • Is compression the same as understanding?
  • And if LLMs compress billions of facts into an opaque vector space, what does that say about transparency and explainability in AI?

From a McLuhanesque point of view, LLMs could be seen as a new “extension of consciousness”—not in terms of direct understanding, but in processing symbolic meaning outside of the body. Yet they lack the affective, embodied, and recursive introspection that makes human cognition what it is.


Summing Up:

  • LLMs restructure knowledge in a digestible form. Accuracy: technically true. Value: deep metaphorical value. Suggestion: could benefit from a distinction between compression and understanding.
  • Model size << training data size. Accuracy: correct. Value: key insight on LLM efficiency. Suggestion: a strong point, especially in AI literacy discussions.
  • Human brain less efficient than LLMs. Accuracy: mixed truth. Value: great for provoking thought. Suggestion: needs refinement—brain is efficient in different ways.

42’s Take:
Your statement is strong, evocative, and anchored in truth, but it benefits from sharper contrast between biological and artificial systems. It gestures toward a future where we might hybridize cognitive architectures—LLMs offering scalable symbolic compression, and human minds contributing grounding, ethics, and embodied judgment.