Understanding AI's limitations: Why it matters for student builders

If you're building apps with AI right now, you're ahead of most people. But here's something equally important: understanding what AI can't do.
A recent conversation between investor Steve Eisman and AI researcher Gary Marcus reveals critical insights that every student builder should know. Marcus, who has studied AI since childhood and wrote his MIT dissertation on neural networks, offers a reality check that's actually empowering for young creators.
What AI really does (and doesn't do)
Think of AI as autocomplete on steroids. When you type "Meet me at the..." your phone might suggest "restaurant" because statistically, that's a common completion. Large language models like ChatGPT work the same way, just at massive scale.
Marcus explains that these systems fundamentally predict what comes next in a sequence by analyzing patterns from enormous amounts of data. They're incredibly good at pattern recognition and statistical analysis. But there's a catch: they don't actually think or understand.
The hallucination problem
Here's where it gets interesting for builders. AI systems make things up with perfect confidence. Marcus calls these "hallucinations" - when AI presents false information as fact.
His favorite example? An AI claimed voice actor Harry Shearer was British, when he was actually born in Los Angeles. The system glommed together information about British voice actors and comedians, creating a statistically plausible but completely wrong answer.
For student builders, this matters because your apps need to be reliable. If you're creating a homework helper or a research tool, understanding when AI might hallucinate helps you build better safeguards.
Why novelty breaks AI
Marcus identifies the deepest problem with current AI: it struggles with anything new or unexpected. These systems are "glorified memorization machines" that work great when your question resembles something in their training data, but break down when faced with genuine novelty.
He shares a striking example: A Tesla using AI's summon feature crashed into a $3.5 million jet at an airplane trade show. Why? The system was trained to avoid pedestrians and cars, not jets. It couldn't generalize from "don't hit things" to "don't hit expensive aircraft."
This limitation actually highlights what makes human creativity special. When you're building an app to solve a problem nobody's tackled before, you're doing something AI fundamentally can't do alone - you're creating something genuinely novel.
System 1 vs. System 2 thinking
Marcus references Daniel Kahneman's framework of fast, automatic thinking (System 1) versus slow, deliberative reasoning (System 2). Current AI excels at System 1 but struggles with System 2.
For student builders, this means:
- AI is great for brainstorming, generating options, and pattern matching
- But YOU provide the reasoning, judgment, and critical thinking
- The combination of human System 2 thinking + AI's System 1 speed creates powerful possibilities
What this means for your projects
Understanding AI's limitations doesn't diminish its power - it helps you use it smarter:
Build with safeguards: If your app relies on AI-generated information, add verification steps or cite sources so users can double-check.
Focus on novel problems: The fact that AI struggles with novelty means there's massive opportunity for human-led innovation. Your creative ideas matter more than ever.
Combine approaches: Marcus advocates for "neurosymbolic AI" - mixing pattern-matching neural networks with classical rule-based systems. Even tech giants are quietly doing this. You can too.
Test edge cases: When building your app, deliberately test unusual scenarios AI might not have seen before. That's where you'll find the weaknesses.
The experimenter's advantage
Here's the paradox: as AI becomes more common, the people who truly understand its capabilities and limitations will stand out. Companies are learning this the hard way - 81% of AI-fluent professionals report being more productive, but only by learning through experimentation, not passive study.
Marcus notes that the researchers who get AI right aren't the ones who blindly trust it. They're the ones who understand where it works brilliantly and where it falls apart.
Building toward better AI
The future isn't about scaling up current AI endlessly. Marcus argues we need what he calls "world models" - systems that actually represent and reason about how things work, rather than just predicting text sequences.
As a student builder, you're already practicing this kind of thinking. When you design an app, you're creating a model of how users think, what problems they face, and how information should flow. You're building abstractions and logical systems - exactly the kind of thinking that current AI lacks.
Why this matters now
The AI industry is going through a reality check. After years of hype about "scaling" models to achieve artificial general intelligence, industry insiders are acknowledging limitations. Even Ilya Sutskever, a founder of OpenAI, recently stated we need to return to fundamental research.
For students, this is actually good news. It means:
- The field is wide open for new approaches
- Critical thinking about AI is becoming valuable
- Hands-on builders who understand both capabilities and constraints have an edge
- There's less pressure to just accept AI outputs uncritically
The builder's mindset
The best way to understand AI's limitations? Keep building with it. Every time you hit a wall - when ChatGPT gives you buggy code, or an AI image generator misunderstands your prompt, or a chatbot can't handle an edge case - you're learning something real about how these systems work.
Marcus emphasizes that AI needs "intellectual diversity" - different approaches working together. The same is true for student builders. Your projects should combine:
- AI's pattern-matching speed
- Your creative problem-solving
- Classical programming logic
- User feedback and testing
- Critical evaluation of outputs
Moving forward
Understanding AI's limitations doesn't mean being pessimistic about it. It means being realistic and strategic. As Marcus puts it, these systems have "some use for a lot of money" - they're genuinely useful tools, just not the miracle solution some claimed.
For students building real apps, this perspective is liberating. You don't need to wait for AI to be perfect. You can build amazing things right now by:
- Using AI where it excels (rapid generation, pattern matching, suggestions)
- Adding your judgment where AI fails (verification, edge cases, novel situations)
- Testing rigorously to find where systems break down
- Iterating based on real-world use
The future belongs to builders who understand both what AI can do and what it can't. By experimenting hands-on, you're developing exactly that understanding - not from lectures or hype, but from actually building things that work in the real world.
That's the kind of AI fluency that matters.
Reference:
Eisman, Steve. "Gary Marcus: AI's Reality Check." The Real Eisman Podcast, January 2025, https://youtu.be/aI7XknJJC5Q
Comments (0)
No comments yet. Be the first to share your thoughts!


