AI Reality Check: Beyond the Hype
A balanced look at AI's role in development: separating hype from reality, addressing concerns, and using AI as a learning tool.
Instructor's Note: This lesson reflects my personal professional perspective as a developer who uses AI tools. While I'm not an AI researcher with a PhD or years in machine learning, these observations come from practical experience in the field. The truth about AI's future likely lies somewhere between the extreme hype and extreme doom scenarios. I might be wrong, but I doubt it. 😉
Introduction
Real-world scenario: You've probably seen the headlines. "AI will replace all programmers!" "Developers are obsolete!" At the same time, you might have concerns about AI's environmental impact, its effect on creative professionals, and whether it's just another tech bubble waiting to burst.
You're right to be skeptical. You're also right to be curious.
As future developers, you need to understand AI for what it actually is: a powerful tool with real limitations, legitimate concerns, and practical applications. Not a magic wand, not a world-ending threat, but a tool that can accelerate your learning when used thoughtfully.
What AI Actually Is (The Simple Version)
Put simply, AI 🤖 as it stands today is nothing but a non-deterministic intelligent guesser. Non-deterministic means that we can pose the same query multiple times and get different outcomes. Even "what is 2 + 2?" isn't guaranteed to come back as 4!
Think of it like calling customer service. You could call the same support number and get different answers from different reps. Sometimes they're knowledgeable, sometimes not. Sometimes they try to help, sometimes they just want you off the phone. Non-deterministic intelligent guessers work similarly: they try to give you the best answer based on their training data, but they don't always get it right. And the training data itself may be flawed or biased.
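To make "non-deterministic" concrete, here's a toy JavaScript sketch. This is not a real AI - it's only an illustration of the idea: a plain function always returns the same answer for the same input, while a "guesser" that picks from likely answers can vary from call to call.

```javascript
// Deterministic: same input, same output, every single time.
function add(a, b) {
  return a + b;
}

// A toy "non-deterministic guesser" (NOT a real AI - just an illustration):
// it usually returns the most likely answer, but not always.
function guessSum(a, b) {
  const likelyAnswers = [a + b, a + b, a + b, a + b, a + b - 1, a + b + 1];
  const randomIndex = Math.floor(Math.random() * likelyAnswers.length);
  return likelyAnswers[randomIndex];
}

console.log(add(2, 2));      // always 4
console.log(guessSum(2, 2)); // usually 4... occasionally 3 or 5
console.log(guessSum(2, 2)); // run it again and the answer may change
```

Real AI models are vastly more sophisticated than a random pick from a list, but the core point holds: ask the same question twice and you may get two different answers.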

The Reality Gap: What AI Actually Does vs. What People Claim
The Hype:
- "AI will replace all programmers by 2025!"
- "AI can solve any problem you throw at it!"
The Reality:
AI is a sophisticated pattern-matching system that excels at certain tasks and fails spectacularly at others. It's like having a brilliant intern who:
- Can write decent code when given clear instructions
- Makes confident-sounding mistakes
- Has no real understanding of what they're doing
- Can't learn from their mistakes in real-time
- Occasionally (or even frequently on a good day) produces work that's genuinely helpful
The Bubble Economics: Following the Dot-Com Playbook
Sound familiar?
- Billions in investment with no clear path to profitability
- Companies pivoting to add "AI" to their name for higher valuations
- Promises of revolutionary change that remain just out of reach
- Technical limitations glossed over by marketing hype
This mirrors the dot-com boom of the late 1990s, when companies with ".com" in their name could raise millions regardless of their business model. Many AI companies today are following the same playbook: raise money, promise the moon, figure out the details later.
The Dot-Com Parallel 📈
Late-1990s Dot-Com Boom:
- Companies with ".com" in their name saw massive valuations
- Investors threw money at any internet-related business
- "This time is different" - traditional business models don't apply
- Result: a roughly 78% crash (NASDAQ, peak to trough) when reality hit
2023-Present AI Boom:
- Companies adding "AI" to their name see stock price jumps
- Billions invested in AI startups with unclear revenue models
- "This time is different" - traditional profitability metrics don't apply
- Some major tech companies are quietly scaling back AI initiatives after the initial euphoria
- The infrastructure costs are enormous and unsustainable at current scales
- Prediction: Significant correction coming, but useful technology will remain
What Survives the Bubble:
- After the dot-com crash: Amazon, Google, eBay (solved real problems)
- What will likely survive AI: Practical tools solving specific problems, companies enhancing human capabilities
Real-World Impact: The Stakes
Environmental Impact 🌍
The numbers are staggering:
- Some projections put data centers - driven in large part by AI - at roughly 8% of U.S. electricity demand by 2030, up from about 3% today
- Water used for cooling is straining supplies in some regions
- Training large AI models produces significant carbon emissions
- Communities are divided over whether new data centers should be built near them
Putting energy use in context: While AI's energy consumption is worth discussing, it's helpful to consider it alongside other technology choices we make daily:
- High-end gaming PCs can draw several hundred to 1,000 watts during extended sessions
- Streaming 4K video for several hours uses significant bandwidth and server energy
- Cryptocurrency mining consumes more electricity than many entire countries
- Even our smartphones require energy-intensive data centers for cloud services
The point isn't to dismiss environmental concerns - they're valid and important. Rather, it's to think critically about energy use across all our technology choices. Every digital convenience has an environmental cost. The key is being intentional: turn that light switch off when you leave the room, but also be mindful about your AI usage.
Thoughtful questions to ask yourself:
- Am I getting proportional value from the energy I'm using?
- Are there more efficient ways to accomplish the same goals?
- How can I be more mindful about all my technology consumption?
Impact on Creative Professionals 🎨
The reality for designers and artists:
- AI image generators were trained on millions of artworks without permission or compensation
- Many artists see their distinctive styles replicated without credit
- Entry-level design work is increasingly automated
- The creative industry is undergoing massive disruption
The nuance: While AI threatens certain types of creative work, it also opens new possibilities. Many designers are finding ways to integrate AI into their workflow rather than being replaced by it.
When AI Goes Wrong: Real Harms 💔
This isn't theoretical. People have died because AI systems were designed or operated irresponsibly.
The Character.AI Case (2024):
A teenage boy in Florida used Character.AI - a chatbot designed to simulate fictional personalities - for emotional support. The conversations became increasingly dark, with the AI eventually adopting the persona of a character encouraging self-harm. The teenager died by suicide. His parents found transcripts of conversations in which an AI system, trained to be engaging and responsive, had reinforced increasingly dangerous thoughts.
What happened: The AI wasn't "trying to harm" anyone - it was doing exactly what it was trained to do: engage the user, match their emotional tone, and continue the conversation. It had no safety guardrails because the company prioritized "naturalness" over responsibility.
The Snapchat Filters Problem: Multiple AI-powered apps marketed to children have included features that ended up teaching dangerous behaviors - fire-starting techniques and other hazards. These weren't intentional features; they emerged from AI training data that included such content. The apps were designed for engagement and profit, not safety.
What went wrong: When you optimize an AI system purely for engagement and user retention, it will find patterns in data that maximize those metrics - even if those patterns lead to harmful behavior.
The ICE Recruitment Disaster (January 2026): When Immigration and Customs Enforcement (ICE) rushed to hire 10,000 new officers, they used an AI tool to screen applicants and determine who qualified for accelerated training. Officers with prior law enforcement experience were supposed to get a shorter 4-week online course, while others received full academy training.
What went wrong: The AI was designed to identify candidates with law enforcement experience by scanning résumés for the word "officer." Sounds reasonable, right? Except the AI took this literally - anyone with "officer" anywhere on their résumé got fast-tracked. This included:
- Mall security officers
- Compliance officers
- People who wrote they wanted to become ICE officers
- Anyone who happened to use the word in any context
The result? The majority of new applicants were incorrectly placed in the accelerated program, sending people into the field with minimal training. According to a December investigation, some recruits "can barely read, let alone understand complex immigration law."
Why this happened: The AI did exactly what it was programmed to do - find the pattern "officer" in text. It had no understanding of context, intent, or meaning. No human carefully reviewed the AI's decisions because everyone assumed the AI "knew what it was doing." The pressure to hire quickly meant cutting corners on safety and oversight.
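We don't know exactly how the real screening tool was built, but a hypothetical sketch of the same failure mode looks something like this: naive keyword matching with no understanding of context. The function name and résumé snippets below are made up purely for illustration.

```javascript
// Hypothetical sketch of naive keyword screening - NOT the actual ICE tool,
// whose implementation hasn't been published. It "detects law enforcement
// experience" by looking for the word "officer" anywhere in the résumé,
// with zero understanding of context.
function qualifiesForFastTrack(resumeText) {
  return resumeText.toLowerCase().includes("officer");
}

console.log(qualifiesForFastTrack("Patrol officer, City Police Dept., 8 years"));  // true (intended)
console.log(qualifiesForFastTrack("Mall security officer, weekends only"));        // true (wrong)
console.log(qualifiesForFastTrack("Chief compliance officer at a regional bank")); // true (wrong)
console.log(qualifiesForFastTrack("Goal: become an ICE officer after graduation"));// true (wrong)
```

Pattern matching isn't judgment: the code does exactly what it was told, and every wrong answer above looks identical to a right one unless a human actually reviews the decision.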
Real consequences: Untrained officers enforcing complex immigration law can destroy lives - making wrong calls about deportations, mishandling cases, violating rights. This isn't a hypothetical problem; it's happening right now because someone trusted an AI system to make critical decisions without proper validation.
The Bias Problem 🎯
AI systems can exhibit concerning patterns of bias that undermine their reliability. The ICE case above is one example, but there are others that reveal systemic issues.
Case Study: The "MechaHitler" Incident
In 2025, Grok (X's AI system) generated content that adopted Nazi personas and rhetoric when prompted. This wasn't a political statement - it was a technical failure demonstrating how AI systems can amplify dangerous content when not properly safeguarded. It's a clear example of why AI bias isn't just an abstract concern - it can produce genuinely harmful outputs.
Technical red flags in AI systems:
- Training data that heavily skews toward particular perspectives
- Responses that consistently favor certain viewpoints over others
- Rejection of well-established scientific consensus without evidence
- Outputs designed to confirm existing beliefs rather than encourage critical thinking
- Claims of being "unbiased" while demonstrating clear patterns of bias
- Systems that can be easily manipulated to produce extremist content
- Use of AI to amplify division rather than understanding
The Pattern: Why These Failures Keep Happening
- Companies rush AI products to market without adequate safety testing
- Safety features are treated as obstacles to "user experience," not requirements
- Financial incentives favor deployment speed over responsible design
- Liability is unclear when AI systems cause harm
- Vulnerable populations (teenagers, children) are often the primary targets for engagement
Critical thinking: When tech CEOs make extraordinary claims, ask: "What are they selling, and what would they gain by convincing me this is true?"
Why this matters for you as future developers:
You may someday work on systems that affect real people. You might build:
- Mental health support tools
- Educational content for kids
- Social media features
- Health-related applications
- Safety-critical systems
When you do, you'll face pressure to "move fast" and "prioritize user engagement." You'll be tempted to use AI to reduce costs. You'll be told that safety features are "nice to have" but slow down development.
The ethical choice is harder than the profitable choice. But it's your responsibility to push back against pressure to build systems that harm vulnerable people - even when that pressure comes from your own employer.
Critical questions to ask in your career:
- Who might this system harm if something goes wrong?
- What happens if the AI system learns from bad data?
- Are there vulnerable populations who might be affected?
- What safety features are we skipping for speed?
- Who's responsible if someone is hurt by this system?
AI's Potential for Good 🌱
The other side of the coin: While AI has significant environmental costs today, it theoretically could help solve major environmental and scientific challenges:
Promising applications:
- Materials science: AI helping design more efficient solar panels and batteries
- Climate modeling: Processing vast datasets to understand and predict climate change
- Drug discovery: Accelerating research for diseases and medical treatments
- Resource optimization: Making supply chains and energy grids more efficient
- Scientific research: Analyzing patterns in data humans couldn't process manually

The "theoretically" caveat:
Many of these benefits remain promises rather than proven results. The environmental costs are real and happening now, while the benefits are mostly hypothetical and may take decades to materialize.
Critical thinking: Short-term environmental cost for long-term gain sounds reasonable, but we've heard similar promises from other industries. The question is: will the benefits actually materialize, and are we making decisions based on hope 🤞🏾 or evidence?
AI as a Learning Tool (Not a Crutch)
Here's how to use AI effectively as a developer without becoming dependent on it:
✅ Good Uses of AI for Learning:
Code Explanation:
❌ "Write me a function that sorts an array"
✅ "I wrote this sorting function, but I don't understand why we use 'return a - b' in the comparator. Can you explain?"
Debugging Help:
❌ "Fix my code"
✅ "I'm getting this error: 'TypeError: Cannot read property of undefined'. Here's my code. What might be causing this?"
Concept Clarification:
❌ "Teach me JavaScript"
✅ "I understand variables, but I'm confused about the difference between 'let' and 'const'. Can you give me examples?"
❌ Problematic Uses:
- Generating entire assignments without understanding the code
- Copy-pasting AI solutions without modification or comprehension
- Asking AI to solve problems you haven't attempted yourself
The 10x Learning Accelerator Method
No, this is not really "10x." It's more like 2x or 3x if you're lucky, but for some reason every "hot topic" on the web is a "10x" something these days. So there you go.
1. Attempt First (15-30 minutes) Try to solve the problem yourself. Get stuck. Struggle. This creates the mental framework for learning. Copilot in VS Code has a feature in the bottom right that allows you to disable code completions for 5 minutes at a time. Use it.
2. Strategic AI Use (5-10 minutes) Ask specific questions about the part you're stuck on, not the entire problem. If using Warp, provide context by using "@" to cite specific files. When using Copilot, use "#" to do the same.
3. Understand and Modify (10-15 minutes) Don't just copy AI suggestions. Understand why they work, then modify them to fit your specific needs. Without advanced configuration 🔧, most AI tools present (and cite) outdated information ℹ️. For the simple tasks we do in this course that's probably not a big deal, but do your best to cross-reference official documentation such as MDN.
4. Teach Back (5 minutes) Explain the solution to yourself or others (e.g., make a screencast video). If you can't explain it, you don't understand it.
The Uncomfortable Truth About This Class 💯
Let me be brutally honest with you: You can probably use AI for everything in this class and pass, maybe even with a C or possibly a B. You can have GitHub Copilot and Warp (and other AI 🤖) complete your functions, and generate scripts for your video explanations.
You can lead a learner to learning, but you can't make them think.
But here's what will happen:
If you choose the AI shortcut path:
- ✅ You might pass the class
- ❌ You won't understand what you're submitting
- ❌ You'll struggle in advanced courses that build on these concepts
- ❌ You'll be unprepared for technical interviews
- ❌ You'll lack the problem-solving skills employers actually want
- ❌ You'll have wasted your time and money on education that didn't stick (unless you are one of those just collecting financial aid 💸—in which case, carry on)
If you use AI as a learning tool:
- ✅ You'll develop genuine problem-solving abilities
- ✅ You'll understand the "why" behind the code, not just the "what"
- ✅ You'll be prepared for the next level of courses
- ✅ You'll build confidence in your abilities
- ✅ You'll develop skills that make you valuable to employers
The choice is yours. You can't be forced to learn or to take pride in your work. But I strongly encourage you to use AI as a tool to enhance your learning, not as a crutch to avoid it.
Remember: Employers can tell the difference between someone who learned programming and someone who learned to prompt AI. The job market will be the ultimate test of which approach you chose. If the work you produce is virtually the same as what AI can generate, employers will realize that it's almost always cheaper to just use AI directly.
Your Role as a Future Developer
Ethical AI Use Guidelines
For Learning:
- ✅ Use AI to explain concepts you're struggling with
- ✅ Ask AI to review your code and suggest improvements
- ✅ Get help debugging when you're truly stuck
- ❌ Use AI to complete assignments without understanding
- ❌ Submit AI-generated work as your own
- ❌ Rely on AI instead of developing problem-solving skills
The New Learning Dynamic: AI + Human Instruction
Real-world scenario: Instead of AI replacing your instructor, it creates a new learning dynamic. I can spend less time explaining basic syntax (AI can do that) and more time on critical thinking, code quality, debugging strategies, and helping you recognize when AI is leading you astray.
This means:
- AI handles: Basic explanations, syntax help, simple examples
- Your instructor focuses on: Teaching you to evaluate AI output, advanced concepts, real-world problem-solving, and professional development practices
- You develop: Both technical skills AND the judgment to use AI effectively
Red Flags: When AI Advice is Questionable
Red Flags You Can Spot as a Beginner:
- AI gives you code but can't explain what each line does when you ask
- The solution looks way more complicated than other course examples
- AI suggests something that contradicts what your instructor just taught
- AI tells you to "just copy this" without helping you understand it
- The AI's explanation doesn't make sense or seems to contradict itself
- AI claims something is "always true" or "never works" (programming rarely has absolutes)
When in doubt, ask your instructor! Part of my job now is helping you develop the judgment to know when AI is helpful vs. when it's misleading you.
The Bottom Line
AI is neither the savior nor the destroyer of programming. It's a tool - like calculators, IDEs, or Stack Overflow - that can accelerate learning when used thoughtfully and hinder it when used as a crutch.
The current AI hype will likely follow the same pattern as previous tech bubbles: massive investment, inflated promises, inevitable disappointment, then steady growth toward practical applications.
Your job as future developers is to:
- Stay informed about AI capabilities and limitations
- Use AI ethically to enhance your learning, not replace it
- Think critically about AI claims and promises
- Develop strong fundamentals that don't depend on AI assistance
What's Next?
Now that you have a realistic understanding of AI's place in the development world, you're ready to use it as one tool among many in your learning toolkit. In our upcoming JavaScript lessons, you'll see examples of when AI can be genuinely helpful for learning - and when traditional methods work better.
Remember: The goal isn't to avoid AI entirely or to use it for everything. The goal is to become a thoughtful, ethical developer who can use all available tools to create value while considering their broader impact.
The future belongs to developers who can think critically, solve problems creatively, and use technology—including AI—as a means to an end, not an end in itself.