Information, Knowledge and Critical Thinking in the Age of AI
We have all seen how language models keep getting better every day. I’ve been using tools built on top of them too: some even claim they can “do the critical thinking” for me and execute tasks on my behalf.
At first, I was super excited 🚀. I jumped in and started building web services and back-end workflows that these tools promised would handle my work. But here’s the twist: the more I relied on them, the less I thought 🤯. Honestly, it made me feel… dumber.
After 5–6 carefully written prompts, I eventually found myself just typing:
- 👉 “fix this”
- 👉 “pls fix this”
- 👉 “just make it work”
Basically the same thing you’d hear from a frustrated client or manager: “pls fix this by EOD!” 😅
And sure, I got quick PoCs (great for demos 📊 or business decks 💼). But when it came to building something production-ready and deployable? Nope ❌. That’s when it hit me: I needed to rethink what LLMs (and the tools around them) are really doing — and where critical thinking actually comes from.
This reminded me of when Google Search made information easier to access 🔍. Having information at our fingertips does not mean we instantly gain knowledge.
Here’s how I see it now:
📄 Information = raw data or claims.
📚 Knowledge = understanding + analysis of that information.
🧠 Critical Thinking = the skill that turns information into real knowledge.
⚖️ The Risk & The Opportunity
Risk: AI weakens thinking when we treat it as a shortcut.
Opportunity: AI strengthens us if we use it as a critique partner 🤝 — questioning, verifying, and reflecting deliberately.
🛡️ Practical Guardrails
AI = thinking partner, not ghostwriter ✍️. Ask it for alternatives, not final answers.
Demand fact-checking ✅ and sources 📎. Open-source or paid, tools should have this by default.
Use prompts that force YOU to compare 🔄, justify ⚖️, and revise ✏️. Don’t settle for a single-shot output.
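Here’s a concrete sketch of that third guardrail. Instead of typing “fix this”, you can wrap every request in a template that forces the model to return alternatives and leaves the final comparison to you. The function name and prompt wording below are my own illustration, not any particular tool’s API:

```python
def critique_prompt(task: str, n_alternatives: int = 3) -> str:
    """Build a prompt that demands alternatives instead of one 'final' answer.

    This is a hypothetical template; adapt the wording to your own workflow.
    """
    return (
        f"Task: {task}\n\n"
        f"1. Propose {n_alternatives} distinct approaches.\n"
        "2. For each approach, name one concrete trade-off or risk.\n"
        "3. Cite the source or documentation each claim relies on.\n"
        "4. Do NOT pick a winner - I will compare and decide myself."
    )

# Example: instead of "pls fix this", ask for options you must weigh yourself.
print(critique_prompt("Speed up this slow SQL report query"))
```

The point of the last instruction is the whole trick: by forbidding a single “final answer”, the comparing, justifying, and revising stays on your side of the keyboard.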
Because here’s the thing:
LLMs simplify information extraction 🗂️. AI agents can chain “thoughts” and attempt tasks in ways that look right based on their training data. Sure, they can book me a 🍕 or spin up multiple PoCs for a problem. But bridging the gap from PoC → production? That still requires human critical thinking and iteration.
👉 Bottom line: AI is powerful 💡, but our knowledge + critical thinking are what stop us from becoming passive “fix this” button pressers.