How to be Better in Critical Thinking and Problem Solving
Updated: March 27, 2026
TL;DR
Critical thinking is the ability to analyze information systematically rather than accepting it at face value. Master frameworks like First Principles thinking, the 5 Whys, and Fishbone diagrams; practice debugging as a critical thinking exercise; and apply systems thinking to complex software architecture problems.
You have access to AI tools that can brainstorm solutions in seconds. You can ask Claude or ChatGPT how to solve almost any problem and get reasonable answers. Does this make critical thinking irrelevant?
No. AI is a tool that amplifies critical thinking rather than replacing it. Bad critical thinking combined with AI produces confidently wrong answers. Strong critical thinking combined with AI produces better solutions, faster.
In this post, we'll explore frameworks that organize your thinking, practice methods that strengthen your critical thinking muscles, and discuss why debugging is one of the best critical thinking exercises available.
What is Critical Thinking?
Critical thinking is the ability to:
- Question assumptions: Don't accept things at face value. Why is this true?
- Analyze information: Break problems into components. Where is the uncertainty?
- Evaluate evidence: Some sources are more reliable than others. What's the evidence quality?
- Draw conclusions: Based on evidence, what's the most likely explanation?
- Recognize bias: Your own biases, cultural biases, confirmation bias — they all creep in
Common biases that derail critical thinking:
- Confirmation bias: Seeking information that confirms what you already believe
- Authority bias: Trusting an expert without questioning them
- Availability bias: Overweighting recent or memorable examples
- Sunk cost fallacy: Continuing because you've already invested time (fallacy: past investment shouldn't affect future decisions)
First Principles Thinking
First principles thinking breaks problems into their fundamental components and rebuilds from there.
Example: Why is our login slow?
Traditional debugging flow:
- It used to be fast
- We added a feature recently
- Maybe that feature is slow?
- Let's try reverting it
First principles thinking:
- What does "login slow" mean exactly? (2 seconds vs. 20 seconds? Slowness compared to what baseline?)
- What happens during login? (Database query, password hashing, session creation, external services like auth provider?)
- Which step is actually slow? (Measure each step)
- Why is that step slow? (Database query fetching too much data? Server CPU bound? Network latency?)
First principles doesn't assume the problem is where you think it is. It measures.
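The "measure each step" idea can be sketched in plain Python. This is a minimal illustration rather than production profiling; the three step functions are hypothetical stand-ins for your real login code:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(step):
    """Record how long a named step takes."""
    start = time.perf_counter()
    yield
    timings[step] = time.perf_counter() - start

# Hypothetical stand-ins for the real login steps; replace with your own code.
def fetch_user():       time.sleep(0.01)   # database query
def verify_password():  time.sleep(0.01)   # password hashing
def create_session():   time.sleep(0.01)   # session creation

with timed("db_query"):      fetch_user()
with timed("password_hash"): verify_password()
with timed("session"):       create_session()

slowest = max(timings, key=timings.get)
print(f"Slowest step: {slowest} ({timings[slowest] * 1000:.1f} ms)")
```

With real measurements in hand, the "why is that step slow?" question targets one step instead of the whole login path.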
In Architecture:
Instead of "Should we use microservices?", first principles asks:
- Why would microservices help us? (Independent scaling? Separate teams? Different tech stacks?)
- Do we actually need any of those benefits? (Is scaling a problem? Do we have separate teams?)
- What's the actual cost? (Operational complexity, debugging difficulty, network latency)
- Are there simpler solutions? (Monolith with vertical scaling? Modular monolith?)
First principles prevents cargo-cult engineering — adopting patterns because everyone else does, not because they solve your actual problem.
The 5 Whys Technique
The 5 Whys is simple but powerful: ask "why?" five times to get to the root cause.
Example: Our API is timing out
1. Why is the API timing out? Database queries are taking 20+ seconds.
2. Why are queries taking so long? We're fetching 1 million rows to find 100 relevant ones.
3. Why are we fetching a million rows? We added a filter that the database isn't using.
4. Why isn't the database using the filter? We don't have an index on that column.
5. Why don't we have an index? It wasn't part of the initial schema; we added the filter later without considering the index.

Root cause: Missing database index. The fix: `CREATE INDEX idx_filter_column ON users(filter_column);`
If you stop at "why #2" (slow queries), you'd add caching or pagination without fixing the root issue. The 5 Whys forces you deeper.
Practical tip: Stop when you reach a point you can actually fix or control. If why #4 is "because aliens," you've gone too far.
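You can check "why #4" directly by asking the database for its query plan. A minimal sketch using Python's built-in sqlite3 module; the table, column, and index names mirror the hypothetical example above:

```python
import sqlite3

# In-memory database as a stand-in for the real one; the table, column,
# and index names are the hypothetical ones from the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, filter_column TEXT)")

def plan():
    """Return the query plan for the filtered query as one string."""
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM users WHERE filter_column = ?",
        ("x",),
    ).fetchall()
    return " ".join(row[-1] for row in rows)  # last field is the plan detail

before = plan()
conn.execute("CREATE INDEX idx_filter_column ON users(filter_column)")
after = plan()

print("Before:", before)  # a full scan of users
print("After: ", after)   # a search using idx_filter_column
```

The same EXPLAIN technique exists in PostgreSQL and MySQL; the point is that the query plan turns "maybe it's the index" into an observable fact.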
Fishbone Diagrams (Ishikawa Diagrams)
Fishbone diagrams organize causes of a problem into categories:
Problem: High API Error Rate
                        ▲
        ┌───────────────┼───────────────┐
        │               │               │
   ┌────┴────┐    ┌─────┴─────┐   ┌─────┴─────┐
   │ People  │    │ Processes │   │Technology │
   └─────────┘    └───────────┘   └───────────┘
    - Low exp      - Poor QA      - DB timeout
    - Staff        - Deployment   - Network
      tired          procedure      latency

                Fishbone Diagram
This structure helps you think systematically about categories:
- People: Skills, experience, communication
- Processes: Deployment, rollback, monitoring
- Technology: Database, infrastructure, code
- Materials: External dependencies, APIs
- Environment: Infrastructure failures, network
Using a fishbone diagram, you might discover that "high error rate" comes from multiple factors:
- Database timeouts (technology)
- Poor deployment process (process)
- Insufficient staffing (people)
Fixing only one wouldn't solve the problem.
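If ASCII art is inconvenient, the same structure is just a mapping from categories to candidate causes. A throwaway sketch, with causes taken from the hypothetical example above:

```python
# Fishbone categories as plain data; the causes listed are the
# hypothetical examples from the text above.
fishbone = {
    "People":      ["Low experience", "Staff fatigue"],
    "Processes":   ["Poor QA", "Deployment procedure"],
    "Technology":  ["Database timeouts", "Network latency"],
    "Materials":   ["External API instability"],
    "Environment": ["Infrastructure failures"],
}

problem = "High API error rate"
print(f"Problem: {problem}")
for category, causes in fishbone.items():
    print(f"  {category}:")
    for cause in causes:
        print(f"    - {cause}")
```

Even this trivial form is useful in an incident doc: it forces you to fill in every category, not just the one you already suspect.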
Debugging as Critical Thinking Practice
Debugging is one of the best critical thinking exercises available. The debugger's mindset applies everywhere:
The Debugging Process:
- Reproduce the problem — Can you consistently make it happen?
- Form a hypothesis — What do you think is causing it?
- Test the hypothesis — Design an experiment that would prove/disprove it
- Iterate — If wrong, form a new hypothesis based on what you learned
Example: A button sometimes doesn't respond to clicks.
Hypothesis 1: Network is slow, so the POST request hangs.
- Test: Add a loading indicator that shows network requests. Did the indicator appear? If yes, network was the issue. If no, it's something else.
Hypothesis 2: JavaScript error prevents the click handler from running.
- Test: Open browser console. Are there JavaScript errors? Add a `console.log()` at the start of the click handler. Did it print?
Hypothesis 3: The element disappears after certain interactions.
- Test: Use browser DevTools to inspect the DOM. Is the button still there when it fails to respond? Add a visual indicator (red outline) to elements with click handlers.
Good debugging isn't guessing — it's systematic hypothesis testing. This transfers to architecture, product decisions, and business problems.
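The same reproduce/hypothesize/test loop works on plain code, not just browsers. A toy example with a deliberately planted bug:

```python
# A deliberately buggy function: integer division truncates the average.
def average(xs):
    return sum(xs) // len(xs)   # bug: // instead of /

# Step 1: reproduce -- a failing case we can trigger consistently.
failing_input = [1, 2]          # true average is 1.5

# Step 2: form a hypothesis.
# Hypothesis: the result is wrong only when the sum isn't divisible by
# the length (i.e. truncation, not a summing error).

# Step 3: test the hypothesis with targeted experiments.
assert average([2, 4]) == 3          # divisible case: correct
assert average(failing_input) == 1   # non-divisible case: truncated

# Both observations match the truncation hypothesis, so fix the division:
def average_fixed(xs):
    return sum(xs) / len(xs)

assert average_fixed(failing_input) == 1.5
```

Writing the hypothesis down before testing (step 2) is what separates this from guess-and-check: a wrong hypothesis still teaches you something specific.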
Systems Thinking
Complex systems don't have single causes or simple fixes. Software architecture is a complex system.
Common systems thinking principles:
- Feedback loops: Changes have second-order effects.
  - Example: Adding caching reduces database load (good), but now you have cache invalidation bugs (bad)
- Optimize for the whole, not the parts: Optimizing one component often hurts the system overall.
  - Example: Optimizing for cost (using cheaper servers) might hurt performance and increase latency, harming user experience
- Emergence: System behavior can't be understood by analyzing components in isolation.
  - Example: Individual services might be fast, but distributed systems have network latency and consistency challenges that single services don't
- Delays and buffers: Systems don't respond instantly to changes.
  - Example: Scaling up servers takes minutes, not seconds. If you autoscale too aggressively, you'll overshoot
Architecture decision with systems thinking:
Instead of "Should we add a cache?":
- How will cache invalidation work?
- What happens when the cache is wrong?
- Will adding another layer increase complexity?
- Is database performance actually the bottleneck?
- Could a simpler fix (index, query optimization) solve it without cache complexity?
Systems thinking prevents local optimizations that create global problems.
Frameworks for Complex Problems
When facing complicated decisions, use frameworks:
SWOT Analysis (Strengths, Weaknesses, Opportunities, Threats)
Technology Choice: Microservices vs. Monolith for our system
Strengths of Microservices:
- Independent scaling
- Teams can work independently
- Can use different languages/tech
Weaknesses of Microservices:
- Operational complexity
- Debugging is harder (distributed traces)
- Network latency between services
Opportunities:
- If we grow to 100+ engineers, team isolation is huge
- Future ability to replace services independently
Threats:
- If we're small now, complexity creates bottlenecks
- If we don't have DevOps expertise, operations will be harder
Decision Matrix
Score options against important criteria:
Criteria                 | Weight | Monolith | Microservices | Serverless
─────────────────────────┼────────┼──────────┼───────────────┼───────────
Scaling flexibility      |   3    |    7     |       9       |    10
Team independence        |   3    |    5     |       8       |     8
Operational simplicity   |   2    |   10     |       5       |     7
Cost (at current scale)  |   2    |    9     |       6       |     8
─────────────────────────┼────────┼──────────┼───────────────┼───────────
Weighted total           |        |   74     |      73       |    84
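A weighted total is just Σ(weight × score) per option, so the matrix is easy to compute mechanically. A quick sketch using the scores above:

```python
# Scores from the decision matrix; weighted total = sum(weight * score).
criteria = [
    # (name, weight, monolith, microservices, serverless)
    ("Scaling flexibility",     3, 7, 9, 10),
    ("Team independence",       3, 5, 8, 8),
    ("Operational simplicity",  2, 10, 5, 7),
    ("Cost (at current scale)", 2, 9, 6, 8),
]

options = ["Monolith", "Microservices", "Serverless"]
totals = {name: 0 for name in options}
for _, weight, *scores in criteria:
    for option, score in zip(options, scores):
        totals[option] += weight * score

for option, total in totals.items():
    print(f"{option}: {total}")
```

Putting the matrix in code has a side benefit: you can change a weight and instantly see whether the decision flips, which tells you how sensitive the choice is to your assumptions.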
Weighted scores help make objective decisions in emotionally charged choices.
Overcoming Analysis Paralysis
Critical thinking can slide into paralysis — constantly evaluating, never deciding.
Strategies to move forward:
- Decision deadlines: Decide by Friday; don't spend weeks evaluating
- Good enough is fine: You don't need the perfect choice, just a good one
- Reversible vs. irreversible: Can you change your mind? If yes, decide faster
- Data gathering cutoff: After gathering 80% of possible data, decide (the last 20% takes 80% of your time)
- Pilot small: For high-uncertainty decisions, try it small before committing
Example: "Should we use GraphQL or REST?"
- Reversible? Mostly (you can migrate APIs later)
- Timeline? Three months of use will tell us if it was right
- Pilot? Use it for one API endpoint, see how it goes
Make a decision, test it, iterate. When a decision is cheap to reverse, you can afford to make it faster.
Practicing Critical Thinking
- Ask more questions: When someone proposes something, ask "why?" three times
- Argue the opposite: Take the other side of a debate, even if you don't believe it
- Read diverse sources: Avoid echo chambers; understand different perspectives
- Debug deliberately: When facing a problem, write down your hypothesis before testing
- Document decisions: Write ADRs (Architecture Decision Records) explaining your thinking
- Reflect on past decisions: Were you right? What did you learn? How would you decide differently?
The more you practice, the faster and more accurate your critical thinking becomes.
When to Trust AI and When to Think Critically
Ask AI for: ideas, brainstorming, explaining concepts, drafting text.
Think critically about: whether AI's answer is actually correct, whether it solves your actual problem, whether there are better approaches.
Example workflow:
Problem: "How should we cache this?"
Step 1: Ask AI for ideas
ChatGPT: "Use Redis caching, it's fast and scalable"
Step 2: Think critically
- Is caching actually our bottleneck? (Measure first!)
- Is Redis the simplest option? (Consider: simple in-memory cache, HTTP cache headers)
- What's the cache invalidation strategy?
- What's the cost and operational burden?
Step 3: Decide with critical thinking, informed by AI
"We'll start with HTTP cache headers (simple, zero operational cost). If that's insufficient, we'll measure and consider Redis."
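If HTTP cache headers turn out to be insufficient, the next-simplest step before Redis is an in-process cache. A minimal TTL-cache sketch; the function name is hypothetical, and this is not safe across processes or for unbounded key sets:

```python
import time
from functools import wraps

def ttl_cache(seconds):
    """Tiny in-memory cache with expiry -- one of the 'simpler than
    Redis' options. Single-process only; no eviction beyond expiry."""
    def decorator(fn):
        store = {}  # args -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]           # still fresh: serve cached value
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache(seconds=60)
def expensive_lookup(key):
    """Hypothetical stand-in for a slow database or API call."""
    global calls
    calls += 1
    return f"value-for-{key}"

expensive_lookup("a")
expensive_lookup("a")   # second call is served from the cache
print(calls)            # the underlying function ran once
```

Note how much of the critical-thinking checklist survives even at this scale: the `seconds` parameter is the invalidation strategy, and "what happens when the cache is wrong?" is bounded by that TTL.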
AI accelerates your research, but critical thinking guides the final decision.
Conclusion
Critical thinking is more important in 2026, not less. AI tools are powerful, but they amplify whatever thinking you give them. Strong critical thinking means:
- Questioning assumptions
- Gathering good evidence
- Systematically analyzing problems (5 Whys, Fishbone, First Principles)
- Thinking in systems, not components
- Making decisions despite uncertainty
Practice through debugging, document your thinking through ADRs, and avoid both paralysis and recklessness. Critical thinking is a muscle — the more you use it, the stronger it gets.