I still remember the first time I got an “Accepted” on CodeChef.
It was past midnight. My roommate was asleep, and I was hunched over a problem called “Chef and Subsequences.”
I’d been stuck for hours: off-by-one errors, wrong answers on hidden test cases, the usual pain.
When that green checkmark finally appeared, I didn’t celebrate loudly. I just leaned back, smiled, and felt that quiet satisfaction only programmers know, the kind that says you earned it.
That was 2016.
Back then, DSA wasn’t a skill; it was a religion.
You solved problems to prove you could think, not just code.
Fast forward to now.
I gave that same problem to ChatGPT and it solved it in 10 seconds.
Perfect output. Clean code. Explained every step like a teacher.
And I’ll be honest: I didn’t feel proud or jealous.
I just felt… replaced.
The Autocomplete Era
Here’s what nobody tells you about AI: it didn’t just make coding easier. It made a specific type of intelligence feel obsolete.
You know that friend who memorized every LeetCode medium? The one who could implement Dijkstra’s algorithm on a napkin at lunch? Yeah, ChatGPT just made them irrelevant. The thing they spent 300 hours perfecting now comes free with a $20/month subscription.
It’s like training for years to be a human calculator, only to realize everyone has an iPhone.
Brutal? Maybe. But let’s be honest: when was the last time you actually implemented a trie at work? I’ve been shipping code for ten years. I’ve used binary search twice. I’ve never once needed to color a graph outside of an interview.
So why are we still here, pretending DSA matters?
Because It Does - Just Not How You Think
Some time back, our notification feed imploded during peak hours. Users were staring at loading spinners for 3–4 seconds. Our support channel lit up. My manager sent a Slack message at 9 PM: “Fix this or we’re rolling back tomorrow.”
I opened the monitoring dashboard. Average response time: 2.8 seconds. Database queries: normal. CPU: fine. Memory: fine. What the hell?
Then I looked at the actual code. Some well-meaning engineer had added a “mark as read” feature. Sounds simple, right? Except we were checking if each notification was read by querying the database. For a user with 500 notifications, that’s 500 separate database calls. Every. Single. Time.
Classic N+1 query problem.
I didn’t need a fancy algorithm. I just loaded all the “read” notification IDs into a Set upfront: one query instead of 500. Checked membership with notificationIds.has(id) instead of hitting the database each time.
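The fix, roughly, looked like this. It’s a reconstruction with made-up names (fetchReadIds and Notification aren’t our real code), but the shape is the point: one query, one Set, O(1) checks.

```typescript
interface Notification {
  id: string;
  isRead?: boolean;
}

// Hypothetical data-access call: returns every notification ID this user
// has already read, in a single query. Stubbed out for the sketch.
async function fetchReadIds(userId: string): Promise<string[]> {
  return [];
}

// Before: the loop did the equivalent of `await db.isRead(userId, n.id)`
// per notification, so 500 notifications meant 500 round trips.
async function markReadFlags(userId: string, notifications: Notification[]) {
  const readSet = new Set(await fetchReadIds(userId)); // one query, not 500
  for (const n of notifications) {
    n.isRead = readSet.has(n.id); // O(1) membership check, no DB hit
  }
  return notifications;
}
```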
Deployed it at midnight. Response times dropped to 180ms. Not perfect, but good enough that my manager stopped pinging me.
That’s not something you’d see on LeetCode. But it’s exactly what those hash table problems were training you for: recognizing when O(n) lookups are killing you and switching to O(1).
Here’s the thing: I could’ve asked ChatGPT to “optimize this feed endpoint.” It would’ve given me twelve different solutions: Redis caching, query optimization, pagination improvements, database indexing. All technically correct.
But which one solves this specific problem without requiring a two-week refactor or convincing the infrastructure team to spin up a Redis cluster?
That’s the judgment call AI can’t make. It doesn’t know your codebase is held together with duct tape. It doesn’t know your team ships every Friday and breaking things now means working the weekend. It doesn’t know your database is already at 80% capacity and adding indexes might tip it over.
That intuition, knowing what’s actually feasible in your messy production environment, comes from pattern recognition you built grinding problems for years. You just didn’t realize you were learning how to think under constraints, not just how to code.
The Real DSA Lessons Nobody Talks About
Forget the algorithms for a second. What did grinding problems actually teach you?
Constraint thinking - Every problem on LeetCode starts with limits: “n ≤ 10⁵” or “must run in O(n log n).” Real systems have constraints too: memory budgets, latency requirements, API rate limits. DSA taught you to start every problem by asking: “What can I not do?”
Trade-offs - Should you cache this? Pre-compute that? Use more memory to save CPU? These aren’t algorithm questions; they’re engineering questions. But if you’ve never wrestled with time vs. space complexity, you’ll make expensive mistakes in production.
Debug instincts - When something breaks and your logs are useless, you need to think your way to the root cause. That’s the same skill you used debugging that wrong answer on Test Case 47. You just didn’t realize it was transferable.
I watched my teammate spend days tracking down a memory leak in our background job processor. He was convinced it was a database connection issue: adding timeouts, tweaking pool sizes, restarting workers. Finally, I looked at the code: he was loading 500k records into an array, processing them one by one, then loading another 500k. The array just kept growing. “Why not process in batches of 1000 and clear memory after each batch?” Twenty minutes later, memory usage was flat. He’d never thought about how data structures behave at scale.
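His fix, sketched from memory with hypothetical names (loadBatch and processOne stand in for the real code):

```typescript
interface JobRecord {
  id: number;
  payload: string;
}

// Hypothetical paged loader standing in for the real data source.
async function loadBatch(offset: number, limit: number): Promise<JobRecord[]> {
  return []; // stubbed for the sketch
}

async function processOne(record: JobRecord): Promise<void> {
  // ...the actual work goes here
}

// Before: every 500k-row load got appended to one ever-growing array.
// After: each batch goes out of scope once processed, so the garbage
// collector can reclaim it and memory usage stays flat.
async function processAll(batchSize = 1000): Promise<void> {
  for (let offset = 0; ; offset += batchSize) {
    const batch = await loadBatch(offset, batchSize);
    if (batch.length === 0) break;
    for (const record of batch) {
      await processOne(record);
    }
    // `batch` is dropped at the end of each iteration
  }
}
```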
That’s what happens when you skip the fundamentals.
What 2025 Actually Looks Like
Let me paint you a picture of the new reality:
You’re in a design meeting. Someone suggests using Redis for session management. Another person says, “Why not just use PostgreSQL with a TTL column?” AI can’t answer that. It’ll give you both implementations, beautifully documented. But you need to know that Redis is faster but costs more, that Postgres might bottleneck at scale, that your team doesn’t have Redis expertise.
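For illustration, here’s roughly what the Postgres option looks like, assuming node-postgres and a sessions table with an expires_at column (my sketch, not anything from that meeting):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the PG* env vars

// Assumed schema:
//   CREATE TABLE sessions (
//     token      text PRIMARY KEY,
//     user_id    text NOT NULL,
//     expires_at timestamptz NOT NULL
//   );

// A session is only valid while its TTL hasn't passed.
async function getSession(token: string) {
  const { rows } = await pool.query(
    "SELECT user_id FROM sessions WHERE token = $1 AND expires_at > now()",
    [token]
  );
  return rows[0] ?? null;
}

// Expired rows don't vanish on their own; you need a periodic sweep.
// That's exactly the operational chore a Redis TTL handles for free,
// which is the trade-off the meeting is really about.
async function sweepExpired(): Promise<void> {
  await pool.query("DELETE FROM sessions WHERE expires_at <= now()");
}
```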
Or imagine debugging a memory leak in production. AI will analyze your heap dumps and suggest fixes. But will you trust it blindly? Or do you need to understand why holding references to old objects prevents garbage collection?
The engineers crushing it in 2025 aren’t the ones with perfect LeetCode scores. They’re the ones who can:
Look at a system design and smell the bottleneck before it happens
Read AI-generated code and spot the edge case it missed
Explain to a junior dev why we chose this approach over that one
DSA doesn’t make you better at writing code. It makes you better at thinking about code.
You don’t need to know how to implement a heap from scratch.
But you damn well better understand when a priority queue is the right tool, because that knowledge will save you when you’re optimizing a job scheduler before a product launch.
You don’t need to ace every Codeforces contest.
But if you can’t recognize that your “innovative new feature” is just a variation of the two-pointer pattern, you’re going to waste weeks reinventing wheels.
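To make the priority-queue point concrete, here’s a toy job scheduler. JavaScript has no built-in heap, so this uses a deliberately naive sorted array; the comments note where a real heap earns its keep.

```typescript
interface Job {
  name: string;
  runAt: number; // epoch millis
}

// Toy priority queue: keeps jobs sorted by runAt.
// Insertion here is O(n); a binary heap makes it O(log n),
// which is why real schedulers reach for one.
class JobQueue {
  private jobs: Job[] = [];

  push(job: Job): void {
    const i = this.jobs.findIndex((j) => j.runAt > job.runAt);
    if (i === -1) this.jobs.push(job);
    else this.jobs.splice(i, 0, job);
  }

  // Always hands back the most urgent due job: the priority-queue property.
  popDue(now: number): Job | undefined {
    if (this.jobs.length > 0 && this.jobs[0].runAt <= now) {
      return this.jobs.shift();
    }
    return undefined;
  }
}

const q = new JobQueue();
q.push({ name: "send-digest", runAt: Date.now() + 5_000 });
q.push({ name: "expire-trials", runAt: Date.now() });
console.log(q.popDue(Date.now())?.name); // "expire-trials" runs first
```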
AI has commoditized syntax. It hasn’t commoditized judgment.
So What Now?
If you’re early in your career, yes, grind some problems. Not because Google’s going to ask you to reverse a binary tree (though they might). But because it builds mental models you’ll use forever.
If you’re experienced, you already know: DSA isn’t your competitive advantage anymore. Your ability to apply those patterns to ambiguous, real-world problems is.
The future doesn’t belong to people who can recite algorithms. It belongs to people who can look at a pile of AI-generated solutions and know which one won’t fall apart under load.
AI may know all the answers.
But someone still needs to know which questions are worth asking.
And that someone better understand why hash maps exist.
