Every AI coding assistant I've used has the same problem.
You get stuck. You ask. It answers. You move on feeling like you learned something — but you didn't. You watched someone solve a problem. That's not the same as solving it yourself.
When we started building Sensei, we made one rule: the AI never gives you the answer.
Not even if you ask nicely.
Why Blind 75?
If you're preparing for a software engineering interview at a top tech company, there's a well-known list of 75 LeetCode problems — the Blind 75 — that covers every major DSA pattern you'll face: arrays, trees, graphs, dynamic programming, binary search, and more.
Mastering these 75 problems gives you coverage of roughly 90% of interview questions at Google, Meta, Amazon, Apple, Netflix, and Microsoft.
The problem isn't finding the list. It's actually learning from it, rather than grinding through solutions you'll forget in a week.
The problem with most LeetCode prep
Most people approach Blind 75 like this:
- Read problem
- Get stuck after 5 minutes
- Look up solution
- Read solution, think "oh that makes sense"
- Move to next problem
- Repeat 74 times
- Fail the interview anyway
The "oh that makes sense" feeling is a lie. Recognition is not recall. Watching someone solve a problem is completely different from solving it under pressure in an interview room.
Sensei is built around a different philosophy.
The peek-correct-help strategy
When you get stuck in Sensei, you don't get an answer. You get a nudge.
Here's exactly what happens:
Step 1 — Peek
Sensei watches your code as you type. After 5 seconds of inactivity, it takes a look. Using Claude Haiku (fast, cost-efficient), it evaluates: is this person stuck, or just thinking?
If you're stuck, it doesn't interrupt immediately. It waits another 10 seconds to see if you unstick yourself. Real thinking deserves space.
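The two-stage wait described above can be sketched as a single decision function. This is an illustration, not Sensei's actual code: `looksStuck` stands in for the Claude Haiku call, and the names and thresholds are hypothetical, mirroring the 5-second idle window and 10-second grace period from the text.

```typescript
// Sketch of the two-stage stuck check. looksStuck() stands in for the
// cheap Haiku classification; all names here are illustrative.

type Verdict = "thinking" | "stuck";

interface PeekConfig {
  idleMs: number;  // inactivity before we even peek (5s)
  graceMs: number; // extra time to let the user unstick themselves (10s)
}

function shouldNudge(
  msSinceLastKeystroke: number,
  looksStuck: (code: string) => Verdict,
  code: string,
  cfg: PeekConfig = { idleMs: 5_000, graceMs: 10_000 },
): boolean {
  // Still typing recently: don't even run the cheap model.
  if (msSinceLastKeystroke < cfg.idleMs) return false;
  // Idle, but the model thinks this is productive thinking: stay quiet.
  if (looksStuck(code) === "thinking") return false;
  // Stuck, but still inside the grace window: wait it out.
  if (msSinceLastKeystroke < cfg.idleMs + cfg.graceMs) return false;
  return true;
}
```

The key property is that silence is the default: a nudge only fires after idle time, a "stuck" verdict, and an expired grace period all line up.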
Step 2 — Correct
If you're genuinely stuck, Sensei asks a Socratic question. Not "here's the approach" — but "what data structure would let you look up this value in O(1)?" or "you've handled the base case — what happens at the boundary?"
It points at the gap in your thinking. It doesn't fill it.
Step 3 — Help
If you answer the question and move forward, great. If you're still stuck after the hint, Sensei escalates — a slightly more direct nudge, still not the answer, but enough to unblock you and keep momentum.
The goal is always the same: you write the solution. Sensei just keeps you honest.
Minimum intervention at each stage — Sensei only escalates if the previous nudge didn't unblock you.
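The escalation rule above amounts to a tiny ladder: each rung is reached only if the previous one failed, and the ladder never reaches the answer. A minimal sketch, with hypothetical names:

```typescript
// Illustrative escalation ladder. Each level is tried only if the
// previous nudge didn't unblock the user; the answer is never a rung.

const LADDER = ["socratic-question", "direct-nudge"] as const;
type Nudge = (typeof LADDER)[number];

function nextNudge(priorNudges: number, unblocked: boolean): Nudge | null {
  if (unblocked) return null;                    // minimum intervention: stop
  if (priorNudges >= LADDER.length) return null; // out of rungs, still no answer
  return LADDER[priorNudges];
}
```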
AI-gated submission
Here's where it gets interesting.
When you think you're done and hit submit, Sensei doesn't just check if your code passes test cases. It uses Claude Sonnet to review your solution against the optimal approach.
Did you solve it with the right time complexity? Did you handle edge cases? Did you actually understand what you wrote, or did you copy a pattern you half-remember?
If the solution is correct and well-reasoned, Sensei marks it solved and awards XP. If something's off — a suboptimal approach, a missing edge case — it flags it and asks you to revisit.
You can't game it. You can't submit garbage and move on. You have to actually understand your solution.
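In outline, the gate is a conjunction: passing tests is necessary but not sufficient, because the model review can still send you back. The sketch below is an assumption about the shape of that logic, not Sensei's implementation; `Review` stands in for the structured verdict from the Sonnet call.

```typescript
// Hypothetical submission gate: tests AND model review must both pass.

interface Review {
  optimalComplexity: boolean;
  edgeCasesHandled: boolean;
}

interface GateResult {
  solved: boolean;
  xp: number;
  feedback?: string;
}

function gateSubmission(testsPassed: boolean, review: Review): GateResult {
  if (!testsPassed)
    return { solved: false, xp: 0, feedback: "failing test cases" };
  if (!review.optimalComplexity)
    return { solved: false, xp: 0, feedback: "this works, but what's the optimal complexity?" };
  if (!review.edgeCasesHandled)
    return { solved: false, xp: 0, feedback: "what happens at the boundaries?" };
  return { solved: true, xp: 10 };
}
```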
Spaced repetition for code
Solving a problem once doesn't mean you've learned it.
Sensei uses spaced repetition — the same principle behind Anki flashcards — to schedule when you revisit problems. Solve a problem confidently, and it moves further into the future. Struggle with it, and it comes back sooner.
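The scheduling rule can be sketched in a few lines. This is a simplified stand-in in the spirit of SM-2 (the algorithm behind Anki), with an assumed growth factor; Sensei's actual parameters may differ.

```typescript
// Simplified spaced-repetition step, SM-2-flavoured. A confident solve
// stretches the interval; a struggle resets it to tomorrow.
// The 2.5x growth factor is an assumption for illustration.

function nextIntervalDays(currentDays: number, confident: boolean): number {
  if (!confident) return 1; // struggled: see it again tomorrow
  return Math.max(1, Math.round(currentDays * 2.5));
}
```

Run forward from a one-day interval, a few confident solves in a row land a problem roughly three weeks out, which matches the revisit cadence in the walkthrough below.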
There's also a revision mode: a 20-minute timed cold solve with no hints. No Sensei watching. No nudges. Just you and the problem, exactly like an interview. This is how you know you actually retained it.
What the coaching loop looks like in practice
You open Two Sum.
You write a brute force O(n²) solution.
You submit.
Sensei: "This works. But what if n is 10 million?"
You think. You rewrite using a hash map.
You submit again.
Sensei: ✓ Optimal. XP +10.
Three weeks later, Sensei surfaces Two Sum again.
Cold solve. 20 minutes. No hints.
You solve it in 4 minutes without thinking.
That's retention.
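For reference, the hash-map rewrite from the walkthrough looks like this. Two Sum's optimal approach is public knowledge, so there is no spoiler here: one pass, O(n) time, O(n) space.

```typescript
// Hash-map Two Sum: for each number, check whether its complement
// (target - nums[i]) has already been seen, and record indices as we go.

function twoSum(nums: number[], target: number): [number, number] | null {
  const seen = new Map<number, number>(); // value -> index
  for (let i = 0; i < nums.length; i++) {
    const j = seen.get(target - nums[i]);
    if (j !== undefined) return [j, i];
    seen.set(nums[i], i);
  }
  return null; // no pair sums to target
}
```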
The architecture behind it
We use two Claude models deliberately:
- Claude Haiku 4.5 — watches your editor every 5 seconds. Fast, cheap, good enough to detect stuck vs. thinking. This runs constantly in the background.
- Claude Sonnet 4.6 — does the heavy lifting: Socratic coaching, solution review, submission gating. Only invoked when needed.
This split keeps the product responsive and cost-efficient without sacrificing coaching quality.
Haiku runs constantly in the background — cheap and fast. Sonnet is only called when it matters.
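The routing decision is simple enough to state as code. A sketch, with illustrative task and model labels rather than real API identifiers:

```typescript
// Illustrative model routing: frequent cheap checks go to the small model,
// judgment calls go to the large one. Labels are not real API model IDs.

type Task = "stuck-check" | "socratic-coaching" | "solution-review";

function pickModel(task: Task): "haiku" | "sonnet" {
  // Runs every few seconds, must be fast and cheap:
  if (task === "stuck-check") return "haiku";
  // Invoked only on nudges and submissions, needs deeper reasoning:
  return "sonnet";
}
```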
The backend is Express.js with PostgreSQL for persistence — progress, XP, spaced repetition state, and sessions all survive server restarts. GitHub SSO for auth, so there's no password friction.
Why we built this
We're Zerotree Labs — a small AI lab from Kanpur, India. We build things that are supposed to work in production, not just demos.
Sensei is our answer to a real problem: too many smart engineers fail interviews not because they can't code, but because they prepared wrong. They practiced recognition instead of recall. They watched solutions instead of struggling through them.
The AI should make you better, not dependent.
Try Sensei
Sensei is live and in beta.
All 75 problems. Real coaching. No free answers.
If you're preparing for interviews at a top tech company — or you're a hiring manager curious about what AI-assisted learning that actually works looks like — we'd love your feedback.
Reach out: founder@zerotreelabs.com