Lately, I’ve been yelling on LinkedIn about hiring. (Deservedly: what a mess.) But this post? This one’s about something I actually like: process.
Not the capital-P, “scaled agile” kind of process that shows up in a 17-tab spreadsheet. I mean the human stuff—how we work together. How we learn from each other. And in particular: how we review code.
Because let’s be honest: a lot of teams treat code reviews like a security checkpoint. You flash your ID, someone waves you through, and nobody remembers what happened five minutes later.
And it’s a shame—because when we do it right, code review is one of the most valuable, collaborative rituals in our whole damn craft.
## The Review Isn’t for the Code. It’s for the People
Yes, yes—reviews catch bugs, spot inconsistencies, and help you avoid naming things like `dataThing`. But at their core, they’re not just about code quality.
They’re about team quality.
Done well, code reviews:
- Help developers share context and decisions
- Surface trade-offs and design concerns early
- Give people a chance to mentor and be mentored
- Build confidence, safety, and shared ownership
That’s a lot more than “add a comma and merge”.
If your team treats code review like a slot machine you just pull until you get a ✅, you’re missing the point.
## Red Flags in the Review Process
You can tell a lot about a team by watching how they do code reviews. Some of the greatest hits:
- Pull requests sit untouched for days, then get a single “LGTM” from someone who clearly didn’t read them
- Reviews are all nitpicks, with no feedback on actual structure, performance, or clarity
- Every comment is phrased like an order, not a question
- Nobody ever asks “why?”
And my personal favourite: The AI Review
## Let’s Talk About That AI Thing
Look, I get the appeal. It’s fast. It’s available 24/7. It knows the entire documentation of every JS framework released since breakfast.
But AI code review is, at best, a lint pass with a thesaurus.
It can’t tell you whether your approach makes sense in your domain. It doesn’t understand your team’s context, your project’s weird legacy edge cases, or that one file you don’t touch unless you’ve had a drink.
More importantly: it doesn’t build trust. It doesn’t coach. It doesn’t spark discussion.
And if your entire review process is “LLM-generated comments + rubber stamp”, you don’t have a process—you have a bottleneck with a chatbot.
## Code Review as Conversation
Here’s a wild idea: review code like you’re talking to a teammate. Not like you’re writing a legal brief.
Try:
- “Curious why you chose this approach?”
- “Would it be simpler if we…?”
- “Heads up, I think this could break X because of Y.”
- “This is super clear—nice.”
Praise is allowed. So is curiosity. The goal isn’t perfection—it’s understanding. The review is a chance to collaborate, not just correct.
## What Good Looks Like
The best code reviews I’ve seen had:
- A clear description of what’s changing and why
- Reviewers who asked thoughtful, specific questions
- Authors who responded with humility, not defensiveness
- Comments that led to better conversations, not just better code
Bonus points for reviewing in pairs, or for actually talking to each other instead of waging a three-day async debate over one optional prop.
## Culture Is in the Comments
You want to know if a team is healthy? Don’t look at their burndown chart. Look at their PRs. If people are learning, growing, and supporting each other through their review process—you’re probably doing something right.
If they’re terrified to open a PR, silently merging to avoid feedback, or leaning entirely on ChatGPT for review comments? You’ve got bigger bugs than the ones in the code.
## Final Thought
Code reviews are for humans. For communication, clarity, and connection. They’re one of the last real-time, high-signal opportunities we have to learn from each other while we’re building.
Let’s stop treating them like a checkbox, and definitely stop treating them like something we can just automate away.
TL;DR: Review code like you give a damn about the person writing it.