You’re Doing Peer Reviews Wrong — They Start Before the Pull Request

  • Admin
  • 7 days ago
  • 3 min read


See, the thing is — most teams treat peer reviews like a hygiene check.

A few comments, a few approvals, a few “looks good” replies, and we feel done.

But if you really trace the story, the code was already merged in everyone’s mind the moment the PR got created.

That’s not review. That’s a ritual.





The “Looks Good” Culture



Every team has its own phrases —

“LGTM”, “looks good”, “no blockers”.

And it’s not malicious; it’s just that most reviewers never actually get context.

They open the PR, see ten changed files, skim through variable names, maybe check for missing null checks, test coverage, indentation, and call it a day.


But none of that is peer review — that’s linting with emotions.

If your code quality tools (Sonar, PMD, or ESLint) are already catching those issues, what are you really reviewing?





Peer Review ≠ Code Review



A true peer review starts long before a PR even exists.

It begins when the problem statement is discussed — when you decide how to solve it, not just how to format it.

That’s where design reviews come in.


One short call, early in the sprint — walk through what’s being built, why it’s needed, and how it’s being approached.
It takes 30 minutes but saves days of rework.

Because by the time a PR lands, you’re not reviewing design choices anymore. You’re just reviewing artifacts.





Context Is the Real Prerequisite



If you don’t understand the business problem behind the story, you’re not a reviewer — you’re a spectator.

You can’t tell whether a developer used the right pattern or whether the logic even solves the correct thing.

You’re just counting lines and hoping no merge conflicts appear.


The right question in a peer review isn’t “does this look fine?”
It’s “does this solve the right problem — and does it create a new one somewhere else?”




Functional Understanding Beats Cosmetic Cleanliness



Take a Salesforce example.

Say a developer adds logic that auto-populates fields for U.S. and Canada, but not Mexico.

You, as the reviewer, see clean code, perfect naming, and even test coverage.

But you never asked: “Wait, what about Mexico? Was that intentional or an oversight?”

That’s a functional miss disguised as a clean commit.
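To make the miss concrete, here is a minimal Python sketch of that kind of logic (the article's scenario is Salesforce; the function and field names here are purely illustrative, not real Apex or Salesforce APIs):

```python
SUPPORTED_COUNTRIES = {"US", "CA"}  # "MX" is absent — intentional or oversight?

def auto_populate_fields(record: dict) -> dict:
    """Auto-populate default fields for records in supported countries."""
    if record.get("country") in SUPPORTED_COUNTRIES:
        record.setdefault("tax_region", "NA")
        record.setdefault("currency", "USD" if record["country"] == "US" else "CAD")
    # Records for any other country fall through untouched.
    return record
```

The code is clean, well named, and easy to approve. Only a reviewer who knows the business context will stop and ask why Mexico falls through.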


Clean code that solves the wrong problem is still wrong.




Make Design Reviews a Habit



If you’re leading a team, schedule one design review call per sprint.

Not formal, not overloaded — just a whiteboard or Miro board where everyone quickly explains:


  • what stories they’re handling,

  • what approach they’re planning,

  • what risks they foresee.



That half hour of alignment prevents two weeks of silent assumptions.





Set a Baseline for Every Story



A good peer reviewer helps create a baseline checklist:


  • Have you validated data type and field length assumptions?

  • Have you tested for multiple geographies or currencies?

  • Are there any side effects in automation flows?

  • Are you skipping null-safe handling because “it worked locally”?



These sound simple, but nearly every project failure traces back to at least one of them left unchecked.
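The geography check from that baseline can even be mechanical. A minimal Python sketch (the function and currency table are hypothetical, not from any real project) of running the same assertion across locales instead of only the happy-path one:

```python
def format_amount(amount: float, currency: str) -> str:
    """Format a monetary amount; fail loudly on unsupported currencies."""
    symbols = {"USD": "$", "CAD": "C$", "MXN": "MX$"}
    symbol = symbols.get(currency)
    if symbol is None:
        raise ValueError(f"Unsupported currency: {currency}")
    return f"{symbol}{amount:,.2f}"

# Baseline check: exercise every geography we claim to support,
# not just the one the developer tested locally.
for currency in ("USD", "CAD", "MXN"):
    assert format_amount(1234.5, currency).endswith("1,234.50")
```

A three-line loop like this is cheap to write during review, and it turns "have you tested for multiple geographies?" from a checklist question into an executable fact.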





Peer Reviews Are Not Policing



It’s not about catching someone’s mistake; it’s about sharing understanding.

If you find yourself fixing typos in variable names more often than questioning intent, you’re doing the wrong kind of review.

A team that reviews for intent grows faster than a team that reviews for syntax.





The Real Review Happens in Conversations



The best peer reviews aren’t on GitHub.

They happen on a whiteboard, in design discussions, or in those five-minute calls where you simply ask,

“Hey, walk me through what this PR is solving.”

That’s where alignment forms.

That’s where craftsmanship lives.





Closing Thought



If your peer reviews still rely on “looks good” and “merge when ready,”

you’re not building software — you’re maintaining comfort.


Real peer reviews start with problem clarity, not diff analysis.

If you don’t understand the “why,” your review has already failed, no matter how many files you read.



