Keep pull requests SMALL, so they’re easy to understand and review. Plus, small pull requests create fewer problems later.
Watch for duplicate & dead code. Remove unused code and abstract logic to avoid duplication.
Good tests prevent regressions and show how code should work. So verify whether the new code has “sufficient” test coverage.
Choose the correct reviewer for each change: code owners or domain experts can quickly catch domain-specific issues. If you assign many reviewers, ensure each understands their responsibilities to prevent delays.
Write clear pull request descriptions that explain the “what” and “why” of changes. Also, link relevant tickets and attach screenshots that help reviewers understand the context.
Use a code review CHECKLIST. It could cover design, readability, security, testing, and so on. This ensures consistency in reviews and reduces the chances of missing common issues.
Automate easy parts. Use tests, linters, and static analysis to catch errors and style issues. This way, reviewers can focus on logic & architecture.
I’m happy to partner with CodeRabbit on this newsletter. Code reviews usually delay feature deliveries and overload reviewers. And I genuinely believe CodeRabbit solves this problem.
Use review metrics to find “bottlenecks”. Measure review time, bug rates, and pull request size. Then adjust the process based on data to improve speed without sacrificing quality.
Review quickly… but don’t rush! The goal is to improve code health, not just quick approvals.
Keep reviews SHORT. It’s hard to stay focused after reading 100+ lines of code. If the change is big, break it up into smaller parts or focus on one section at a time to give effective feedback.
Get early feedback on big features to save time later. This helps to catch issues early and makes reviews more manageable.
Ask for a review ONLY after tests & builds pass. This prevents wasting the reviewer’s time on broken code. Besides, it signals the code is stable enough to review.
Use review tools effectively to save time - threaded comments, suggested edits, templates, and so on. The correct setup makes reviews smoother.
Watch out for potential bugs & logic mistakes that tests might miss. Think about “race conditions or extreme inputs”. Human reviewers can often spot bugs that automated tests miss, especially in complex logic.
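Here’s a tiny Python sketch of the kind of bug this means (the counter and thread counts are made up for illustration): a read-modify-write on shared state that passes every single-threaded test but can lose updates under concurrency.

```python
import threading

counter = 0  # shared state touched by several threads

def increment_many(times: int) -> None:
    global counter
    for _ in range(times):
        current = counter       # read
        counter = current + 1   # write; another thread may have updated
                                # counter in between, so its increment is lost

threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000; the unsynchronized version often prints less.
# The fix a reviewer might suggest: guard the update with a threading.Lock().
print(counter)
```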
Encourage ALL team members to take part in code reviews. And don’t let the same people handle all reviews. Rotation spreads knowledge and avoids burnout.
You can’t review code effectively if you don’t understand what it does. So read the code carefully and run it locally if necessary.
Keep the feedback within the “scope”. If you notice any issues outside the scope of the change, log them separately. This keeps reviews constructive and prevents endless delays.
Review in layers: design first, then details. This approach helps you catch both major and minor issues efficiently.
Compare the implementation with the requirements. Ensure it handles acceptance criteria, edge cases, and error conditions correctly.
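As a rough sketch of what that looks like in practice (the `apply_discount` function and its rules are hypothetical), a reviewer can check that the tests actually cover the boundaries and invalid inputs the ticket describes:

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; the (hypothetical) requirements say
    percent must be between 0 and 100 and price must be non-negative."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Happy path from the acceptance criteria.
assert apply_discount(100.0, 25) == 75.0

# Edge cases a reviewer should ask about: boundaries and invalid input.
assert apply_discount(100.0, 0) == 100.0      # no discount
assert apply_discount(100.0, 100) == 0.0      # full discount
for bad in [(-1.0, 10), (100.0, -5), (100.0, 150)]:
    try:
        apply_discount(*bad)
        raise AssertionError(f"expected ValueError for {bad}")
    except ValueError:
        pass
```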
Enforce coding standards for CONSISTENCY. Suggest refactoring if the logic is hard to follow.
Use AI tools to summarize changes or find issues. It saves time! But use them as a helper... and not a replacement for human reviews.
Guess what? When you open a pull request, CodeRabbit can generate a summary of code changes for the reviewer. It helps them quickly understand complex changes and assess the impact on the codebase. This speeds up the code review process.
Set clear “guidelines” for how reviews get approved. For example, have at least two reviewers for critical code changes.
Consider how code performs at scale in “performance-critical” areas. Look out for things that might cause slowdowns in critical paths - unnecessary loops and so on. Remember: fixing issues is easier during review than in production.
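A classic example (data and function names invented for illustration): membership checks against a list inside a loop, which quietly turns a hot path into an O(n × m) scan.

```python
def active_user_orders_slow(orders: list[dict], active_user_ids: list[int]) -> list[dict]:
    # "in" on a list is a linear scan, so this is O(len(orders) * len(active_user_ids)).
    return [o for o in orders if o["user_id"] in active_user_ids]

def active_user_orders_fast(orders: list[dict], active_user_ids: list[int]) -> list[dict]:
    # Building a set once makes each membership check O(1) on average.
    active = set(active_user_ids)
    return [o for o in orders if o["user_id"] in active]

orders = [{"id": i, "user_id": i % 1000} for i in range(10_000)]
assert active_user_orders_slow(orders, list(range(500))) == \
       active_user_orders_fast(orders, list(range(500)))
```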
Use reviews as an opportunity to share KNOWLEDGE and grow together. Share tips and best practices, especially with junior engineers.
Ensure the code handles errors “gracefully”. Functions must deal with null inputs or external call failures without crashing. Good error handling makes the system robust & easy to debug.
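A minimal sketch of what “gracefully” can mean, assuming a hypothetical profile lookup that depends on an external API:

```python
import logging
from urllib.request import urlopen
from urllib.error import URLError

logger = logging.getLogger(__name__)

def fetch_profile_name(user_id: str | None, base_url: str = "https://example.com/api") -> str:
    """Return a display name, degrading gracefully instead of crashing."""
    if not user_id:
        # Null/empty input: fall back instead of blowing up deep in the call stack.
        return "Guest"
    try:
        with urlopen(f"{base_url}/users/{user_id}", timeout=2) as resp:
            return resp.read().decode() or "Guest"
    except (URLError, TimeoutError) as exc:
        # External call failed: log the cause and return a safe default,
        # so one flaky dependency doesn't take the whole feature down.
        logger.warning("profile lookup failed for %s: %s", user_id, exc)
        return "Guest"
```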
Adjust practices to fit your team’s needs. What works at one company might not work for another. Keep experimenting until you find your ideal flow.
Always review with the bigger picture in mind. Think about how the change interacts with the codebase. And consider cross-cutting concerns: performance, concurrency, and backward compatibility.
It’s better to clarify… than to assume. So ask clarifying questions when something is unclear about the change. A simple question can prevent misunderstandings or reveal missing requirements.
If possible, run the code locally, especially for complex & critical code changes. Seeing it in action can reveal issues that reading won’t.
Focus on code correctness & clarity, not personal style. If an issue is purely stylistic and not covered by a guideline, consider letting it pass or marking it as a nitpick. Remember, reviews are about improving the codebase.
Suggest a solution when pointing out a problem. If a function is complex, propose breaking it into smaller functions or using a design pattern. Reviews are most valuable when they teach… not just criticize.
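For example, instead of only commenting “this function does too much,” you can sketch the split you have in mind (the order-processing code below is invented for illustration):

```python
# Before: one function validates, prices, and builds the response all at once.
def process_order(order: dict) -> dict:
    if not order.get("items"):
        raise ValueError("order has no items")
    total = sum(item["price"] * item["qty"] for item in order["items"])
    if order.get("coupon") == "SAVE10":
        total *= 0.9
    return {"id": order["id"], "total": round(total, 2), "status": "confirmed"}

# Suggested split: each step gets a name, a signature, and its own tests.
def validate_order(order: dict) -> None:
    if not order.get("items"):
        raise ValueError("order has no items")

def order_total(order: dict) -> float:
    total = sum(item["price"] * item["qty"] for item in order["items"])
    if order.get("coupon") == "SAVE10":
        total *= 0.9
    return round(total, 2)

def process_order_refactored(order: dict) -> dict:
    validate_order(order)
    return {"id": order["id"], "total": order_total(order), "status": "confirmed"}

sample = {"id": 1, "items": [{"price": 20.0, "qty": 2}], "coupon": "SAVE10"}
assert process_order(sample) == process_order_refactored(sample)
```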
Consider whether the documentation requires any updates because of the change. An API change may need changes to the API docs or the README file. Ensure everything remains accurate and complete.
Treat code review as a “team effort”, not a fight. Focus on making the product better rather than proving someone wrong. A friendly tone makes feedback easier to accept.
Mention explicitly which comments are essential & which are optional. Label important fixes separately from small “nice-to-have” ideas. This helps the author to prioritize and stay focused.
Bet you didn’t know… **CodeRabbit CLI** brings instant code reviews directly to your terminal, seamlessly integrating with Claude Code, Cursor CLI, and other AI coding agents. While they generate code, CodeRabbit ensures it’s production-ready - catching bugs, security issues, and AI hallucinations before they hit your codebase.
Involve a neutral third party in disagreements over CRITICAL issues - ask a tech lead or architect. Also, create a follow-up task if the problem is outside the current scope.
Explain the “why” behind your feedback. Understanding the reason behind feedback helps others learn. This way, they’re less likely to repeat the issue.
Secure code protects users and the business. So always think about SECURITY. Be cautious of weak data validation, exposed data, or improper error handling.
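One concrete pattern to watch for (sketched here with a throwaway SQLite table): user input concatenated straight into a query, versus a parameterized query that keeps input as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name: str):
    # Red flag in review: user input concatenated into SQL enables injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver escapes the value, so input stays data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

malicious = "' OR '1'='1"
print(find_user_unsafe(malicious))  # returns every row, leaking data
print(find_user_safe(malicious))    # returns nothing, as intended
```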
Be open to discussion when opinions differ. Ask for the author’s reasoning and listen before insisting. Talking through disagreements often leads to better solutions.
Point out what’s done well too… it motivates people to keep doing it. Keep a balance between criticism and appreciation for high morale.
Don’t use code reviews for PERFORMANCE EVALUATIONS! Reviews exist to improve code, not to measure people. When engineers feel safe, they write better code & review honestly.
Respond to feedback with curiosity, not defensiveness. Treat comments as learning opportunities.
Having another set of eyes helps catch mistakes. So make sure someone else reviews “every” change. Even small changes benefit from peer review.
I could go on and on and on.
But if those 42 ways aren’t enough to 10x your code reviews, then probably anything else I say will go in one ear and right out the other.
As for AI code reviews that catch bugs, security flaws, and performance issues *as* you write code?
That’s why CodeRabbit exists.
It brings real-time, AI code reviews straight into VS Code, Cursor, and Windsurf.
👉 Install CodeRabbit in VSCode for FREE
If you find this newsletter valuable, share it with a friend, and subscribe if you haven’t already. There are group discounts, gift options, and referral rewards available.
**Want to advertise in this newsletter?** 📰
If your company wants to reach a 190K+ tech audience, advertise with me.
Thank you for supporting this newsletter.
You are now 190,001+ readers strong, very close to 191k. Let’s try to get 191k readers by 21 November. Consider sharing this post with your friends and get rewards.
Y’all are the best.