When I heard a year ago that Uncle Bob was planning on writing a second edition of Clean Code, I got excited, which isn’t normal for me. I thought the first edition was alright, and I don’t read often.
Maybe it was the thought of getting to roast its code examples again like I did in my first ever article.
Maybe it was the promise of a modernization to the teachings from the first, kind of like the excitement you get from reading patch notes to a piece of software you use.
Or maybe, deep down, I was hoping to see someone revise his ideas after so long, and realize he has to change his outlook on “clean code”. After all, that’s been the most scathing criticism of the first edition since it was published over seventeen years ago.
As much as I am a cynic who likes to poke fun, I genuinely respect those who admit they’re wrong and change their minds. I feel a deep joy when my ideas reach people and change their outlooks on things they’re passionately wrong about (though I sometimes wonder if I’m sabotaging my efforts by being belligerent).
So imagine my disappointment when, after spending $60 on this eBook, I found that not only has Bob not changed his tune about his most controversial practices, he’s actually doubled down.
Oh yeah.
But I’m getting ahead of myself.
The Good
I find myself agreeing with the vast majority of his views at a high level.
He discusses how professionals have a duty to keep code from rotting by actively applying clean code principles, even when it slows you down in the short term.
I especially loved his hypothetical but all-too-painful anecdote of code getting so bad that a separate thread of work is dedicated to a complete rewrite. This of course leads to bugs slipping through, as well as a draining overhead of keeping the rewrite up to date with the legacy code in terms of new features and bug fixes.
He discusses the importance of clean code, not just for productivity, but for ethical reasons such as the harm that software errors can cause in our software-dependent society.
His passion for clean code is clearly born from harrowing experiences. The first chapter is a must-read for these principles alone.
His principles on architecture and design are SOLID (heh) as usual, and I’d recommend reading those sections too if you’re fascinated by the intricacies of what makes good architecture.
You can tell this edition was updated for modern times. There are discussions about LLMs and their place in software development, and Bob even uses Grok and *Copilot* to compare his refactorings to AI-generated ones.
What I found especially pleasant was his focus on using modern languages and constructs to present his ideas. For one, he doesn’t just stick to Java. He uses Golang, Python, and *JavaScript* as well. And even when he uses *Java*, he takes advantage of more modern constructs like lambdas, streams, records, and pattern-matching. He’s clearly embraced a lot of functional programming concepts, and it’s a delight to see.
He breaks down each refactoring into digestible intermediate steps, and thoroughly explains his thought process between each. It frustrates me when experts immediately jump to their final solution and justify it after-the-fact, so I’m glad Bob didn’t do this.
Every discussion of a potentially controversial belief of his addresses counterarguments that people have raised. A lot of the time, I’d think of a caveat or exception to one of his ideas, and right in the next paragraph, he’d bring it up. That’s a sign of someone who’s privy to technical discussions, and it’s a definite improvement from the first edition.
In fact, the last section of the book is a transcript of a famous discussion between him and John Ousterhout challenging his clean code principles, which was a nice treat.
Whatever you liked about the first edition, it’s here, and there’s more of it.
The Bad
But all the things you didn’t like are back. With a vengeance.
He reused some of the code examples from the first edition, namely the godawful GuessStatisticsMessage & PrimeGenerator classes, and still seems to believe they’re remotely clean enough to be presented in a book like this.
But rather than rehash old code, I’ll take a look at one of the newer samples. The following is the first major example in the book, and it’s from the second chapter, Clean That Code!. With the help of AI, Bob deliberately wrote this code uncleanly for demonstration purposes:
public class FromRoman {
public static int convert(String roman) {
if (roman.contains("VIV") ||
roman.contains("IVI") ||
roman.contains("IXI") ||
roman.contains("LXL") ||
roman.contains("XLX") ||
roman.contains("XCX") ||
roman.contains("DCD") ||
roman.contains("CDC") ||
roman.contains("MCM")) {
throw new InvalidRomanNumeralException(roman);
}
roman = roman.replace("IV", "4");
roman = roman.replace("IX", "9");
roman = roman.replace("XL", "F");
roman = roman.replace("XC", "N");
roman = roman.replace("CD", "G");
roman = roman.replace("CM", "O");
if (roman.contains("IIII") ||
roman.contains("VV") ||
roman.contains("XXXX") ||
roman.contains("LL") ||
roman.contains("CCCC") ||
roman.contains("DD") ||
roman.contains("MMMM")) {
throw new InvalidRomanNumeralException(roman);
}
int[] numbers = new int[roman.length()];
int i = 0;
for (char digit : roman.toCharArray()) {
switch (digit) {
case 'I' -> numbers[i] = 1;
case 'V' -> numbers[i] = 5;
case 'X' -> numbers[i] = 10;
case 'L' -> numbers[i] = 50;
case 'C' -> numbers[i] = 100;
case 'D' -> numbers[i] = 500;
case 'M' -> numbers[i] = 1000;
case '4' -> numbers[i] = 4;
case '9' -> numbers[i] = 9;
case 'F' -> numbers[i] = 40;
case 'N' -> numbers[i] = 90;
case 'G' -> numbers[i] = 400;
case 'O' -> numbers[i] = 900;
default -> throw new InvalidRomanNumeralException(roman);
}
i++;
}
int lastDigit = 1000;
for (int number : numbers) {
if (number > lastDigit) {
throw new InvalidRomanNumeralException(roman);
}
lastDigit = number;
}
return Arrays.stream(numbers).sum();
}
public static class InvalidRomanNumeralException extends RuntimeException {
public InvalidRomanNumeralException(String roman) {
}
}
}
It works as follows:
- Validate the input string against impossible three-digit sequences.
- Replace each two-character subtractive pair with a custom single-character numeral.
- Check for unnecessary repetitions of certain digits (“IIII” should be “IV”).
- Go through each character and convert it to the equivalent decimal number, taking into account the custom numerals from earlier, while placing them into an array.
- Make sure the numbers are in non-increasing order (to catch invalid numbers like “VX”).
- Add up all the numbers and return the final result.
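To make that replacement trick concrete, here's a minimal trace of the subtractive-pair substitution. The pair-to-placeholder mapping comes from the code above; the harness around it is my own sketch, not code from the book.

```java
// Trace of the subtractive-pair substitution from the original example.
// The replacements mirror the book's code; the class itself is hypothetical.
public class ReplacementTrace {
    public static String collapseSubtractivePairs(String roman) {
        // Each two-character subtractive pair becomes one placeholder character,
        // so the later per-character loop never needs lookahead.
        return roman
            .replace("IV", "4").replace("IX", "9")
            .replace("XL", "F").replace("XC", "N")
            .replace("CD", "G").replace("CM", "O");
    }

    public static void main(String[] args) {
        // "MCMXCIX" -> "MON9": M=1000, O(=CM)=900, N(=XC)=90, 9(=IX)=9 -> 1999
        System.out.println(collapseSubtractivePairs("MCMXCIX")); // prints MON9
    }
}
```

After this pass, summing the string is a flat, branch-free character lookup, which is exactly why the original conversion loop has almost no logic in it.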
Bob conducts his refactoring step-by-step throughout the rest of the chapter. I won’t show the steps; you can read the book for yourself if you’re interested.
**Side Note:** I want to mention that this is quite a tricky example. Unless you already know the best algorithm, it’s hard to determine whether to do a cleanup refactoring, or an algorithmic refactoring. The former is when you reduce duplication, make syntax more concise, extract functions or variables, etc. The latter is a revision of the logic itself in hopes of simplification or optimization.
Here’s Uncle Bob’s refactoring:
public class FromRoman2 {
private String roman;
private List<Integer> numbers = new ArrayList<>();
private int charIx;
private char nextChar;
private Integer nextValue;
private Integer value;
private int nchars;
Map<Character, Integer> values = Map.of(
'I', 1,
'V', 5,
'X', 10,
'L', 50,
'C', 100,
'D', 500,
'M', 1000);
public FromRoman2(String roman) {
this.roman = roman;
}
public static int convert(String roman) {
return new FromRoman2(roman).doConversion();
}
private int doConversion() {
checkInitialSyntax();
convertLettersToNumbers();
checkNumbersInDecreasingOrder();
return numbers.stream().reduce(0, Integer::sum);
}
private void checkInitialSyntax() {
checkForIllegalPrefixCombinations();
checkForImproperRepetitions();
}
private void checkForIllegalPrefixCombinations() {
checkForIllegalPatterns(
new String[]{"VIV", "IVI", "IXI", "IXV", "LXL", "XLX",
"XCX", "XCL", "DCD", "CDC", "CMC", "CMD"});
}
private void checkForImproperRepetitions() {
checkForIllegalPatterns(
new String[]{"IIII", "VV", "XXXX", "LL", "CCCC", "DD", "MMMM"});
}
private void checkForIllegalPatterns(String[] patterns) {
for (String badString : patterns)
if (roman.contains(badString)) throw new InvalidRomanNumeralException(roman);
}
private void convertLettersToNumbers() {
char[] chars = roman.toCharArray();
nchars = chars.length;
for (charIx = 0; charIx < nchars; charIx++) {
nextChar = isLastChar() ? 0 : chars[charIx + 1];
nextValue = values.get(nextChar);
char thisChar = chars[charIx];
value = values.get(thisChar);
switch (thisChar) {
case 'I' -> addValueConsideringPrefix('V', 'X');
case 'X' -> addValueConsideringPrefix('L', 'C');
case 'C' -> addValueConsideringPrefix('D', 'M');
case 'V', 'L', 'D', 'M' -> numbers.add(value);
default -> throw new InvalidRomanNumeralException(roman);
}
}
}
private boolean isLastChar() {
return charIx + 1 == nchars;
}
private void addValueConsideringPrefix(char p1, char p2) {
if (nextChar == p1 || nextChar == p2) {
numbers.add(nextValue - value);
charIx++;
} else
numbers.add(value);
}
private void checkNumbersInDecreasingOrder() {
for (int i = 0; i < numbers.size() - 1; i++)
if (numbers.get(i) < numbers.get(i + 1))
throw new InvalidRomanNumeralException(roman);
}
public static class InvalidRomanNumeralException extends RuntimeException {
public InvalidRomanNumeralException(String roman) {
super("Invalid Roman numeral: " + roman);
}
}
}
Nothing was learned, it seems. This works as follows:
- Check against invalid sequences of numerals (illegal prefixes and unnecessary repetitions)
- Loop through the roman string’s characters. In each iteration:
- Get the current letter and the next (if there is a next)
- If the current letter is ‘I’, ‘X’, or ‘C’, check if the next letter accepts it as a prefix. If so, subtract its value from the next letter’s value, add that to a list, and skip the next letter for the following iteration.
- Otherwise, just add the value of that single letter to the list.
- Finally, check that no number is greater than the previous in the list.
First of all, he took a pure function and turned it into an instance method with attributes, instead of passing around arguments. He did this last time too, but this time he gives reasons, which I’ll discuss in a later section.
Second, and just like last time, his method decomposition is bad.
For example, the doConversion method calls out to three other methods, but these methods don’t reduce duplication, nor do they improve the comprehensibility of the overall method. Sure, it reads like a high-level checklist of steps, but unless your target audience is non-technical people, all this accomplishes is obfuscating HOW the conversion happens.
Once a reader enters the doConversion method, they’ve mentally accepted that they’re going to see some ugly details. The word “conversion” makes this clear, so arbitrarily abstracting them away into their own functions just wastes time. I have to go **three** methods down, each containing the word “convert”, so I can see how the conversion works. Why?
Code should be “blunt” when being “polite” results in being “opaque”.
I’ll admit, it’s not *that* bad in this example, because after I read each method, I found their names to be intuitive. But did you catch that?
**After** I read each method.
Since there are no arguments, and no promise of purity (as evidenced by the abundance of instance variables), I have to **blindly** trust that each method’s name adequately describes what the method does without any side effects.
If you have the hindsight bias that Uncle Bob has, this isn’t an issue. But I believe code should be as readable for first-time readers as it is for those familiar with it. And a major component of readability is the **trust** that each method does what it says it does. Because without that trust, readers will feel compelled to read those methods anyway, and the abstraction benefits that Bob purports are nullified, while the costs of indirection remain.
Obviously, pure functions aren’t automatically trustworthy, but it’s not a binary. Purity leans more toward determinism and self-containedness, which leans far more toward trustworthiness than the above.
This is what happens when you optimize for superficial readability (nice method names with hidden complexity), while allowing unpredictable behavior to proliferate (stateful entanglement).
I could go line-by-line and explain every instance of code that made me go “huh?” or “seriously?” (like why is nchars an instance variable when it’s only used in one method?). But it would probably be easier if I just presented my own refactoring, while maintaining Bob’s general logic.
public class FromRoman3 {
private static final Map<Character, Integer> ROMAN_NUMERALS = Map.of(
'I', 1,
'V', 5,
'X', 10,
'L', 50,
'C', 100,
'D', 500,
'M', 1000);
private static final Map<Character, Character> NUMERAL_PREFIXES = Map.of(
'V', 'I',
'X', 'I',
'L', 'X',
'C', 'X',
'D', 'C',
'M', 'C'
);
private static final String[] ILLEGAL_PREFIX_COMBINATIONS = new String[]{
"VIV", "IVI", "IXI", "IXV", "LXL", "XLX",
"XCX", "XCL", "DCD", "CDC", "CMC", "CMD"
};
private static final String[] IMPROPER_REPETITIONS = new String[]{
"IIII", "VV", "XXXX", "LL", "CCCC", "DD", "MMMM"
};
public static int convert(String roman) {
if (containsIllegalPatterns(roman, ILLEGAL_PREFIX_COMBINATIONS) ||
containsIllegalPatterns(roman, IMPROPER_REPETITIONS)) {
throw new InvalidRomanNumeralException(roman);
}
List<Integer> numbers = new ArrayList<>();
int i = 0;
while (i < roman.length()) {
char currentLetter = roman.charAt(i);
char nextLetter = i == roman.length() - 1 ? 0 : roman.charAt(i + 1);
if (!ROMAN_NUMERALS.containsKey(currentLetter)) {
throw new InvalidRomanNumeralException(roman);
} else if (NUMERAL_PREFIXES.getOrDefault(nextLetter, (char) 0) == currentLetter) {
int num = ROMAN_NUMERALS.get(nextLetter) - ROMAN_NUMERALS.get(currentLetter);
numbers.add(num);
i += 2;
} else {
int num = ROMAN_NUMERALS.get(currentLetter);
numbers.add(num);
i += 1;
}
}
if (containsIncreasingNumbers(numbers)) {
throw new InvalidRomanNumeralException(roman);
}
return numbers.stream().mapToInt(Integer::intValue).sum();
}
private static boolean containsIllegalPatterns(String roman, String[] patterns) {
for (String badString : patterns)
if (roman.contains(badString)) return true;
return false;
}
private static boolean containsIncreasingNumbers(List<Integer> numbers) {
for (int i = 0; i < numbers.size() - 1; i++)
if (numbers.get(i) < numbers.get(i + 1)) return true;
return false;
}
public static class InvalidRomanNumeralException extends RuntimeException {
public InvalidRomanNumeralException(String roman) {
super("Invalid Roman numeral: " + roman);
}
}
}
First thing I did was replace all instance variables with local variables passed around through arguments. That alone massively increased readability.
Second, I extracted every explicit mention of roman numerals into constants. I did this partly for performance reasons. But really, I did it so I could label those numerals with constant names rather than rely on function names like Bob.
Third, I revised the function structure.
I made the convert function into the meatiest function of the whole class. You came all the way here to see how numeral conversion works, so here it is in all its glory.
I changed checkForIllegalPatterns to containsIllegalPatterns, and brought the exception throws into the main function. I felt this was more explicit. The word “contains”, along with the rest of the signature, clearly indicates what the function does. The word “check” doesn’t tell you what happens if the check fails.
I changed checkNumbersInDecreasingOrder to containsIncreasingNumbers and brought the exception out, similar to the pre-validation steps. But there’s no duplication here, so why did I keep the method? Two reasons:
- This method can be understood in isolation.
- This post-validation step isn’t the main purpose of the convert function.
The thing I struggled the most with was the conversion loop. Bob coded it in a way that would’ve caused a lot of duplication had I simply inlined addValueConsideringPrefix. I wanted to clean this up without changing his algorithm.
The first idea that came to mind was using a Map<Character, Character[]> to map each prefix letter to its potential next letter. But as I was implementing that idea, I realized I could just reverse the mapping so that each letter was mapped to its prefix. And since the reversed mapping was unique, I didn’t need Character[] as the value type.
After that, it was a simple matter to keep all the logic inside the loop (which I changed to a while loop to make the index incrementation more obvious).
Now of course, by doing this, I’ve lost the conciseness of the existing pattern-matching, but I think it was worth it to avoid indirection.
By the way, Bob was nice enough to provide a comprehensive test suite, which was much appreciated for a tricky algorithm like this. So I’m as confident in my code as he is with his.
import fromRoman.FromRoman.InvalidRomanNumeralException;
import org.junit.jupiter.api.Test;
import static fromRoman.FromRoman.convert;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.junit.jupiter.api.Assertions.assertThrows;
public class FromRomanTest {
@Test
public void valid() throws Exception {
assertThat(convert(""), is(0));
assertThat(convert("I"), is(1));
assertThat(convert("II"), is(2));
assertThat(convert("III"), is(3));
assertThat(convert("IV"), is(4));
assertThat(convert("V"), is(5));
assertThat(convert("VI"), is(6));
assertThat(convert("VII"), is(7));
assertThat(convert("VIII"), is(8));
assertThat(convert("IX"), is(9));
assertThat(convert("X"), is(10));
assertThat(convert("XI"), is(11));
assertThat(convert("XII"), is(12));
assertThat(convert("XIII"), is(13));
assertThat(convert("XIV"), is(14));
assertThat(convert("XV"), is(15));
assertThat(convert("XVI"), is(16));
assertThat(convert("XIX"), is(19));
assertThat(convert("XX"), is(20));
assertThat(convert("XXX"), is(30));
assertThat(convert("XL"), is(40));
assertThat(convert("L"), is(50));
assertThat(convert("LX"), is(60));
assertThat(convert("LXXIV"), is(74));
assertThat(convert("XC"), is(90));
assertThat(convert("C"), is(100));
assertThat(convert("CXIV"), is(114));
assertThat(convert("CXC"), is(190));
assertThat(convert("CD"), is(400));
assertThat(convert("D"), is(500));
assertThat(convert("CDXLIV"), is(444));
assertThat(convert("DCXCIV"), is(694));
assertThat(convert("CM"), is(900));
assertThat(convert("M"), is(1000));
assertThat(convert("MCM"), is(1900));
assertThat(convert("MCMXCIX"), is(1999));
assertThat(convert("MMXXIV"), is(2024));
}
@Test
public void invalid() throws Exception {
assertInvalid("ABE"); // I added this one
assertInvalid("IIII");
assertInvalid("VV");
assertInvalid("XXXX");
assertInvalid("LL");
assertInvalid("CCCC");
assertInvalid("DD");
assertInvalid("MMMM");
assertInvalid("XIIII");
assertInvalid("LXXXX");
assertInvalid("DCCCC");
assertInvalid("VIIII");
assertInvalid("MCCCC");
assertInvalid("VX");
assertInvalid("IIV");
assertInvalid("IVI");
assertInvalid("IXI");
assertInvalid("IXV");
assertInvalid("VIV");
assertInvalid("XVX");
assertInvalid("XVV");
assertInvalid("XIVI");
assertInvalid("XIXI");
assertInvalid("XVIV");
assertInvalid("LXL");
assertInvalid("XLX");
assertInvalid("XCX");
assertInvalid("XCL");
assertInvalid("CDC");
assertInvalid("DCD");
assertInvalid("CMC");
assertInvalid("CMD");
assertInvalid("MCMC");
assertInvalid("MCDM");
}
private void assertInvalid(String r) {
assertThrows(InvalidRomanNumeralException.class, () -> convert(r));
}
}
Bob claims his code is cleaner than the original. It’s not. I actually like the original function’s forthcomingness. It decomposes nothing, and yet, the fact that it barely has any branching makes it easy to comprehend. I even found the double-digit numeral to single-digit numeral conversion to be…elegant?
Seriously, it’s thanks to this trick that the conversion loop of numerals to decimals has practically zero logic. It might as well be a table. Bob points out that it doesn’t pass all the test cases, but with very minor changes, I was able to fix it.
This isn’t the only code example where Bob over-decomposes, but I’d be here all day if I went through each one.
The Butchering
After the above refactoring, he writes this:
You might be a functional programmer horrified that the functions are not “pure.” But, in fact, the static convert function is as pure as a function can be. The others are just little helpers that operate within a single invocation of that overarching pure function. Those instance variables are very convenient for allowing the individual methods to communicate without having to resort to passing arguments. This shows that one good use for an object is to allow the helper functions that operate within the execution of a pure function to easily communicate through the instance variables of the object.
And in Chapter 7: Clean Functions, he argues that the following three variations of a sigma function are equally impure:
public static double sigma(double... ns) {
var mu = mean(ns);
var deviations = Arrays.stream(ns)
.map(x->(x-mu)*(x-mu))
.boxed().mapToDouble(x->x);
double variance = deviations.sum() / ns.length;
return Math.sqrt(variance);
}
public static double sigma(double... ns) {
double mu = mean(ns);
double variance = 0;
for (double n : ns) {
var deviation = n - mu;
variance += deviation * deviation;
}
variance /= ns.length;
return Math.sqrt(variance);
}
public static double sigma(double... ns) {
return new SigmaCalculator(ns).invoke();
}
private static class SigmaCalculator {
private double[] ns;
private double mu;
private double variance = 0;
private double deviation;
public SigmaCalculator(double... ns) {
this.ns = ns;
}
public double invoke() {
mu = mean(ns);
for (double n : ns) {
deviation = n - mu;
variance += deviation * deviation;
}
variance /= ns.length;
return Math.sqrt(variance);
}
}
And you might wonder how he could possibly believe this?
Simple. He misunderstands the definition of function purity. I mean, he gets it mostly right until he says this:
How do you create a pure function? Simple. Don’t change the value of any variables; or to paraphrase the famous line from Mommie Dearest: “No Assignment Statements Ever!” Or to say that in yet another way: Pure functions are immutable.
His source?
Functional Design, by Robert C. Martin.
Uncle Bob has conflated two different functional programming principles.
There’s the principle of purity, which means no side-effects, that is, functions modifying outside their scope.
Then, there’s the principle of immutability, which emphasizes the lack of variable reassignments and mutations of existing values.
You can adhere to the first without adhering to the second.
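The distinction is easy to demonstrate with a sketch of my own (not from the book). This function is pure by the standard definition: it depends only on its input, touches no outside state, and always returns the same result for the same argument. Yet it reassigns a local accumulator on every iteration, so it fails Bob's "no assignment statements ever" test.

```java
public class PureButMutable {
    // Pure: no side effects, no reads of external state, deterministic.
    // Not "immutable" in Bob's sense: the local variable is reassigned
    // on every loop iteration, invisibly to any caller.
    public static int sumOfSquares(int[] ns) {
        int total = 0;           // an assignment statement, yet still pure
        for (int n : ns) {
            total += n * n;      // local mutation; no external observer can see it
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(new int[]{1, 2, 3})); // prints 14
    }
}
```

Purity is about what a caller can observe; immutability is about how the body is written. Conflating the two is what lets Bob put the loop version and the instance-variable version in the same bucket.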
This doesn’t sound like a big deal, but it’s a HUGE part of his rationalization. Since assignment statements are “impure” in his mind, he believes the first two examples to be just as impure as the third.
Now we’ve got instance variables and all manner of variable manipulations. And yet, the sigma function is pure. None of those impure operations are visible outside the sigma function. The bottom line is that purity is an external characteristic of a function, not an internal characteristic. It does not matter how impure the internals of a function are—that function will be pure so long as all the impurity is hidden from all external observers (including other threads).
He butchers the definition of function purity to essentially apply only from the perspective of public methods.
I’m genuinely appalled.
The concept of purity is meant to apply to ALL functions, not just the outermost ones. It’s supposed to make code easy to reason about, and that includes implementation details.
Bob claims that passing around instance variables is less of an overhead than function arguments. It’s no surprise he thinks this, considering he believes his methods’ names are so precise and descriptive that arguments aren’t necessary.
Let’s imagine someone decides to hop into a method to understand an implementation detail.
They see a method littered with references to instance variables, and then they wonder what values those variables had before.
Maybe this method depends on certain variables being initialized a certain way at certain times.
Maybe you can’t properly call this method without calling certain other methods first.
In other words, each method depends on shared state.
Does Bob expect people to read through every method before the one they want to understand? Doesn’t that defeat the whole point of abstraction?
Bob has replaced the overhead of method arguments with an even more problematic overhead of shared state. The fact that this state exists only within a specific instance of a class doesn’t make it acceptable.
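To see the problem, consider a hypothetical class in the same style (my illustration, not from the book), where two argument-less methods communicate through a field. Nothing in the signature of `computeTotal` tells you it silently depends on `loadValues` having run first.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of temporal coupling through shared state.
public class OrderReport {
    private final List<Integer> values = new ArrayList<>();

    public void loadValues(List<Integer> input) {
        values.addAll(input);   // mutates the shared field
    }

    public int computeTotal() {
        // The signature promises nothing about prerequisites. Call this
        // before loadValues() and you silently get 0 instead of an error.
        return values.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        OrderReport report = new OrderReport();
        System.out.println(report.computeTotal()); // prints 0, wrong but silent
        report.loadValues(List.of(3, 4, 5));
        System.out.println(report.computeTotal()); // prints 12
    }
}
```

A pure `computeTotal(List<Integer> values)` would make the dependency explicit in the one place readers always look: the parameter list.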
Pure functions read like contracts (arguments are part of that contract). They take specific input, whatever state that input may be in, perform a certain set of operations on that input, and output the same result every time. This makes them easier to understand in isolation, which reduces mental burden. The only “shared” state, if you can even call it that, is what the higher-level function passes as arguments between function calls.
Stateful methods without arguments ask you to take their names at face value, as if knowingly trying to dissuade you from the ugliness underneath. Every time you step into a method, you have to add a node into your mental execution graph (which you do anyway), but then you have to factor in the state of each instance variable between function calls.
This might not be so bad if the shared state is only between two functions that are called consecutively. But what happens when this goes four levels deep in four different call hierarchies? Bob must have a crazy powerful working memory to be able to juggle all this.
You might argue that there’s no significant cognitive difference between local variables passed around as arguments, and instance variables being referenced. But there is, and it involves scope. By keeping a variable’s scope as small as possible, you reduce the space across which readers have to keep it in mind to be able to reason about it.
If this weren’t the case, you could simply have EVERY variable be at class scope, and it wouldn’t matter. Even Bob wouldn’t write code like that.
So, either Bob isn’t aware of these costs, or he *is* aware, but SEVERELY underestimates them. I believe it’s the latter, but either way, it’s a damn shame.
Conclusion
My opinion is similar to last time.
Follow the advice at a very high level, but ignore the examples. If you were expecting improvements there, you won’t find them.
I will say, Bob comes across as less dogmatic about his code changes in this edition, which is nice. But I don’t think you can play the “let’s agree to disagree” card when the “improved” code is *this* bad. I know that’s harsh, but there’s no polite way to say this.
Anyway, thanks for reading, and have a nice day.
P.S.
It’s been about five days since this book was published on Amazon, and I haven’t seen a single article or video about it since. It’s just been radio silence. Even Uncle Bob himself hasn’t been marketing it. I’m getting kinda creeped out here.
It’s lonely.