Off the cuff tech-philosophy
What it would take to convince me
January 10, 2026
Hey Elliot, you’re still an AI hater right? What would it take to convince you?
Firstly, please call me Mr. Morris. Secondly, you say “still” as though my hatred for AI is related to its effectiveness and not to how its existence shatters the democratic principle of laws applying equally to all the human participants of a nation state.
Okay fine, if you really wanted to convince me that heavily LLM driven software development practices (because that’s what you must mean to justify … *gestures broadly*) are now an effective way of producing valuable software, you’re going to need to build and sell a software product to a specific set of conditions.
I’m going to preface this by saying this is what would convince me, it is not a fair test, and I’m a picky bastard. I’m talking about being convinced to change my mind, which means I need to really be certain that specific suspicions of mine are not at play. I’ll explain what I mean as we go.
I’m also going to try and make these relatively falsifiable. No “Produce real value” or any other metaphysically adjacent requirements. Still, I’m sure anyone attempting to actually test me on these will find it frustratingly impossible to agree if any given example “counts”. Too bad.
#1 – Displace the market leader
If LLM development practices are the future, and are an improvement, it stands to reason that eventually a company driven by them will disrupt and replace the incumbent market leaders.
This only makes sense if we are in at least a semi-rational market environment, and development practices actually, you know, matter when it comes to organizational success. If either of those is untrue then nothing we do means anything anyway, and this whole LLM kerfuffle is a massive waste of time and money.
#2 – Do so with a product unrelated to AI
I’m not sure I’ve ever seen a properly off-the-deep-end AI booster who wasn’t themselves developing an AI product. It’s normally some sort of LLM orchestration framework, or context management system.
It goes without saying that this is deeply suspicious; you can’t manifest value solely by eating your own tail.
#3 – Do so in a “traditional” market domain
Let’s take that thinking a step further. To be truly convinced, I’m going to need to be sure you’re not operating within an irrational market environment.
This means “new” domains are out, I can’t take the risk, they could be pump and dump bubbles. Any hype driven domains like crypto, metaverse or anything to do with speculative investment are also way out.
Here are some examples of domains that would satisfy me: consumer or server operating systems, GIS software, FEA/MBD simulators, accountancy/tax software, office suite software, CAD software. You know … stuff that’s actually important.
#4 – Your “engineers” never edit source code
This one may seem wild, and I agree, it’s not fair. However, I’m more aware than anyone of how often a small number of motivated engineers can keep a ship moving forward by stealth, despite destructive and baffling behaviors by leadership.
I need to ensure that this is not happening to be convinced. Therefore, no engineers editing the code, ever. I’d quite like to weaken this requirement as even I grant that there’s a huge gap between full-vibe and LLM assisted practices, but I can’t think of a better way to control against engineers compulsively trying to produce working products.
#5 – Make a real profit for three straight years
Finally, actually make some goddamn money. Three years seems like a good amount of time to prove that you’ve got something sustainable and aren’t riding a bubble. The length of time should also guard against the greenfield problem, as I’ll need to be shown that you can maintain a product to the satisfaction of your customers, whether via traditional maintenance or newfangled AI driven total rewrites.
This has to be real profit too. No weird accounting tricks, no counting investment as profit. Customers need to freely choose to give you their money, ideally they will also be happy about doing so. This also means that you need to be doing this in a non-subsidized token environment, unless those subsidies can be expected to remain available in the long term. I’m confident enough that three years is long enough for that to resolve, although the market is extremely good at staying irrational so perhaps this is too optimistic.
These would convince me. They are not proofs. I can imagine LLM driven development being useful without any given product having fulfilled these conditions, especially the traditional market domain one, which has regulatory capture issues to contend with. I also don’t think an LLM driven company that fulfills these conditions has proven anything, not formally, these are mostly market based and market success doesn’t necessarily equate to value.
Nonetheless, I am a human living in a market economy so they hold a large psychological sway, and these conditions being met would cause me to have to re-evaluate my “LLM systems cannot produce useful value by definition” ideology.
You shouldn’t really care about convincing me though, I don’t matter. If you’re really confident you’ll achieve all these things and keep it to yourself, why bother making a fuss, you’ll have already won.
Move Fast and Ship Junk
January 8, 2026
Hold onto your hats, brave and controversial opinion alert!
It is neither inevitable, nor desirable, to ship software containing bugs.
Are you shocked? I’ll admit I’m a little surprised at how viscerally certain kinds of people in software react to this. I suppose it goes against the ever-infallible common-wisdom. Let’s unpack it.
Obviously it’s impossible to guarantee that something works, right? We have no reason to believe software is some special animal here. Cars, planes, agricultural machinery: none of these can be relied on to work correctly, even brand new.
Taking my tongue out of my cheek, this is sort of true. Cars break down, but there’s a difference. We expect our cars to start. It’s strange when they don’t. Furthermore, we can tell when a car is likely to start reliably and when it is not; there are experts capable of making that call.
Given this, we should move forward under a weakened and more realistic assertion of what working software means. I would define it as:
- Reliability in the 99%+ range.
- The ability to inspect software and make an informed decision on whether the above reliability guarantee will hold in any given environment.
Okay great, with a definition that matches physical systems, we’re all good. This is how serious software works currently, right?
Does it fuck.
The next argument goes that, well, you can get software that operates like this, if you’re willing to put an undue amount of effort into paranoid coding standards and triple checking everything, but it’s just not worth it. Software isn’t that important, the risk involved with delivering broken behaviour is so low, and moving fast yields better outcomes in the long haul.
I have so many objections to that line of reasoning, here are some of them:
Plenty of software is vital. You may roll your eyes at this. Of course if you’re building a space shuttle you’re going to do things differently. That’s not really the point though. Do you know how your software is used, really? Most important software is a component or a library, or is otherwise used as an element in a grander workflow. Do you think your software gets audited? Do you think we, as a field, are capable of auditing software?
Where is your disclaimer that states you have no real idea if your software can be relied upon in any given environment? Don’t balk at this too soon, it’s not as ridiculous as it sounds. “This software is exercised in X environments to Y degrees of rigour. It may work in other environments but we make no guarantees.” is good enough. Understand what your software is and be honest about it, no one’s going to judge you.
Why does software get a free pass in terms of the responsibility we take on? If my new boiler started exhibiting strange, inscrutable behaviours three months after install, am I just to shrug? No, someone is responsible for that. I’m not here saying that I need to get angry at the installer, humans do make mistakes, but nonetheless it should not happen and someone needs to remedy the situation, and make sure it does not happen again.
Software seems unique in that producing it seems to come with little to no personal or organizational responsibility. Our craft cannot be the bedrock of the global economy and simultaneously not important enough to give a shit about.
I think we need to be held legally responsible here, as software engineers, maybe even as individuals. Doctors don’t get to hide behind “The hospital made me do it,” so neither should we. I know that’s scary. I know it’s hard to imagine working in ways where we are accountable, but we can get there. Not only will this make software better, but it will also imbue engineers with more agency. When a profession is legally liable, that profession will be forced to develop mechanisms to push back against enshittification pressure, there will be no other option.
Reliability doesn’t have a 1:1 correlation with cost. Think of the most reliable appliances in your home. Whilst they may not be the cheapest (regretfully, there is still a market for corner cutting even in physical goods), I bet they are among the cheaper options, especially in relation to expensive high-complexity gadgets and gizmos. This is because the manufacturers of those things figured it out. They removed the ambiguity from their craft and turned it into a science, which then allowed them to cut costs in controlled, comprehensible ways.
Software as a field needs to advance. No, it’s not something you can just do as an individual contributor or even as a company, but it’s possible if we all pull together. We need standards bodies, we need professional accreditation. We probably need to reduce the number of working software engineers in general as we hold ourselves to higher standards. It’ll be a long road, but don’t let a perverse system motivated by ever increasing wealth extraction convince you that what we’re doing currently is either good or inevitable. It isn’t.
Schedule me in for an a-ha
January 8, 2026
Everyone’s having a-ha moments with LLMs. The new GPT 5.2 and Opus 4.5 models have once again changed the game; they’re now good enough to write 90% of all your code, and even skeptics are turning around and hopping onboard. Wow! Can’t wait to see it.
Whilst I’m not being totally sarcastic in my desire to have my mind changed, you can tell from my tone I’m not there yet. What folk speak about concerning their a-ha moments does not resonate with me, to the extent that I wonder sometimes if we’re even in the same profession. Join me in popping the cork as I officially become an LLM malcontent.
Greenfield work
Obviously LLMs seem good at this, because engineers who can barely invoke a compiler also seem good at this. Greenfield work is so delightful because you can’t really get it wrong; there are no awkward pylons jutting out of the ground or ancient utilities you need to know how to interact with just so. Just how many of you are doing greenfield work as your main thing? Am I out of touch?
I remain suspicious of greenfield work that happens quickly. LLMs or no, if you show me a lot of visible progress all at once for a new project, I am going to assume you’re building a toy that is ignoring all the sticky complexities of the problem domain. This was true before LLMs, and is especially true now.
Publishing open source repositories
Seriously who the fuck cares. Your project is probably pointless. Before LLMs there was an oversupply of unproven toy projects, and now that’s even worse. Publishing a project that doesn’t get adopted is negative value. That’s fine, I’m proud of you, everyone needs to learn. But you must realize that until you’re actually supporting serious users you have achieved less than zero when it comes to delivering value.
I suspect many people have fallen into the trap of thinking that the code that makes up their project is what gives it value. Code has no inherent value, it only has inherent cost.
Writing better code
“Claude writes better code than I do”
What the fuck do you mean?
Like what, it’s prettier? More performant? Has fewer edge cases? Runs on a larger swathe of target architectures? Better documents its preconditions? Has fewer dependencies? Will put you in a better world-line 6 months from now such that you can trivially integrate unforeseen use cases? Has sniffed out the way the wind is blowing among the technical directors and CTO and has left its options open for the big change that everyone says isn’t going to happen but you are pretty sure actually is?
This is one I just don’t understand. “Good code” obviously doesn’t exist outside of an extremely trivial set of metrics, and we’ve never even agreed on what those are!
However, Bad Code does exist. There is code with memory leaks, code that crashes, code that doesn’t report errors richly enough. Maybe this is what people mean: LLMs tend not to output Bad Code? I can agree with that.
“Claude writes better code than I do”
Oh no.
Better search
Okay, I care about this a little. I think this is what most people are practically using LLMs for, both inside and outside the engineering profession. Whilst I am a Claude Code enjoyer, the majority of my LLM usage remains simple questions.
I’m going to try to set aside my intense, seething rage at the absolute devastation of the commons that LLM companies enacted in order to pull this off, and make a smaller point.
Traditional search is better than LLMs. Google used to be good; it’s one of my great fears that people will forget this. People called it magic in the same way people seem stunned by the capabilities of LLMs today. It was telepathic, it just knew what you wanted. You could even learn google-fu and make it even better. I would instantly remove all LLM tooling from my workflow if it meant I could go back to 2015 Google, but alas, the need to extract wealth has eliminated that possibility. I doubt the internet as it stands could even support search that good anymore.
Do people not realize the same enshittification is going to happen with these tools? We are being forced to become dependent on these centrally controlled systems for something as essential as easy access to information. It’s disgustingly transparent, and obviously catastrophic in ways that reach beyond mere software engineering. Yet here I am, typing questions into Claude the same as everyone else, because it’s the only easy option. I hate myself.
Finally, a note on the discourse.
To those who think I’m being a moron, that I’m biased, closed-minded, will be left behind, etc. That’s great for you! The best part of your position is that you being correct makes me irrelevant. You have no incentive to argue back at me. Let my complaints hit the wind and become evidence of my idiocy. You’re going to out-compete me in the market anyway, right?
Maybe show a little grace and let us losers have our tantrums before we are drummed out of the profession, you’ve nothing to lose either way.
Shadow Backlogs
January 7, 2026
I admit it! I maintain a shadow backlog. That is to say, I habitually spend time working on tasks that have not been prioritized by my team, but that I have unilaterally decided are important enough to devote some fraction of my working week towards. I know, I should be ashamed. It’s a miracle I’ve not been chased out of the profession by now.
Luckily, I am at a level of seniority where this is somewhat expected of me, even if that goes unstated for the most part. However, I have been doing this for the bulk of my career, and I’m here to tell you that you should too.
What I am not here to tell you is that you should prioritize your own gratification over the success of your team. Exploitative corporate structures aside, your team consists of your colleagues, and you owe it to them not to be a burden. (I know, this thinking can in its own way be another mechanism of control by the owning class, but not everything can be a class war my guy.)
I spend 10-20% of my time on my shadow backlog, and often drop it for months at a time when mainline work is more pressing. Sometimes companies formalize this as 10/20% time or Innovation Days. Whilst I appreciate the intent of these, I’m not a fan as it’s the flexibility that illegibility provides that makes shadow backlogs work for me, and frankly I don’t think employees need permission to be doing this.
That being said, take this all with a grain of salt. Shadow backlogs are not allowed. If you can’t do this without it adversely affecting the elements of your work that are actually important to your organization, you should not do it. However, if you find yourself just sitting there for extended periods of time, unable to muster any motivation, as I know a lot of us do, then keep reading.
Off the top of my head, here are some examples of things currently in my shadow backlog:
- Add mock injector functions such that particular service errors can be tested (roughly the shape sketched after this list). There is some work coming up that will require this, and the mocking framework in our tests is haphazard and incomplete; I’d rather have the groundwork laid to avoid having to bundle the test complexity into the upcoming ticket, especially if more junior colleagues end up taking it.
- Finish up some gnarly work in our interop code generator around value types. The work this relates to got paused over the holidays, and having this done before we reboot will make the overall story simpler.
- Continue work on a side-bet of mine, involving an integration of our core framework into a DCC tool.
- Write a bunch of conceptual documentation for a client repository that I am the sole knowledge-holder for.
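To give that first item a bit more shape: the rough idea is that production code resolves its service clients through an injection point, and tests override it with a double that fails in a controlled way, so the error-handling paths can actually be exercised. Here’s a minimal sketch in Python; our codebase isn’t Python and every name below is invented purely for illustration:

```python
# Sketch of a "mock injector": production code resolves its service client
# through an injection point, and tests swap in a double that fails in a
# controlled way so error-handling paths can be exercised.

class ServiceError(Exception):
    """The kind of error a real service client might raise (timeout, 5xx, etc.)."""


class RealBillingClient:
    def charge(self, account_id: str, amount: int) -> str:
        # The real implementation would call out over the network.
        raise NotImplementedError


class FailingBillingClient:
    """Test double that simulates a specific service failure."""

    def __init__(self, error: Exception):
        self._error = error

    def charge(self, account_id: str, amount: int) -> str:
        raise self._error


_injected = {}


def inject_client(name, client):
    """Tests call this to override what get_client returns."""
    _injected[name] = client


def get_client(name):
    """Production code asks for its dependencies through here."""
    return _injected.get(name, RealBillingClient())


# In a test: confirm the caller surfaces a billing outage sensibly.
def test_charge_reports_outage():
    inject_client("billing", FailingBillingClient(ServiceError("billing unavailable")))
    try:
        get_client("billing").charge("acct-123", 500)
    except ServiceError as err:
        assert "unavailable" in str(err)
```

The point of laying this groundwork ahead of time is that the upcoming ticket can then be about the behaviour under failure, not about untangling the mocking setup.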
You know, before writing these out I expected that I would find I am disagreeing with many of the organization’s prioritization decisions, but it’s not really that, is it? It’s more that the cost/value calculation of just doing these, rather than communicating and planning and prioritizing, leans towards them being appropriate for shadow backlogging. I’m sure the specific texture of everyone’s shadow backlog will be different, but what I am sure will remain constant is that you, like me, will have more context than anyone else in these particular domains.
In particular, what I mean by that is:
- I have more context on the technical minutiae in areas with which I am familiar.
- I have more context over my own motivations and capabilities at any given moment.
The second one is the really important one.
Perhaps you’re a machine and can take any task given to you and churn through it. Fantastic, I’m proud of you, and you don’t need a shadow backlog for the same reasons I do. For the rest of us however, we are not machines, and are never going to be machines no matter what delivery framework we are placed inside.
A shadow backlog can serve as an escape hatch. There are times, rather frequent times for me, where I physically cannot motivate my hands and fingers to move so as to progress a specific task. This sounds pathetic, perhaps I am broken. I don’t think so however, as I observe this behavior commonly in others, although people are ashamed to admit it. The arrogance of complaining that it’s hard to move your fingers to type code because it doesn’t currently speak to you? Bah! Don’t you know there are people in jobs who don’t have that luxury, who actually have to do labour? You pathetic, useless worm.
Maybe that’s true, but you know what, I don’t care. Shadow backlogs are a way of coping that works for me, and if you’re an employer who understands that humans are human, they work for you too. I don’t have to deal with the existential terror of finding myself physically unable to move forward on the one thing that is currently important, and the company gets a worker who is doing something rather than nothing.
The bonus win for everyone is that the things in shadow backlogs tend to be more valuable than the things that actually get prioritized for the reasons everyone knows but no one talks about.
As a closing thought, I’ll ask, are shadow backlogs an antipattern? I think probably yes. I have worked on teams where I didn’t need one. Communication was tight, values were aligned, the things we wanted to do and the things we needed to do were the same. However, we all know how rare and difficult this is, so I don’t begrudge this pattern as an effective coping mechanism.
Beating the Tutorial
January 6, 2026
Most software engineer job descriptions will have a requirement like this:
Has the ability to deliver ticketed tasks promptly and to a high quality standard.
This is well and good, it’s the primary gameplay loop of software engineering after all. Receive ticket, make changes to match the behavior described in the ticket, make sure your code is reasonably readable and documented, deploy code to main. Via this mechanism, you deliver value.
After a time, you gain confidence in this, your peers and managers will praise you for your ability to do these tasks mostly unaided. The product becomes malleable to you, you start to think there isn’t any task you can’t accomplish given enough time. Heck, maybe you could even rewrite the entire product.
Congratulations on beating the tutorial.
Most organizations would have already promoted you to senior engineer by this point. This is an industry-wide mistake. Whilst I won’t go so far as to encourage folk to turn down promotions, I will encourage them to avoid conceptualizing themselves as experts before they are ready. The real journey has only just begun.
Being able to deliver any given feature somehow is table stakes. Up until this point, you have not been contributing very much to your organization, not really, in fact you’ve probably spent a significant part of your career being a net-negative contributor in terms of absolute product value. This may seem shocking, clearly you’ve been delivering features, probably some customers even find them useful, but this is missing the point.
All change has cost, and although the organization will assert that the value of ticket delivery is always worth the cost of change (otherwise they wouldn’t have asked for the feature, right?), the truth is more complicated.
Creating any one single behavior in a computer system is almost always trivial for the experienced engineer. When the experienced engineer on your team says that something can’t be done easily, what they almost always mean is that the thing can’t be done easily in a way that is acceptable to the health of the product. Junior engineers tend not to have to consider this constraint. Especially on teams that are more feature-mills, junior engineers will frequently add features in ways that are at best value-neutral, and at worst value-negative over the lifetime of the product.
This is intentional! All things exist in contrast, good and bad are paired, you must be allowed to fail in ways both varied and numerous in order to figure out what success even looks like. The technical growth that comes from this sort of work is the point.
It therefore worries me when I see newer engineers talking about their careers as though feature delivery is the final goal of technical growth.
This is a systemic failure to lead, but the pushing of this attitude is also arguably an intentional and malicious attempt to commoditize the craft of engineering to the benefit of a privileged few at the detriment of all software users. LLMs are a recent extreme accelerant to this trend, but they are not the cause, it’s been happening for a while.
For any given business need, I normally consider dozens of approaches to achieve the desired outcomes. Some match the expectations of the ticket author, some don’t, but nonetheless fulfill the actual requirements. Some approaches are high risk, some low. Some manifest their value in that they require no collaboration with other teams, some only work if there is an expert available for integration. All are immediately viable, all have trade-offs, many are secret dead-ends that will make your product less competitive in ways that are utterly illegible to the rest of the organization.
Beyond even that though, there are sometimes viable options that leave the systems we steward in a better place than they were before, and this is important. You want to get exponential? Here’s where it happens. What I mean when I say better here is undefinable, it’s a highly connotatively connected property that encapsulates business necessities, predicting the future, interpersonal relations, technical realities, ethics & culture, etc. These options don’t always exist, but they will tend to stop presenting themselves if an organization habitually avoids/is unable to identify them, and will present themselves more readily in the inverse case.
Exploring the shape of this “better” quality across different contexts is the actual game of software engineering, and doing so will take you much, much longer than merely figuring out how to deliver features faster.
Perhaps this is also still just the tutorial. I’ll let you know if I beat it.
* I’d rather stop using the word feature, it belies a false perspective on what good technical work actually is and how it comes to be, but given these blog posts are supposed to train me how to write without over-qualifying everything I say, I’ll just make do with this footnote.
Something Small
January 5, 2026
Here we go. First “real” blog, 30 minutes, white page. How do I even do this? Should I write section headings and fill them in recursively? That’s how I do PRDs and such, although it feels a bit too planned for something like this, so I think I’m just going to let my fingers run.
As I said before, the primary blocker I’ve been having in my writing, and my projects in general, is trying to say everything I have to say all at once. I need to pick a small, tiny, minuscule topic, and somehow try to ignore how everything is connected to everything else.
Ach, screw that, let’s talk about one of the big problems. Agreeing on a definition of value.
How on earth are we meant to say one thing is better than another? I’m advanced enough in my career that I don’t bring this up so much at work anymore, as it’s not really a helpful conversation to have when trying to decide on a technical strategy. However, if folks aren’t reasonably aligned on this, your organization’s effectiveness is going to be hard capped in a particularly illegible way.
I suspect you’ve probably heard of the concept of “Gel” when it comes to technical teams; I read about this first in 1987’s Peopleware: Productive Projects and Teams, and I think that book coined the term. Gel is, to use a personal definition, the tendency for two independent agents in a system to come to similar conclusions without having to explicitly communicate. It’s telepathy by the backdoor, and it can exist when folks’ values align such that their intuitions and deductions tend to lead them to the same place.
Whilst there are some downsides to gel (monoculture reduces rigor), the upsides are great enough that organizations generally try to encourage it. Some organizations even manage to realize that this isn’t possible without some sort of shared consensus principles to derive value judgments from, which is where the fun begins.
To solve this problem, organizations tend to produce and publish lists of corporate values. In the most common case, these are assembled by consensus, are vapid, self-similar, self-congratulatory documents, and are completely ignored by the majority of people. Nonetheless, I don’t discount these entirely; they are a potential definition of value, and they get credit just for acknowledging that this is something that needs to be considered. You have to watch out though, as is the norm with initiatives concerning formal definition of dynamic concepts such as this: any org that manages to be successful probably didn’t need the artifact in the first place, and conversely any org that really does need something will almost certainly be incapable of producing anything useful.
More commonly, where shared values actually come from is by humans doing what humans do naturally, manifesting values and culture in an undefined, unguided sort of way. This is best, but is also unpredictable. If you’re smart, you can try to build an environment that encourages this sort of thing, but you can never really guarantee it.
I’m not optimistic about the future of shared values. Work from home, despite all its many upsides, has clearly eroded the natural value-alignment that tends to occur when two people share the same physical space. At least in my experience in the UK, the best of this tended to happen at the pub, in that sort of tribal ritual/group therapy session I’m sure most of us who worked in tech pre-pandemic are familiar with. This was especially advantageous as discussions often didn’t have to be couched and distorted so as to avoid damaging fragile executive egos. Keep in mind that any substitute mechanism of alignment that occurs during work hours will be subject to this constraint, so will again be hard capped on its effectiveness.
Some folks might be reading this and thinking that sort of control over value alignment sounds appealing; we can make sure everyone’s on the same page from base principles, right? I’m going to assert that it’s incredibly difficult to actually change a person’s values and you shouldn’t even try. In fact I believe it’s impossible by definition; perhaps I’ll explain why in a later post.
Many may think alignment is about changing hearts and minds, but it isn’t, it’s more akin to intimacy. There is a gestalt understanding of what everyone in your org thinks and feels, it’s ludicrously multi-dimensional, impossible to pin down, inherently inaccurate, but it is there. If you can tap into it, you can massively reduce your communication overhead, and be able to work through concepts that would be actually impossible to deliberate on otherwise. Yes, there are going to be elements of it that make you uncomfortable, that you might find icky, counter-productive or even offensive. Nonetheless, acceptance is a prerequisite to intimacy, which is a prerequisite to alignment, which is a prerequisite to high performing organizations.
If you’re looking for a prescription here at the end, I don’t really have one beyond the solution that solves every organizational problem. Hire only the right people, surrender control almost entirely, and hope.
Writing Muscles
January 4, 2026
I suspect this is rather a familiar story. You’re a regular person in tech, you consume a lot of tech media, and you’ve had a long and storied enough career that you’ve got things to say.
However, outside of a dry technical writing context, you’ve never written before. You try nonetheless, and discover that it’s rather difficult to get the words out.
Christmas has just come and gone, and I have dozens of half-finished essays sitting on my hard drive. They are all similar in that they try to address enormous, philosophically dense topics, and they all fizzle out as I find I don’t have good enough writing muscles to communicate what I mean in a way that satisfies me.
This does not come as a surprise to me. I’m a bit of a psychedelically inclined person, and something that becomes evident in that space is that it’s a simple enough thing to experience “the grand truth.” What isn’t easy is bringing those revelations out of that dynamic space into a form that won’t dissolve when exposed to the winds of consensus reality. Revelation isn’t work, what is work is pulling those concepts upwards, negotiating with them, and mocking their conceptual dependencies well enough that they can be communicated in a way that doesn’t require absolute connotative alignment between individuals.
Boy, that sounds like absolute nonsense, am I going to explain what I mean by the above? No I’m not, that’s the point. I am not yet capable of doing so, I haven’t developed the muscles for it.
Therefore, I’m going to write a blog a day for the next two weeks. Call it exposure therapy. They won’t be significant or groundbreaking or particularly well written, but they are going to exist.