This week, AWS went down, along with a quarter of the internet. It’s funny how much we rely on cloud infrastructure even for services that should natively work offline.
That is, “funny” as long as you’re not a customer of said services trying to do something important to you. I know how frustrating it was when Grammarly stopped correcting my writing during the outage, even though it’s anything but a critical service to me.
While AWS engineers were busy trying to get the services back online, the internet was busy mocking Amazon. Elon Musk’s tweet got turbo-popular, quickly getting several million pageviews and sparking buzz from Reddit to serious pundits.
Admittedly, it was spot on. No wonder it spread like wildfire. I got it as a meme, like an hour later, from a colleague. It would fit well with some of my snarky comments about AI, wouldn’t it?
However, before joining the mocking crowd, I tried to look up the source.
Don’t Trust Random Tweets
Finding the article used as a screenshot was easy enough. It was a CNBC piece on Matt Garman. Except the title didn’t say anything about how much AI-generated code AWS pushes to production.
Fair enough. Media are known to A/B test their titles to see which gets the most clicks. So I read the article, hoping to find a relevant reference. Nope. Nothing. Nil.
The article, as the title clearly suggests, is about something completely different.
I tried googling the exact phrase. It returned only a Reddit/X trail of the original “You don’t say” retort. Googling exact quotes from the CNBC article did return several links that republished the piece, but all used the original title, not the one from the smartass comment. It didn’t seem CNBC had been A/B testing the headline.
By that point, I was playing “compare these two pictures, find five differences” (the bottom one is the legitimate screenshot).
Top picture from the tweet Elon Musk shared. Bottom from the actual CNBC article.
So yes, the joke’s on you, jokers.
Except no one cares, really. Everyone laughed, and few, if any, cared to check the source. Few, if any, cared to utter “sorry.”
Trustworthiness as the New Currency
I received Musk’s tweet as a meme from my colleagues. It went through at least two of them before landing in my Slack channel. They passed it with good intent. I mean, why would you double-check a screenshot from an article?
It’s a friggin’ screenshot, after all.
Except it’s not.
This story showcases the challenge we’re facing in the AI era. We have to raise our guard regarding what we trust. We increasingly have to assume that whatever we receive is not genuine.
It may be a meme, and we’ll have a laugh and move on. Whatever. It won’t hurt Matt Garman’s bonus. It won’t make a dent in Elon Musk’s trustworthiness (even if there were such a thing).
It may be a resume, though. A business offer. A networking invitation, recommendation, technical article, website, etc. It’s just so easy to generate any of these.
What’s more, a randomly chosen bit on the internet is already more likely to be AI-generated than created by a human. **Statistically speaking, there’s a flip-of-a-coin chance that this article has been generated by an LLM.**
It wasn’t, no worries. Trust me.
Well, if you know me, I probably didn’t need to ask you for a leap of faith in the originality of my writing. The reason is trustworthiness. That’s the currency we exchange here. You trust I wouldn’t throw AI slop at you.
If you landed here from a random place on the internet, well, you can’t know. That is, unless you got here via a share from someone whom you trust (at least a bit) and you extend the courtesy.
Trust in Business Dealings
The same pattern works in any professional situation. And, sadly, it is as much affected by the AI-generated flood as blogs/newsletters/articles.
When a company receives an application for an open position, it can’t know whether a candidate even *applied* for the job. It might have been an AI agent working on behalf of someone mass-applying to thousands of companies.
We’re still flogging the dead horse of resume-based recruitment, but it’s beyond recovery. Hiring wasn’t healthy to start with, but with AI, we utterly broke it.
A way out? If someone you know (or someone known by someone you know) applies, you kinda trust it’s genuine. You’ll trust not only the act of applying but, most likely, the candidate’s self-assessment, too.
Trust is a universal hack to work around the flood of AI slop.
Outreach in a professional context? Same story. Cold outreach was broken before LLMs, but now we almost have to assume that it’s all AI agents hunting for the gullible. But if someone you know made the connection, you’d listen.
Networking? Same thing. You can’t know whether a comment, post, or networking request was written by a human or a bot. OK, sometimes it’s almost obvious, but there’s a huge gray zone. If someone you trust does the intro, though? A different game.
The pattern is the same. Trust is like an antidote to all those things broken by AI slop.
Don’t We Care About Quality?
Let me get back to the stuff we read online for a moment. One argument that pops up in this context is that all we should care about is quality. It’s either good enough or not. If it is, why should we care who or what wrote it?
Fair enough. As long as *consuming* a bit of content is all we care about.
If I consider interacting with content in any way, it’s a different game.
With AI capabilities, we can generate almost infinitely more writing, art, music, etc. than what humans create. Some of it will be good enough, sure. I mean, ultimately, most of what humans create is mediocre, too. The bar is not *that* high.
There’s only one problem. We might have more stuff to consume, but we don’t have any more attention than we had.
Now, the big question. Would you rather interact with a human or a bot? If the former, then you may want to optimize the choice of what you consume accordingly.
*Engageability* of our creations will be an increasingly important factor. And it won’t be only a function of how compelled a consumer feels to act after reading a piece, but also whether they trust there’s a human being on the other side.
It’s trust, again.
Trust Networks as the New Operating System
Relying solely on what we personally trust would be impractical. There are only so many people I have met and learned to trust to a reasonable degree.
Limiting my options to hiring only among them, reading only what they create, doing business only with them, etc., would be plain stupid. So how do we balance our necessarily limited trust circle with the realities of untrustworthiness boosted by AI capabilities?
Elementary. Trust networks.
If I trust Jose, and Jose trusts Martin, then I extend my trust to Martin. If our connection works and I learn that Martin trusts James, then I trust James, too. And then I extend that to James’ acquaintances, as well. And yes, that’s an actual trust chain that worked for me.
By the same token, if you trust me with my writing, you can assume that I don’t link shit in my posts. Sure, I won’t guarantee that I have never ever linked anything AI-generated. Yet I check the links and definitely don’t share AI slop intentionally.
If such a thing happened, it would be like Musk’s “you don’t say” meme I received: passed along by my colleagues with good intent.
How far such a trust network spans depends on how reliably each node has performed so far. A strong connection would reinforce its subnetwork, while a failing (no longer trustworthy) node would weaken its connections.
Strong nodes would allow further connections, while weak ones would atrophy. It is essentially a case of a fitness landscape.
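For the programmers among us, here’s a minimal sketch of that idea in Python. The class, the scores, and the per-hop decay factor are all illustrative assumptions (no real-world protocol works exactly like this), but it captures the mechanics: trust multiplies along a chain and fades with each hop, while reinforcement strengthens or weakens individual connections.

```python
# A toy trust network: direct trust scores on edges, transitive trust
# derived by multiplying scores along a chain, with an extra per-hop
# decay so trust fades the further it travels. All numbers are made up.

class TrustNetwork:
    def __init__(self, decay: float = 0.8):
        self.decay = decay  # per-hop discount: each intro weakens trust a bit
        self.edges: dict[tuple[str, str], float] = {}

    def trust(self, a: str, b: str, score: float) -> None:
        """Record how much `a` directly trusts `b` (0.0 to 1.0)."""
        self.edges[(a, b)] = score

    def reinforce(self, a: str, b: str, delta: float) -> None:
        """Strengthen a connection that proved reliable (positive delta)
        or weaken one that failed (negative delta)."""
        current = self.edges.get((a, b), 0.0)
        self.edges[(a, b)] = min(1.0, max(0.0, current + delta))

    def derived_trust(self, source: str, target: str) -> float:
        """Best trust score over any chain from source to target."""
        best = {source: 1.0}
        frontier = [source]
        while frontier:
            node = frontier.pop()
            for (a, b), score in self.edges.items():
                if a != node:
                    continue
                candidate = best[node] * score * self.decay
                if candidate > best.get(b, 0.0):
                    best[b] = candidate
                    frontier.append(b)
        return best.get(target, 0.0)


net = TrustNetwork()
net.trust("me", "Jose", 0.9)       # I know and trust Jose
net.trust("Jose", "Martin", 0.8)   # Jose vouches for Martin
net.trust("Martin", "James", 0.7)  # Martin vouches for James
print(round(net.derived_trust("me", "James"), 2))  # ~0.26: faded, not gone
```

The numbers don’t matter; the shape does. Trust three hops out is weaker than direct trust but still well above zero, and a node that keeps failing gets its edges downgraded until its whole subnetwork atrophies.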
New Solutions Will Rely on Trust Networks
The changes we’ve made to our landscape with AI are irreversible. In one discussion I’ve had, someone suggested a no-AI subinternet.
It’s not feasible. Even if there were a way to reliably validate an internet user as a human (there isn’t), nothing would stop evil actors from copy-pasting AI slop semi-manually anyway.
In other words, we will have to navigate this information dumpster for the time being. To do that, we will rely on our trust networks.
Whatever new recruitment solution eventually emerges, it will employ extended trust networks. That’s what small business owners in the physical world already do. They reach out to their staff and acquaintances and ask whether they know anyone suitable for an open position.
Content creation and consumption are already evolving toward increasingly closed connections (paywalled content, Substacks, etc.), where we consciously choose what we read and from whom. Oh, and of course, the publishing platforms actively push recommendation engines.
Business connections? Same story. We will evolve to care even more about warm intros and in-person meetings.
Eventually, large parts of the internet will be an irradiated area where bots create for bots, while we will be building shelters of trustworthiness, where genuine human connection will be the currency.
Like hunter-gatherers. Like we did for millennia.
Thank you for reading. I’d appreciate it if you signed up to get new articles in your email.
I also publish on the Pre-Pre-Seed Substack, where I focus more narrowly on anything related to early-stage product development.