ENOSUCHBLOG
Programming, philosophy, pedaling.
Dec 13, 2025 Tags: oss, security
Three weeks ago I wrote about how we should all be using dependency cooldowns.
This got a lot of attention, which is great! So I figured I owe you, dear reader, an update with (1) some answers to common questions I’ve received, and (2) some updates on movement in large open source ecosystems since the post’s publication.
Common questions and answers
Question: Aren’t cooldowns a self-defeating policy? In other words: if everyone uses cooldowns wouldn’t we be in the exact same situation, just shifted back by the cooldown period?
Answer: I think there are two parts to this:
1. The observation in the original post is that there are parties other than downstreams in the open source ecosystem, namely security partners (vendors) and the index maintainers themselves. These parties have strong incentives to proactively monitor, report, and remove malicious packages from the ecosystem. Most importantly, these incentives are timely, even when user installations are not.
2. Even with a universal exhortation to use cooldowns, universal adoption is clearly not realistic: there are always going to be people who live at the edge. If those people want to be the proverbial canaries in the coal mine, that’s their prerogative!
Or in other words, we certainly all should be using cooldowns, but clearly that’s never going to happen.
Question: What about security updates? Wouldn’t cooldowns delay important security patches?
Answer: I guess so, but you shouldn’t do that! Cooldowns are a policy, and all policies have escape hatches. The original post itself notes an important escape hatch that already exists in how cooldowns are implemented in tools like Dependabot: cooldowns don’t apply to security updates.
In other words: ecosystems that are considering implementing cooldowns directly in their packaging tools should make sure that users can encode exceptions1 as necessary.
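To make the policy concrete, here’s a minimal sketch of what a cooldown filter with an escape hatch might look like inside a resolver. Everything here is illustrative: the function name, inputs, and the `security_fixes` exemption set are hypothetical, not any particular tool’s API.

```python
from datetime import datetime, timedelta, timezone

def apply_cooldown(releases, cooldown_days=7, security_fixes=frozenset()):
    """Filter out releases still inside the cooldown window.

    `releases` maps version strings to timezone-aware publication times;
    versions listed in `security_fixes` bypass the cooldown entirely,
    mirroring the "cooldowns don't apply to security updates" escape hatch.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=cooldown_days)
    return {
        version: published
        for version, published in releases.items()
        if published <= cutoff or version in security_fixes
    }
```

The point of the sketch is the shape of the policy: age-gating is the default, and exceptions are an explicit, user-controllable input rather than something baked into the gate itself.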
Question: Doesn’t this incentivize attackers to abuse the vulnerability disclosure process? In other words, what stops an attacker from reporting their own malicious release as a vulnerability fix in order to bypass cooldowns?
Answer: Nothing, in principle: this certainly does make vulnerability disclosures themselves an attractive mechanism for bypassing cooldowns. However, in practice, I think an attacker’s ability to do this is limited by (at least) three factors:
1. Privilege. Creating a public vulnerability disclosure on a project (e.g. via GitHub Security Advisories) generally requires a higher degree of privilege/comprehensive takeover than simply publishing a malicious version. Specifically: most of the malicious publishing activity we’ve seen thus far involves a compromised long-lived publishing credential (e.g. an npm or PyPI API token), rather than full account takeover. We may see that kind of full ATO in the future, but it’s a significantly higher bar (particularly in the presence of in-session MFA challenges on GitHub).
2. Timing. The name of the game continues to be maximizing the window of opportunity, which is often shorter than a single day. At this timescale, hours are significant. But fortunately2 for us, propagating a public vulnerability disclosure takes a nontrivial amount of time: CVEs take a nontrivial amount of time to assign3, and ecosystem-level propagation of vulnerability information (e.g. into RUSTSEC or PYSEC) typically happens on the timeframe of hours (via scheduled batch jobs).
Consequently, abusing the vulnerability disclosure process to bypass cooldowns requires the attacker to shorten their window of opportunity, which isn’t in their interest. That doesn’t mean they won’t do it (especially as the update loop between advisories and ecosystem vulnerability databases gets shorter), but it does stand to reason that it disincentivizes this kind of abuse to some degree.
3. Stealth. Creating a public vulnerability disclosure is essentially a giant flashing neon sign telling security-interested parties to look extremely closely at a given release. More specifically, it’s a signal to those parties that they should diff the new release against the old (putatively vulnerable) one, to look for the vulnerable code. This is the exact opposite of what the attacker wants: they’re trying to sneak malicious code into the new release, and are trying to avoid drawing attention to it.
Question: What about opportunistic abuse of the vulnerability disclosure process? For example, if 1.2.3 is vulnerable and 1.2.4 is a legitimate security update, what stops the attacker from publishing 1.2.5 with malicious code immediately after 1.2.4?
Answer: This is a great example of why cooldown policies (and their bypasses) struggle to be universal without human oversight. Specifically, it demonstrates why bypasses should probably be minimal by default: if both 1.2.4 and 1.2.5 claim to address a vulnerability and both require bypassing the cooldown, then selecting the lower version is probably the more correct choice in an automatic dependency updating context.
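The “prefer the lower candidate” heuristic above is simple enough to sketch. This is an illustrative fragment only, using naive dotted-integer comparison rather than any real ecosystem’s version-ordering semantics:

```python
def pick_security_update(candidates):
    """Given multiple versions that all claim to bypass the cooldown as
    security fixes, prefer the lowest: per the opportunistic-abuse scenario
    above, it is the one most likely to be the legitimate patch."""
    return min(candidates, key=lambda v: tuple(int(p) for p in v.split(".")))
```

In the 1.2.4/1.2.5 example, `pick_security_update(["1.2.5", "1.2.4"])` selects 1.2.4, leaving the suspect follow-up release to wait out the cooldown.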
Ecosystem updates
This is a somewhat free-form section for ecosystem changes I’ve noticed since the original post.
If there are others I’ve missed, please let me know!
Python
uv has added support for dependency cooldowns via relative --exclude-newer values. Recently released versions of uv already include this feature.
For example, a user can do --exclude-newer=P7D to exclude any dependency updates published within the last seven days.
Documentation: uv - dependency cooldowns.
References: astral-sh/uv#16814
pip is adding an absolute point-in-time feature via --uploaded-prior-to. This is currently slated to be released with pip 26, i.e. early next year.
In addition to the “absolute” cooldown feature above, pip is also considering adding a relative cooldown feature similar to uv’s. This is being tracked in pypa/pip#13674.
References: pypa/pip#13625, pypa/pip#13674
Rust
- cargo is discussing a design for cooldowns in rust-lang/cargo#15973. This discussion predates my blog post, but appears to have been reinvigorated by it.
Ruby
- There’s been some discussion in the Bundler community Slack about adding cooldowns directly to the gem.coop index, e.g. providing index “views” like /view/cooldown/7d/ for index-level cooldowns. I think this is a very cool approach!
.NET
- The NuGet community is discussing a cooldown design in NuGet/Home#14657.
JavaScript
pnpm has had cooldowns since September, when v10.16 introduced minimumReleaseAge! They even have a cooldown exclusion feature.
yarn added cooldown support one month after pnpm, via npmMinimalAgeGate.
npm does not have cooldown support yet. There appear to be several discussions about it, some of which date back years. npm/rfcs#646 and npm/cli#8570 appear to have most of the context.
npm/cli#8802 is also open, adding an implementation of the above RFC.
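As a concrete sketch of the pnpm settings mentioned above: minimumReleaseAge is configured in pnpm-workspace.yaml and takes a value in minutes, with minimumReleaseAgeExclude as the exclusion feature. The package pattern below is hypothetical, and these settings are new enough that you should check pnpm’s documentation for your version:

```yaml
# pnpm-workspace.yaml: only install versions published at least 7 days ago
minimumReleaseAge: 10080  # minutes (7 * 24 * 60)
minimumReleaseAgeExclude:
  - "@mycorp/*"  # hypothetical trusted first-party scope, exempt from cooldown
```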
Go
- Go is discussing the feasibility/applicability of cooldowns in golang/go#76485.
GitHub Actions
pinact has added a --min-age flag to support cooldowns for GitHub Actions dependencies.
Renovate and Dependabot already do a decent job of providing cooldowns for GitHub Actions updates, including support for hash-pinning and updating the associated version comment.
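For reference, a sketch of what a Dependabot cooldown for GitHub Actions looks like in .github/dependabot.yml. The key names follow Dependabot’s cooldown configuration as I understand it; double-check the current docs before copying:

```yaml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
    cooldown:
      default-days: 7  # wait a week before proposing non-security updates
```

As noted earlier, Dependabot’s security updates are not subject to the cooldown, so this only delays routine version bumps.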
Security updates are the most obvious exception, but it also seems reasonable to me to allow people to encode exceptions for data-only dependencies, first-party dependencies, trusted dependencies, and so forth. It’s clearly a non-trivial and non-generalizable problem! ↩
For some definition of “fortunate”: clearly we want TTLs for vulnerability disclosures to be as short as possible in normal, non-malicious circumstances! ↩
Unlike GHSAs, which are typically assigned instantly. For better or worse however, CVE IDs continue to be the “reference” identifier for ecosystem-level vulnerability information propagation. ↩