AppSec & Supply Chain Security
February 3, 2026
ReversingLabs looked at last year’s Software Supply Chain Security Report in the rear-view mirror. Here’s what RL got right — and wrong.

ReversingLabs’ 2025 Software Supply Chain Security Report made some predictions about which risks and security initiatives would take hold last year. Now that the Software Supply Chain Security Report 2026 has been released, it’s only fair to look back at last year’s report to see which of RL’s predictions came to pass.
Here’s how RL’s predictions for the state of software supply chain security in 2025 fared.
Download: Software Supply Chain Security Report 2026
Join discussion: Report webinar
Did AI/ML supply chain risks get real?
A year ago, it was a no-brainer to see the dangers associated with the move of generative AI into enterprises — especially its wide adoption in software development. Experts were already raising alarms over how gen AI and machine learning can be exploited by attackers. For example, in late 2024, RL’s Dhaval Shah noted that Python’s popular Pickle-format files that serialize ML models are “inherently unsafe” because they allow embedded Python code to run when the model is loaded. Observations like that informed RL’s prediction last year that:
Serialized ML models need to be vetted with the same rigor as any other software package for supply chain risks.
A year later, the AI/ML threats to software supply chains are concerningly real. As early as February 2025, RL researchers saw the threat made real with the discovery of the NullifAI campaign, which took place on Hugging Face, an open-source platform dedicated to the development and sharing of ML models. Attackers specifically abused the Pickle AI model format to distribute malware on the platform.
A similar campaign also hit the Python Package Index (PyPI) last year. In that incident, RL identified three malicious PyPI packages that posed as a Python software development kit (SDK) for users of Alibaba AI labs. Once installed on a victim’s machine, the malicious packages delivered an infostealer payload hidden inside a PyTorch model, which is essentially a zipped Pickle file.
Those incidents should be proof enough that Pickle is an unsafe data format, yet developers and others in ML operations (MLOps) continue to use it. Compounding the problem: Security tools are slow to detect malicious behavior in ML files, which are not viewed as a medium for distributing executable code.
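To see why serialized models deserve the same vetting as any other software package, consider how Pickle works: the format can instruct the loader to call arbitrary Python functions during deserialization. The minimal sketch below demonstrates that mechanism with an illustrative class and a harmless shell command standing in for a real payload; it is not a reconstruction of the NullifAI samples.

```python
import os
import pickle

class NotReallyAModel:
    def __reduce__(self):
        # When the byte stream below is unpickled, the loader calls
        # os.system("echo pwned") -- no method on the object is ever invoked.
        return (os.system, ("echo pwned",))

# "Saving" the model produces an innocent-looking blob of bytes.
payload = pickle.dumps(NotReallyAModel())

# Anything that deserializes it -- pickle.loads() here, or a loader that
# unzips and unpickles a PyTorch .pt archive -- runs the embedded call.
pickle.loads(payload)
```

In other words, loading an untrusted Pickle-based model file can amount to executing untrusted code, which is exactly the behavior the NullifAI and PyPI campaigns abused.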
***The verdict? Yes, AI/ML supply chain risks became very real in 2025, and RL’s call for vetting Pickle files still stands.***
Did companies level up their security?
Besides expecting threats to software supply chains to grow in 2025, RL predicted in last year’s report that 2025 would see a shift in the security marketplace as pressure built on “software producers and … their customers to level up their software supply chain security.”
As the report put it:
Threats may lurk both in obscure and well-known open-source packages. Developers should not equate package downloads and reputation with package security.
“Level up” in this case meant that software producers and enterprises would embrace broader security capabilities such as “detecting unexpected and unexplained changes in the constitution and behavior of applications and updates.”
Did 2025 see that kind of transformation in the cybersecurity market? There’s some evidence to support that conclusion. In April, for example, JPMorgan Chase CISO Patrick Opet penned an open letter declaring that “there is a growing risk in our software supply chain and we need your action.”
Opet urged JPMC’s software and technology providers to rethink their development priorities and approach to security in an era dominated by cloud-based infrastructure and SaaS applications. “We need sophisticated authorization methods, advanced detection capabilities, and proactive measures to prevent the abuse of interconnected systems,” Opet wrote.
The U.S. Department of Defense (DoD) also revamped its software procurement program with the unveiling of its Software Fast Track Initiative (SWFT) in May, which called for “cybersecurity and SCRM (supply chain risk management) requirements” as well as “rigorous software security verification processes.” But positive developments like those were offset by a steady stream of security incidents that suggest that software supply chain security is still a back-burner issue for many software producers.
Take the November 2025 outbreak of the npm registry-native worm Shai-hulud — a threat that first appeared in September and compromised more than 1,000 widely used open-source packages. In December, the crypto wallet vendor Trust Wallet disclosed that a hack of the company’s Google Chrome extension, which resulted in the theft of approximately $8.5 million in crypto assets, stemmed from that November Shai-hulud outbreak. The hack leaked Trust Wallet developer GitHub secrets and API keys and led to the distribution of a compromised version of the Trust Wallet browser extension that bypassed the company’s standard release process.
Such incidents suggest that awareness of software supply chain risks still lags and that security and supply chain integrity continue to be a lower priority for many organizations than things like rapid development release cycles, robust feature development, and third-party integrations.
***The verdict? Yes and no. There is clear evidence that the software marketplace is shifting and that greater supply chain transparency and security are priorities for security-minded customers such as the federal government, but 2025 also showed that software producers have been slow to prioritize supply chain risks and attacks over bold feature development and rapid release cycles.***
Are companies addressing Nth-party risks?
Finally, last year’s report predicted that software supply chain protections would expand in 2025 beyond the basics of vulnerability detection and software bills of materials (SBOMs). Security in 2025 would mean embracing a broader notion of Nth-party risks.
This notion of ‘Nth party risk’ is a growing concern, especially given the growing complexity of software deployments, with the embrace of cloud computing, a heavy reliance on APIs, and the growth of no-code applications and AI- and ML-derived applications.
Keeping IT environments and data secure in 2025 “is about more than chasing down ‘high’ and ‘critical’ CVEs in your environment,” the report said. “It’s about recognizing the threats lurking in the open- and closed-source software and services that are the foundation of your business.”
So did that happen? Almost certainly. Was there an actual transformation in security practices by software producers and end-user organizations? Not exactly.
First, the signs of progress. Federal agencies including the Food and Drug Administration (FDA), the Cybersecurity and Infrastructure Security Agency (CISA), and the National Institute of Standards and Technology (NIST) all published guidelines on securing AI infrastructure and AI-powered devices, as well as on detecting AI-powered threats. In the European Union, the Cyber Resilience Act, the Digital Operational Resilience Act, and updates to the EU Product Liability Directive set clear standards for software security, integrity, and support, transforming the market dynamics for vendors selling products and services into the EU.
There were also private-sector efforts to address systemic Nth-party risks. OWASP published guidelines on adapting cyber-incident response to the age of AI-powered threats. In February, the Open Source Security Foundation published a framework to help open-source projects improve their security postures, and GitHub implemented new security features on the npm and GitHub platforms to head off common attack methods (like GitHub Actions abuse) and strengthen supply chain governance.
One transformative development was AI vendor Anthropic’s announcement that it is integrating Claude Code with GitHub Actions to create an automated security-review feature that uses a /security-review command. This kind of AI-powered review of developer workflows can speed code reviews and identify issues earlier in the development process.
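As a rough illustration of the pattern (and not Anthropic’s actual /security-review implementation), a CI step could hand a branch’s diff to Claude through the Anthropic Python SDK and surface the findings; the model ID and prompt below are placeholders.

```python
import subprocess

from anthropic import Anthropic  # assumes the Anthropic Python SDK is installed

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def review_diff(base: str = "origin/main") -> str:
    """Ask Claude to flag security issues in the current branch's diff."""
    diff = subprocess.run(
        ["git", "diff", base],
        capture_output=True, text=True, check=True,
    ).stdout

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Review this diff for security issues such as injection, "
                "hard-coded secrets, and unsafe deserialization, and list "
                "your findings:\n\n" + diff
            ),
        }],
    )
    return response.content[0].text  # the first content block holds the reply

if __name__ == "__main__":
    print(review_diff())
```

Wired into a pull-request workflow, a step like this could post its output as a review comment, which is the general shape of what Anthropic’s feature automates.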
These positive developments are offset, however, by the fact that organizations remain deeply invested in legacy technologies and approaches that were developed to address more traditional cyber risks and threat actors and are inadequate in the age of AI.
The verdict? Yes.* The transformation among organizations to address supply chain threats and Nth-party risk is clearly under way. But mind that asterisk: It is still early days, and the shift will require a huge re-assessment of risks, the adoption of new security tooling, and the reallocation of resources in ways that prioritize security in product development and Nth-party risks — even at the cost of faster development cycles and product innovation. Will that happen? Stay tuned.