>Running npm install is not negligence. Installing dependencies is not a security failure. The security failure is in an ecosystem that allows packages to run arbitrary code silently.
No, your security failure is that you use a package manager that allows third parties to push arbitrary code into your product with no oversight. You only have "security" to the extent that you can trust the people who control those packages to act both competently and in good faith ad infinitum.
Also the OP seemingly implies credentials are stored on-filesystem in plaintext but I might be extrapolating too much there.
Same thing with IDE plugins. At least some IDEs come full-featured from the manufacturer, but I couldn't get on with VS Code, as for every small feature I had to install some random plugin (popular, maybe, but still developed by who-knows-who).
It wasn't in their product. It was just on a dev's machine.
I think the OP is aware of that and I agree with them that it’s bad practice despite how common it is.
For example, with AWS you can use the AWS CLI to sign in, and that goes through the HTTPS auth flow to provide you with temporary access keys (rough sketch below). Which means:
1. You don’t have any access keys in plain text
2. Even if your env vars are also stolen, those AWS keys expire within a few hours anyway.
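A minimal sketch of that flow, assuming IAM Identity Center (AWS SSO) is enabled and using a hypothetical profile name "dev":

    aws configure sso                          # one-time: bind the "dev" profile to your SSO portal and role
    aws sso login --profile dev                # opens the browser auth flow; caches short-lived credentials
    aws sts get-caller-identity --profile dev  # sanity check that the temporary credentials work

No long-lived access key ever needs to be written to ~/.aws/credentials; the cached tokens expire on their own within hours.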
If the cloud service you’re using doesn’t support OIDC or any other ephemeral access keys, then you should store them encrypted (sketch below). There are numerous ways you can do this, from password managers to just using PGP/GPG directly. Just make sure you aren’t pasting them into your shell, otherwise you’ll have those keys in plain text in your .history file.
I will agree that it does take effort to get your cloud credentials set up in a convenient way (easy to access, but without those access keys in plain text). But if you’re doing cloud stuff professionally, like the devs in the article, then you really should learn how to use these tools.
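One way the "store them encrypted" option can look with plain GnuPG, as a hedged example (file and recipient names below are made up): keep the export statements in a file you encrypt once, then decrypt straight into the current shell's environment, so nothing secret lands in .history or in a plaintext file.

    # one-time: other-cloud.env contains lines like `export OTHER_CLOUD_TOKEN=...`
    gpg --encrypt --recipient you@example.com -o ~/.secrets/other-cloud.env.gpg other-cloud.env
    shred -u other-cloud.env                   # remove the plaintext original

    # per session: decrypt directly into the environment
    eval "$(gpg --quiet --decrypt ~/.secrets/other-cloud.env.gpg)"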
> Also the OP seemingly implies credentials are stored on-filesystem in plaintext but I might be extrapolating too much there.
Doesn't really matter; if the agent is unlocked, they can be accessed.
> stored in our database which was not compromised
Personally I don't really agree with "was not compromised"
You say yourself that the guy had access to your secrets and AWS, I'd definitely consider that compromised even if the guy (to your knowledge) didn't read anything from the database. Assume breach if access was possible.
There are logs for accessing AWS resources, and if you don't see any access before you revoked it, then the data is safe.
Unless the attacker used any one of hundreds of other avenues to access the AWS resource.
Are you sure they didn’t get a service account token from some other service and then use that to access customer data?
I’ve never seen anyone claim in writing that all permutations are exhaustively checked in the audit logs.
It depends on what kind of access we're talking about. If we're talking about AWS resource mutations, one can trust CloudTrail to accurately log those actions. CloudTrail can also log data plane events, though you have to turn it on, and it costs extra. Similarly, RDS access logging is pretty trustworthy, though functionality varies by engine.
> This is one of the frustrating realities of these attacks: once the malware runs, identifying the source becomes extremely difficult. The package doesn't announce itself. The pnpm install completes successfully. Everything looks normal.
Sounds like there’s no EDR running on the dev machines? You’d have more to investigate if SentinelOne/CrowdStrike/etc. were running.
I have been thinking about this. How do I make my git setup on my laptop secure? Currently, I have my ssh key on the laptop, so if I want to push, I just use git push. And I have admin credentials for the org. How do I make it more secure?
1) Get 1Password, 2) use 1Password to hold all your SSH keys and authorize SSH access [1], 3) use 1Password to sign your Git commits and set up your remote VCS to validate them [2], 4) use GitHub OAuth [3] or the GitHub CLI's Login with HTTPS [4] to do repository push/pull. If you don't like 1Password, use BitWarden.
With this setup there are two different SSH keys, one for access to GitHub and one used as a commit signing key, but you don't use either to push/pull to GitHub; you use OAuth (over HTTPS) for that. This combination provides the most security you can get without hardware tokens, and 1Password and the OAuth apps make it seamless (rough config sketch below).
Do not use a user with admin credentials for day to day tasks, make that a separate user in 1Password. This way if your regular account gets compromised the attacker will not have admin credentials.
[1] https://developer.1password.com/docs/ssh/agent/ [2] https://developer.1password.com/docs/ssh/git-commit-signing/ [3] https://github.com/hickford/git-credential-oauth [4] https://cli.github.com/manual/gh_auth_login
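For the 1Password SSH-agent and commit-signing pieces, the setup boils down to a couple of config stanzas roughly like the following (the macOS agent socket and op-ssh-sign paths are taken from 1Password's docs and differ on Linux/Windows; the signing key value is a placeholder):

    # ~/.ssh/config -- route SSH auth through the 1Password agent
    Host github.com
      IdentityAgent "~/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"

    # ~/.gitconfig -- sign commits with an SSH key held in 1Password
    [gpg]
      format = ssh
    [gpg "ssh"]
      program = "/Applications/1Password.app/Contents/MacOS/op-ssh-sign"
    [user]
      signingkey = "ssh-ed25519 AAAA...your-public-key"
    [commit]
      gpgsign = true

For push/pull over HTTPS, git-credential-oauth registers itself as a Git credential helper (see its README for the exact config line), so the first push opens a browser OAuth flow instead of reading a long-lived token off disk.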
I already use 1password and have it already installed. Will try this out. Thanks!
One approach I started using a couple of years ago was storing SSH private keys in the TPM and using them via PKCS#11 in the SSH agent.
One benefit of Microsoft requiring them for Windows 11 support is that nearly every recent computer has a TPM, either hardware or emulated by the CPU firmware.
It guarantees that the private key can never be exfiltrated or copied. But it doesn't stop malicious software on your machine from doing bad things from your machine.
So I'm not certain how much protection it really offers in this scenario.
Linux example: https://wiki.gentoo.org/wiki/Trusted_Platform_Module/SSH
macOS example (I haven't tested personally): https://gist.github.com/arianvp/5f59f1783e3eaf1a2d4cd8e952bb...
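On Linux the moving parts look roughly like this (a sketch only; the tpm2-pkcs11 library path varies by distro, and the key has to be created in the TPM token first, e.g. with tpm2_ptool as in the Gentoo wiki link):

    # print the public half of the TPM-held key so you can add it to GitHub / authorized_keys
    ssh-keygen -D /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so

    # ~/.ssh/config -- have ssh use the TPM via PKCS#11 for this host
    Host github.com
      PKCS11Provider /usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so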
Or use a FIDO token to protect your SSH key, which becomes useless without the hardware token.
https://wiki.archlinux.org/title/SSH_keys#FIDO/U2F
That's what I do. For those of us too lazy to read the article, tl;dr:
ssh-keygen -t ed25519-sk

or, if your FIDO token doesn't support Edwards curves:

ssh-keygen -t ecdsa-sk

Tap the token when ssh asks for it, done. Use the ssh key as usual. OpenSSH will ask you to tap the token every time you use it: silent git pushes without you confirming them by tapping the token become impossible. Extracting the key from your machine does nothing; it's useless without the hardware token.
There is no defense against a compromised laptop. You should prevent that at all costs.
You can make it a bit more challenging for the attacker by using secure enclaves (like a TPM or a YubiKey), enforcing signed commits, etc., but if someone has compromised your machine, they can do whatever you can.
Requiring sign-off on commits from multiple people is probably your only bet. But if you have admin creds, an attacker can turn that off too. So depending on your paranoia level and risk appetite, you need a dedicated machine for admin actions.
It's more nuanced than that. Modern OSes and applications can, and often do, require re-authentication before proceeding with sensitive actions. I can't just run `sudo` without re-authenticating myself; and my ssh agent will reauthenticate me as well. See, e.g., https://developer.1password.com/docs/ssh/agent/security
The malware can wait until you authenticate and then perform its actions in the context of your user session.
Really, it's pointless. Unless you are signing specific actions from an independent piece of hardware [1], the malware can do what you can do. We can talk about the details all day long, and you can make it a bit harder for autonomously acting malware, but at the end of the day it's just a finger exercise for them to do what they want after they've compromised your machine.
[1] https://www.reiner-sct.com/en/tan-generators/tan-generator-f... (Note that a display is required so you can see what specific action you are actually signing, in this case it shows amount and recipient bank account number.)
You can add a gpg key and subkeys to a yubikey and use gpg-agent instead of ssh-agent for ssh auth. When you commit or push, it asks you for a pin for the yubikey to unlock it.
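A sketch of the glue for that setup (assuming the authentication subkey already lives on the YubiKey; the socket path comes from gpgconf, so it adapts per system):

    # ~/.gnupg/gpg-agent.conf
    enable-ssh-support

    # in your shell profile: point ssh at gpg-agent instead of ssh-agent
    export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"

    # verify: the card is visible and the key shows up for ssh
    gpg --card-status
    ssh-add -L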
I store my ssh key in 1Password and use the 1Password ssh agent. The agent asks for access to the key(s) with Touch ID, either for each access or once per session, etc. One can also whitelist programs, but I think that all reduces the security.
There is the FIDO feature, which means you don’t need to hassle with gpg at all. You can even use an ssh key as a signing key to add another layer of security on the GitHub side by only allowing signed commits.
You can put the ssh privkey on the yubikey itself and protect it with a pin.
You can also just generate new ssh keys and protect them with a pin.
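A minimal sketch of the FIDO "resident" variant, where the private key lives on the token itself and requires both PIN and touch (needs a FIDO2 token with resident-key support):

    # generate a key stored on the token, PIN + touch required on every use
    ssh-keygen -t ed25519-sk -O resident -O verify-required

    # on a new machine, pull the key handles back off the token
    ssh-keygen -K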
I’ve started to get more and more paranoid about this. It’s tough when you’re running untrusted code, but I think I’ve improved this by:
not storing SSH keys on the filesystem, and instead using an agent (like 1Password) to mediate access
not storing dev secrets/credentials on the filesystem either, and instead injecting them into processes with env vars or other mechanisms; your password manager may have a way to do this (see the sketch below)
developing in a VM separate from my regular computer usage. On Windows you essentially get this anyway through using WSL, but similar things exist for other OSes
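For the env-var injection piece, 1Password's CLI is one example of how it can look (the vault and item names below are made up; other password managers have similar run/exec wrappers):

    # .env holds references, not values -- nothing secret sits on disk
    AWS_ACCESS_KEY_ID=op://dev-vault/aws/access-key-id
    AWS_SECRET_ACCESS_KEY=op://dev-vault/aws/secret-access-key

    # resolve the references and inject them only into this process tree
    op run --env-file=.env -- terraform plan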
Your SSH private key must be encrypted using a passphrase. Never store your private key in the clear!
And what do you do with the passphrase, store it encrypted with a passphrase?
This is what agents are for. You load your private key into an agent so you don't have to enter your passphrase every time you use it. Agents are supposed to be hardened so that your private key can't be easily exfiltrated from them. You can then configure `ssh` to pass requests through the agent.
There are lots of agents out there, from the basic `ssh-agent`, to `ssh-agent` integrated with the MacOS keychain (which automatically unlocks when you log in), to 1Password (which is quite nice!).
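For the basic flavor, the flow is just the following (paths are the usual defaults; --apple-use-keychain and the UseKeychain option are specific to Apple's bundled OpenSSH):

    eval "$(ssh-agent -s)"                     # start an agent for this session
    ssh-add ~/.ssh/id_ed25519                  # prompts for the passphrase once

    # macOS keychain integration
    ssh-add --apple-use-keychain ~/.ssh/id_ed25519

    # ~/.ssh/config
    Host *
      AddKeysToAgent yes
      UseKeychain yes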
This is a good defense for malware that only has read access to the filesystem or a stolen hard drive scenario without disk encryption, but does nothing against the compromised dev machine scenario.
Keep in mind that not every agent is so naive as to allow a local client to connect to it without reauthenticating somehow.
1Password, for example, pops up a fingerprint request on my Mac before allowing each new application's first connection, then allows additional requests for a configurable period of time -- and, by default, it locks the agent when you lock your machine. See e.g. https://developer.1password.com/docs/ssh/agent/security
This seems to be the standard thing people miss. All the things that make security more convenient also make it weaker. They boast about how "doing thing X" makes them super secure, pat on the back and done. Completely ignoring other avenues they left open.
A case like this brings that out a lot. A compromised dev machine means that anything that doesn't require a separate piece of hardware asking for your interaction is not going to help. And the more interactions you require to tighten security, the more tedious it becomes, and you're likely to just instinctively press the fob whenever it asks.
Sure, it raises the bar a bit, because malware has to take it into account, and if there are enough softer targets they may not have bothered. This time.
Classic: you only have to outrun the other guy. Not the lion.
See my comment above; not every SSH agent is alike.
You memorize it, or keep it in 1Password. 1Password can manage your SSH keys, and 1Password can/does require a password, so it's still protected with something you know + something you have.
One option is to remember it.
I don’t think that’s considered secure enough, see the other answers and the push for passkeys.
I mean, if passphrases were good for anything you’d directly use them for the ssh connection? :)
Passphrases, when strong enough, are fine when they are not traversing a medium that can be observed by a third party. They're not recommended for authenticating a secure connection over a network, but they’re fine for unlocking a much longer secret that cannot be cracked via guessing, rainbow tables, or other well known means. Hell, most people unlock their phones with a 4 digit passcode, and their computers with a passphrase.
Add a password or hardware 2-factor to your ssh key. And get a password manager with the same for those admin credentials.
You can set up your repo to disable pushing directly to branches like main and require MFA to use the org admin account, so something malicious would need to push to a benign branch and separately get merged into the branch that deploys come from.
Pushing directly to main seems crazy - for anything that is remotely important I would use a pull request/merge request pattern
There's nothing wrong with pushing to main, as long as you don't blindly treat the head of the main branch as production-ready. It's a branch like any other; Git doesn't care what its name is.
Depends on the use case of the repo.
But the attacker could just create a branch, open a merge request, and then merge that?
They can't with git by itself, but if you're also signed in to GitHub or BitBucket's CLI with an account able to approve merges they could use those tools.
We require review on PRs before they can be merged.
Password-protect your key (preferably with a good password that is not the same password you use to log in to your account). If you use a password it's encrypted; otherwise it's stored in plaintext, and anybody who manages to get hold of your laptop can steal the private key.
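If the key already exists, adding or changing the passphrase in place looks like this (key path is the usual default; adjust as needed):

    ssh-keygen -p -f ~/.ssh/id_ed25519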
That’s weird, pnpm no longer automatically runs lifecycle scripts like preinstall [1], so unless they were running a very old version of pnpm, shouldn’t they have been protected from Shai-Hulud?
1: https://github.com/pnpm/pnpm/pull/8897
Let me understand it fully. That means they updated dependencies using an old, out-of-date package manager. If pnpm had been up to date, this would not have happened? Sounds totally like their fault then.
At the end of the article, they talk about how they've since updated to the latest major version of pnpm, which is the one with that change
Yeah, I thought that was the main reason to use pnpm. Very confused.
Maybe the project itself had a postinstall script? It doesn't run lifecycle scripts of dependencies, but it still runs project-level ones.
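For reference, on current pnpm the allowlist for dependency build scripts lives in package.json; anything not listed has its lifecycle scripts skipped (the package name below is just an example):

    {
      "pnpm": {
        "onlyBuiltDependencies": ["esbuild"]
      }
    }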
We don't have a clear explanation of the destructive behavior, right? It looks like it had no real purpose, and there were much more effective ways of destroying their repos. Very script-kiddie-like, which does not really fit the sophistication of the rest of the worm. Very surprising.
It hides the malware's trail, and disguises which keys were leaked, making rotation harder
Really appreciate the transparency here. Post-mortems like this are vital for the industry.
I'm curious: was the exfiltration traffic distinguishable from normal developer traffic?
We've been looking into stricter egress filtering for our dev environments, but it's always a battle between security and breaking npm install
Wouldn’t the IP allowlist feature on the GitHub organisation work wonders for this kind of attack?
Given that all the stolen credentials were made public, I was hoping that someone would build a haveibeenpwned style site. We know we were compromised on at least a few tokens, but it would be nice to be able to search using a compromised token to find out what else leaked. We’ve rotated everything we could think of but not knowing if we’ve missed something sucks.
Doesn't it publish the repos to your GitHub account? Just clone and look at what was stolen.
On the follow up Wiz blog they suggested that the exfiltration was cross-victim https://www.wiz.io/blog/shai-hulud-2-0-aftermath-ongoing-sup...
As the sibling comment said, the worm used stolen GitHub credentials from other victims, and randomly distributed the uploads between victims.
Also everything was double base64 encoded which makes it impossible to use GitHub search.
The Torvalds commits were a common post-infection signature, showing up in the random repos that published secrets (Microsoft documented this: https://www.microsoft.com/en-us/security/blog/2025/12/09/sha...)
It was a really noisy worm though, and it looked like a few actors also jumped on the exposed credentials making private repos public and modifying readmes promoting a startup/discord.
The approach the attacker took makes little sense to me, perhaps someone else has an explanation for it? At first they monitored what's going on and then silently exfiltrated credentials and private repos. Makes sense so far. But then why make so much noise with trying to force push repositories? It's Git, surely there's a clone of nearly everything on most dev machines etc.
Malware sometimes suffers from feature creep too.
> This incident involved one of our engineers installing a compromised package on their development machine, which led to credential theft and unauthorized access to our GitHub organization.
The org only has 4-5 engineers, so you can imagine the impact this would have on a large org.
Points for an excellent post-mortem.
I am loving the ancient Lovecraftian horror vibe of these exploit names. Good for raising awareness, I guess!
AFAIK Shai-Hulud is the sandworm in Frank Herbert's Dune (but also an American metalcore band)
Shai Hulud is the god that lives inside the sandworms in Dune.
Noted!
NPM post-install scripts considered harmful.
There has to be a tool that allows you (or an AI) to easily review post-install scripts before you install the package.
As mentioned in the article, good NPM package managers just do this now.
pnpm does it by default, yarn can be configured. Not sure about npm itself.
Got any pointers on how to configure this for yarn? I'm not turning anything up in the yarn documentation or in my random google searches.
npm still seems to be debating whether they even want to do it. One of many reasons I ditched npm for yarn years ago (though the initial impetus was npm's confused and constantly changing behaviors around peer dependencies)
enableScripts: false in .yarnrc.yml https://yarnpkg.com/configuration/yarnrc#enableScripts
And then opt certain packages back in with dependenciesMeta in package.json https://yarnpkg.com/configuration/manifest#dependenciesMeta....
Yarn is unfortunately a dead-end security-wise under current maintainership.
If you are still on yarn v1, I suggest being consistent with '--ignore-scripts --frozen-lockfile' and running any necessary lifecycle scripts for dependencies yourself. There is @lavamoat/allow-scripts to manage this if your project warrants it.
If you are on newer yarn versions, I strongly encourage migrating to either pnpm or npm.
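For npm itself, the equivalent opt-out is a plain .npmrc setting; note it skips all lifecycle scripts, including your own project's, so you then run any needed build steps by hand:

    # .npmrc (per project or in ~/.npmrc)
    ignore-scripts=true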