It should be no surprise that running untrusted code in a GitHub Actions workflow can have unintended consequences. It’s a killer feature, automatically running a project’s test suite whenever a pull request is opened. But that pull request code runs in some part of the target’s development environment, and over the years a few clever attacks have been found that take advantage of that. There’s now another one, which Legit Security calls GitHub Environment Injection, and some big-name organizations were vulnerable to it.
The crux of the issue is the $GITHUB_ENV file, which contains environment variables to be set in the Actions environment. Individual variables get added to this file as part of the automated action, and that process needs to include some sanitization of the data. Otherwise, an attacker can send a value that includes a newline, smuggling in a completely unintended environment variable. And an unintended, arbitrary environment variable is game over for the security of the workflow. The example uses the NODE_OPTIONS variable to dump the entire environment to an accessible output, revealing any API keys or other secrets.
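To make the failure mode concrete, here’s a minimal Python sketch of how a newline in an attacker-controlled value (say, a pull request title) smuggles a second variable into a $GITHUB_ENV-style file. The file handling is a simplified stand-in for the real Actions runner, not its actual implementation:

```python
import os
import tempfile

def set_env_var(env_file: str, name: str, value: str) -> None:
    """Naive emulation of a workflow step appending to $GITHUB_ENV.
    Without sanitization, a newline in `value` injects extra variables."""
    with open(env_file, "a") as f:
        f.write(f"{name}={value}\n")

def parse_env_file(env_file: str) -> dict:
    """Rough approximation of reading the NAME=VALUE lines back."""
    env = {}
    with open(env_file) as f:
        for line in f:
            if "=" in line:
                key, val = line.rstrip("\n").split("=", 1)
                env[key] = val
    return env

# Attacker-controlled input, e.g. a pull request title:
malicious = "innocent\nNODE_OPTIONS=--require /tmp/payload.js"

with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    env_file = tmp.name

set_env_var(env_file, "PR_TITLE", malicious)
env = parse_env_file(env_file)
os.unlink(env_file)

# The injected variable now exists alongside the intended one:
print("NODE_OPTIONS" in env)  # True
```

The fix is equally simple in principle: strip or reject newlines (and the runner’s multi-line delimiter syntax) before writing any untrusted value to the file.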
This particular attack was reported to GitHub, but there isn’t a practical way to fix it architecturally. So it’s up to individual projects to be very careful about writing untrusted data into the $GITHUB_ENV file.
Your Tires Are Leaking (Data)
Back a few years ago, [Mike Metzger] gave a DEFCON talk about TPMS, Tire Pressure Monitoring Systems. This nifty safety feature allows sensors in car tires to talk to the infotainment center, and warn when a tire is low. [Drew Griess] decided to follow up on this bit of info, and see just how practical it would be to use and abuse these gizmos.
An RTL-SDR and the very useful rtl_433 project do the job quite nicely. Add an antenna, and the signals are readable from over fifty feet away. It gets really interesting when you realize that each of those sensors has a unique ID sent in each ping. Need to track a vehicle? Just follow its tires!
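rtl_433 can emit its decoded pings as JSON (via `-F json`), one record per line, and each TPMS decode carries the sensor’s ID. A short Python sketch of the tracking idea, with illustrative sample records (the model and field names vary by sensor; these are placeholders, not captured data):

```python
import json

# Illustrative records in the line-per-record shape rtl_433 emits
# with `-F json`; real field names depend on the sensor model.
sample_output = """
{"model": "Toyota-TPMS", "id": "fa1b2c3d", "pressure_kPa": 212.5}
{"model": "Toyota-TPMS", "id": "fa1b2c3e", "pressure_kPa": 210.0}
{"model": "Toyota-TPMS", "id": "fa1b2c3d", "pressure_kPa": 212.5}
"""

def unique_sensor_ids(lines: str) -> set:
    """Collect the unique TPMS sensor IDs seen in a capture."""
    ids = set()
    for line in lines.strip().splitlines():
        record = json.loads(line)
        if "TPMS" in record.get("model", ""):
            ids.add(record["id"])
    return ids

seen = unique_sensor_ids(sample_output)
print(len(seen))  # 2 -- two distinct sensors across three pings
```

Collect four stable IDs that travel together, and you have a fingerprint for one specific vehicle.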
SHA is dead, long live SHA
NIST has formally announced the retirement of SHA-1 at the end of 2030, with the recommendation to move to SHA-2 or SHA-3 as soon as possible. Which seems a bit odd, as SHA-1 has been considered broken for quite some time, most notably in the wake of the 2017 SHAttered demonstration, where two PDFs were generated with matching SHA-1 hashes. The latest iteration of that attack puts the cost of generating a collision, where the attacker controls both inputs, at a measly $45,000 of compute. The wheels of official change turn slowly at times.
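The migration itself is mechanical wherever a hash is used as a plain integrity check rather than baked into a protocol field; Python’s hashlib, for instance, already ships both recommended families:

```python
import hashlib

data = b"retire SHA-1 by 2030"

# SHA-1: 160-bit digest, with practical collision attacks today.
legacy = hashlib.sha1(data).hexdigest()

# Drop-in migration targets: SHA-2 (sha256/sha512) or SHA-3.
sha2 = hashlib.sha256(data).hexdigest()
sha3 = hashlib.sha3_256(data).hexdigest()

# Digest sizes in bits (4 bits per hex character):
print(len(legacy) * 4, len(sha2) * 4, len(sha3) * 4)  # 160 256 256
```

The hard part isn’t the code change, it’s flushing SHA-1 out of certificate chains, signature formats, and stored digests that other systems still expect.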
OpenAI, Security Researcher
One of the tedious bits of reverse engineering is working through the various functions, guessing their purpose, and renaming everything to something useful. If only there were a way to automate the process. Enter Gepetto, a project from [Ivan Kwiatkowski] that asks OpenAI’s text-davinci-003 model to describe what a decompiled function does. It’s packaged as an IDA Pro plugin, but the concept should apply to other decompilers, too. Step two is to feed that description back into the AI model and ask it to name the function and variables. The normal warning applies: the AI chat engine will always generate a description that sounds good, but it may be wildly inaccurate.
Sovrin and Decentralized Vulnerabilities
The folks at CyberArk took a look at the Decentralized IDentity (DID) landscape, and found a spectacularly bad vulnerability in the open source Sovrin network. First, some background: DID is an attempt to do something genuinely useful on the blockchain, in this case storing identity information. Want to prove that your WordPress account is owned by the same person as your Twitter or Mastodon account? DID can help. The version of this idea that really gets our open source juices flowing is Self-Sovereign Identity, a DID network that gives end users ultimate control over their own data. But for all that goodness, the network is still made up of servers running potentially vulnerable code. The POOL_UPGRADE command is limited to authorized administrators of the given pool, but the code behind it uses a validate-then-authenticate paradigm.
Let’s chat about that for a moment. The order of operations can really matter. The first place I really had to think about this concept was while working on Single Packet Authorization in the Fwknop project. Those packets carried a bit of request data, both encrypted and authenticated with shared keys. Which should happen first? Did we want the data to be signed first, and then encrypted? Definitely not. The problem is that when the message is received on the other side, the decryption process would happen first, on potentially untrusted data. If there were a vulnerability in the data parsing code, it could be triggered by an unauthenticated user. Instead, the Fwknop project intentionally used the encrypt-then-authenticate approach: when receiving an incoming packet, the first step is to check the authentication, and drop the packet if it isn’t from a known user.
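The ordering can be sketched in a few lines of Python. This is a toy, not Fwknop’s actual implementation: the XOR “cipher” is a deliberately fake stand-in for real encryption, and the point is only that the receiver verifies the HMAC over the ciphertext before any decryption or parsing touches the data:

```python
import hashlib
import hmac
import os

MAC_KEY = os.urandom(32)
ENC_KEY = os.urandom(32)

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy XOR 'cipher' standing in for real encryption. Do not reuse."""
    block = hashlib.sha256(key).digest()
    keystream = (block * (len(plaintext) // len(block) + 1))[:len(plaintext)]
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

def seal(plaintext: bytes) -> bytes:
    """Encrypt-then-authenticate: the MAC tag covers the ciphertext."""
    ct = toy_encrypt(ENC_KEY, plaintext)
    tag = hmac.new(MAC_KEY, ct, hashlib.sha256).digest()
    return tag + ct

def open_packet(packet: bytes) -> bytes:
    """Authenticate FIRST; only then run decryption/parsing on the data."""
    tag, ct = packet[:32], packet[32:]
    expected = hmac.new(MAC_KEY, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC: packet dropped before any parsing")
    return toy_decrypt(ENC_KEY, ct)

packet = seal(b"open port 22 for 10.0.0.5")
assert open_packet(packet) == b"open port 22 for 10.0.0.5"
```

With this ordering, an unauthenticated attacker’s packet never reaches the decryption or parsing code at all; a single constant-time comparison is the entire attack surface.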
Back to Sovrin, where the processing of an incoming command first went through a validation step, before checking for an authorized source. Part of that validation is to look at the packages in the upgrade command and make a call to dpkg to verify that each is a real package, using simple string concatenation to generate the command, and running it via subprocess.run with shell set to True. So it’s trivially exploitable with a semicolon and whatever command you want to run. And to make matters far worse, the upgrade command gets forwarded through the pool automatically, all before the authentication check. It’s not often that a vulnerability is self-worming. This one has a well-deserved 10.0 CVSS score. It was privately disclosed back in May, and fixed less than a month later.
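The bug class is worth a quick sketch. The command string below is illustrative rather than Sovrin’s actual code, but it shows why concatenation plus shell=True is exploitable, and the standard mitigations:

```python
import shlex

# Attacker-supplied "package name" from the upgrade command:
malicious = "acpid; touch /tmp/pwned"

# Vulnerable: passed to subprocess.run(..., shell=True), the shell
# executes everything after the semicolon as a second command.
vulnerable_cmd = "dpkg -s " + malicious

# Mitigation 1: quote before interpolating into a shell string.
quoted_cmd = "dpkg -s " + shlex.quote(malicious)

# Mitigation 2 (preferred): pass an argument list to subprocess.run
# with no shell at all, so the semicolon is just part of one
# (invalid) package name: subprocess.run(["dpkg", "-s", malicious])
safe_argv = ["dpkg", "-s", malicious]

print(vulnerable_cmd)  # dpkg -s acpid; touch /tmp/pwned
print(quoted_cmd)      # dpkg -s 'acpid; touch /tmp/pwned'
```

Either mitigation would have turned this wormable remote code execution into a harmless “package not found” error.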
Bits and Bytes
Okta is having a rough year. After several breaches earlier this year, Okta’s private GitHub repositories were accessed and copied by an attacker. So far, it appears that no customer data was accessed, and to their credit, Okta has a security posture that “does not rely on the confidentiality of its source code as a means to secure its services.” It’s likely that this incident was a follow-on from the previous breach, using credentials obtained in that compromise.
And breaking just before we hit the presses, LastPass has revealed more information about the breach it suffered back in November. It’s not good. We made an educated guess that the cause was an access token lost during a previous incident, but the latest news indicates it was a social engineering attack using captured information. The data lost is troubling: it includes encrypted data vaults, along with metadata like URLs, customer names, addresses, phone numbers, and IP addresses.
Thankfully this doesn’t include credit card information, and the LastPass zero-knowledge architecture does protect the actual passwords, assuming your master password is sufficiently strong. This isn’t quite a worst-case scenario, as no malicious code was shipped to customers, but it’s just about as bad as it could be otherwise. In particular, be on the lookout for spearphishing and other social engineering attacks attempting to leverage the pilfered information.