Log4Shell (CVE-2021-44228) struck fear into many IT departments, CISOs, and vendors - an easily exploitable remote code execution vulnerability in a ubiquitous Java logging library used by many, many applications. Companies are still reeling from the impact and scrambling to mitigate the issue. But after you've dealt with this one, you may very well deal with it again - at least if you don't start considering how you include external code in your development chain. And it doesn't matter what language you use.
Malicious Python Libraries
Log4Shell was specifically Java-based, arriving via the commonly used "log4j" version 2 library from the Apache Software Foundation. But back in early November 2021, two JavaScript modules used by Node.js developers were compromised to steal passwords. And back in 2019, two Python modules that relied on name confusion stole SSH and GPG keys.
These modules are publicly available snippets of code that perform specific functions. Developers, especially those using web application frameworks (Ruby on Rails, Struts, Django… to name a few) to rapidly develop code, often pull in these modules as part of their development - why write code for something someone else has already written and made publicly available? Frameworks and development environments often make this very easy. But unless you are validating the code yourself, you cannot trust it implicitly. The repositories do try to take measures to ensure the integrity of the resources they serve, but it is not an easy job. Sometimes the problem is a previously undiscovered flaw; other times it is a malicious author. And it takes time to discover and remediate issues at the source, never mind in the development environments that use them.
This doesn't mean you shouldn't use them… unless you want to go back to the days of writing 100% of the code yourself and possibly introducing more flaws than you avoid. What it does mean is that you should set rules and processes around the use of external code.
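As a minimal illustration of what validating can look like in practice, here is a short sketch in Python (the file name and checksum are placeholders we made up) that verifies a downloaded package archive against a hash recorded when the package was first reviewed. It only proves the artifact hasn't changed since that review - it says nothing about whether the code was benign to begin with - but it is a cheap guard against a silently swapped download.

```python
# Minimal sketch: verify a downloaded dependency archive against a checksum
# recorded when the package was first reviewed. The file name and expected
# hash below are placeholders, not real values.
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_archive(path: str, expected_sha256: str) -> None:
    """Raise an error if the archive does not match the recorded checksum."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise RuntimeError(
            f"Checksum mismatch for {path}: expected {expected_sha256}, got {actual}"
        )

if __name__ == "__main__":
    # Substitute the archive and checksum you actually recorded.
    verify_archive("somepackage-1.2.3.tar.gz", "replace-with-recorded-sha256")
```

Most package managers can automate this kind of check - pip, for example, supports hashes in requirements files - but the principle is the same regardless of tooling.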
Best Practices for Development
First off, everyone should document what external code they pull in and which version. This makes auditing your code use much faster when a security advisory is issued. In addition, the developer - or a designated person - should monitor those modules for updates, and especially for security fixes.
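As a hedged sketch of that record keeping in a Python environment, the script below uses the standard library's importlib.metadata to write out every installed package with its exact version; the manifest file name is our own invention. Checking a file like this into version control gives you a fast, auditable answer to "do we use the affected module, and which version?"

```python
# Sketch: record the name and exact version of every installed distribution
# so dependency use can be audited quickly when an advisory is issued.
# Assumes Python 3.8+ for importlib.metadata.
from importlib import metadata

def snapshot_dependencies(path: str = "dependency-manifest.txt") -> None:
    """Write one sorted 'name==version' line per installed distribution."""
    lines = sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in metadata.distributions()
    )
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    snapshot_dependencies()
```

Every ecosystem has a native equivalent (pip freeze, npm's package-lock.json, Maven's dependency reports); what matters is that the record exists, lives with the project, and is kept current.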
Second, you want your development environment to be agile enough to update a module in short order when necessary. Updates matter for two reasons: one, there may be a security fix, and two, the module's function calls may change, requiring changes to your own code. You don't want to be forced to make and test many of those changes at once because a security fix requires jumping to a much later version.
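Being able to update quickly starts with hearing about the security fix quickly. As one hedged example of automating that, the sketch below queries the public OSV.dev vulnerability database for a single pinned package; the package name and version shown are placeholders, and in practice you would loop over your documented manifest and run this in your build pipeline.

```python
# Sketch: ask the public OSV.dev database whether a given package version has
# known vulnerability advisories. The package queried below is a placeholder.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the advisories OSV.dev reports for this package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        result = json.load(response)
    return result.get("vulns", [])

if __name__ == "__main__":
    # Replace with entries read from your dependency manifest.
    for advisory in known_vulnerabilities("requests", "2.19.1"):
        print(advisory.get("id"), advisory.get("summary", ""))
```

Dedicated scanners (pip-audit, npm audit, OWASP Dependency-Check) do this with far more polish; the value is in running the check automatically on every build, not in any particular tool.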
Third, for better peace of mind, consider using a repository proxy system that monitors for security reports and can prevent your internal systems from downloading a module flagged as an issue, as well as actively notify you if flagged modules have already been used. One such system is Sonatype’s Nexus Repository Pro, which supports many languages, repositories, and development tools. (iuvo has no relationship with Sonatype - it is just an example we are familiar with.)
With these practices in place, your internal development can be better prepared for the next major vulnerability that wasn't your fault but is your responsibility.
Contact iuvo Technologies today, and let’s talk about more ways your company can protect itself from harmful vulnerabilities.