Container Security in the Era of the SEC Cybersecurity Rules

Chris Koehnecke

The SEC’s recently adopted “Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure” rules, along with the SEC’s recent charges against SolarWinds CISO Tim Brown, were bombshells for the CISO community.

Tim Brown is a friend to many in the cybersecurity community — he and I both delivered talks at BSides Albuquerque late last year — and while the accusations paint a picture of negligence, the SEC’s complaint made every CISO think: “If this can happen to Tim, who has been open and transparent about the incident and lessons learned, it can happen to any of us.”

So it’s incumbent on every CISO out there to deeply understand the new SEC rules. This is especially true if you are a public company now governed by them, but as we all know, once a standard applies to public markets, the expectation soon becomes that every company, public or private, abides by something similar.

As a former Managing Director in KPMG’s cybersecurity practice, I’ve helped hundreds of CISOs navigate FedRAMP, SOC 2, ISO, and other compliance frameworks. Now, as the CISO of Jit, a fast-growing cybersecurity startup, here’s how I’m thinking about the SEC cybersecurity rules, along with a roadmap for how you can effectively evaluate your organization’s security posture, especially when it comes to cloud-native and container security.

Understanding the New SEC Rules from a CISO’s Perspective 

To get started, I went through the rules thoroughly, and they boil down to two clear requirements for publicly traded companies:

  1. Build a cyber program that addresses your top risks and be able to demonstrate due diligence in building the program. 
  2. Report material cyber incidents to the SEC within four business days of determining they are material; that means you need a solid vulnerability management and security monitoring program to detect events, the ability to quickly determine whether events need to be escalated to incidents, and an incident response (IR) process that is documented and tested.

If you are a CISO, your reaction was likely similar to mine: “Way easier said than done.”  

As security practitioners, we often focus on highlighting the positives and what we’re doing well, but have a much harder time communicating where we’re falling short, and the potential risk and impact of those gaps.

Vulnerability management has become the cornerstone of most security programs, but the hard part is ensuring that these programs truly address the weaknesses that sophisticated, modern bad actors would exploit to gain unauthorized access and cause a SolarWinds-like incident.

So how do we really do this? 

As a CISO for leading tech startups (soon to be publicly traded, we can hope!), I decided to share the lessons I drew from this much-hyped story and frame what I consider good practices for CISOs to “keep calm and manage real risk.”

I first tapped into my network of CISO peers to understand how to ensure that my security tooling delivers quality results without overwhelming engineering teams. I then repeated the same exercise with engineering leaders, asking how engineering teams can ensure good security coverage.

The responses were exact mirror images of each other: CISOs were largely concerned about missing something big (who isn’t really?!), while engineering leaders were concerned about engineers spending too much time fixing vulnerabilities, adversely impacting delivery velocity.  

So how do we reconcile these two seemingly opposing interests when it comes to security, and build a program that satisfies the SEC’s new requirements? Read on.

Building a Comprehensive Cybersecurity Program for Containerized Software 

Good vulnerability management starts with constantly understanding and tracking the tools and technologies running in your environments––this need is critical to both security and engineering teams. 

That means keeping engineering environments well documented and up to date, which is possible with the right tools, automation, and diagrams. The quality of our vulnerability management program depends directly on choosing security tooling and scanners that match these tracked assets and languages: that is what delivers comprehensive coverage, reduces noise and toil, and lets us prioritize remediation based on actual threats.
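As one illustration of that kind of tracking, below is a minimal sketch of an automated container-image inventory. It assumes your images live in AWS ECR and that boto3 credentials are already configured; the registry, client, and output format are stand-ins for whatever your environment actually uses.

```python
# Minimal asset-inventory sketch: list every container repository and its
# tags in an AWS ECR registry so the snapshot can feed scanner coverage checks.
# Assumes boto3 is installed and AWS credentials are configured; swap in your
# own registry API if you are not on ECR.
import json

import boto3


def ecr_image_inventory(region: str = "us-east-1") -> dict:
    ecr = boto3.client("ecr", region_name=region)
    inventory = {}
    # Paginate through all repositories in the registry.
    for page in ecr.get_paginator("describe_repositories").paginate():
        for repo in page["repositories"]:
            name = repo["repositoryName"]
            tags = []
            # Collect the tags currently present in each repository.
            for img_page in ecr.get_paginator("list_images").paginate(repositoryName=name):
                tags.extend(i.get("imageTag", "<untagged>") for i in img_page["imageIds"])
            inventory[name] = sorted(set(tags))
    return inventory


if __name__ == "__main__":
    # Persist the snapshot so drift between scans is easy to diff over time.
    print(json.dumps(ecr_image_inventory(), indent=2))
```

Feeding a snapshot like this into your scanner configuration is one way to keep coverage tied to the assets you actually run, rather than to a stale diagram.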

Once we have our environments documented and tooling selected, the next concerns include environment coverage, analyzing and triaging results, and meeting remediation SLAs (as defined by our risk tolerance and appetite). Below are the primary areas to focus on when building your vulnerability management program; implemented properly, they will get you most of the way to mitigating security risk.

  1. Perimeter: Just as a fence or a lock is a first-order safety measure against potential threats, our publicly facing assets are the door into our asset kingdom, and identity is the new perimeter. That’s why we need to be sure that all of our web assets are sufficiently protected, and there are excellent tools to help us ensure web application and API security, such as ZAP or Burp Suite. By running these tools regularly, and before deploying publicly facing assets, we can have sufficient confidence in our perimeter security (one way to automate such a check is sketched just after this list).
  2. Code: Automation has always been the backbone of progress, and we now have best-of-breed automated tools to detect code repositories, run code scans (SAST, SCA, secrets detection), and even deliver auto-remediation when possible in the form of fix PRs and suggestions at specific lines of code. This gives us enough peace of mind that our application code is being properly audited and secured to reduce threats and risk.
  3. Infrastructure: With infrastructure now largely software-driven through infrastructure as code, we can apply the same security benefits we get for our application code to our infrastructure as well. We can leverage automated tools to detect cloud accounts and build asset inventories, and run infrastructure scans against both the configuration code (IaC, Dockerfiles) and the runtime environment (with tools such as Prowler), enabling us to prioritize security management and maintenance based on real risk to our infrastructure.
  4. CI/CD: Many organizations today are high velocity and cloud-first, often leveraging cloud native technology, namely containers, to build and ship software. On top of this, there are quite a few aspects of CI/CD security we need to cover, since the pipeline is the direct pathway from code to production infrastructure. While the other parts of our stack have scanners, automation, and tooling that give us sufficient confidence in our vulnerability management program, one area that gives me pause is whether we can be confident we are using the best container scanners to get the same peace of mind for this part of the stack.
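To make the perimeter point above concrete, here is a minimal sketch of a recurring check built on ZAP’s baseline scan. It assumes Docker and Python are available on the job runner; the target URL and report filename are placeholders, and the image tag should be whatever your team pins to.

```python
# Sketch of a scheduled perimeter check that runs ZAP's baseline scan
# against a publicly facing asset via Docker. The target and report name are
# placeholders; swap in your own endpoints.
import os
import subprocess
import sys

TARGET = "https://staging.example.com"  # hypothetical publicly facing asset

result = subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{os.getcwd()}:/zap/wrk:rw",   # mount cwd so the report lands locally
        "ghcr.io/zaproxy/zaproxy:stable",     # ZAP's published image (verify the tag you pin)
        "zap-baseline.py",
        "-t", TARGET,
        "-r", "zap-baseline-report.html",     # HTML report for triage
    ],
    check=False,
)

# zap-baseline.py exits non-zero when it records warnings or failures, so
# propagating the return code lets the scheduler flag the run for review.
sys.exit(result.returncode)
```

The baseline scan is passive, so it is reasonably safe to point at running environments; a full active scan needs more care and explicit authorization.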

Zooming in on Container Security - Where Do We Start?

When you think about containers and security, it doesn’t need to be overwhelming––containers are simply a means of packaging and shipping software in a uniform way. You can scan containers the same way you scan other parts of your infrastructure, and many good tools exist to do this. 

From my experience working with various companies, you should set up container scanning in two places:  

  • First, in your CI/CD pipeline, to ensure that you are catching vulnerabilities at build time when new code changes come down. This ensures issues are caught as early as possible and as close to the development lifecycle as possible, limiting the impact of context switching for developers.
  • Next, you should be doing daily scans of your containers. New vulnerabilities crop up all the time, so scanning only at build time means you may miss potential threats even when the software artifact isn’t changing. Ideally this is done with a runtime tool (there are many available), but setting up daily scans against your production container registries can also work; a minimal sketch of that approach follows this list.
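To illustrate the second bullet, here is a minimal sketch of a daily job that re-scans a list of already-shipped images with Trivy and fails loudly on critical or high findings. The image list is a placeholder (in practice it would come from your registry inventory), and Trivy is just one of many scanners that could fill this role.

```python
# Sketch of a daily re-scan of already-shipped images with Trivy.
# Assumes the trivy CLI is installed and the job runner can pull the images.
# The image list is illustrative; in practice it would be generated from your
# registry inventory rather than hard-coded.
import subprocess
import sys

PRODUCTION_IMAGES = [
    "registry.example.com/payments-api:1.4.2",   # hypothetical images
    "registry.example.com/web-frontend:2.0.7",
]

failed = []
for image in PRODUCTION_IMAGES:
    # --exit-code 1 makes trivy return non-zero when findings at or above the
    # listed severities are present, which turns "new CVE published" into an
    # actionable signal even though the artifact itself never changed.
    result = subprocess.run(
        ["trivy", "image", "--severity", "CRITICAL,HIGH", "--exit-code", "1", image],
        check=False,
    )
    if result.returncode != 0:
        failed.append(image)

if failed:
    print(f"Images with CRITICAL/HIGH findings: {failed}", file=sys.stderr)
    sys.exit(1)
```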

The final piece, as with any cybersecurity system, is to have a good triage and incident response plan that is documented and in place. Know how you will respond to Criticals, Highs, and other vulnerabilities, and be clear on how severity is determined for your organization. 
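As a small illustration of that last point, the sketch below encodes one possible mapping from severity to remediation SLA; the day counts are hypothetical and should come from your organization’s documented risk appetite, not from this post.

```python
# Hypothetical severity-to-remediation-SLA mapping for triage. The day counts
# are examples only; set them from your organization's documented risk appetite.
from datetime import date, timedelta

REMEDIATION_SLA_DAYS = {
    "CRITICAL": 2,
    "HIGH": 14,
    "MEDIUM": 30,
    "LOW": 90,
}


def remediation_due(severity: str, found_on: date) -> date:
    """Return the date by which a finding of this severity should be remediated."""
    return found_on + timedelta(days=REMEDIATION_SLA_DAYS[severity.upper()])


# Example: a HIGH finding opened today is due two weeks from now.
print(remediation_due("HIGH", date.today()))
```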

Engineering Culture that Powers Security

The final aspect critical to an effective cybersecurity program is the engineering culture that makes security engineering possible. While engineers are already bogged down with the many “shift-left” disciplines, security is becoming as integral a domain expertise as writing code or managing infrastructure.

The risks are too great, and the stakes too high, to simply assume engineers will know what they need to do to be secure. A vulnerability management program therefore has to start with buy-in from engineers who understand the importance of these safety measures and aren’t constantly trying to bypass them.

That starts with selecting the right tooling, providing the greatest possible visibility into risk through intelligent prioritization, and establishing shared ownership and responsibility for remediation.

Wrapping Up the Problem Statement

If we take a look at a modern cloud native stack and engineering organization, there’s no doubt that there are plenty of insertion points where threat actors can wreak havoc on our system.  

However, much like the microservices or micro-deployment mindset, breaking these down into smaller, more manageable areas of vulnerability management enables us to get a better handle on our risk posture and mitigate them one by one.

There are excellent open source and commercial automated tools covering the many layers of today’s stack, including how we package and ship software through container security, and all of them should be chosen to provide as much coverage of our actual stack as possible.

Once we have these guardrails in place, and an engineering culture that understands the importance of prioritizing and enforcing security, we can keep calm and cloud native all the things.

