Cloud Defense – FireMon.com (https://www.firemon.com). Improve Security Operations. Improve Security Outcomes.

Empower Incident Response with Real-Time, Just-in-Time Alerts and Access
https://www.firemon.com/empower-incident-response-with-real-time-just-in-time-alerts-and-access/ (Fri, 03 Nov 2023)

Here at FireMon we have a bit of a different take on Cloud Security Posture Management. Cloud Defense was built from the ground up to support real-time security operations. Our goal, from day one, has been to help detect and remediate cloud security issues before they become cloud security problems.

Although we support automated remediations, whether via the console, ChatOps, or fully automated rules, in many situations it makes more sense to manually review and fix an issue so you are less likely to trigger an unintended consequence. Many issues should be handled by the team that owns the account/subscription/project, which is why we created our advanced ChatOps and ticketing notifications. By sending issues directly to teams, in real time, in the tools they already use, you empower them to fix things more quickly using their preferred technique.

But sometimes, especially if something is exposed to the Internet at large (and maybe in the middle of the night), you will want SecOps to step in and fix it right away. This kind of break-glass access should be restricted, used judiciously, and comprehensively logged.

That’s the example in this video. Watch, in real time (really, there aren’t any cuts) an entire response process from misconfiguration to remediation in less than two minutes:

 

1. Someone creates a snapshot of a storage volume and makes it public.
2. FireMon Cloud Defense instantly alerts the on-call incident responder via Slack.
3. The responder dives into the issue and identifies the exposed resource and AWS account.
4. The responder can even see the API calls that created the issue, and the attribution of who made the changes.
5. The responder then requests JIT access via ChatOps.
6. The manager sees the JIT request and approves it.
7. FireMon Cloud Defense’s Authorization Control feature then notifies the AWS account to create a session and sends the user to a zero-knowledge system to collect credentials (FireMon never has access to credentials).
8. The responder pivots into the AWS account and remediates the issue.
9. Cloud Defense detects the remediation, automatically closes the issue, and sends out a ChatOps notification of the remediation.
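The detection in step 2 hinges on recognizing the API call that flips a snapshot public. Here is a minimal sketch of that check in Python, based on the CloudTrail record shape for `ModifySnapshotAttribute`; the function name and trimmed event are mine for illustration, not Cloud Defense's actual pipeline:

```python
def snapshot_made_public(event: dict) -> bool:
    """Return True if this CloudTrail record makes an EBS snapshot public."""
    if event.get("eventName") != "ModifySnapshotAttribute":
        return False
    adds = (
        event.get("requestParameters", {})
        .get("createVolumePermission", {})
        .get("add", {})
        .get("items", [])
    )
    # AWS expresses "share with everyone" as adding the group "all"
    return any(item.get("group") == "all" for item in adds)

# Example CloudTrail-style record (fields trimmed for illustration).
event = {
    "eventName": "ModifySnapshotAttribute",
    "requestParameters": {
        "snapshotId": "snap-0123456789abcdef0",
        "createVolumePermission": {"add": {"items": [{"group": "all"}]}},
    },
}
assert snapshot_made_public(event)
```

In a real-time pipeline, a match on a check like this is what would fire the Slack alert to the on-call responder.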

It sounds like a lot, but check out the video to see how smooth and easy it is. This really shows the power of real-time and building a product for security practitioners.

Try it for Free

See for yourself how Cloud Defense can protect your organization

Unlimited usage at no cost!

Sign Up Now

On Least Privilege, JIT, and Strong Authorization
https://www.firemon.com/on-least-privilege-jit-and-strong-authorization/ (Wed, 18 Oct 2023)

I’ve been employed as a security professional for over 20 years. I cannot possibly count the number of times I have uttered the words “least privilege”. It’s like a little mantra, sitting on the same bench as “defense in depth” and “insider threat”.

But telling someone to enforce least privilege and walking out of the room is the equivalent of a doctor telling you to “eat healthier” while failing you on your insurance physical, then walking out of the room before overcharging you.

Least privilege is real. It matters. Unlike changing passwords every 90 days, it can have a material impact on improving your security. 

Least privilege is also really hard. Especially at scale. And it doesn’t work for your most important users. 

Why? Because least privilege isn’t the set of privileges you need at that moment; it’s the set of privileges you might ever need to do your job… ever. And when someone needs to do something out of scope from when those privileges were first mapped, it kicks off a slow change process that has to cross different teams and managers.

Or sometimes you just have to talk Bob into giving you access. And Bob is kind of a defensive jerk since he doesn’t trust anyone and doesn’t want to be blamed when you screw up.

Even with least privilege, if an attacker gets those credentials (the primary source of cloud-native breaches), they can still likely do mean things. Although least privilege isn’t always too painful to implement for the average user or employee, it’s really hard to enforce on developers and administrators who, by design, need more privileges.

Just as we have MFA for strong authentication, we need something for strong authorization.

This is where Just in Time (JIT) comes into play. Instead of trying to figure out all the privileges someone needs ahead of time, they can request time-limited permissions at the moment they need them. I now believe that JIT should be the standard for administrative and sensitive access.

Least privilege remains a great concept for general user access, but JIT is better for any level of admin/dev/sensitive access in the cloud.

Just in Time

JIT is a flavor of PIM/PAM. Privileged Access Management and Privileged Identity Management are systems designed to escalate a user’s privileges on demand. Users operate at a lower privilege level until they need to escalate, and these systems use multiple techniques to provide expanded access, usually for a time-limited session. Today isn’t the day to get into the nuance, but the advantage is that they allow for flexibility while still maintaining security. Someone must request additional privileges when they need them, so even if their credentials are compromised, the attacker is still limited.

“JIT” (Just in Time) is one technique for PAM/PIM (or, really, any access). A user has base credentials that might not have access to anything at all, and their privileges are escalated on request. We use JIT ourselves (and it’s available in Cloud Defense), and Netflix released an open source tool called ConsoleMe based on their internal tool. Azure has a built-in (but additional-fee) service called Entra ID Privileged Identity Management. (Entra ID is what we used to call Azure AD before someone decided it was a good idea to confuse millions of customers for branding purposes.) There are more options; these are just examples.

To enhance security, JIT needs to use an out-of-band approval flow and provide time-limited access. Those are the basics. The request and approval should flow through a different path than the normal authentication, like a form of MFA. The difference is that MFA is an out-of-band factor for authentication (proving you are who you say you are), while JIT is a form of authorization (you request and receive permission to do something).
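To make the mechanics concrete, here is a toy model of a JIT grant: approval arrives through a separate channel, and access is only valid inside a time window. The class and field names are invented for illustration; this is not Cloud Defense's actual API.

```python
import time
from dataclasses import dataclass, field

@dataclass
class JITGrant:
    """Hypothetical time-limited privilege grant."""
    user: str
    role: str
    ttl_seconds: int
    approved: bool = False
    granted_at: float = field(default_factory=time.time)

    def approve(self) -> None:
        # In a real system this flag flips from an out-of-band channel
        # (ChatOps approval, text message), never from the requester's session.
        self.approved = True
        self.granted_at = time.time()

    def is_active(self) -> bool:
        # Access requires both approval and an unexpired window.
        return self.approved and (time.time() - self.granted_at) < self.ttl_seconds

grant = JITGrant(user="responder", role="prod-read", ttl_seconds=900)
assert not grant.is_active()   # requested, but not yet approved
grant.approve()
assert grant.is_active()       # valid for the next 15 minutes
```

The point of the sketch is the shape, not the code: stolen base credentials alone can't mint an active grant, because the approval comes from a different path.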

Managing Friction

Both least privilege and JIT introduce friction. I mean, everything we do in security introduces some kind of friction, especially Bob. With least privilege the main friction is the overhead to define and deploy privileges, and what breaks when someone doesn’t have privileges they need. With JIT the friction is the process of submitting and receiving an approval.

Having used and researched both least privilege and JIT for a long time, I’ve learned techniques to reduce the friction. In some cases you end up with faster and better processes than how we’ve historically done things.

  • The request and approval flow needs to be real-time. This means approvals via ChatOps, text messages, or the 5G chip implanted with your COVID vaccine.
  • For lower-privileged access, like read access to some logs, you can and should support self-approval. How does this help? Because it still uses the out-of-band process and reduces the ability of an attacker to leverage lost/stolen/exposed credentials.
  • You can also support auto-approvals, where you don’t even need to click over to self-approve. How does this help? You can auto-approve but still use your out-of-band channel to notify that privileges were escalated. You’ve probably seen this if you’ve ever added a Netflix or Hulu device to your account. Awareness alone can be incredibly effective.
  • If this is for developers, you need to support the command line and other tools they use. Go to them. Make it super-easy to use. If you force them to log into a security tool, the project will fail.
  • If approvers aren’t responsive, like instantly, you will fail. Don’t make Bob the only approver.
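As a sketch, the self-approval and auto-approval tiers above might be routed like this. The tier names and the mapping are hypothetical; the point is that the riskier the privilege, the more human friction you accept.

```python
def route_request(privilege: str) -> str:
    """Map a requested privilege to an approval flow (illustrative names)."""
    routing = {
        "log-read": "auto-approve+notify",   # escalate instantly, alert out of band
        "prod-read": "self-approve",         # one out-of-band click by the requester
        "prod-write": "manager-approval",    # a human approver in ChatOps
        "admin": "multi-approver",           # multiple humans for the crown jewels
    }
    # Unknown privileges default to a human in the loop.
    return routing.get(privilege, "manager-approval")

assert route_request("log-read") == "auto-approve+notify"
assert route_request("something-new") == "manager-approval"
```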

Bring the capability to your devs/admins in the tools they already use. Make it fast and frictionless. Ideally, make it easier and faster than opening up a password manager or clicking around an SSO portal stuffed with 374 cloud accounts to pick from. Buy Bob some cookies. Chocolate chip. (Oh wait, that’s me).

You can also use automation to reduce friction for least privilege access. The Duckbill Group implemented their own version of automated least privilege using different tech with the help of Chris Farris. Tools like AWS Access Advisor are there to help you monitor used permissions and scope them down. Automation is there to help you implement least privilege at scale, and can also be an adjunct to JIT.

When to use which

Least privilege isn’t a dead concept by any means. It’s still the gold standard for everyday users/employees that need a pretty consistent level of access. JIT is best for more-privileged access, especially to production environments, and especially in cloud where credential exposures are THE biggest source of breaches. Here’s where we use it ourselves:

  • Developer read access to production.
  • Developer change access to production (outside CI/CD). Far more restricted with more approvers required.
  • Admin access to prod accounts.
  • Incident response access.
  • Some dev account access, since it can be faster than going back to the SSO portal, especially when working on the command line.

I no longer think least privilege alone is a valid concept for any significant level of privileged access in cloud (IaaS/PaaS), even when we use strong MFA. It’s too hard to properly scope permissions at scale, over time. JIT is a far better option in these use cases. Least privilege is still very viable when consistent permissions over time are needed, especially combined with good access logging and MFA. JIT is the companion to MFA. It’s the strong authorization to pair with your strong authentication. As we continue to move more critical operations into management planes that are exposed to the Internet, JIT is the way.

A Paramedic’s Top 2 Tips for Cloud Incident Response
https://www.firemon.com/a-paramedics-top-2-tips-for-cloud-incident-response/ (Wed, 11 Oct 2023)

One of the advantages of having a lot of unique hobbies is that they wire your brain a little differently. You will find yourself approaching problems from a different angle as you mentally cross-contaminate different domains. As a semi-active Paramedic, I find tons of parallels between responding to meat-bag emergencies and managing bits-and-bytes emergencies.

I’ve been teaching a lot of cloud incident response over the past few years and started using two phrases from Paramedicland that seem to resonate well with budding incident responders. These memory aids do a good job of helping refine focus and optimizing the process. While they apply to any incident response, I find they play a larger role on the cloud side due to the inherent differences caused predominantly by the existence of the management plane.

Sick or Not Sick

Paramedics can do a lot compared to someone off the street, but we are pretty limited in the realm of medicine. We are exquisitely trained to rapidly recognize threats to life and limb, be they medical or trauma, and to stabilize and transport patients to definitive care. One key phrase that gets hammered into us is “sick or not sick.” It’s a memory aid to help us remember to focus on the big picture and figure out if the patient is in deep trouble.

I love using this one to help infosec professionals gauge how bad an incident is. For cloud, we teach them to identify significant findings that require them to home in on a problem right then and there before moving on. In EMS, it’s called a “life threat.” Since cloud incident response leverages existing IR skills with a new underlying technology, that phrase is just a reminder to consider the consequences of a finding that may not normally trigger a responder’s instincts. Here are some simple examples:

  • Data made public in object storage (S3) that shouldn’t be.
  • A potentially compromised IAM entity with admin or other high privileges.
  • Multiple successful API calls using different IAM users from the same unknown IP address.
  • Cross-account sharing of an image or snapshot with an unknown account.
  • A potentially compromised instance/VM that has IAM privileges.

When I write them out, most responders go, “duh, that’s obvious,” but in my experience, traditional responders need a little time to recognize these issues and realize they are far more critical than the average compromised virtual machine.

“Sick or not sick” in the cloud almost always translates to “is it public or did they move into the management plane (IAM).”

Sick or not sick. Every time you find a new piece of evidence, a new piece of the puzzle, run this through your head to figure out if your patient is about to crash, or if they just have the sniffles.
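If it helps to see the heuristic as code, here is a toy version of the cloud "sick or not sick" test, with invented field names: public exposure, management-plane (IAM) involvement, or unknown cross-account access marks a finding as sick.

```python
def triage(finding: dict) -> str:
    """Toy sick/not-sick triage over a finding's attributes (field names invented)."""
    sick = (
        finding.get("public", False)               # exposed to the Internet?
        or finding.get("iam_involved", False)       # did they touch the management plane?
        or finding.get("cross_account_unknown", False)  # shared with an unknown account?
    )
    return "sick" if sick else "not sick"

assert triage({"public": True}) == "sick"
assert triage({"iam_involved": True}) == "sick"
assert triage({"public": False}) == "not sick"
```

A real platform weighs far more signals, but this is the mental checklist compressed to three questions.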

Stop the Bleed

Many of you have probably taken a CPR and First Aid class. You likely learned the “ABCs”: Airway, Breathing, and Circulation.

Yeah, it turns out we really screwed that one up.

Research started to show that, in an emergency, people would focus on the ABCs to the exclusion of the bigger picture. Even paramedics would get caught performing CPR on someone who was bleeding out from a wound to their leg. Sometimes it was perfect CPR. You could tell by how quickly the patient ran out of blood. These days we add “treat life threat” to the beginning, and “stop the bleed” is the top priority.

See where I’m headed?

In every class I’ve taught, I find highly experienced responders focusing on their analysis and investigation while the cloud is bleeding out in front of them. Why?

Because they aren’t used to everything (potentially) being on the Internet. The entire management plane is on the Internet, so if an attacker gets credentials, you can’t stop them with a firewall or by shutting down access to a server. If something is compromised and exposed, it’s compromised and exposed to… well, potentially everyone, everywhere, all at once.

Stop the bleed goes hand in hand with sick or not sick. If you find something sick, do you need to contain it right then and there before you move on? It’s a delicate balance because if you make the wrong call, you might be wasting precious time as the attacker continues to progress. Stop the bleed equals “this is so bad I need to fix it now.” But once you do stop the bleed, you need to jump right back in where you were and continue with your analysis and response process since there may still be a lot of badness going on.

My shortlist?

  • Any IAM entity with high privileges that appears compromised.
  • Sensitive data that is somehow public.
  • Cross-account/subscription/project sharing or access to an unknown destination.

There’s more, but that’s the shortlist. Every one of these indicates active data loss or compromise and you need to contain them right away.

Seeing it in Action

Here’s an example. The screenshots are a mix of Slack, the AWS Console, and FireMon Cloud Defense. That’s my toolchain, and this will work with whatever you have. In the training classes, we also use Athena queries to simulate a SIEM, but I want to keep this post short(ish).

Let’s start with a medium-severity alert in Slack from our combined CSPM/CDR platform:

Sick or Not Sick? We don’t know yet. This could be totally legitimate. Okay, time to investigate. I’ll show this both in the platform and in the AWS console. My first step is to see what is shared where. Since the alert has the AMI ID, we can jump right to it:

Okay- I can see this is shared with another account. Is that an account I own? That I know? My tool flags it as untrusted since it isn’t an account registered with the system, but in real life, I would want to check my organization’s master account list just to double-check.

Okay, sick or not sick? In my head, it’s still a maybe. I have an image shared with a potentially untrusted account, but I don’t know what is shared yet. I need to trace that back to the source instance. I’m not bothering with full forensics; I’m going to rely on contextual information since I need to figure this out pretty quickly. In this case, we lucked out:

It has “Prod” in the name, so… I’m calling this “probably sick.” Stop the bleed? In real life, I’d try to contact whoever owned that AWS account first, but for today, I do think I have enough information to quarantine the AMI. Here’s how in the console and Cloud Defense:

Okay, did we Stop the Bleed? We stopped… part of the bleed. We locked down the AMI, but we still don’t know how it ended up there. We also don’t know who owns that AWS account. Can we find out? Nope. If it isn’t ours, all we can do is report it to AWS and let them handle the rest.

Let’s hunt for the API calls to find out who shared it and what else they did. I’m going to do these next bits in the platform, but you would run queries in your SIEM or Athena to find the same information. I’ll do future posts on all the queries, but this post is focused on the sick/bleed concepts.

Okay- I see an IAM entity named ImageBuilder is responsible. Again, because this post is already running long, I checked a few things and here is what I learned:

  • ImageBuilder is an IAM user with privileges to create images and modify their attributes, but nothing more. However, the policy has no resource constraints, so it can create an image of any instance, and no condition constraints, so it can share with any account. This is a moderate-to-low blast radius; it’s over-privileged, but not horribly. I call it sorta-sick.
  • The API call came from an unknown IP address. This is suspicious, but still only sorta-sick.
  • It is the first time I’ve seen that IP address used by this IAM user, and the user’s previous activity aligns with a batch process. Okay, now I’m leaning towards sick. We don’t usually see rotating IP addresses for jobs like this; it smells of a lost credential.
  • That IAM user can continue to perform these actions. Unless someone tells me they meant to do this, I’m calling it Sick, and I’m going to Stop the Bleed and put an IAM restriction on that user account (probably a Deny All policy unless this is a critical process, in which case I’d use an IP restriction).
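For reference, those two containment options (a blanket Deny All, or a Deny scoped by source IP so a legitimate batch job can keep running) look roughly like this as IAM inline policy documents. The CIDR is a placeholder; in practice you would attach the document with `aws iam put-user-policy` or your tool of choice.

```python
import json

def deny_all_policy() -> dict:
    """Blanket containment: deny every action on every resource."""
    return {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
    }

def deny_outside_ip_policy(allowed_cidr: str) -> dict:
    """Softer containment: deny everything except calls from a known source CIDR."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            # Deny applies only when the call does NOT come from the allowed range.
            "Condition": {"NotIpAddress": {"aws:SourceIp": allowed_cidr}},
        }],
    }

print(json.dumps(deny_outside_ip_policy("203.0.113.0/24"), indent=2))
```

Remember that an explicit Deny in IAM overrides any Allow, which is why this works as an emergency brake.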

In summary:

  • I found an AMI shared to an unknown account: Sick
  • That AMI was for a production asset: Sick and Stop the Bleed
  • The action was from an IAM user with wide privileges to make AMIs and share them, but nothing else: Maybe sick, still investigating.
  • The IAM user made that AMI from an unknown, new IP address: Sick, Stop the (rest of the) Bleed.
  • There is no other detected activity from the IP address: Likely contained and no longer Sick
  • I still don’t know how those credentials were leaked: Sick, and time to call in our traditional IR friends to see if it was a network or host compromise.

I went through this quickly to highlight how I think of these issues. With just a few differences, this same finding would have been totally normal. Imagine we realized it was shared with a new account we control but hadn’t registered yet. Or the AMI was for a development instance that doesn’t have anything sensitive in it. Or the API calls came from our network, at the expected time, or from an admin’s system, and they meant to share it. This example is not egregious, but it is a known form of data exfiltration used by active threat actors. As I find each piece of information, I evaluate whether it’s Sick or Not Sick and whether I need to Stop the Bleed.

How is this different in cloud? Because the stakes are higher when everything potentially touches the Internet. We need to think and act faster, and I find these memory aids helpful for keeping us on track.

How and Why FireMon Pioneered Real-Time CSPM
https://www.firemon.com/how-and-why-firemon-pioneered-real-time-cspm/ (Tue, 10 Oct 2023)

Two years ago, FireMon elevated its game by introducing real-time features in our Cloud Defense platform. This was a significant development because it transformed our tool from a basic safety checker into a full-fledged cloud security guardian. Real-time capability is crucial for advancing tools from basic vulnerability assessment to a comprehensive cloud security operations platform. However, our journey towards real-time was not driven by customer requests; rather, it was motivated by our commitment to delivering improved efficiency and enhanced security operations.

Why We Built Real-Time:

Our initial goal was not to create a Cloud Security Posture Management (CSPM) tool. We began by building a cloud security automation platform with the aim of helping organizations address cloud security vulnerabilities more rapidly and bridging the gap between security and DevOps/Cloud Operations. While this may seem like a subtle distinction, it meant that we entered the CSPM market with a different perspective.

  • Inefficiency of time-based scans: Initially, like everyone else, we relied on time-based scans. However, they proved to be slow, even when distributed, and could potentially exceed a customer’s service limits.
  • Stale data: Periodic scans resulted in customers viewing outdated information. Even scanning every 15 minutes could lead to alerting a development team about something they had already resolved.
  • Real-time nature of security operations: Responders need to have real-time awareness of events, alerts, and configurations.
  • Efficiency for us: It’s not selfish to consider that dealing with timing and capacity planning in a multi-tenant system becomes challenging when everything is time-based.

This isn’t to say that time-based scans don’t have their place; we still use them for our Free tier, and we perform daily sweeps for all our Pro accounts to ensure nothing slips through the cracks.

Building Real-Time (The AWS Way):

Today, we will focus on how we enable real-time functionality for AWS. In future posts, we will provide details on how we implement it for Azure and GCP. We underwent several iterations, and thanks to AWS, the system we have now is remarkably efficient.

  • EventBridge to Lambda to API: Initially, we forwarded events from EventBridge to an API gateway through a Lambda function deployed in customer environments. It worked but was not highly efficient.
  • EventBridge to… EventBridge: AWS enhanced EventBridge, allowing customers to send events directly to us. Now, all we needed to do was deploy an EventBridge Rule in customer accounts. We didn’t even require special authentication because the AWS event headers are tamper-proof, and we discard anything not associated with a customer.
  • Updating on change: We keep track of changes such as updates and deletions, capturing resource details. This initiates an update in our Discoverer service for that specific item.
  • Trigger chain: The update hits the Inventory, and any change here triggers the Lambda functions for checks. All checks for a specific type of resource occur simultaneously, and findings are evaluated against alert and remediation rules.
  • Instant alerts: This setup triggers an alert (or automated remediation) within just 5-15 seconds after a change, and all parts of the system are updated with consistent data (e.g., compliance). Most customers send alerts to ChatOps (Slack/Teams), but they can also send them via email, create a JIRA ticket, or forward them to a SIEM.
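A toy version of that trigger chain, with invented names and a single check, might look like the following: an incoming event updates the inventory, the update runs every check registered for that resource type, and failing checks become alerts.

```python
# Illustrative sketch of the event -> inventory -> checks -> alert chain.
# Real Cloud Defense internals (Discoverer, Lambda checks) will differ.
inventory: dict = {}
alerts: list = []

CHECKS = {
    # One registered check per resource type; a check returns a finding or None.
    "s3_bucket": [lambda cfg: "public bucket" if cfg.get("public") else None],
}

def handle_event(resource_id: str, resource_type: str, config: dict) -> None:
    inventory[resource_id] = config                      # real-time inventory update
    for check in CHECKS.get(resource_type, []):          # all checks for this type
        finding = check(config)
        if finding:
            alerts.append(f"{resource_id}: {finding}")   # would go to ChatOps/SIEM

handle_event("bucket-1", "s3_bucket", {"public": True})
assert alerts == ["bucket-1: public bucket"]
```

Because checks run on the changed item only, the whole chain stays within seconds of the original API call instead of waiting for the next sweep.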

Real-Time Benefits:

Transitioning to real-time elevated Cloud Defense, finally enabling security operations as we had always envisioned. Without real-time capability, CSPM tools are essentially just another type of vulnerability scanner. There’s nothing wrong with vulnerability scanners; we use them ourselves. However, since cloud misconfigurations can become exposed to the internet instantly, we believe the response cycle needs to be much tighter.

  • Up-to-date inventory: With real-time functionality, what you see in Cloud Defense accurately reflects the current configuration of your AWS account.
  • Immediate checks: Security and compliance checks occur as changes are made, promptly identifying misconfigurations. You won’t be left exposed for 15 minutes to 24 hours, which is the scanning frequency of time-based tools.
  • Complete understanding of changes: Cloud Defense tracks the API that triggered the change, the identity responsible for the API call, and the impact on the resource (including changes and check results) from start to finish. This comprehensive tracking allows for change tracking, examination of other API calls from the same IAM entity, exploration of resources connected to the affected resource, and other powerful analysis capabilities.
  • Enabling security operations: With Cloud Defense, you gain insight into who made a change, when it was made, the security implications, and the ability to filter and forward information to facilitate rapid remediation, whether manual or automated. No more emailing spreadsheets. This transformation elevates the platform into a complete operational tool.

Our Cloud Defense platform demonstrates how real-time CSPM should be done. From our initial days of time-based scans to the swift transition to real-time monitoring, we have enhanced your ability to use CSPM as a security operations tool and introduced new methods of safeguarding your cloud deployments. Adding real-time capability to Cloud Defense was not just a flashy feature; it was a game-changer in making cloud security robust, quick, and reliable.

How Cloud Defense Free is Cheaper than Open Source/DIY CSPM
https://www.firemon.com/how-cloud-defense-free-is-cheaper-than-open-source-diy-cspm/ (Tue, 10 Oct 2023)

We are big supporters of open-source security tools and even employ some of them ourselves. However, it’s not always the right answer. Deploying and managing the infrastructure and software updates becomes your responsibility. These tools don’t always scale effectively and may lack a complete user experience. Furthermore, you shoulder the cost of the infrastructure, and even top-notch tools often lose their maintainers and lack support.

Going Free Instead of OSS

When we made the decision to contribute to the community, we contemplated open-sourcing all or part of our platform. However, due to its complexity, it wasn’t well-suited for that kind of release, and creating a version fit for release would have required a significant amount of additional effort. We simply didn’t have enough developers to convert it over, and user maintenance would have been quite extensive. Instead, we chose to release a free version. While it may not offer all the bells and whistles, it’s free, has unlimited scope, and will remain free indefinitely without inundating you with marketing messages.

Users still have access to a comprehensive suite of assessments (perhaps even too many—we’re about to make some adjustments to reduce noise) and all the benefits of an enterprise-grade tool. However, Cloud Defense Free does have certain limitations to enable its continued operation. It only checks your deployments once a day, lacks our real-time capabilities, and maintains inventory for a shorter period. For obvious reasons, it doesn’t include everything we’ve developed (such as Just-in-Time authorizations for AWS). After all, we need to support our families. Nevertheless, Cloud Defense Free was designed for those of you who simply require basic CSPM without the burden of paying the ridiculous security tax to get it.

(Seriously, cloud providers should be giving this much away for free).

Benefits Over Open Source CSPM

The advantages are clear: you don’t need to manage infrastructure, host or pay for it, learn how to deploy or configure anything, or worry about updates; you can switch it off whenever you want if it isn’t working for you; and you get a constantly updated library of checks. In under 10 minutes, you can be up and running, scale to thousands of accounts, eliminate maintenance concerns, enjoy a pretty good user experience, never spend a dime, and avoid being incessantly bombarded with upgrade emails.

We’re not attempting to compete with open-source CSPM. Some of you may have excellent reasons to choose that route, particularly if you have the time and technical skills and desire things to operate in a specific manner. However, we believe there’s a significant segment of organizations and individuals who could benefit from something more accessible and cost-effective to maintain. This is where Cloud Defense Free comes into play—a valuable addition to your toolkit and our way of supporting the community, even though releasing open source software wasn’t the right fit for us. You can check the cloud security box in 10 minutes or less, for free.

Deep Dive on Real-Time Inventory
https://www.firemon.com/deep-dive-on-real-time-inventory/ (Wed, 04 Oct 2023)

Early on at FireMon (well, before we became FireMon), we realized that attempting to live-assess customers’ cloud accounts (including subscriptions/projects) was… problematic. Running that many assessments would quickly hit service limits and could potentially disrupt a customer’s internal API calls. Keep in mind that we started doing this about 7 years ago, before CSPM even existed, and everyone was learning the same lessons.

The first solution we came up with was to collect configuration data once, input it into our own inventory, and then perform our assessments there. This allowed us to reduce our API calls to only what was necessary to retrieve the metadata. Then, we could run multiple assessments based on the same dataset. For a while, this approach worked well. We still performed time-based configuration scans, but we could spread them out more evenly and optimize to minimize the overload of API calls. However, this approach had its own set of issues. What if something changed between our scan and when someone finally went in to manage the alert? Additionally, sweeping through a full AWS service for all resources in that service would still strain against API limits, which are based on the service and region.

We set two challenges for ourselves to address this situation better. First, we aimed to update the inventory in real-time to reduce API call spikes to a given service and ensure that customers never worked with outdated data. Second, we aimed to maintain a history so that customers and investigators could look back and see exactly what changed and how it changed. We’ll delve into the technical architecture later, and it varies slightly for each cloud platform. In brief, by directly connecting to the cloud provider’s event stream, we could identify change API calls, extract the involved resources, update our inventory in real-time, and trigger all our assessments for a given inventory type simultaneously.
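The core of that design, stripped down to a sketch, is an append-only history per resource keyed by who changed it, so the latest entry is the current state and older entries are the audit trail. Names are invented for illustration.

```python
from datetime import datetime, timezone

# resource_id -> list of change records, oldest first
history: dict = {}

def record_change(resource_id: str, actor: str, new_config: dict) -> None:
    """Append a change record instead of overwriting state."""
    history.setdefault(resource_id, []).append({
        "at": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # identity attribution from the API call
        "config": new_config,    # full configuration after the change
    })

def current_config(resource_id: str) -> dict:
    """The newest record is always the live configuration."""
    return history[resource_id][-1]["config"]

record_change("sg-123", "alice", {"open_ports": [443]})
record_change("sg-123", "bob", {"open_ports": [443, 22]})
assert current_config("sg-123") == {"open_ports": [443, 22]}
assert history["sg-123"][0]["actor"] == "alice"
```

With this shape, "what changed, when, and by whom" is a simple diff between adjacent records rather than a forensic reconstruction.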

While we still support this with a once-a-day/off-hours time-based sweep, transitioning to real-time addressed many issues and produced some interesting benefits. These benefits include:

  • Customers never encounter stale data; everything in the platform should closely match the actual running configuration/state.
  • As we monitor the API calls, we can identify who made those calls. Suddenly, we have complete identity attribution in our inventory.
  • It becomes easy to pinpoint what changed as changes are made, providing comprehensive change tracking.
  • We can run all checks and assessments in real-time as changes occur. This includes RESOLVING issues as someone rectifies them externally, not just identifying new issues.

Boom. A complete real-time, change-tracked, identity-attributed historical inventory! Yes, something like AWS Config provides this functionality natively within the cloud provider. However, aside from being cost-effective, our inventory and assessments are tightly integrated, cover multiple cloud deployments and providers, and offer some pretty impressive capabilities, such as comprehensive search functionalities.

The best way to experience this is through our 90-second video tour!

And here are a few key screenshots:

Main page, displaying a wealth of important data in a single view:

Here’s the change history view, presenting changes with full details and attribution. It also boasts useful features like related events, associated resources, exemptions, and a history of pass/fail findings for the resource:

This History view tracks changes chronologically with a graph depicting activity trends. Clicking on the timeline jumps to that date:

Have you ever needed to know which ephemeral cloud resource owned the IP address that appeared in the logs at a specific point in time? Incident responders love this one…
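As a hedged illustration of how a change-tracked inventory answers that question, here's a point-in-time lookup over a per-resource history. The schema and field names are made up for the example, not FireMon's actual data model.

```python
# Sketch: given a change-tracked inventory (resource id -> list of
# timestamped records, oldest first), find which resource held an IP
# address at a given moment. ISO-8601 UTC timestamps sort lexically,
# so bisect works directly on the strings.
from bisect import bisect_right

def resource_for_ip(inventory, ip, at_time):
    """Return the resource id that owned `ip` at ISO-8601 time `at_time`."""
    for resource_id, history in inventory.items():
        times = [rec["time"] for rec in history]
        idx = bisect_right(times, at_time) - 1  # last change at or before at_time
        if idx >= 0 and history[idx].get("ip") == ip:
            return resource_id
    return None
```

Note that an ephemeral instance that released the IP shows up as a later record with no IP, so the lookup correctly attributes the address to whichever resource held it at that exact moment.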

And that’s the quick overview. In future posts, we’ll provide more insight into the architecture and how we handle this for multi-cloud environments.

Try it for Free

See for yourself how Cloud Defense can protect your organization

Unlimited usage at no cost!

Sign Up Now

The Mysterious Case of the Ephemeral Data Exposure https://www.firemon.com/the-mysterious-case-of-the-ephemeral-data-exposure/ Wed, 04 Oct 2023 17:10:23 +0000 https://www.firemon.com/?p=1611

While we don't normally monitor customer accounts for findings and alerts, a customer recently asked us to take a more proactive role in their journey toward automated remediation. At the customer's request, we were keeping an eye on a few things when… something interesting happened.

Our CTO received an alert indicating that there was an exposed Public RDS instance in AWS. However, when he checked with the client, it wasn’t there anymore. Adding to the strangeness, a public RDS instance was being created every night, only to be terminated 50 minutes later. This kind of activity could easily be missed during timed assessments. Our CTO promptly informed the client and retrieved the metadata on the terminated instance from our inventory. After a thorough (and quick, it only took a few minutes) investigation of the triggering events and instance configuration, he discovered that a public instance was being created nightly based on the latest snapshot backup of a different database. It was then exposed to a small list of known corporate IP addresses (which was good news) before being terminated shortly afterward.

The Investigation
The client conducted their own investigation and found that this was part of an automation process for ETL that ran in the data center. A scheduled job on the cloud side was responsible for creating the ephemeral instance as public, restricting access to a handful of IP addresses (5, which still seemed like a lot), and then the data center would connect to extract the data. We never found out where the actual data transformation happened, but that isn’t overly relevant to the situation.

This presented an interesting challenge for the security team – the alert was valid, but there was no actual security issue at hand (although there are certainly more secure ways to handle this situation than a public RDS instance). Exempting the instance was not an option since a new one was created every night. Exempting the entire account from the check would also be risky, as it could potentially lead to the oversight of a genuinely exposed RDS instance. Even exempting based on tags posed a risk, as someone could easily change the process to expose the instance to an untrusted IP address.

Lessons Learned
My advice was to focus on fixing the underlying process rather than complicating the assessment side. The reality is that this process is not ideal from a procedural standpoint – allowing public RDS instances is never good form. Sometimes you need them, but they should only be a last resort. Instead, they should be placed in a private subnet and accessed through a dedicated or VPN-based connection from wherever you need.

While this didn’t turn out to be a security exposure, there are still some interesting lessons to learn. First, I refer to this as a “false false positive” since the alert was for a real condition that needed attention, but it did not necessarily pose a risk in this particular scenario. There was no actual data leakage, but the client couldn’t know without an investigation and communication with the team responsible for the resource and process.

Second, this is a tough one to try and fully prevent with Service Control Policies. There is no condition key to prevent public RDS instances, nor are there condition keys to prevent the opening of database ports (or any ports) in security groups.

Third, the ephemeral nature of the instances means that unless you operate in real-time or on a very short cycle, you might miss the exposure. I actually cover this topic in my incident response training, as there are many situations where something can be exposed and extracted within a tight timeframe, only to be destroyed later to eliminate evidence. This is why incident responders always need the capability to jump directly into deployments and should have access to an inventory that allows them to look back (such as AWS Config or a third-party tool like ours). API calls alone may not provide sufficient insight into what is happening since they lack context. In this case, you would detect the exposure, but then need to look directly at the DB Instance (or in inventory) to see what ports are exposed from where.

Fourth, due to the limited preventative options available, detective and corrective controls must be utilized. In this case, you can directly detect the CreateDBInstance API call and check for the PubliclyAccessible=True parameter. Additionally, continuous monitoring with CSPM (again, from your CSP or a vendor like us) for public RDS instances is highly recommended. In terms of remediation, one option is to terminate the instance upon detecting its creation. However, a better approach may be to use ModifyDBInstance to set PubliclyAccessible to false. If you do this, only implement such automation in a deployment where you are certain that public RDS instances will never be allowed. The day you disrupt an expected and authorized database connection that’s been running for 3 years because you failed to communicate with the team is probably a good day to pull out that resume.
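As a concrete illustration, here's a hedged sketch of that detective/corrective pair in Python with boto3: inspect the CloudTrail event detail for a CreateDBInstance call with publiclyAccessible set, then flip the instance private with ModifyDBInstance. The event field names follow CloudTrail's conventions; the injectable `rds` client and function names are illustrative, and again, only wire this up in deployments where public RDS instances are categorically disallowed.

```python
def is_public_rds_creation(detail):
    """True if a CloudTrail CreateDBInstance event made the instance public."""
    params = detail.get("requestParameters") or {}
    return (detail.get("eventName") == "CreateDBInstance"
            and params.get("publiclyAccessible") is True)

def remediate_public_rds(detail, rds=None):
    """Flip the instance private via ModifyDBInstance; returns the id acted on."""
    if not is_public_rds_creation(detail):
        return None
    if rds is None:  # imported lazily so tests can inject a stub client
        import boto3
        rds = boto3.client("rds")
    db_id = detail["requestParameters"]["dBInstanceIdentifier"]
    rds.modify_db_instance(
        DBInstanceIdentifier=db_id,
        PubliclyAccessible=False,  # corrective control, gentler than termination
        ApplyImmediately=True,
    )
    return db_id
```

Making the instance private rather than terminating it preserves the workload (and evidence) while closing the exposure, which matters precisely in cases like this one where the "exposure" turns out to be an authorized process.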

Ultimately, this incident did not pose a security risk for the customer. However, it did highlight the need for more secure processes, and they are actively exploring options to handle things in a more secure manner. I find this example particularly interesting because ephemeral data exposures, leaks, and exfiltration are genuine concerns, and what we initially discovered seemed indistinguishable from an actual attack. Only after digging in did we, and the client’s security team, realize it was part of an expected process. It’s crucial to work closely with your teams to cultivate good habits, ensure your monitoring is capable of handling the highly volatile nature of the cloud, and understand that when something unusual like this occurs, it is critical to engage with the individuals responsible for the deployment.

In the cloud, sometimes the only way to differentiate between a false positive and a really bad problem is to check with those directly involved. As I wrote in Schrödinger’s Misconfigurations, attackers utilize the same API calls and, unfortunately, identities, rather than relying on some zero-day vulnerability.

Guest Speaker

Rich Mogull

SVP of Cloud Security, FireMon
Rich is the SVP of Cloud Security at FireMon where he focuses on leading-edge cloud security research and implementation. Rich joined FireMon through the acquisition of DisruptOps, a cloud security automation platform based on his research while CEO of Securosis. He has over 25 years of security experience and currently specializes in cloud security and DevSecOps, having started working hands-on in cloud nearly 10 years ago. Prior to founding Securosis and DisruptOps, Rich was a Research Vice President at Gartner on the security team.

FireMon Launches a More Powerful CSPM for Less https://www.firemon.com/firemon-launches-a-more-powerful-cspm-for-less/ Mon, 02 Oct 2023 13:37:35 +0000 https://www.firemon.com/?p=1605

In the ever-evolving landscape of cloud security, businesses are on a perpetual quest for comprehensive yet cost-effective solutions to safeguard their cloud infrastructure. FireMon, a pioneer in network security management solutions, has always been at the forefront of this endeavor. In April, the company launched Cloud Defense Free to mitigate the ‘cloud security tax’ that has burdened many organizations.

Now, FireMon is taking another giant leap by introducing Cloud Defense Pro, offering the industry’s best pricing for a top-tier Cloud Security Posture Management (CSPM) solution. This new offering embodies a more powerful CSPM solution for less, making cloud security more accessible and uncomplicated for all.

FireMon now supports two pricing models. Flat pricing is simple at $200 per account per month. But we know some organizations prefer different pricing models, so FireMon can adapt to resource-based or other pricing models to fit customers’ preferred patterns.

Jody Brazil, CEO of FireMon, expresses the company’s vision, “We’re on a mission to redefine the economics of cloud security. Cloud Defense Pro is a reflection of that mission, presenting an unrivaled CSPM solution that’s not only highly affordable but also feature-rich, empowering businesses to enhance their cloud security seamlessly.”

Cloud Defense Pro is engineered with a gamut of powerful features, including:

  • Real-time Security and Compliance Monitoring: Stay ahead of attackers by detecting problems right when they happen.
  • Real-time, Change-tracked, Identity-attributed Historical Inventory: A dynamic cloud inventory that tracks changes in real-time, attributes who made the change, and keeps a historical timeline. This even includes a history of when checks passed or failed, and everything is searchable.
  • Real-time Threat Detectors: Enhance your Security Information and Event Management (SIEM) with real-time threat detection. These reduce response cycles and can fully integrate with event routing and ChatOps to improve the signal and reduce the noise.
  • Granular Event Filtering, Enrichment, and Routing: Get the most out of your cloud provider security alerts with intelligent event management. Filter out noisy alerts from less-important deployments, send alerts directly to the teams that own the environment, and enrich alerts with information on the affected resources, all in Slack or Teams.
  • Just-In-Time (JIT) Authorizations with Policy Restrictions: Defend against lost, stolen, exposed, or abused credentials with CLI and ChatOps-based authorizations to access cloud deployments. Cloud Defense Pro supports advanced capabilities like multiple approvers, out-of-band approvals, location (IP) based restrictions, and can even lock a session to a user’s current IP address to defend against session token theft.
  • Full ChatOps Support for… EVERYTHING: Foster organizational collaboration and expedite remediations by bridging silos with Slack, Teams, JIRA, and other developer-centric tools. Push information right to the teams who run the deployments without having to email spreadsheets or force them to log into a security tool.
  • Automated or ChatOps-based Remediations: Swiftly close security loopholes with automated or ChatOps-based remediations. Let the platform do the work, or keep control in the hands of the teams that run the deployments.

Rich Mogull, SVP of Cloud Security at FireMon, shed light on what’s on the horizon, “We’re soon launching a Cloud Security Maturity Model dashboard with Key Performance Indicators (KPIs) to help organizations better understand and improve their security program. This is a step further in our long-term vision to provide actionable insights that drive better security outcomes.”

For more information and to stay up-to-date, check out FireMon’s Cloud Defense Product Page.

It’s Time to End the Cloud Security Tax https://www.firemon.com/its-time-to-end-the-cloud-security-tax/ Thu, 20 Apr 2023 19:37:02 +0000 https://firemon2023.wpengine.com/?p=697

FireMon is really giving away basic, enterprise-scale Cloud Security Posture Management (CSPM) for free, no strings attached. Because we can, and because we should. 

Remember when you had to buy antivirus for your shiny new computer before you dared use it? Remember how that felt? There’s a reason Microsoft and Apple started including built-in antimalware into their operating systems. No one likes having to spend more money just for basic safety. 

A few weeks ago we launched FireMon Cloud Defense Free-Tier. Our announcement post not only covered the how, but the why. Yes, it’s really free, it’s really enterprise scale, and there are really no strings (or spam) attached. We think it’s just as important to share our motivations as it is to describe what the platform does. 

When we started building a cloud security product 5 years ago, we recognized the importance of identifying security misconfigurations, but we assumed that capability would eventually be built into all the cloud platforms and vulnerability scanners, just as antimalware is now built into operating systems. To us, the real meat of cloud security was going to be in the harder problems, like remediation, intelligent prioritization, and change management. Finding basic misconfigurations is table stakes for cloud security and so fundamental and straightforward that it really isn’t differentiated.

Flash forward and CSPM still absorbs the majority of cloud security budgets, even when you only want the basics, and companies struggle to find cost-effective options. The cloud providers still charge for their own CSPM. There are Open Source tools, but those take a lot of effort and knowledge to scale when you have to manage more than a cloud account or two. And if you want multi-cloud support, you have to start calling commercial vendors.

This is why we are giving our basic CSPM away for free. There’s no meaningful differentiation in basic misconfiguration scans, and thanks to our architecture, we can run them at very low internal cost. There is no reason people and organizations should have to pay for basic safety, no matter how many deployments they manage.

So what’s our motivation? 

We know some percentage of Free users will move up to our paid plans. We think we have some awesome and highly differentiated capabilities, like a change-tracked inventory and real-time assessments. The more people see those the better, but adding arbitrary time or account limits won’t be what makes you move to a paid plan.

And lastly, we actually care about contributing to the community. Our team has a long history of security community involvement and releasing free resources and tools where we can, both before and during our time at FireMon. We think enterprise scale CSPM for free can help a heck of a lot of people. 

Why not Open Source? Because our platform is kinda complex on the back end. We aren’t looking for free labor to write our code for us, and you can set up Cloud Defense Free-Tier within a few minutes and not worry about maintaining Lambda concurrency or DynamoDB capacity. We can afford to do this, at scale, so there’s no reason to make you do the work. 

Check it out and sign up here. Send us your feedback, good or bad, because that’s what we get in return. Tell your friends. 

We can’t eliminate every security tax, but we just killed off this one. 

Understanding Desired Outcomes: How We Selected the Cloud Defense Free Feature Set https://www.firemon.com/understanding-desired-outcomes-how-we-selected-the-cloud-defense-free-feature-set/ Fri, 14 Apr 2023 19:35:17 +0000 https://firemon2023.wpengine.com/?p=696

When we decided to launch a free version of FireMon Cloud Defense we knew we would have to balance two key challenges:  

  • We already knew our platform could scale, but could we adapt it to economically scale to support large enterprises for the long term? Needless to say we couldn’t simply release it and hope our AWS bills didn’t force us into receivership. 
  • With the economic limitations, could we provide a feature set that delivered real value to users? What would that value be? What problems would it solve?

The reality is that “free” (as in beer) is never totally free, since using anything takes time and effort. We don’t look at the Cloud Defense free tier as crumbs we dribble over the edge of the table; if we’re asking users to sign up, deploy, and engage with the platform, they will only do that if we help them get their jobs done.

(What do we get out of it? Well, we know some percentage will move up to our paid plans, but the free platform will help us get incredibly valuable feedback on what people want from their CSPM and how they use it and let us test out new ideas).    

In future posts we’ll talk about the technology in more depth, but today we want to walk through our process for how we decided which features to package into the free tier. Since we view this version of the platform as its own product, we decided to use the same methodological approach that guides much of our strategy.  

Defining Desired Outcomes 

Here at FireMon we are big fans of the Jobs to be Done framework for product strategy. The name is a bit of a giveaway, but the framework guides product decisions by focusing on what job the customer is trying to do, and what specific outcomes they expect. This is a gross simplification of the JTBD framework, but you get the idea. Rather than focusing on features you focus on a potential customer’s desired outcomes when using a product and then use this to design features. 

After going through an in-depth process that included research, experience, and interviews, we homed in on a draft set of possible desired outcomes for cloud security professionals: 

  • Improve my knowledge and understanding of our cloud posture across our entire cloud footprint. (visibility) 
  • Minimize the likelihood of a cloud misconfiguration being configured across our cloud footprint. (prevention) 
  • Reduce our cloud security and compliance exposures (volume and time) across our cloud footprint in a decentralized environment. (remediation) 
  • Improve our ability to communicate cloud security issues to management and regulators. 
  • Reduce the potential for loss and abuse of IAM access to our cloud deployments.   
  • Improve our ability to prevent, detect, and respond to cloud attacks 
  • Keep our security up to date with changes to cloud services and platforms across multiple providers. 
  • Reduce security friction and overhead on development and cloud teams without increasing our security risks. 
  • Reduce the risk of a security breach when we deploy using containers. 
  • Reduce the time I spend integrating cloud security with my program with standard APIs and data structures. 

Clearly there are a lot of ways to address each of these problems, so the question to us was, which could we provide with the economic constraints of running a free, hosted platform? It’s very different than providing Open Source Software someone needs to deploy and run on their own. We wanted to build something that was as fast and easy to use as a commercial product (okay, hopefully faster and easier than a lot of the products you’ve used in the past). 

Translating Outcomes to Features 

With that list it was time to see what we could adapt or build:  

  • Improve my knowledge and understanding of our cloud posture across our entire cloud footprint. (visibility) 

Posture isn’t necessarily just about security; posture is how things are configured. Since merely providing a list of misconfigurations wouldn’t communicate posture we knew we would need to build a cloud inventory, at enterprise scale, within our cost constraints. Our platform already supported a real-time inventory but it wasn’t cost-effective to scale that to the free tier.  

We decided we could balance costs and still provide value with once-a-day scans and a 30 day inventory history with change tracking. You’ll notice we aren’t talking about security misconfigurations yet, but we’ll get there. After running some cost modeling we realized we could run this at enterprise scale (thousands of monitored accounts) within our budget, so this checked both boxes of providing value while managing costs.   

This actually took quite a bit of engineering effort since the commercial product primarily supported real-time inventory updates instead of periodic scans. However, we packaged those updates with some other changes we wanted to make that improved overall efficiency so they aligned well, making it an easy decision.  

  • Minimize the likelihood of a cloud misconfiguration being configured across our cloud footprint. (prevention) 

Preventing cloud misconfigurations is a much harder problem than detection. Do you block it in the CI/CD pipeline if they are using Infrastructure as Code? What about manual changes? How do you handle the workflows without adding too much friction or breaking things?   

We knew we couldn’t implement full prevention within a free product at this time. Our current platform handles this via automation, which would be too costly to run at scale for free. However, we have some ideas that might work and are now in our development backlog. 

  • Reduce our cloud security and compliance exposures (volume and time) across our cloud footprint in a decentralized environment. (remediation) 

This has been the bread and butter of Cloud Defense since the first versions of the product. While automated remediation wouldn’t work for our free tier (again, balancing cost and complexity) there was no reason not to run our full suite of security checks.  

But getting a long list of potential security issues doesn’t necessarily help you remediate. One of the other core capabilities of our product is deep ChatOps integration. Out of the box we supported Slack and Teams, but Teams would require more support due to… well… it’s Teams. So we made the decision to fully enable our granular (per-account or project) Slack notifications since there was no material cost to us, and a lot of value to users.  

  • Improve our ability to communicate cloud security issues to management and regulators. 

Our internal costs to run a compliance report are negligible, even for large environments. For compliance, once-a-day assessments more than meet this desired outcome. We did have to put in some new development effort to support better PDF reports for larger deployments (e.g., hundreds of accounts), but we needed that for our commercial customers anyway.

  • Keep our security up to date with changes to cloud services and platforms across multiple providers.

Since our free and commercial products use the same library of checks that we are constantly updating, this was enabled out of the box. Our initial engineering effort focused on cost-optimizations for AWS so we decided to launch without Azure or GCP support at the start. Azure is just about ready so users will eventually get full multi-cloud support for free. 

  • Reduce the potential for loss and abuse of IAM access to our cloud deployments.

We have a pretty awesome feature called Authorization Control that materially improves IAM security, but the economics just didn’t work to put it in the free product.  

  • Improve our ability to prevent, detect, and respond to cloud attacks

Our commercial product supports real-time threat detection, but this was another feature where the economics just didn’t line up for a free platform, due to the high volume of activity we have to monitor on a real-time basis.

  • Reduce security friction and overhead on development and cloud teams without increasing our security risks. 
  • Reduce the risk of a security breach when we deploy using containers. 
  • Reduce the time I spend integrating cloud security with my program with standard APIs and data structures.

These outcomes all added costs and/or complexity that we didn’t feel we could adequately address in the free product due to either infrastructure, support, or development costs.  

Packaging the Feature Set 

The desired outcomes, combined with our cost analysis, helped us decide which features to package: 

  • Once-a-day scans 
  • Resource inventory with a 30 day history 
  • The full suite of security checks 
  • Foundational compliance reports 
  • Slack integration 
  • AWS for now, Azure and GCP as we update the platform 

These weren’t always easy decisions. For example, even a 30 day inventory comes with costs, but we didn’t feel just shipping misconfiguration reports would adequately address a user’s visibility needs. We also decided that limiting our security checks or requiring someone to move to the commercial product for compliance reports would also result in a product that didn’t really provide a sufficient outcome.  

This collection addresses the core security visibility desired outcomes, and the communications outcomes to improve both reporting and reduce remediation timelines. And we know there’s value here since these are the desired outcomes people first built cloud security OSS tools to address, and were the origins of the entire Cloud Security Posture Management market. 

The JTBD framework really helped us keep our focus on improving customer outcomes, rather than just peeling off some features that, together, don’t really help anyone. We think the end result is a free platform that delivers real value while being so cost-efficient we can support it for the long haul. 

Check it out, and tell us what you think. FireMon Cloud Defense is a work in progress and a great way for us to improve our ability to help cloud security professionals get their jobs done. 
