When MFA Isn’t Enough https://www.firemon.com/when-mfa-isnt-enough/ Tue, 20 Sep 2022 18:49:33 +0000

Rule number one in cloud security is, “thou shalt use MFA at all times.” Why? Well, when you move to public cloud computing you essentially take all your administrative interfaces, consolidate them into a single portal or API, and then… put them on the Internet, protected with a username, password, and (maybe) MFA. Even federation doesn’t change that fundamental exposure.

But it turns out even MFA isn’t always enough, as we’ve seen in multiple major breaches, including the one at Uber.

This is one of the most critical differences to understand between cloud and traditional infrastructure, and it’s why you keep hearing the phrase “identity is the new perimeter.” Before cloud, we managed our datacenters from inside private networks (or through controlled access points like VPNs and jump boxes). With cloud, all of that is on the Internet by default and tough to lock down, especially now that we more often support remote employees and administrators.

MFA is an effective way to protect user and administrator authentication. We have a ton of options, ranging from moderately secure (messaging-based MFA) to really darn secure (hardware keys). But the problem even with MFA is that users have persistent access with persistent privileges. If you give a user a role, they can use those permissions anytime after authenticating. Attackers know this and have developed a range of techniques to gain authenticated access. They:

  • abuse static credentials, especially if there is no MFA in use.
  • break past some MFA. For example, they SIM swap to intercept text messages headed to phones.
  • socially engineer administrators to disable MFA or reset to a device under their control.
  • socially engineer users to snag an MFA code.
  • steal session credentials from a user’s system after they have already authenticated, then use them from a system under the attacker’s control.

Once an attacker obtains access, they then start exploring privileges and eventually use those for malicious activity.

Least Privilege is a Lie

The core problem is that we provide permissions based not on what someone needs to do at a point in time, but on what they might need to do in the future. Users, especially administrators, carry the maximum set of permissions for all possible actions. Every security standard on the planet says IAM should be default deny and least privilege, but that’s kind of a lie, since the set of “least privileges” is defined by the maximal set of privileges a user will need at some time, not all the time.

Even when we allow users to switch roles and use different permissions for different sessions, they still nearly always have access to those roles whenever they want. In reality, least privilege should be time-bound, not just user-bound. Historically we manage permissions by assigning roles, and roles have permissions at a scope: “You are an admin on these 5 accounts and a regular user on the other 98.” Often we even assign a user multiple roles, which either combine their permissions or let them swap between roles when doing different things.

Imagine how much harder it would be for an attacker if we eliminated persistent privileges. Instead of a user authenticating and having access to all their permissions, they would start with a really small set of permissions and have to escalate for anything potentially harmful.

Add Security with Dynamic Authorizations

I’m not proposing we do away with MFA or other authentication-level security. Those are still vitally important, but sometimes they aren’t enough. This is especially true in cloud security due to the greater degree of inherent Internet exposure. But cloud also has advantages, including session-based federation and fine-grained authorizations (down to individual API calls).

Instead of providing users with persistent access to all the permissions they might ever need, we provide them with more-limited access, and they then request access to escalated privileges. This isn’t a new concept; it’s what privileged user management products have done for years. But those products have often still relied on clunky techniques like proxied sessions and rotating temporary passwords. Cloud is inherently more flexible since the native IAM models are session-based by nature and privileges are defined in policy documents (typically written in JSON).

This is how tools like our FireMon Authorization Control work. Or take a look at Netflix ConsoleMe for an open source example. Users request access when they need it, and then the platforms use policies to determine what’s needed for approval. When those conditions are met, such as sign-offs from a manager or coworker, the user is authorized to use a role for a session. The platforms can even insert conditions, such as “only allow this session from the IP address that requested it”. These requests are typically out-of-band and use side channels like ChatOps for approvals, making the process purposely noisy, especially for sensitive production access.
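To make this concrete, here’s a minimal sketch, using plain boto3 rather than any particular product, of what an approved, session-scoped escalation can look like in AWS. The role ARN and IP address are placeholders, and the inline session policy can only narrow what the role itself already allows:

```python
# A minimal sketch (not FireMon's implementation) of granting a
# time-bound, IP-pinned session after an out-of-band approval.
import json
import boto3

sts = boto3.client("sts")

# Session policy: the session's effective permissions are the
# intersection of the role's own policies and this document, so the
# statement below only takes effect from the approved IP address.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "*",
        "Resource": "*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.7/32"}},  # requester's IP
    }],
}

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/prod-admin",  # placeholder role
    RoleSessionName="approved-escalation-42",
    Policy=json.dumps(session_policy),
    DurationSeconds=3600,  # privileges evaporate with the session
)["Credentials"]
```

The temporary credentials expire after an hour, and any call made from a different address is denied, which is exactly the “only allow this session from the IP address that requested it” condition described above.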

Authorization-level security actually makes life easier for everyone, from developers and users to security administrators. You don’t need to know the full set of potential permissions ahead of time. Instead, users ask for what they need when they need it. Policies determine the lowest-friction path to providing that access without compromising security. Tools like ChatOps mean a developer can request access to a new account and get an answer back within seconds, as opposed to having to submit a request into a ticketing system that could take days or weeks.

By adding security to authorization we reduce the impact of stolen credentials and hacked authentications, all while reducing friction and overhead.

Prevent Ransomware with Proper Policy Hygiene https://www.firemon.com/prevent-ransomware-with-proper-policy-hygiene/ Thu, 15 Sep 2022 18:42:29 +0000

Ransomware attacks typically begin with phishing, credential hacks, or taking advantage of open vulnerabilities. Once the bad actor is in, they rummage around looking for their jackpot: a hub of data to hold hostage. Maintaining good policy hygiene and access control is paramount to preventing and stopping the bad guys before they get to your data.

Remember the Target hack back in 2013? Hackers stole credentials from an HVAC contractor, gained access to the network, pinged around, found the PCI network and injected malware into point of sale devices at every Target in America. Overly permissive access to the network made this possible. Having a clean set of firewall policies and a segmented network would have prevented the bad actor from ever gaining access past what the original victim, the HVAC contractor, required.

Access within an organization should be relegated to just what is necessary to meet the needs of the business: nothing more, nothing less. This is good policy hygiene. Unnecessary complexity, caused by things like duplicate, redundant, and shadow rules, increases the probability of misconfigurations, human error, and risk. Bad actors rely on humans to make these mistakes, creating paths to use as attack vectors, and they are often not disappointed.

Unnecessary complexity is often a byproduct of day-to-day operations. A port is opened for RDP (remote desktop protocol) for troubleshooting, but is never closed. Access is granted for temporary communication between devices, but is left open as meetings and other priorities fill the day. A rule is created for a resource and not removed once it is decommissioned. The scenarios are endless but the results are the same: rules are created, then forgotten, resulting in policy clutter that causes inadvertent access and exposes security gaps for cyber criminals to leverage. When working with thousands of policies among hundreds of devices and platforms, it is nearly impossible to properly manage these policies manually.

FireMon provides a solution to this problem. By centralizing all of your security policy enforcement data into a single pane of glass, a rule repository, FireMon allows you to manage policies across all of your devices from ground to cloud. It integrates seamlessly with hundreds of vendors, including Splunk, AWS, Swimlane, and Qualys, to consolidate policy management and visibility. With FireMon, you have one place, instead of five, ten, or fifteen different platforms, to investigate a policy, which drastically increases the efficiency of your team. On a first run, FireMon typically finds that 30-50% of rules in active policies are unused, along with pervasive overly permissive access throughout our clients’ networks.

FireMon starts at ground zero with an assessment of what is currently being allowed and an access control list (ACL), then detects any deviation from that baseline in the wrong direction. FireMon looks for access parameters and certain access routes or vectors, and alerts on abnormalities in real time. FireMon consolidates policies from many other technologies and can disable rules for each technology from one dashboard, raising the total value of the combined security solutions and resulting in a larger return on your total security investment.

When changes are made to your policy environment you should immediately ask, “Did I expect this change? Did I analyze the change for impact on security posture, compliance posture, and business operations?” When access is granted, it should be revalidated after a set interval. That revalidation needs to be made through a lens of business justification, asking, “Do we still have a need for that access? Is the business justification for that access still valid? Are we granting only what is necessary to meet the needs of the business?” Typically, the access granted is greater than what is necessary, which gives way to overly permissive rules. It is imperative these policies are managed to maintain a strong security posture and thwart ransomware attempts.

FireMon is here to help. Reach out to learn more.

What You Need to Know About Ransomware in AWS https://www.firemon.com/what-you-need-to-know-about-ransomware-in-aws/ Fri, 05 Aug 2022 18:18:27 +0000

As bad an issue as ransomware is within data centers, I was a bit skeptical that it was much of a problem in the cloud. I personally hadn’t run into any incidents, and I started to think it was more theoretical than anything else. It turns out I was a little wrong. Okay, totally wrong. Not only is it a bigger problem than I thought, but the attack pattern was different than I expected.

At the AWS re:Inforce conference, I attended an awesome session led by Kyle Dickinson, Megan O’Neil, and Karthik Ram. The session was totally packed and the room minder turned away dozens of people. This post is a bit of a recap of the session, combined with my own experiences and recommendations. Any errors and omissions are mine, not theirs.

Is ransomware a problem in AWS?

Yep. It turns out to be more of a problem than I originally thought. Real customers are being affected; it isn’t merely theoretical.

How does it work?

I’ll cover the initial exploit vector in the next question. There are 4 possible techniques in use, but only 2 of them are really viable and commonly seen:

  • A traditional ransomware attack against instances in AWS. The attacker compromises an instance (often via phishing a user/admin, not always direct compromise), then installs their malware to encrypt the data and spread to other reachable instances. This is really no different than ransomware in a data center since it doesn’t involve anything cloud-specific.
  • The attacker copies data out of an S3 bucket and then deletes the original data. This is the most commonly seen cloud native ransomware on AWS.
  • The attacker encrypts S3 data using a KMS key under their control. This is more theoretical than real, due to multiple factors. It’s a lot easier to just delete an object/bucket than it is to retroactively encrypt one.
  • The attacker does something to data in another storage service to lock/delete the data. I’m being nebulous because this isn’t seen, and most of those services have internal limitations and built-in resiliency that make ransomware hard to pull off.

In summary: S3 is the main target and the attackers copy then delete the data. Instances/servers can also be a target of the same malware used to attack data centers. There are a few theoretical attacks that aren’t really seen in the wild.

How do the attackers get in?

Exposed credentials. Nearly always static access keys, but possibly keys obtained from a compromised instance (via the metadata service). You know, pretty much how nearly all cloud-native attacks work.
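For context on the metadata-service angle: on instances still running IMDSv1, anyone who can make a request from the box (including via SSRF) can read the role’s temporary keys. A hedged sketch using the documented endpoints, for illustration only:

```python
# Why instance credentials leak: on IMDSv1, one unauthenticated GET
# from the instance returns the role's temporary keys.
import requests

IMDS = "http://169.254.169.254/latest"

# IMDSv1 (legacy): no token required.
role = requests.get(f"{IMDS}/meta-data/iam/security-credentials/").text.strip()
keys = requests.get(f"{IMDS}/meta-data/iam/security-credentials/{role}").json()
# keys now holds AccessKeyId / SecretAccessKey / Token for the instance role.

# IMDSv2: a session token must first be fetched with a PUT, which defeats
# most SSRF-style theft since attackers rarely control the verb and headers.
token = requests.put(
    f"{IMDS}/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
).text
creds = requests.get(
    f"{IMDS}/meta-data/iam/security-credentials/{role}",
    headers={"X-aws-ec2-metadata-token": token},
).json()
```

Requiring IMDSv2 on your instances is one of the cheapest ways to shrink this particular exposure.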

What’s the attack sequence?

I’ll focus on the S3 scenario since that’s the cloud-native one we care about most.

  1. Attacker obtains credentials.
  2. Attacker uses credentials for reconnaissance to determine allowed API calls and identify resources they can access.
  3. Attacker discovers they have S3 write permissions and list/read access to identify buckets. Note that the attacker may not have List privileges but can obtain bucket names from other sources, like DNS or GitHub, though this is much less likely.
  4. Attacker copies/moves data to another location, which isn’t necessarily in AWS.
  5. The attacker deletes the source objects/files.
  6. Attacker uploads a ransom note (or emails).

Since this is all automated, the process can start within a minute of a credential exposure.

How can I detect the attack?

Well, if you can, feel free to skip ahead to prevention…

The attacker will usually leave a note with contact information so you can send them Bitcoin, which is convenient. But odds are most of you will want to identify the problem before that. Let’s walk through the attack sequence to see where we can pick things up.

First, you will want to enable more in-depth monitoring of your sensitive buckets. Since this post is already running longer than I’d like, I will skip over all the ins and outs of how to identify and manage those buckets and instead focus on a few key sources to consider. For cost reasons, don’t expect to turn these on for everything:

  • CloudTrail, of course.
  • CloudTrail Data Events for any buckets you care about. This costs extra (a sketch for enabling these on a single bucket follows this list).
  • GuardDuty.
  • Optional: Security Hub. This is the best way to aggregate GuardDuty and other AWS security services across all your accounts.
  • Maybe: S3 Server Access Logs. If you have CloudTrail Data Events you get most of what you would want. But S3 logs are free to create (you just pay for storage) and do pick up a few events that CloudTrail might miss (e.g., failed authentications). They also take hours to arrive, so they aren’t useful in a live-fire incident. Read this for the differences: https://docs.aws.amazon.com/AmazonS3/latest/userguide/logging-with-S3.html
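As an example of scoping that cost, here’s a minimal boto3 sketch that enables CloudTrail Data Events for a single sensitive bucket rather than account-wide. The trail and bucket names are placeholders, and note that this call overwrites the trail’s existing selectors:

```python
# Enable S3 Data Events on one sensitive bucket only (they are priced
# per event, so scope them rather than turning them on everywhere).
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="my-org-trail",  # placeholder: an existing trail
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            # Trailing slash = all objects in this bucket, and only this bucket.
            "Values": ["arn:aws:s3:::my-sensitive-bucket/"],
        }],
    }],
)
```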

These numbers correspond to the attack sequence listed above:

  1. Self or third-party identification of an exposed credential. Usually via scanning common repositories, like GitHub. AWS once found one of mine and emailed me. Oopsie.
  2. Your account credential recon detections will work here. Some options include:
    1. GuardDuty findings for instance credential exfiltration. However, these have about a 20-minute delay, and there are evasion techniques.
    2. The GetCallerIdentity API call isn’t always malicious, but it isn’t a call you should see a lot of in production accounts.
    3. GetAccountAuthorizationDetails should trigger an alarm every time (a sketch of exactly this detection follows this list).
    4. Multiple failed API calls from a single IAM entity.
  3. Now we start focusing on detections that indicate the attacker is zeroing in on S3. You’ll likely notice that early detection in these phases can be difficult due to the noise, but keep in mind these will be more viable in situations like production accounts managed via CI/CD with limited human access. Heck, this might motivate you to use more cloud-native patterns.
    1. The GuardDuty S3 findings for Discovery events, which have to be enabled on top of just turning on GuardDuty, depending on how your account and organization are set up. See https://docs.aws.amazon.com/guardduty/latest/ug/s3-protection.html for more details.
    2. Filter for failed Read and List Management and Data Events on the S3 service. You might catch the attacker looking around. You could do this in your SIEM but it’s also easy to build CloudWatch Metrics Filters for these.
  4. The attacker is reading the objects and making copies. This may intertwine with the next phase if they read (copy) then delete on each object. The detections for phases 3 and 5 also apply here.
  5. This is the “uh oh” stage. The attacker isn’t merely looking around, they are executing the attack and deleting the copied data.
    1. The GuardDuty exfiltration/impact S3 findings will kick in. Remember, it takes at least 20 minutes to trigger and depending on the number of objects this could be a late indicator.
    2. CloudTrail Insights, if you use them, will alert on the large number of Write events used to move the data.
    3. You can build your own detections for a large number of delete calls. Depending on your environment and normal activity patterns, this could be a low number and trigger faster than GuardDuty. Your SIEM and CloudWatch Metrics Filters are good options.
    4. Mature organizations can seed accounts with canary buckets/objects and alert on any operations touching those buckets.
    5. While it’s a less-common attack pattern, you can alert on use of a KMS key from outside your account.
  6. If you don’t detect the attack until here, call law enforcement and engage the AWS customer incident response team. https://aws.amazon.com/blogs/security/welcoming-the-aws-customer-incident-response-team/
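To make one of these concrete, here’s a hedged sketch of the metric-filter detection mentioned in phase 2: a CloudWatch Logs metric filter plus an alarm on GetAccountAuthorizationDetails, which is rare enough in most production accounts to alert on every occurrence. The log group, names, and namespace are placeholders, and it assumes your CloudTrail already delivers to CloudWatch Logs:

```python
# Alarm on GetAccountAuthorizationDetails, a classic IAM recon call.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # placeholder log group
    filterName="iam-recon-get-account-auth-details",
    filterPattern='{ $.eventName = "GetAccountAuthorizationDetails" }',
    metricTransformations=[{
        "metricName": "IamReconCalls",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="iam-recon-detected",
    MetricName="IamReconCalls",
    Namespace="Security",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    # AlarmActions=["arn:aws:sns:..."],  # wire this to your paging topic
)
```

The same filter-plus-alarm pattern works for the high-volume delete detection in phase 5; only the filter pattern and threshold change.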

How can I prevent the attack?

To succeed, the attacker needs three things:

  • Access to credentials
  • Permissions to read and write in S3
  • The ability to delete objects that can’t be recovered

The first layer of prevention is locking down IAM, then using the built-in AWS tools for resiliency. That’s easy to say and hard to do, but here is an S3-focused checklist, and I’ll try to leave out most of the common hygiene/controls:

  • Don’t allow IAM users, who come with static access keys, at all. If that isn’t possible, definitely use your tooling to identify any IAM users with S3 delete permissions.
  • Require MFA for SSO/federated users. Always and forever.
  • Have administrators escalate to a different IAM role when they need to perform delete operations. You can even fully separate read and delete permissions into separate roles.
  • If an instance needs access to S3, make sure you scope the permissions as tightly as possible to the minimal required API calls to the minimal resources.
  • Use a VPC endpoint to access the bucket, and layer on a resource policy that only allows delete from the source VPC. Then the attacker can’t use the credentials outside that VPC.
  • Turn on versioning, AWS Backup, and/or bucket replication. All of these will help ensure you can’t lose your data. Well, unless you REALLY mess up your IAM policies and let the attacker have at it. Some of these need to be enabled when you create the bucket, so you might need a migration operation to pull it off (see the sketch after this list).
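Here’s a minimal sketch of two of those controls together: versioning, and a bucket policy that denies deletes arriving from outside an approved VPC endpoint. The bucket name and endpoint ID are placeholders, and a real policy will likely carry more statements:

```python
# Two resiliency controls from the checklist above: versioning (deletes
# become recoverable delete markers) and a deny on deletes from outside
# an approved VPC endpoint.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-sensitive-bucket"  # placeholder

s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteOutsideVpce",
        "Effect": "Deny",
        "Principal": "*",
        # DeleteObjectVersion matters: versioning alone won't stop an
        # attacker who can purge the old versions too.
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc123"}},  # placeholder
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```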

You’ll notice I’m skipping Block Public Access. That’s a great feature, but many orgs struggle to implement it at scale since they need some public buckets and it won’t help with attacks using exposed credentials.

This all takes effort and adds cost, so I recommend focusing on the buckets that really matter at the start. There are some more advanced strategies, especially if you are operating larger environments, that can’t fit in a post, but drop me a line if you want to talk about them.

If you only had time for 2 things what would they be?

Assuming I know which buckets matter, I’d turn on versioning and replication. Then I would require privilege escalation for any S3 delete operations.

How can FireMon help?

We have a new IAM product in beta for just-in-time privileges that should be out soon. We also offer posture checks and threat detectors in DisruptOps to identify risky buckets and alert on malicious activity. Drop me a line if you want to talk about them, or even if you just want some general advice on the AWS options I mentioned in the post.

What new things did you learn in the re:Inforce session?

I didn’t know that S3 ransomware was so common. I also didn’t know that copy-then-delete was the preferred attack technique. I thought it would use KMS encryption, and it now makes total sense why that approach is more theoretical and rare. I was familiar with the detectors and defenses, but the AWS speakers did a wonderful job tying them all together in a very clear and usable way. The talk definitely exceeded my expectations.

Ransomware Attacks – The new normal? https://www.firemon.com/ransomware-attacks-the-new-normal/ Tue, 07 Jun 2022 18:15:48 +0000

Once again, the world has been hit with another ransomware attack. Similar to the WannaCry ransomware cyberattack last month, Petya is causing major pain among thousands of users, this time crippling banks and infrastructure in what cybersecurity experts have called one of the most devastating digital intrusions of its type. In fact, not only are we seeing an increase in the frequency and sophistication of threats, but security data is growing in volume and complexity, data assembly is labor- and time-intensive, and infrastructure scale and complexity make it hard to protect the organization.

These issues lead to:

  • Difficulty in identifying threats and detecting a breach
  • Increased cost of threat detection and management
  • Inability to respond to constant attacks
  • Slow translation of threats into security policy changes
  • Increased risk of compromise (e.g., data loss, data breach)
  • Skilled employees focused on low-value operational tasks
  • Outages – lost revenue, reduced business productivity and lost opportunity to improve security
  • Data loss, tarnished reputation, cleanup costs and/or breach disclosure

If ransomware attacks are becoming commonplace, organizations need tools for reducing their security risk and increasing the speed of their threat response.

Reducing Security Risk

FireMon’s Risk Analyzer is a risk vulnerability management tool that prioritizes remediation efforts. It overlays vulnerability data on network security configurations to identify contextual risk (e.g., exploitable hosts), and scores both vulnerabilities and firewall rules by the level of risk they expose so remediation efforts can be prioritized.

It’s all about being proactive

Cyberattacks are inevitable. The impacts don’t have to be. If an organization is proactive about its security practices, the impacts from attacks like Petya and WannaCry can be marginalized. Using tools such as Risk Analyzer for contextual risk assessment to find network path vulnerabilities is key to whether or not an organization will be prepared for next time. And there will be a next time.

Pragmatic Steps Toward Zero Trust https://www.firemon.com/pragmatic-steps-toward-zero-trust/ Tue, 26 Apr 2022 18:13:27 +0000

If you ask most security professionals to define zero trust, you’ll get an eye roll and an exasperated sigh. To many, it’s been little more than a marketing exercise—and let’s be honest: a lot of what we’ve seen and heard about zero trust over the past decade has been more fluff than substance. The term has been so loosely defined that countless cybersecurity vendors have, at one point or another, claimed to offer some sort of zero trust solution.

It’s easy to see why it has such a bad rep.

Today, though, zero trust has become more tangible. Thanks to NIST SP 800-207 and other concrete documentation and reference architectures, zero trust has been given shape and meaning. And just as importantly, technology has started to catch up with the vision. A pure zero trust architecture may still be out of reach for all but the largest, most well-funded organizations, but that doesn’t mean we can’t all take steps in that direction.

Realistic Goals

The most important step in any journey is the first one, and moving toward zero trust is no different. The first step toward zero trust is planning where you want your journey to end up. The best way to think about that end state is within the context of making access control as granular as possible. That’s really the heart of zero trust. Per 800-207, the goal of any zero trust program should be “to prevent unauthorized access to data and services coupled with making the access control enforcement as granular as possible.”

To that end, we’re going to look at two critical areas of network connectivity: server-to-server connections and user resource access. We’re also going to briefly look at setting up a zero trust pilot program to help overcome the “boiling the ocean” feeling that may come with taking on a project this broad in scope.

Server to Server Security

To say that today’s enterprise networks are complex is to stretch the limits of the word “understatement.” Multiple cloud instances run alongside on-premises network hardware comprising devices from a wide range of vendors, all governed by millions of complex policies.

In this environment, excessive access is the standard for policy creation, especially for firewall rules. Nobody wants to be the guy who submarined a project launch or upgrade by accidentally blocking access to a critical service. The obvious problem here is that a network full of overly-broad access is as far from zero trust as possible.

So how do we tackle this? The first step is to continually monitor your environment for any rules granting excessive access and set up guardrails to make sure new rules aren’t too broad.
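What does such a guardrail look like in practice? As one hedged, AWS-flavored example (your environment may be firewall rules rather than security groups), a few lines of boto3 can flag any rule open to the entire Internet:

```python
# Flag security-group rules open to the whole Internet; the same idea
# applies to firewall policies in any environment.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg["IpPermissions"]:
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                # FromPort/ToPort are absent for "all traffic" rules.
                ports = f"{perm.get('FromPort', 'all')}-{perm.get('ToPort', 'all')}"
                print(f"Overly broad: {sg['GroupId']} ({sg['GroupName']}) "
                      f"ports {ports} open to 0.0.0.0/0")
```

Run on a schedule, a check like this becomes the monitoring half of the guardrail; the other half is blocking such rules at creation time.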

Along with that, obviously, ensure you’re monitoring logs so you can pick out any aberrant behavior, including conducting two-factor authentication (TFA) log analysis to monitor access privileges.

The end goal is to restrict access as much as possible without interrupting business or slowing down the speed of operations.

User Resource Access

Your employees, customers, and other users are no longer in any single location—and let’s be honest, they haven’t been for quite some time; the pandemic just accelerated the rate of change. The reality today is that a significant portion—in many cases, the majority—of access requests to your critical infrastructure are coming from untrusted networks. Home networks, coffee shops, vacation homes… the perimeter is everywhere and growing. Personal devices are increasingly being used for business operations as well, opening a host of new potential attack vectors and vulnerabilities.

The first step to solving this issue is to adopt a federated access program. Having a consistent set of policies, practices, and protocols in place, regardless of what resource is being accessed or where the access request is coming from, is a key step toward zero trust.

These must be implemented intelligently, and with input across the operation—a poorly-implemented access program can both be ineffective at achieving its goals while also resulting in lost productivity as employees struggle to fit existing workflows into new systems.

However, a properly-implemented federated access management program can streamline access while tightening security. And when combined with multi-factor authentication (MFA), goes a long way toward eliminating unauthorized access and increasing the granularity of access control.

Another effective way to secure user access to enterprise resources is by utilizing the SASE capabilities that many organizations already have built into their existing firewalls. Obviously, setting up SASE from the ground up can be a costly, complex endeavor. However, there are ways to set up the basics without too much effort, as we’ll discuss below.

Zero Trust Pilot Program

Zero trust can appear to be an impossible dream, particularly for those organizations who would benefit the most. Large organizations have thousands of users and servers, and a loss of productivity, even momentary, can bring incredible financial losses.

Further, very few security and IT professionals have experience with many, let alone all, zero trust technologies and workflows. If new systems and workflows aren’t set up properly or otherwise negatively impact productivity, there’s a risk of a ripple effect: not only will there be immediate repercussions, but dev teams may also go around security in the future, seeing it as a roadblock.

How do we prevent these issues? The simplest way is to avoid them in the first place. Don’t bite off more than you can chew: start with a zero trust pilot program. Pick a business area with relatively simple operations and start there. Ideally, this would be an area with a single (or very few) applications or services. Whenever possible, use technology that you already own—you may be surprised at the amount of zero trust capability that already exists in your environment.

For example, in the previous section we touched upon the benefits of SASE, which doesn’t always require re-architecting the network to achieve. Some modern NGFWs have some type of SASE functionality built in, which gives enterprises the ability to set up policy-based user access restrictions without additional hardware outlays. Fortinet, for example, offers native capabilities without any additional hardware or subscription cost. Additionally, look at your cloud services to see what capabilities are there, particularly in the area of identity and access management.

Conclusion

Zero trust can appear daunting—and it is if your aim is to reach a pure ZTA. But that doesn’t make it impossible, and it also doesn’t minimize the value of simply going as far as realistically possible. For many organizations, the additional security of a pure ZTA simply isn’t worth the added cost and complexity of its implementation right now—and may not be for quite some time.

The right approach is to evaluate the zero trust capabilities that are within your organization’s reach and move strategically. Take things one step at a time, and don’t let the sheer scale of the possibilities stop you from taking pragmatic steps that will benefit your security immediately.

Five Tips to Ensure Consistent Security Hygiene https://www.firemon.com/five-tips-to-ensure-consistent-security-hygiene/ Thu, 07 Apr 2022 18:12:08 +0000

Security’s focus has always been on protecting against complicated, advanced attacks. The battle between advanced attackers and awesome defenders makes for a great story. You know, good vs. evil.

Many of you have likely been preparing for a cyber-attack from Russia, given their war with Ukraine. The US Government has told us to expect an attack. I’m not sure if Russia will launch significant cyber attacks against the West, but if they do, what kind of attacks will they launch? I’d posit that the answer is the simplest attack that will get the job done. Every nation-state with significant cyber capabilities is sitting on dozens (if not more) of zero-day attacks. But why would they burn a sophisticated attack unless they are forced?

Logical attackers look for the path of least resistance to gain a foothold in your environment. That means taking advantage of the weakest link, and that’s usually simple stuff like misconfigurations and other basic security errors. When someone asks me the best way to protect against these attacks, I typically respond by telling them to do the simple stuff well. You know, blocking and tackling, to use a football analogy.

My partner (and DisruptOps co-founder) Rich Mogull has always said that “simple doesn’t scale,” and he’s right. Making a firewall change on two devices isn’t difficult. Enforcing firewall policies on hundreds of devices across the globe is very, very difficult. And doing it right every single time makes it even more challenging.

So let’s talk about solutions to doing the simple stuff well and consistently. Surprisingly enough, it involves a combination of people, process, and technology. And we focus heavily on the process because that’s the best way to achieve consistency. If everyone knows what they are supposed to do and you have the means to track their activities, you tend to get consistent results.

These five tips should provide you with a map to improve security hygiene, as well as your overall security posture.

Tip 1: Get Alignment on Policy

If you don’t know where you are going, you have no idea when you will get there—or even where “there” is. So the first tip is to set your hygiene policy so you know what success looks like. Whether setting a goal to patch within a week or blocking outbound connectivity to specific geographies, having defined and documented policies will ensure everyone is on the same page — before you start blocking stuff.

Tip 2: Expand Visibility

I suspect you’ve heard the adage that if you can’t see it, you can’t manage it. It happens to be true. Once everyone is aligned on the policies, you’ll need to figure out what’s in the environment. To be clear, you should know a bit already. Like your locations and the infrastructure already installed. Maybe you even have a CMDB that (allegedly) has asset information. That’s a start.

Whatever asset list and posture information you have is likely out of date, especially with cloud and SaaS proliferating. So you need a defined process and tooling to ensure you understand the entire technology estate, both on-prem and in the cloud.

Tip 3: Manage Changes

Another critical process to get implemented is change control. Who makes what changes when? This process should be thought through before you learn about Log4j (or the next widespread vulnerability). The key to consistent and successful operations is ensuring that everyone knows their job. In an all-hands-on-deck situation, the last thing you need is uncertainty about roles and responsibilities.

Are there approvals required to make changes? Do the approvers have an RTO (response time objective)? Are there situations where it’s urgent enough to make the change without approval? How much downtime is acceptable? These are the kinds of situations the change control process needs to handle.

Also, be sure to audit who is making changes as part of the process. You’ll want to know who screwed up in the event of a faulty change (I’m only half kidding on that one). And in the event an admin device is compromised, any changes made by the attacker will be logged so you can roll them back quickly.

Tip 4: Continuous Monitoring

At this point, you’ve probably had enough of processes: now you need to do things. That’s the fun part, right? The key to hygiene is monitoring. Just as you go to the dentist twice a year to check for cavities, you want to watch your infrastructure to make sure everything complies with the policies.

That means checking the devices for configuration changes. As mentioned above, a misconfiguration tends to be the path of least resistance for attackers, so you’ll want to make sure you know if/when a config is changed.

You’ll also want to monitor for available patches. You may wait until the next patch window to apply the patches, but you want to know which devices need to be updated and the relative urgency of the patch so you can effectively plan the work.

Notice that I said “continuous” above, but that is a relative term. Should you be checking configurations every minute? Or every hour? Or every day? It depends, but in general more monitoring is better than less. The best option is actually to look for changes in your log streams. For example, you can set an alert when a change is made to a security group in AWS or a firewall rule in Panorama (if you use Palo Alto firewalls). That trigger can ensure you know about a change as soon as it happens, and if a malicious actor made the change – you can bet that every minute counts.
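Here’s a minimal sketch of the AWS half of that example: an EventBridge rule that fires the moment a security group changes and notifies an SNS topic. It assumes CloudTrail is enabled, the topic ARN is a placeholder, and the topic’s policy must allow EventBridge to publish to it:

```python
# Alert the moment a security group rule changes, via EventBridge + SNS.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="security-group-change",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventName": [
            "AuthorizeSecurityGroupIngress",
            "AuthorizeSecurityGroupEgress",
            "RevokeSecurityGroupIngress",
            "RevokeSecurityGroupEgress",
        ]},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="security-group-change",
    Targets=[{"Id": "notify",
              "Arn": "arn:aws:sns:us-east-1:123456789012:sec-alerts"}],  # placeholder
)
```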

Tip 5: Automate (almost) Everything

We are big fans of automation. In fact, it’s a core aspect of all our products. Remember that “simple doesn’t scale,” so as your environment gets bigger and more complicated, embracing automation is absolutely critical. Given the security skills gap and challenge of finding and retaining security staff, the more the machines can do, the better.

You can automate applying fixes to your devices, and you can automate rolling back unauthorized changes. You can let the machines monitor information sources that tell you about patches, and those same machines can gather a bunch of information about changes to pinpoint out-of-cycle or unauthorized changes.

Our friends at AWS believe that any time a human changes their infrastructure, it’s a failure to automate. That’s aspirational for the vast majority of companies, but it’s a good vision. As you burn in your processes and see what rote tasks your people are doing over and over again, automate them. There is (in some cases, understandable) hesitation to automate too much. Don’t automate faster than you’re comfortable with, but by the same token don’t let a fear of change hamstring your organization.

To wrap up, you *don’t* want to be the path of least resistance for the attackers. Your security posture will be significantly stronger if you can consistently ensure security hygiene from an operational standpoint. We aren’t saying you’ll be impervious to attack, but you’ll make the attackers work for it.

Integrate Anywhere: API-First Agile Approach | The 5 Critical Success Factors to Agile NSPM https://www.firemon.com/integrate-anywhere-5-critical-success-factors-agile-nspm/ Thu, 08 Oct 2020 14:10:59 +0000

In this series, FireMon looks at the five most important capabilities a network operator must build into their management practices in order to keep their environments secure, compliant, and ready to grow. Here is the second: integration.

Pillar #2 – Integrate Anywhere

So many security devices. So much sprawl. So little control or consistency. Network operators are swamped by multitudes of misaligned processes that fracture the work of network, security, and engineering teams. Manual processes around policy management slow responsiveness and lead to redundant efforts, while manual security and compliance checks across a diverse environment hinder deployment and throttle feature delivery. And when a new security tool is added, fitting it into the security stack often requires wholesale changes across the entire environment.

Less than 1 out of 4 network operators surveyed by FireMon report that their organizations are using integrated network security. That means that about 75 percent are still trying to manage their complex, multi-cloud environments on spreadsheets and connecting them with massive code projects that are hard to manage, hard to secure, and hard to prove are compliant.

As networks continue to grow in complexity, these struggles can’t be swept under the rug. Enterprise network operators are eager to use APIs to solve the pain of sprawl, hoping to increase their businesses’ overall efficiency and value, get a better ROI on their security spend, easily adopt innovative business models, and cherry-pick solutions based on features, not compatibility with existing assets.

That all sounds great, and it would be — if all APIs lived up to the hype. But not all APIs are created equal. Some are hard to understand or are poorly written, requiring DevOps to grind out even more complex code to get them to work. Then the new code must be tested, debugged, and tested again, and by then, the benefits of using an API have been whittled down to nearly nothing. When an update is needed or a new vendor is added into the stack, all that code needs to be checked and changed again. Hopefully, there is good documentation. In reality, there probably isn’t.

We at FireMon believe that integration isn’t just another feature – it’s the lifeblood of the modern enterprise, enabling organizations to extract critical data and deliver it instantly where it is most needed. Enterprises need robust, well-defined API structures if they intend to easily exchange information between all their security solutions.

The Agile Approach: API-First

FireMon’s integrations extend our network security policy management capabilities to other tools and platforms, no matter where in your environment they reside – public cloud, private cloud, on-premise, or a hybrid mix. The result is a single, infrastructure-agnostic platform that enables two-way sharing of data between security devices, platforms, and applications, so security tasks can be accomplished faster and more easily.

Integrations can be automatic, partially customized, or fully customized, so every use case can be served, and DevOps can use their own toolchain for all integrations. Out-of-box integrations are available for the most popular platforms, including Microsoft Azure, ServiceNow, Cisco ACI, and Swimlane (and many more), but FireMon APIs are flexible enough to accommodate custom code, complete with two-way data sharing and automation.

In addition, FireMon’s agile NSPM platform integrates with SOAR, ITSM, vulnerability management, and DevOps tools. Any functionality can be integrated via code, Swagger UI, or workflow building blocks.

The KPIs of APIs

An API that can integrate anywhere has three key attributes: extensibility, flexibility, and innovation. It must also meet these four critical KPIs:

  • The API must enhance user experience and encourage greater interaction with the intuitive user interface
  • It must be easy for DevOps to understand so they can focus on spinning up applications
  • It must be flexible enough for DevOps teams to use in novel, innovative cases
  • It must enable businesses to easily trial the FireMon platform in test environments for specific use cases.

Before we roll out any API, we make sure it fulfills all those requirements, because our APIs have to be easy to use if we’re going to help you become more secure, efficient, and agile.

What’s All This Look Like in Real Life?

FireMon customers report a 40 percent reduction in active rules and a $10.3M cost avoidance over 5 years. These benefits are achieved by reducing complexity – for example, one customer was able to reduce their number of firewall vendors from 9 to 4, gaining visibility across all firewalls with the ability to drill down, automation of their rule request process, automation of their policy audits, and the injection of compliance checks within existing workflows.

FireMon network security integrations are built to be flexible, but the most common use cases are:

Centralize management & orchestration of policies
Get accurate data through a single interface in real-time.

Identify vulnerabilities
Get real-time scans by integrating FireMon’s native visibility features with vulnerability scanners and correlating them with network topology and security configuration data.

Support network changes
Integrate FireMon’s agile NSPM platform with other vendors’ security devices on-premise or in the cloud.

Accelerate change management
Fast-track integrations out of the box with ITSM tools. Extend ticket and routing processes and other workflows to network security policies.

Support enterprise automation
FireMon integrations extend beyond policy management, supporting discovery, risk mitigation and network configuration and management, through both native functionality and integration with third-party tools.

Streamline and speed incident response
Combine threat alerts from SOAR with FireMon rule recommendation and automation.

See for Yourself

See for yourself how you can gain 100 percent visibility into your complex environment and shorten SLA times while enforcing and unifying security policies across your hybrid infrastructure. Schedule a demo today to learn how FireMon can help you get more out of your existing security investments and increase the agility and security of your network right now.

Top 5 Network Security Challenges in 2021 and Beyond https://www.firemon.com/network-security-threats-challenges/ Sat, 30 May 2020 18:27:01 +0000

There are a lot of theories about which network security challenge is the most important at any given time. The issue is highly subjective, particularly in this world of advocates, specialists, and vendors, who are each fixated on their particular piece of the puzzle.

But in the end, what matters is that organizations properly align and continuously adjust their activities so they can mitigate or even prevent the most prevalent threats to network security. Because while the threats haven’t changed much – viruses, botnets, access control, and visibility are evergreen challenges – the way malicious actors try to leverage vulnerabilities and the way we fight them changes all the time.

Right now and for the foreseeable future, the weapon of choice is automation. Hackers use automation to find the most valuable data inside a network, conduct brute-force attacks, deliver loaders and cryptors, operate keyloggers, execute banking injects, operate bulletproof hosting services, and more. We have to fight fire with fire: automation is the only way to protect a complex, dynamic network from modern network security threats.

5 Key Challenges in Network Security

This list presents five specific challenges to network security, but they are all children of one overarching network security condition: IT infrastructure complexity. That’s the real issue, and there’s no way around it. The average enterprise has around 500 products in its technology stack and uses more than 1100 APIs. Add in the current COVID-19 pressures that are driving a movement to remote work to the tune of more than 16 million new remote users, and we find ourselves managing more connections, users, and devices than ever before. We need the ability to understand network security challenges and scale our responses at top speed if we want to secure our organizations from threats.

But talking about complexity doesn’t provide any actionable information. So dig into the list below to see which aspects of complexity you can actually manage and how to do it.

1. Misconfiguration proliferation

Perhaps the least glamorous of all security threats, misconfiguration continues to hold a top spot as a serious network security threat. According to Gartner, between now and 2023, 99% of firewall breaches will be caused by misconfigurations rather than firewall flaws. How frustrating that something so fundamental continues to put businesses at risk year after year.

Firewalls are hard to manage because networks are complicated and getting more complicated by the month. In our State of the Firewall report, almost one-third of respondents said their organizations use more than 100 firewalls, and 12 percent use more than 500. At this scale, managing the products, optimizing their rules, and exposing gaps in firewall enforcement is a task that can’t be handled manually. Automation is essential.

But that doesn’t mean full automation – the best solutions provide adaptive control and visibility over networks and firewalls. The goal should be to minimize human error rather than replace humans, because analysis activities during triage and escalation require an understanding of nuance that no machine possesses.

2. Lax control of privileged access

Privileged access abuse is a favored method of hackers because it’s easier for them to exploit existing credentials than to hack into a network. That’s why 74 percent of breaches start with privileged access abuse.

Many organizations focus their firewall management activities on permitting access. That often leads to too many users being granted levels of permissions that are too high. This is a dangerous mistake. In order to make the firewall a more effective security device in the network, risk must be evaluated with the same weight as access.

Credentials alone do not give enough information about whether the user requesting access is legitimate. Credentials need to be authenticated in context with other factors, such as geolocation, IP address, time zones, etc. Privileged access needs to be reviewed regularly – for instance, during COVID-19 work-from-home restrictions, IP addresses and geolocations are going to be out of the norm. These will have to be shifted back to the status quo for users who return to the office in upcoming months.

Automation plays a critical role in reducing privileged access abuse by reducing the accidental errors that lead to misconfigurations and increasing security agility—an essential attribute at any time, but especially during exceptional conditions like those engendered by COVID-19. By eliminating human error that can compromise a network increasingly accessed by remote workers, the operational efficiency of security teams can be maximized and instances of security misconfigurations reduced.

3. Tool interoperability shortcomings

The problem isn’t too many tools. The problem is too many tools that don’t share data seamlessly.

A network is not a single zone. It’s a system of software-defined networks, micro-segmentation, and network rules and assets that create exponential complexity. To try to understand what’s happening in the network, security teams must shift from console to console, struggling to make sense of what one metric means in context with the others. The result is an environment that fosters human error and leaves gaps that adversaries can exploit.

Some organizations think they’ll be safe even if their tools don’t integrate with each other because they do integrate with the SIEM. But SIEMs focus solely on system-generated signals, which means they can miss manually-executed attacks and user-specific anomalies, such as a user in the marketing department logging into a system used by the financial department. According to IT decision-makers, traditional SIEMs are not intuitive, do not provide accessible insights, and produce more data than staff has the capacity to analyze.

Security analytics platforms make data more accessible to more people so it can be consumed and analyzed efficiently. Natural-language search and analytics removes the need to learn a query language. Data collection doesn’t require parsing, which eliminates the prerequisite knowledge normally required to bring different data sources together. A security analytics platform automatically enriches and correlates collected data to speed up the time it takes to discover unusual activity on the network.

4. Lack of visibility

Visibility changes from moment to moment as new devices and endpoints join and leave the network. Typically, there is no way to tell if the network is secure or compliant at any given point in time – at best, security professionals can look back over historical data to tell if the network had been secure at some point in the past. That isn’t actionable information.

Organizations need to understand how and why firewall rules are configured, the consequences of any changes, and how the changes impact security and compliance postures. Few can achieve this, due to common obstacles such as a lack of IT staff availability, poor network management tools, a lack of visibility into app delivery paths, and a lack of IT at remote offices, to name just a few.

Automation can provide the means to see, map, and manage changes to an infrastructure at any given point in time. This is true visibility, and it makes an impact that resonates beyond the SOC. Visibility supports the business as a whole by enabling changes to be made faster and more securely without breaking compliance. It closes the gap between managing network security risk and delivering business opportunities that drive competitive advantage.

5. Controls that are out of step with infrastructure changes

Security teams are not able to keep up with ever-increasing volumes of vulnerabilities that need to be patched, new applications that need to be tested and deployed, emerging threats that need to be mitigated and, of course, access requests that must be granted, returned for further authentication, or denied. The solution to handling this volume and variety of work is orchestration.

Orchestration is often thought of as synonymous with automation, but that’s not accurate. Automation focuses on executing a particular task, while orchestration arranges tasks to function optimally within a workflow – for instance, by bringing together the entire body of security controls and automating change.

An orchestration solution should be comprehensive, automating network security in every aspect from policy design to implementation. It should support real-time monitoring from a live stream of data to enable instant snapshots of a network’s security posture from moment to moment. And it should scale in all directions, collecting security details and normalizing device rules for storage in a unified database. The solution should provide a single console that provides total network visibility and the ability to command security controls.

How Better Network Security Helped a Healthcare Organization Achieve Compliance and HITRUST Certification

Convey Health Solutions struggled to stay in compliance with healthcare regulations while maintaining over 40 firewalls that relied on manual processes and lacked centralized management. The organization asked FireMon to help them streamline their compliance efforts and automate their change management processes. Convey Health Solutions’ decision was driven by FireMon’s out-of-the-box, customizable compliance assessments, automated rule documentation and reporting, and workflows for rule review and recertification. Now, the healthcare organization can analyze and report in real time on which systems have been calibrated together to prevent unauthorized access and protect critical assets. The business has also been able to clean up and push out almost 300 rules that had not been reviewed in over three years, and found over 150 “shadow rules” that FireMon helped them identify and remove quickly. The use of FireMon helped Convey Health Solutions achieve its HITRUST certification and shrink its audit time by two-thirds.

Automate your network security with intention

Automation is not without risk. When planned poorly, it will increase operational costs and potentially subject organizations to financial fallout from network security breaches and regulatory fines.

But when done well, automation makes enormous business sense and will deliver on its promises of consistency, cost optimization, ongoing visibility and assessment, and effective management of the organization’s network security profile, as well as supporting proactive risk mitigation. And considering the complex, dynamic networks that organizations must govern across firewalls, applications, databases, data centers, cloud, and containers, automation isn’t optional any more. It’s the only way to stay operational.

Our advice is to automate mindfully. The FireMon approach to network security automation is built on providing a context around access requests to help system administrators and network engineers implement change that enables the business without introducing the new risks that come with handling thousands of change requests daily. Using our intelligent, automated workflow, security administrators can implement the right changes with absolute precision.

Learn more about how FireMon can help your organization improve its network security while driving innovation at the speed of business.
