Cloud Security Operations – FireMon.com
https://www.firemon.com

The Grand Unified Theory of Cloud Governance https://www.firemon.com/the-grand-unified-theory-of-cloud-governance/ Wed, 05 Oct 2022

One of the toughest lessons I’ve learned as I’ve spent over a decade of my life helping organizations build cloud security programs is how it’s governance, not technology, that’s the real challenge. Yes, the cloud is a dark box full of invisible technical razor blades, but those are manageable with a little time and effort. The real pain isn’t around figuring out the tech, but in figuring out how the heck to govern all that tech.

Because the fastest path to failure is to treat cloud governance like your non-cloud IT governance.

Organizations that ignore cloud and let it run wild and free always end up in trouble, and organizations that try to enforce their existing governance end up with… just a different set of troubles.

One advantage of my role as a researcher and advisor was getting to see the inside of a wide range of organizations as they managed these issues, and I saw both successes and failures. Over time, patterns emerge. And when it comes to governance, I saw a few threads that seemed to tie things together. I call this The Grand Unified Theory of Cloud Governance:

  • Cloud has no chokepoints, and thus no gatekeepers.
  • All administrative and management functions are unified into a single user interface that is on the Internet.
    • Protected with a username, password, and, maybe, MFA.
  • Technology evolves faster than governance.

I believe this encapsulates the essential governance challenges of cloud computing, but to flesh it out further:

  • Existing IT governance is the natural outcome of scarcity due to working within physical facilities. We evolved separate teams to manage disparate, complex technologies like networks, servers, and various facets of security.
    • The physical constraints and scarce resources of a datacenter required business units and application/development teams to work with the platform owners like networking to obtain resources.
    • Many of our governance processes depend on this natural scarcity and platform ownership. A random developer can’t simply provision their own public IP address since they don’t have any administrative control of the network.
  • Cloud computing removes scarcity, boundaries and gatekeepers. A full class-B network is only a credit card and a few API calls away. Cloud providers also leverage automation to simplify many aspects of infrastructure management (at least on the surface).
    • Many of the advantages of cloud computing are the direct result of the elimination of resource scarcity, gatekeepers, and manual configuration. Automation, infrastructure as code, CI/CD, result in tremendous operational advantages, but are fundamentally incompatible with the scarcity and gatekeeper-driven existing governance.
  • However, cloud unifies all administrative controls to a single console/portal.
    • Which is Internet-facing and protected by a username and password.
  • Thus, cloud breaks existing governance models and forces organizations into adopting more-distributed governance and shifting resources towards identity-centric controls.
  • This is a painful transition, because adopting cloud technologies is faster and easier than changing technology governance models.
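To put some numbers on "a credit card and a few API calls away", here's a minimal sketch using only Python's standard library; the boto3 calls noted in the comments are illustrative of the AWS flow and are not executed here:

```python
import ipaddress

# A "full class-B network" is a /16: one CIDR block, 65,536 addresses.
# In AWS this is roughly two API calls (shown as comments, not executed):
#   ec2.create_vpc(CidrBlock="10.0.0.0/16")
#   ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")
block = ipaddress.ip_network("10.0.0.0/16")
print(f"{block} -> {block.num_addresses} addresses")

# Carving it into /24 subnets: something a datacenter network team would
# gatekeep, but anyone with cloud credentials can do it in seconds.
subnets = list(block.subnets(new_prefix=24))
print(f"{len(subnets)} /24 subnets, first: {subnets[0]}")
```

No ticket, no platform owner, no scarcity: the gatekeeping step simply doesn't exist.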

It’s this essential conflict of decentralized administration with centralized risk, moving at a blistering pace, that most challenges governance and security. The most successful enterprise governance efforts accept the need for different governance implementations for cloud and non-cloud environments rather than trying to enforce one implementation across two totally different ecosystems. They run in parallel and unite at the top, but each environment is governed using a model optimized for its unique characteristics.

In future posts I’ll run through some of the best ways I’ve seen organizations govern cloud, but since I absolutely hate posts that raise issues and don’t provide answers, here are a few high-level tidbits:

  • Centralize standards, visibility, and monitoring but distribute operations with tools like ChatOps.
  • Provide frictionless flexibility in development, but rigid management in production with tools like CI/CD and infrastructure as code for consistency and auditability.
  • Gatekeep access to critical/regulated data to narrow the scope of critical focus.
  • Manage the IAM perimeter first, not the network.
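As one concrete, hypothetical example of "manage the IAM perimeter first", a check like the following flags console users without MFA. The user-record shape is an assumption (loosely modeled on IAM inventory data), not any specific product's output:

```python
def users_missing_mfa(users):
    """Return usernames that have a console password but no MFA device.

    `users` is a list of dicts like:
      {"UserName": "alice", "HasConsolePassword": True, "MFADevices": [...]}
    (an assumed shape -- adapt to whatever your IAM inventory source emits).
    """
    return [
        u["UserName"]
        for u in users
        if u.get("HasConsolePassword") and not u.get("MFADevices")
    ]


inventory = [
    {"UserName": "alice", "HasConsolePassword": True, "MFADevices": ["arn:..."]},
    {"UserName": "bob", "HasConsolePassword": True, "MFADevices": []},
    {"UserName": "ci-bot", "HasConsolePassword": False, "MFADevices": []},
]
print(users_missing_mfa(inventory))  # ['bob']
```

The point isn't the ten lines of code; it's that the identity perimeter is enumerable and auditable in a way the network perimeter no longer is.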

Schrödinger’s Misconfigurations https://www.firemon.com/schrodingers-misconfigurations/ Mon, 19 Sep 2022

It’s Thursday afternoon and you’re getting ready to leave work a little early because… you can. But then that pesky Deliverer of Notifications (also known as Slack) pops off a new message in your security alerts channel:

Well, darn. Someone just made a snapshot of a storage volume public. Is this an attack? A mistake? Someone who just doesn’t know what the policies are?

Misconfigurations Have Three States of Being

This is something I’ve started calling Schrödinger’s Misconfigurations, since I have a bad habit of using principles of quantum mechanics to explain information security. I’d be stunned if you didn’t already know about Schrödinger’s Cat, the famous thought experiment that Erwin Schrödinger used to illustrate the paradox of quantum superposition to Albert Einstein. The really short version is that if you stick a cat in a box with some poison triggered by radioactive decay, the cat is neither alive nor dead, and is thus in a state of being alive AND dead, until you open the box and check.

Yes, that’s absurd, which was the point. Especially to those of us with cats who DO NOT LIKE BEING TRAPPED IN BOXES. Although I could do an entire blog series about cats crawling into boxes on their own but getting very angry if you put them in boxes and… I digress.

Back to cloud security. The fundamental concept behind the thought experiment is that something exists in multiple simultaneous states until you observe it and that act of observation forces an answer. I am, of course, skewing and simplifying to meet my own needs, so those of you with physics backgrounds please don’t send me angry emails.

The cloud version of this concept is that any given misconfiguration exists in a state of being an attack, a mistake, or a policy violation until you investigate and determine the cause.

There are five characteristics of cloud that support this concept:

  • Cloud/developer teams tend to have more autonomy to directly manage their own cloud infrastructure.
  • The cloud management plane is accessible via the Internet.
  • The most common source (today) of cloud attacks is stolen credentials.
  • Many misconfigurations create states that are identical to the actions of an attacker (e.g. making a snapshot public).
  • It’s easy to create a misconfiguration accidentally, and sometimes one is made on purpose to meet a need by someone who doesn’t realize it’s a security issue.
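The public-snapshot scenario above is detectable with a trivially small check. This sketch inspects a dict shaped like EC2's DescribeSnapshotAttribute response (a real response shape); the sample records are fabricated for illustration:

```python
def snapshot_is_public(attribute_response):
    """True if any createVolumePermission grants the 'all' group.

    `attribute_response` mirrors EC2's DescribeSnapshotAttribute output:
      {"SnapshotId": "snap-123", "CreateVolumePermissions": [{"Group": "all"}]}
    A permission entry naming the 'all' group means anyone can copy the data.
    """
    perms = attribute_response.get("CreateVolumePermissions", [])
    return any(p.get("Group") == "all" for p in perms)


public = {"SnapshotId": "snap-0abc", "CreateVolumePermissions": [{"Group": "all"}]}
private = {"SnapshotId": "snap-0def", "CreateVolumePermissions": [{"UserId": "123456789012"}]}
print(snapshot_is_public(public), snapshot_is_public(private))  # True False
```

Detection is the easy part; the hard part is that the exact same API state is produced by an attacker, a fat-fingered developer, and a deliberate-but-uninformed share.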

This concept holds true even in traditional infrastructure, albeit to a far lesser degree since teams have less autonomy. A developer on an app doesn’t typically have the ability to directly modify firewall rules and route tables. In cloud, that’s pretty common, at least for some environments.

Assume Attack Until Proven Otherwise

One of the more important principles of incident response in cloud is that you absolutely must treat misconfigurations as security events, and you have to assume they are attacks until proven otherwise.

This is a change in thinking, since security is used to thinking in terms of vulnerabilities and attack surface, but we view those as things we scan for on a periodic basis and largely treat as issues to remediate. I’m suggesting that in cloud computing we promote detected misconfigurations to the same level as an IDS or EDR alert. They aren’t merely compliance issues; they are potential indicators of compromise.

And no, this doesn’t apply to every misconfiguration in every environment. We have to filter and prioritize. Better yet, we have to communicate, because usually the easiest way to figure out if a misconfiguration is a malicious attack is to just ask the person who made the change if they meant to do that.

Since I have to distill things down for training classes, I’ve come up with three primary feeds for security telemetry:

  • Logs
  • Cloud provider events (e.g. Security Hub events)
  • Cloud misconfigurations, which can come from your CSPM tool, OSS scanners, or similar

Most people working in cloud security have already internalized this concept, but we don’t always explain it. If you look at some Cloud Detection and Response (CDR) tools, they generate alerts on some misconfigurations. This differs from the default CSPM tool modality of creating findings in reports and on dashboards. Those are important for compliance and general security hygiene, but since attackers do nasty things like share disk images to other accounts or backdoor access to IAM roles, a subset of misconfigurations really needs to be treated as indicators of compromise until proven otherwise.

Internally (and in the DisruptOps platform) we handle this with a set of real-time threat detectors that trigger assessments based on identified API calls. It takes about 15-30 seconds to identify a misconfiguration and send it to security and the project owner via Slack (or Teams) like you see above. These alerts are treated the same as a GuardDuty finding or any other Indicator of Compromise, but using ChatOps to validate activities also helps us triage these really quickly without having to perform a deep analysis every time.
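A stripped-down sketch of that detector flow might look like the following. The watchlist and the alert payload are illustrative assumptions, not the DisruptOps implementation; only the eventName and userIdentity fields follow real CloudTrail event structure:

```python
# Assumed watchlist of API calls that can flip a resource public or open a
# backdoor -- tune per environment.
RISKY_CALLS = {
    "ModifySnapshotAttribute",
    "PutBucketAcl",
    "AuthorizeSecurityGroupIngress",
}

def triage_event(event):
    """Turn a CloudTrail-style event into a chat alert payload, or None.

    In production you'd POST this JSON to a Slack/Teams webhook and route a
    copy to the owning team for a one-click "was this you?" confirmation.
    """
    if event.get("eventName") not in RISKY_CALLS:
        return None
    who = event.get("userIdentity", {}).get("arn", "unknown principal")
    return {"text": (f"{event['eventName']} by {who} -- treated as an "
                     "indicator of compromise until the owner confirms intent.")}


event = {"eventName": "ModifySnapshotAttribute",
         "userIdentity": {"arn": "arn:aws:iam::123456789012:user/dev-jane"}}
alert = triage_event(event)
print(alert["text"])
```

Routing the alert to the person who made the change, not just to security, is what collapses the superposition quickly: they either say "yes, I meant to" or you escalate.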

The tl;dr recommendation: detect key cloud misconfigurations in near-real-time and treat them as indicators of compromise until proven otherwise.

No cats were harmed in the drafting of this post.

How to Select a Change Management Solution https://www.firemon.com/how-to-select-a-change-management-solution/ Thu, 15 Sep 2022

The most common threat to business security is accidental firewall and cloud security group misconfigurations. Manual rule and policy management of complex ground-to-cloud networks introduces countless opportunities for error, and most breaches are attackers taking advantage of this low-hanging fruit. Time-consuming manual changes, fragmented ownership, and policy clutter all contribute to poor policy hygiene. Centralizing and automating your change management across all of your resources is key to preventing misconfigurations that can lead to massive breaches.

Problem 1: Time-consuming Manual Changes

The average enterprise network team is asked to make more than 100 firewall changes per week, and these changes can then take weeks to manually implement. With today’s technology, new environments are created nearly instantaneously. A week-long lag in corresponding policies is not acceptable, and a misconfiguration due to a rushed job can let attackers in or block legitimate users from mission-critical services.

Manual processes prevent network teams from handling the growing complexity of their firewall rule sets, compliance assessment requirements, and next generation devices. Points of exposure are often missed because new leak paths and breach avenues were not detected.

Problem 2: Fragmented Ownership

Historically, an infrastructure team was tasked with application deployment in collaboration with a security team that ensured appropriate security controls were in place based on a corporate-wide policy. Today, however, you have application owners, DevOps, and a wide array of operational programmers deploying code multiple times a week, without security controls. Many of these missing controls are what kept the organization compliant with internal policies, industry regulatory frameworks, and applicable privacy legislation.

Growing complexity without automation is leading to misconfigurations due to human error, while fragmentation without automation is increasing risk to the organization. Just as adding more people can’t keep up with the volume of work, neither can the best technology without efficiency.

Problem 3: Policy Clutter

Having multiple teams regularly updating policies without regard to old policies can lead to duplicate/redundant rules, shadow rules, and unintentional misconfigurations. It can take a long time and a lot of effort to thoroughly clean up your firewall and cloud security policy rule base. The second you’re done cleaning and fine tuning, new requests come along that can easily undo everything you worked so hard to achieve. Worse yet, unauthorized changes can undo everything, and you may never know about it.

Businesses need security-friendly capabilities to prevent misconfigurations and rule errors from creeping into the network and remaining undetected and unremedied for undetermined amounts of time.

Solution: Change Management via Network Security Policy Management (NSPM)

Network Security Policy Management (NSPM) platforms offer centralized change management and are critical to helping you prevent misconfigurations and rule errors from creeping into your network. However, not all NSPMs are created equal. When researching NSPM and change management, ask yourself whether your network security policy management solution allows you to quickly and easily:

  1. Create search queries to identify existing rules (or network or service objects) that are affected by a pending policy or configuration change, and export the resulting list to share with team members for remediation.
  2. Convert the search terms into a control for use in ongoing security assessments in any of multiple categories (Allowed Services, Device Properties and Status, Service Risk Analysis, and more), allowing you to apply the assessment or control to specific elements or devices within your network, and even write remediation instructions in the event of a failure.
  3. Ensure that any failed controls are automatically flagged in customized reporting, in real time, with device and other relevant details, prioritized by severity.
  4. Visually review compliance across your entire enterprise with a matrix of sources and destinations (data centers, cloud zones, external and internal connections, and more) to see at a glance which destinations are accessible from which sources, and whether each possible routing meets compliance policies or is even governed by one.
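At its core, the rule search in item 1 is a filter over normalized rule objects. Here's a toy sketch; the rule schema is invented for illustration, since a real NSPM normalizes vendor-specific configurations into something far richer:

```python
def rules_touching(rules, obj):
    """Return rules whose source, destination, or service references `obj`.

    Each rule is an assumed normalized record:
      {"id": "...", "src": [...], "dst": [...], "services": [...]}
    """
    return [
        r for r in rules
        if obj in r["src"] or obj in r["dst"] or obj in r["services"]
    ]


rules = [
    {"id": "fw1-r10", "src": ["any"], "dst": ["db-servers"], "services": ["tcp/1433"]},
    {"id": "fw1-r11", "src": ["web-tier"], "dst": ["db-servers"], "services": ["tcp/5432"]},
    {"id": "fw2-r03", "src": ["branch-nets"], "dst": ["dmz"], "services": ["tcp/443"]},
]

# Which rules would a pending change to the "db-servers" object affect?
affected = rules_touching(rules, "db-servers")
print([r["id"] for r in affected])  # ['fw1-r10', 'fw1-r11']
```

The value of a platform is doing this across thousands of rules on heterogeneous devices, then exporting the hit list for remediation.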

FireMon centralizes your data and automates your policy management. No matter how many firewalls, cloud security groups, and other policy-control devices you have on your network, FireMon knows every detail of every device and intelligently designs rule changes that are optimized for your environment. FireMon’s automated change management dynamically and continuously responds to evolving requirements and environments, even after policies have been deployed.

Defining firewall change management workflows with FireMon enables you to:

  • Effectively design and report policy changes
  • Search ad hoc for problematic changes
  • Receive event-driven alerts
  • Integrate with existing business processes

Manually updating policies is time-consuming and leads to human error. Multiple teams creating policies on the fly can lead to contradicting rules. And the older and larger the organization, the larger the pile of policy clutter. FireMon centralizes your policy data into one dashboard, and allows you to make policy changes quickly, accurately, and easily. Find out more by reading about FireMon’s Change Management solution.

Ransomware is in the Cloud https://www.firemon.com/ransomware-is-in-the-cloud/ Thu, 01 Sep 2022

Visibility, monitoring, and collaboration are the keys to identifying and preventing ransomware from breaching your infrastructure.

In a world of rapid digital transformation, ransomware ranks among the top concerns for cyber security professionals, and with good reason. It is elusive and can breach even the most secure of organizations. Once the malware enters your network, it can ferret around and hold assets in other parts of your organization hostage.

Currently, ransomware primarily targets vulnerabilities within on-premise network infrastructures. However, as the majority of companies transition to hybrid or purely cloud operations, the bad guys swiftly follow suit. Though we aren’t yet seeing it make headlines, ransomware attacks on the cloud have begun. Amazon Web Services (AWS), the most commonly used cloud platform, recently released the Ransomware Risk Management on AWS Using the NIST Cyber Security Framework (CSF) whitepaper. The guidelines for protecting your cloud directly correlate to the general security best practices of Identify and Protect, Detect and Respond, and Recover. Similar to traditional on-premise network infrastructures, protecting against ransomware in the cloud requires a team effort, using multiple solutions working together for a layered approach.

The entire FireMon product suite (Cloud Security Operations, Cyber Asset Management, and Security Policy Management) provides comprehensive views into network security, data center assets, and cloud posture and assets, displaying how resources are connected to data, how they are configured, and how the network and resources are secured.

FireMon’s cloud security operations product, DisruptOps, is an AWS independent software vendor (ISV), and is designed to integrate with your AWS and/or Azure cloud infrastructure. DisruptOps breaks down barriers between development, security, and operations teams, enabling everyone to become an active defender of your cloud infrastructure. DisruptOps is a cloud security operations platform that aligns with the first two guidelines discussed in the AWS whitepaper: Identify and Protect (prevention) and Detect and Respond.

Identify and Protect

You cannot protect what you cannot see. Similar to the way FireMon’s Cyber Asset Management solution provides this for on-premise resources, DisruptOps can identify systems, users, data, applications, and entities within your cloud network. DisruptOps continuously assesses the posture of the cloud management plane and ensures firewalls are properly configured and managed. DisruptOps identifies, alerts on, and remediates cloud misconfigurations (vulnerabilities).

DisruptOps Authorization Control takes prevention a step further by providing frictionless Just in Time authorizations and access using ChatOps. Stolen static credentials, like access keys, are the most common vector for cloud management plane attacks. Attackers will also target the workstations of employees with cloud access, so they can potentially steal usernames and passwords – even with federation. By providing full visibility into user access and requests, Authorization Control provides unparalleled visibility and management of user authorizations. Administrators and developers can request just the access they need, for given time windows, to only the required resources, using ChatOps for frictionless Just in Time authorizations.

This eliminates the possibility of an attacker using stolen user credentials to encrypt data, since Authorization Control can require approvals for all encryption-related (or any other) cloud management activity. Authorization Control can also restrict access based on tags or IP addresses. Plus, all authorization requests and approvals are fully monitored, logged, and can even be broadcast to teams to provide full visibility into who is doing what.

Detect and Respond

Time is money when it comes to ransomware. The DisruptOps Cloud Detection and Response (CDR) capabilities speed up incident response times. DisruptOps includes cloud-native threat detectors for common attacks, and enhances provider alerts through advanced enrichment and routing to separate the signals from the noise. Paired with our built-in actions, responders can react much more quickly and efficiently to adverse cloud events. DisruptOps integrates with cloud monitoring feeds to provide comprehensive visibility into cloud events.

DisruptOps can immediately route alerts not only to security, but directly to the designated cloud account team for immediate investigation and response. Our CDR capabilities identify the needles in the haystack and route them to the account owners and security analysts for rapid identification of potential problems that might otherwise hide until a log review.

Contact us to find out more about FireMon’s Cloud Security Operations.

Implications of the AuthN/AuthZ Gap https://www.firemon.com/implications-of-the-authn-authz-gap/ Wed, 24 Aug 2022

It’s become common knowledge that in cloud, “identity is the new perimeter”. It’s a nice phrase that’s easy to toss into a presentation or an article, but turning it into actionable guidance is a little tougher. Today, I want to focus on just one aspect of cloud IAM I call the “AuthN/AuthZ Gap”. It’s actually an issue anytime you use federation, but the stakes in cloud are higher since the cloud management plane is always internet facing.

First, a short primer on AuthN vs. AuthZ:

  • AuthN is authentication. The act of proving you are an entity. For us mere mortals this is usually a username, password, and maybe MFA.
  • AuthZ is authorization. The act of seeing if an action is allowed. For IaaS cloud this nearly always maps back to an API call, even when you use the web console/portal.

Authentication and Authorization are different tasks, with different flows. When we log into a website we authenticate and that typically creates a session. We don’t have to re-enter our credentials every time we click on something, our browser just sends along a token that’s valid for a time period. Authorizations are typically checked every time we try to do something to make sure we have the right permissions.

If you think about it, once you have that session token it isn’t checked again unless something in the code forces a new check. Your authentication is valid for the session, and thus you can do whatever is within scope of your authorizations. Depending on the platform/system, those temporary credentials may still work even if they are revoked or the account is totally deleted. This is the AuthN/AuthZ Gap.

I spend a lot of time on this topic when I teach Cloud Incident Response since I’ve found that even experienced security responders don’t always understand the implications. Imagine the case of some stolen cloud credentials: you revoke or delete the credentials, but the attacker has an active open session and can potentially still execute authorized actions. This is… bad.

How do you resolve it?

Depending on how you are using those credentials, the easiest option is usually to change the authorization. Just put a deny policy on the entity (or remove allowed actions), since authorization is evaluated for every API call the attacker makes. While this is the simplest option, it isn’t always the best, since it can break running tasks (stolen credentials aren’t always just tied to users). Some other options, depending on the support from your cloud provider, include:

  • Adding conditionals to restrict the originating IP for the API calls.
  • Denying all sessions created before a particular date/time.
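The "deny sessions created before a particular date/time" option corresponds to a documented AWS pattern: a Deny statement conditioned on aws:TokenIssueTime, which (like all authorization) is evaluated on every API call. Here's a sketch that builds such a policy document; the cutoff is arbitrary, and attaching the policy is left as a comment:

```python
import json
from datetime import datetime, timezone

def revoke_older_sessions_policy(cutoff):
    """Deny every action for sessions issued before `cutoff` (a datetime).

    Mirrors the policy AWS attaches when you revoke a role's active
    sessions: the attacker's session token stays "valid" (AuthN), but
    every API call it makes is now denied (AuthZ) -- closing the gap.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "DateLessThan": {"aws:TokenIssueTime": cutoff.isoformat()}
            },
        }],
    }


policy = revoke_older_sessions_policy(datetime(2022, 8, 24, 12, 0, tzinfo=timezone.utc))
print(json.dumps(policy, indent=2))
# Attach with e.g.: iam.put_role_policy(RoleName=..., PolicyName="RevokeOldSessions",
#                                       PolicyDocument=json.dumps(policy))
```

Legitimate users caught by the cutoff simply re-authenticate and get a fresh token with a later issue time, which the Deny no longer matches.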

You are more likely to run into this issue when federating into a cloud provider than within the provider. I just ran a test in AWS and I lost access on open sessions pretty quickly after deleting an IAM user. However, the same isn’t necessarily true when federating in from an external identity provider, and even within AWS some services won’t check credentials again during active sessions (e.g. my Session Manager session stayed alive for 15+ minutes after deleting the role that allowed access).

This isn’t some big unknown vulnerability; it’s something to keep in mind when developing your cloud security controls and incident response playbooks. Different kinds of credentials have different lifespans, both between and within cloud providers. Your identity provider will also be a factor, as will where you try to revoke credentials. For example, if you have a user in Active Directory that you then federate into AWS, and that user assumes a role in AWS and retrieves session credentials for that role, you need to revoke or restrict the assumed-role credentials even if you restrict the user in AD. AWS will have no idea the user’s session from AD is no longer valid until the end of the session, when it goes to revalidate the authentication.

Internally we switched to using our Authorization Control tool which supports session creation with restrictions via ChatOps without adding any friction that might slow our developers down. While it isn’t in general release yet, if you are interested in kicking the tires during early access just email me directly at rich.mogull@firemon.com.

Goodbye “Kill Chains”, Hello “Attack Sequences” https://www.firemon.com/goodbye-kill-chains-hello-attack-sequences/ Fri, 19 Aug 2022

A few years ago at the RSA Conference I co-presented on the top cloud attack “kill chains”. Shawn Harris @infotechwarrior and I walked through what we considered to be the top 10+ real world cloud attacks. For each attack, we walked through each step, and some attacks had multiple branches to show the different options.

We called these “kill chains”, but technically, a cyber kill chain is a very specific technique for modeling attacks developed by Lockheed Martin. Each attack starts with Reconnaissance and runs through a series of prescribed steps until Actions on Objectives. In the talk, we pointed to Lockheed Martin’s work and how our approach differed, since we didn’t limit ourselves to prescribed steps. Instead, we walked through each step for a successful attack. This also differs from the MITRE ATT&CK tool, which instead organizes attack techniques into categories but doesn’t require a given attack to go through every stage in order. ATT&CK also includes in-depth modeling for techniques and sub-techniques.

Both tools help us model attacks to identify where we can insert security controls to break them. I always liked the “chain” in “kill chain”, since breaking one link in the steps to a successful attack stops the attack. But… kill chain kind of sounds like something a defense contractor would come up with. ATT&CK takes a different approach and documents adversary TTPs (tactics, techniques, and procedures) in a knowledge base. Both help us describe how attackers work to help us define our defenses.

Inspired by the Cloud

The original motivation for that RSA Conference presentation was the lack of public information on how attackers really break into cloud deployments. Most of the research reflected cool things the researcher was interested in, not necessarily how attacks really succeeded. We looked deeply at ATT&CK and the Cyber Kill Chain, merging the concepts a bit to map out the exact series of steps for each of the top cloud attacks. The modeling helps find commonalities and chokepoints that could prevent multiple attacks.

There just isn’t enough collective institutional knowledge on cloud attacks, so defenders need cleaner maps to help them better internalize how these attacks work.

We called them kill chains even though they weren’t, and about six months ago, on a call with an organization that took our presentation and adopted it internally, someone said, “these aren’t really kill chains, they are more like attack sequences.” I hate that I can’t remember who I was talking with, because they deserve all the credit for the term.

Attack Sequence is a much better description for the work, since it maps the exact sequence of steps needed for the attack to succeed and can incorporate the different paths that might lead to the same end exploit. I posted that on Twitter and got some great responses.

Building an Attack Sequence

Attack Sequences aren’t a rigid model with pre-defined categories. Those absolutely have their place, but think more in terms of a map that ties together those TTPs to show the start to finish of an attack. Unlike the Lockheed Martin Kill Chain, an Attack Sequence can map different paths to the same destination. Let’s look at cloud ransomware as an example:

This model highlights a few points:

  • There are two potential start points – exposed credentials or a compromised workload. There is an entirely different sequence for exposed credentials that includes much more detail, but this sequence can back-reference that one to focus on ransomware attacks.
  • There is a bridge where the attacker can move from a compromised workload with storage access into the management plane, or they can work directly on the data within the workload.
  • Both paths converge again with the uploading of a ransom note.
  • There are other sequences to ransomware, but this focuses on the most common paths. You could obviously create an exhaustive model.
  • TTPs and Indicators of Attack/Compromise can be identified and documented for each stage of the sequence.
  • This is a generic sequence (it applies to most cloud providers), but it isn’t hard to extend this to a version for a provider or even a particular cloud service.
  • Building defenses means breaking each possible path, or where the paths combine. PRO TIP: exposed credentials are seen in the vast majority of cloud attack sequences.
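One lightweight way to capture a sequence like this is as a small directed graph with explicit entry points, branches, and convergence. A sketch of the ransomware example follows; the stage names are my paraphrase of the steps described above, not a formal taxonomy:

```python
# Each stage maps to the stages it can lead to. Two entry points;
# both paths converge at uploading the ransom note.
SEQUENCE = {
    "exposed credentials": ["management plane access"],
    "compromised workload": ["management plane access", "encrypt data in workload"],
    "management plane access": ["encrypt data via APIs"],
    "encrypt data via APIs": ["upload ransom note"],
    "encrypt data in workload": ["upload ransom note"],
    "upload ransom note": [],
}

def paths(graph, start, goal):
    """Enumerate every path from an entry point to the attack objective."""
    if start == goal:
        return [[goal]]
    return [[start] + rest
            for nxt in graph[start]
            for rest in paths(graph, nxt, goal)]

all_paths = [p for s in ("exposed credentials", "compromised workload")
             for p in paths(SEQUENCE, s, "upload ransom note")]
for p in all_paths:
    print(" -> ".join(p))
```

Enumerating the paths makes the defensive requirement concrete: a control set stops the attack only if it breaks a link on every path, which is why chokepoints where paths converge are so valuable.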

It’s a flexible approach that’s easy to understand. You can keep it high-level like my example or dig deep and model specific indicators. I think it pairs wonderfully with ATT&CK.

Even automated attacks have an adversary behind them. Knowing TTPs and IoCs is important, but so is understanding the big picture of how the attack ties together, and the adversary’s different options.

3 Steps to Reduce Risk in Your Cloud Environment(s) https://www.firemon.com/reduce-risk-cloud-environment/ Fri, 29 Jan 2021

How to Ensure Trust and Security in Enterprise IT and the Cloud

Cloud security risk management should be the same as reducing risk on-premise. Yet more than half of respondents in the recent ActualTech Media MegaCast, Ensuring Trust and Security in Enterprise IT and the Cloud, were not confident that their data is as secure in the cloud as it is on-premise. Businesses are concerned they’ll lose control of their environment, become unable to define and manage their attack surface, and fail to keep up with network management tasks.

Robert Rodriguez, FireMon’s Director of Field Engineering (Connect with Robert on LinkedIn), says that the best path to securing a hybrid environment isn’t some revolutionary idea. “You have to go back to the basics,” he said. “If you’ve been around in this field for a while, you’ve seen what I’m going to talk about. You’ve done it. You know a lot about visibility, threat reduction, and automation. So, if everyone knows about these things, why isn’t everyone doing them?”

 

Corporate imperatives clash with security realities

“Traditional approaches force you to slow down to stay secure,” said Rodriguez. “That doesn’t work in today’s business environment. Security and network teams should be enablers, not roadblocks.”

But while enterprises demand digital transformation, innovation, hybrid cloud, and time-to-market, security and network teams are stuck with too many policies in too many places, more types of devices, more changes, and too many manual processes exacerbated by – of course – the skills gap.

Rodriguez described a call he’d received from a customer who described their company as ‘putting the no in innovation.’ The company was trying to move quickly, but the security and network departments were causing slowdowns as they tried to ensure changes were made securely. “That’s their job,” said Rodriguez. “And making fast changes in environments that include SD-WAN, SASE, branch offices, the cloud, and other complications is a tall order. But it can be done, and it needs to be done.”

A common obstacle to delivering fast secure changes is staffing. “There aren’t enough of us,” said Rodriguez. “I’ve dealt with hundreds of companies and I’ve never heard anyone say they have enough people and their people are fully trained.” And as new technologies roll out at an ever-increasing pace, the problem is getting worse. “CI/CD, SASE, software-defined everything… we rarely get training on new things. We try to figure them out as we go.” All the while, the pace of business gets faster and the pressure on security and network teams gets more intense.

Is there a light at the end of the tunnel? Rodriguez says yes. “There are three things you can do to reduce risk in your hybrid environments, and none of them are revolutionary. They’re not paradigm-breaking.  They are visibility, threat reduction, and automation. Do these today and you’re setting yourself up for an easier and better tomorrow.”

Complete Visibility

“You can’t secure what you don’t know,” said Rodriguez. “What if I owned some apartment buildings and I told you, ‘Hey, go protect my apartment buildings. I don’t really know where they all are. Talk to Joe, he might know.’ That would make your job really hard. And that’s what we’re dealing with in IT. We’re told to secure every single thing that’s out there but we don’t know what’s out there. And we’re not in charge of everything that’s out there, for example we’re not in charge of the cloud provider or SaaS vendors. But we still need complete visibility into everything that’s in our environment.”

Rodriguez said there are a lot of great network scanning and discovery technologies on the market, and the cloud has a lot of native tools that help businesses understand what’s in their environment. “But what you need,” he specified, “is something that can concatenate everything into one place.” Otherwise, security and network teams end up clicking between consoles and trying to normalize massive amounts of disparate data, and there’s just no way to do that manually while maintaining the speed of business – or any degree of reliability or accuracy.

Rodriguez gave some examples of how using an automated discovery tool has helped real organizations. “We worked with a government entity that presumed they had about 150,000 endpoints. They had 170,000. That’s a 12 percent difference. A finance business thought it had 600,000 endpoints but they actually had twice that many, 1.2 million.” These unknown endpoints could be infected with viruses or running unsanctioned software, and their maintenance and security aren’t being included in budgets.

“Just knowing everything that’s out there is a start,” said Rodriguez. “But you also want to know everything about that box – what software is it running, what it’s connected to, is it in compliance, when does it change, who changed it, are its rules overly permissive, is it allowing known vulnerabilities into the network, and so on. Only when the stragglers are brought into the fold can you manage them properly.”
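The gap Rodriguez describes is easy to surface once discovery data exists. The following is a toy reconciliation in Python, with hypothetical host names, comparing a presumed inventory (say, a CMDB export) against what a discovery scan actually found:

```python
# Presumed inventory vs. discovered reality. All names are illustrative.
cmdb = {"web-01", "web-02", "db-01", "db-02"}
discovered = {"web-01", "web-02", "db-01", "db-02", "dev-03", "iot-cam-7"}

unknown = discovered - cmdb   # endpoints nobody is budgeting, patching, or monitoring
missing = cmdb - discovered   # records for machines that no longer answer

gap_pct = 100 * len(unknown) / len(cmdb)
print(f"{len(unknown)} unknown endpoints ({gap_pct:.0f}% above the presumed count)")
```

The two set differences are the actionable output: unknown endpoints get brought into the fold, and stale records get retired.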

Complete Visibility in a Nutshell: 5 Attributes

  1. Real-time visibility and change detection
  2. Zero blind spots
  3. No unnecessary access
  4. Every device on the network is identified and classified
  5. Every leak path is identified

Threat Reduction

“Once we have a nice, big inventory of everything in our environment, we want to take a look at the biggest threats and start getting rid of them,” said Rodriguez.

Reactive security is an outdated approach. “Nobody wants to nail the barn door shut after the horse has been stolen,” said Rodriguez. “We want to know what the impact of a change will be before we make it. So, for example, will introducing this new tool expose the environment to new vulnerabilities or break compliance? We want to know that in advance. We want to ensure new access is safe and compliant, and we want to reduce human error.”

5 Steps toward Threat Reduction

  1. Gain complete visibility
  2. Assess risk in real-time
  3. Prioritize vulnerability patching
  4. Perform real-time compliance checks
  5. Start knocking out threats
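Step 3, prioritizing vulnerability patching, usually amounts to a scoring pass over discovered findings. The sketch below uses made-up fields and a deliberately simple exposure weighting, not any product's actual scoring model:

```python
# Toy prioritization: rank findings by severity weighted by exposure, so
# internet-facing vulnerabilities get patched first. Fields are illustrative.
findings = [
    {"host": "web-01", "cve": "CVE-A", "cvss": 9.8, "internet_facing": True},
    {"host": "db-01",  "cve": "CVE-B", "cvss": 9.1, "internet_facing": False},
    {"host": "web-02", "cve": "CVE-C", "cvss": 6.5, "internet_facing": True},
]

def priority(finding):
    # Double the weight of anything reachable from the internet.
    return finding["cvss"] * (2.0 if finding["internet_facing"] else 1.0)

for f in sorted(findings, key=priority, reverse=True):
    print(f["host"], f["cve"], priority(f))
```

Note that the medium-severity, internet-facing finding outranks the high-severity internal one: exposure context, not raw CVSS, drives the patch order.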

Automation

“There’s no way to keep up with changes in a hybrid environment using manual processes,” said Rodriguez. “They just take too long and are too error-prone. You should be trying to eliminate manual processes entirely.”

Rodriguez said the biggest benefit he’s experienced from using automation in his security career was removing unneeded policies. “Now I don’t have my firewall guy sitting there at 2 a.m. during my dark window on a Saturday trying to poke in 20 different entries with 500 IPs.”  No one can be expected to do that type of work and not make mistakes. But a computer can.

Rodriguez says there are two paths to automation. One is just-in-time automation, which follows the traditional change management process through its phases. “So, for instance, it’s possible to find out if an ACL change will cause new volatility during the design phase.” If it won’t, the change is allowed to proceed to the next phase. The other is total automation, in which any change that doesn’t break compliance or cause volatility is automatically pushed out.
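The pre-change check at the heart of either approach can be sketched in a few lines of Python, assuming a toy rule format and two made-up policy checks:

```python
# Minimal pre-change compliance gate: a proposed rule is checked against
# policy before it is pushed. Rule fields and checks are illustrative only.
RISKY_PORTS = {23, 3389}  # e.g. telnet and RDP

def violations(rule):
    found = []
    if rule["src"] == "0.0.0.0/0" and rule["port"] in RISKY_PORTS:
        found.append("risky port open to the internet")
    if rule["src"] == "0.0.0.0/0" and rule["dst_zone"] == "internal":
        found.append("internet to internal zone")
    return found

proposed = {"src": "0.0.0.0/0", "port": 3389, "dst_zone": "internal"}

problems = violations(proposed)
if problems:
    print("HOLD for review:", "; ".join(problems))  # just-in-time: a human decides
else:
    print("auto-approve and push")                  # total-automation path
```

The only difference between the two approaches is what happens on a clean result: just-in-time automation advances the change to the next phase of review, while total automation pushes it out directly.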

Either approach will free up the highly-skilled people who are currently doing repetitive tasks so they can focus on doing more important and impactful work. Either approach will reduce human error and support compliance by automating configurations. And either approach will optimize efficiency and reduce costs.

4 Ways Automation Helps You Stay Secure

  1. Keep pace with network changes
  2. Run what-if scenarios for new access
  3. Reduce complexity by removing unneeded policies
  4. Remain compliant with real-time checks at every stage

Basic is Good because Basic Works

Four concrete actions businesses can take to achieve greater visibility, threat reduction, and automation are:

  • Buy yourself a good discovery tool to figure out what you have
  • Start figuring out where your vulnerabilities are in the network
  • Assign a team to attack those vulnerabilities one at a time and you’ll see your threat level going down over time
  • And while you’re at it, make some easy changes. Automate things that will give you the greatest benefit for the least amount of effort. This will free up your security team to actually go back into those vulnerabilities to start making your environment more and more secure.

Rodriguez added, “The three steps I’ve talked about today are basic, and basic is good because basic works. If you work on these three things – visibility, threat reduction, and automation – I promise you, you’re going to start having a better and safer tomorrow.”

Hybrid Cloud Security Best Practices: Top Cloud Security Challenges https://www.firemon.com/cloud-security-challenges-increasing-cloud-complexity/ Fri, 26 Jun 2020 13:52:13 +0000 https://firemon2023.wpengine.com/?p=592

Without question, public cloud providers have made the deployment of applications and services simpler than ever. But while creating complexity has never been easier, security has never been more difficult.

FireMon’s 2020 State of Hybrid Cloud Security Report found respondents aren’t making much headway against the rapid rise of public cloud adoption. Visibility remains a challenge and organizations still struggle for clarity around shared responsibility for public cloud security.

Hybrid cloud growth is outpacing the ability to secure it

Almost 60 percent of respondents agree or strongly agree that deployment of business services in the cloud has accelerated past their ability to secure them adequately in a timely manner. This number is unchanged since last year, so there’s been no progress on this front. With public cloud adoption growing, it could be argued that ground has been lost.

Why does cloud security remain a challenge? Hybrid cloud environments have become so easy to scale up that, amid their increasing complexity and sheer size, many nuances of security configuration are overlooked in the process. IT and security teams need to collaborate better to prevent serious gaps in cloud security.

Complexity keeps pressure on security professionals to keep up with cloud growth

As adoption of public cloud increases, the need for better clarity on who’s responsible for security increases. There’s an inverse correlation between complexity and visibility, which raises the likelihood of misconfigurations. Misconfigurations, in turn, raise the likelihood of compliance failures.

To solve the complexity problem, we need to understand how it manifests within an organization.

Cloud complexity emerges because public cloud configuration isn’t automatically linked to firewall policy configuration. Public cloud configuration and firewall configuration both determine permissions around data, applications, and user activity, but they are treated as two separate activities. Yet, just like firewalls, public cloud instances accumulate unused and redundant rules. As multiple clouds are connected to the infrastructure and complexity mounts, these zombie rules pile up, causing conflicts and leaving security gaps.
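A "zombie rule" sweep is straightforward to sketch. The example below uses illustrative rule data to flag exact duplicates and rules with no hits over the audit window; real rulebases need far subtler overlap analysis than this:

```python
# Toy zombie-rule sweep across accumulated rules. "match" is a simplified
# (source, port) tuple and "hits_90d" a hypothetical hit counter.
rules = [
    {"id": "r1", "match": ("10.0.0.0/8", 443), "hits_90d": 120},
    {"id": "r2", "match": ("10.0.0.0/8", 443), "hits_90d": 0},   # duplicate of r1
    {"id": "r3", "match": ("0.0.0.0/0", 8080), "hits_90d": 0},   # never used
]

seen = {}       # first live rule observed for each match
zombies = []    # candidates for removal, with the reason
for r in rules:
    if r["match"] in seen:
        zombies.append((r["id"], f"duplicate of {seen[r['match']]}"))
    elif r["hits_90d"] == 0:
        zombies.append((r["id"], "no hits in 90 days"))
    else:
        seen[r["match"]] = r["id"]

print(zombies)
```

Even this crude pass illustrates the point: as rules pile up across connected clouds, mechanical sweeps find the dead weight that manual review never gets to.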

Missing information leads to misconfigurations

A lack of alignment between cloud configuration and overall security policy happens because people aren’t speaking the same language — even though everyone is talking about the same thing.

Assumptions are made about who’s securing what in the public cloud. Business users trust public cloud providers to have all essential security baked in and that all cloud providers handle security the same way. The reality is different.

Poor communication adds more stress to the security team’s load because they can’t get all the information they need. They are asked to enable applications on short deadlines without specifics about how the apps should be secured, so they hastily create security policies they hope will serve the needs of the business. Those policies may be the best possible given the information available at the time, but incomplete information results in misconfigurations that erode compliance and open the door for hackers using automated tools to scan the internet for exactly this type of vulnerability.

Security teams always need to know more. They need visibility into each cloud instance. They need to know how AWS, Azure, Google, and niche cloud platforms are secured. But they can’t realistically know all these things.

Alignment, empowerment, and automation are essential

Because every public cloud is configured differently, either security professionals must be in the loop when any new instance is adopted or business users must be empowered with the knowledge to securely deploy these applications themselves. Those options each depend on human actions, which we know to be inconsistent and imperfect.

A more reliable and comprehensive approach is to establish a clear understanding of who is responsible for which security activities right from the start of the process by automating the application of a global security policy to the greatest extent possible. Every deployment should be guided by a centralized policy guideline that promotes best practice cloud security implementation.
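One way to read "a centralized policy guideline" is policy as code: a single baseline evaluated against every deployment, whatever the target cloud. The following is a minimal sketch with invented policy keys and resource fields:

```python
# One global baseline applied to every deployment. The policy keys and
# resource fields here are invented for illustration.
GLOBAL_POLICY = {
    "encryption_at_rest": True,   # all storage must be encrypted
    "public_access": False,       # no resource may be publicly reachable
}

def check(resource):
    failures = []
    if GLOBAL_POLICY["encryption_at_rest"] and not resource.get("encrypted", False):
        failures.append("encryption at rest required")
    if not GLOBAL_POLICY["public_access"] and resource.get("public", False):
        failures.append("public access forbidden")
    return failures

bucket = {"name": "reports", "encrypted": False, "public": True}
print(check(bucket))
```

The design point is that the baseline lives in one place: when the policy changes, every future deployment is evaluated against the new rules without anyone having to remember which cloud's console implements them how.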

How visibility enables secure cloud management

Security efforts in the hybrid enterprise need to focus on knowing where data resides, who can access it, and what controls are in place to govern that access. This is the challenge of visibility.

Visibility enables the creation of solid security policies around applications and resources by providing complete insight into what is happening within the infrastructure. Visibility also supports compliance by making known, before deployment, the requirements relevant to an application and its impact on security configuration controls.

Visibility should be real-time and holistic in order to support a successful cloud management strategy. The types of information gained should include:

  • Application tracking
  • Continuous risk assessment
  • Inventories of cloud applications
  • Number of VM instances
  • Amount of compute
  • Storage requirements
  • Performance levels
  • Effectiveness of security controls

With this information, organizations can ramp up hybrid, public, and multi-cloud deployments at a rapid rate without struggling to fully secure their increasingly complex environments.

Don’t fear complexity, but keep security and compliance aligned

Regardless of where data resides — on-premise, in the public cloud, or in a hyper-converged data center — security and compliance must evolve to stay aligned with the business.

Creating complexity is always going to be easy because public cloud platforms are so simple to scale up as part of a hybrid cloud environment. Rather than trying to fight this inevitable ease of complexity, organizations must put the right people and tools in place, so best practices and security controls are automatically woven into cloud-first strategies.

Building Security into Your Cloud-First Business https://www.firemon.com/building-security-into-your-cloud-first-business/ Tue, 23 Jun 2020 18:43:01 +0000 https://firemon2023.wpengine.com/?p=590

Key Terms

  • Cloud
    Software, services, and databases that run on a shared infrastructure.
  • Cloud-first
    The idea that organizations should try to run as many of their processes and workflows in the cloud as possible, only considering other environments after the cloud is ruled out as the most efficient option.
  • Visibility
    A comprehensive knowledge of the presence of all the devices and endpoints in the environment and all rules associated with them.

What is Digital Transformation?

Digital transformation is the overhauling of an entire organization – business model, technologies, processes, customer experiences, organizational structure, and even company culture – to become more efficient and agile through the use of digital technologies like cloud-computing, automation, and APIs.

Why Digital Transformations Fail

Digital transformation requires a cultural shift across the entire organization. The words “But we’ve always done it this way,” need to be exorcised from the company vocabulary. Instead, people should be asking, “How much of this process can we automate to create a smoother customer experience or streamline a process in the cloud?”

Many organizations make the mistake of simply copying their manual processes into a digital format. For instance, one financial services provider created a customer experience that emulated the steps performed by their core banking system, even though about 30 percent of those steps were unnecessary in the digital environment. After investing millions in an innovation lab, all the company had achieved was providing their customers the ability to work through dozens of screens instead of dozens of pieces of paper. The reason for this costly misstep was that the company culture was so accustomed to the limitations of its decades-old core system that business owners never questioned whether things could be done differently. Only 13 percent of organizations have begun to see a return on their investments in enterprise digital transformations – this is an example of why that percentage is so low.

A digital transformation must emanate from the inside out. Dig deep and rethink what must change to support a competitive business model, and don’t worry about what can’t change because of the limitations of legacy systems or organizational structures. Systems and structures can always be changed.

How to Prioritize a Digital Transformation Strategy

Strategic initiatives around digital transformation should contribute to as many of these areas as possible:

  • Customer satisfaction
  • Infrastructure security posture
  • Corporate cost savings
  • People efficiency
  • Meaningful innovation

Say No to Stowaways on Your Digital Transformation Journey

The greatest challenges cited by C-level respondents to a recent FireMon survey on the state of hybrid cloud security are the lack of a centralized view of information across tools, too many tool suites and management consoles to keep up with, and lack of integration across tools.

These problems are the source of more than inconvenience and inefficiencies – they’re security risks. When an organization can’t see exactly what and who is on its infrastructure, it is insecure. So, ultimately, the most critical priority for IT and security professionals who are shifting workloads to the cloud during a digital transformation is visibility.

Visibility is being able to see all the devices and endpoints in the environment and their associated rules, including what has already been put in the cloud. 

True visibility is gained through a single pane of glass that provides a consolidated view of all systems. A single pane of glass is important because without it, IT and security staff are left bouncing from console to console to work with data based on different metrics that are difficult or impossible to collate. Gaining a comprehensive understanding of the state of the infrastructure is like playing a virtual game of whack-a-mole, and the enterprise is always the loser. For enterprises to fully benefit from a cloud-first strategy, their highest priority should be complete visibility into both their new and existing IT environments.
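At its core, the "single pane of glass" is a normalization problem: each console reports in its own schema, and the consolidated view maps them all into one. The following is a toy Python sketch with invented field names standing in for two providers' formats:

```python
# Per-console asset records in their native schemas. Field names are made up
# to stand in for different providers' APIs, not taken from any real one.
aws_assets = [{"InstanceId": "i-1", "State": "running"}]
azure_assets = [{"vmId": "vm-9", "powerState": "deallocated"}]

def normalize(asset, source):
    """Map a provider-specific record into one shared schema."""
    if source == "aws":
        return {"id": asset["InstanceId"], "state": asset["State"], "source": "aws"}
    if source == "azure":
        state = "running" if asset["powerState"] == "running" else "stopped"
        return {"id": asset["vmId"], "state": state, "source": "azure"}

inventory = [normalize(a, "aws") for a in aws_assets] + \
            [normalize(a, "azure") for a in azure_assets]
```

Once everything lands in one schema, questions like "what is running right now, anywhere?" become a single query instead of a console-by-console hunt.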

4 Ways Visibility Supports Cloud Migration

1. Before you decide what to automate, make sure it’s worth automating

Use your visibility capabilities to gain a clear picture of what you already have so you don’t waste resources and carry over risk by shifting outdated and non-compliant security to the cloud. Once you can see everything you have and you’ve shored up your security policy, you can automate what should be automated and replicate the appropriate on-premise controls in your cloud environment. And, because you’re not necessarily going to move everything, know what you’re leaving behind and why you’re leaving it, so you can determine whether to repair, replace, or jettison those systems.

2. Clean before you automate

Visibility will expose broken processes and rules that must be fixed or eliminated before migration to the cloud. Carrying them over “to fix later” will create technical debt and institutionalize risk. For example, most firewall rulebases contain hidden, shadowed, redundant, and overlapping rules, any of which may cause network, security, and migration problems. These types of issues must be cleaned up before they can inject risk into the new infrastructure.
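Shadowed rules, one of the categories mentioned above, can be detected mechanically in a first-match rulebase: a rule is dead if an earlier rule already covers its entire match. The simplified sketch below assumes a single source network and port per rule, which real rulebases are not:

```python
import ipaddress

# Toy first-match rulebase. A later rule is shadowed when an earlier rule
# covers its whole source network on the same port, so it can never fire.
rules = [
    {"id": "allow-any-web",  "src": "10.0.0.0/8",  "port": 443, "action": "allow"},
    {"id": "deny-guest-web", "src": "10.9.0.0/16", "port": 443, "action": "deny"},
    {"id": "allow-ssh",      "src": "10.1.0.0/16", "port": 22,  "action": "allow"},
]

def shadowed(rules):
    hits = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if (earlier["port"] == later["port"] and
                    ipaddress.ip_network(later["src"]).subnet_of(
                        ipaddress.ip_network(earlier["src"]))):
                hits.append((later["id"], earlier["id"]))
                break
    return hits

print(shadowed(rules))
```

Here the deny rule is flagged because the broader allow above it matches first, which is exactly the kind of latent policy conflict worth fixing before it is copied into a new environment.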

3. Orchestrate your automations

End-to-end automation is more than a collection of scripts. Today’s automation can provide real-time visibility, control, and management of the network. But to realize that functionality, everything must work together in an orchestrated manner. Orchestration reduces the complexity of hybrid security, secures applications as they scale, exposes vulnerabilities, removes change request backlogs, and more. The infrastructure of a digitally transformed enterprise is significantly more complex than that of a traditional organization, so orchestration is mandatory: there is simply no way to perform all these tasks manually on the large number of systems inherent to a digital business.

4. Re-imagine your security teams as part of the design process

Before migrating, align the teams responsible for security, especially if on-premise and cloud security duties are divided. Better still, unify your security resources into one team so there is no chance for gaps or redundancies.

Protect Your Digital Transformation Strategy with Automated Security

Most enterprises have already begun their digital transformation journeys. But no matter whether they’re just starting out or nearing completion, their common destination is a cloud-first organization that is more profitable, responsive, efficient, and customer-centric.

Complete visibility will help these organizations overcome obstacles on the road ahead. Not only will visibility save them from replicating and automating inefficient processes, it will help them keep security at the forefront of all their operations. Proper configuration of cloud deployments and automation of security policy management can advance their digital transformation efforts and enable them to scale their services and pivot their business models in upcoming years as their markets evolve.

At FireMon, we have been driving innovation that allows customers to see their cloud deployments the same way they see their on-premise infrastructures, even when security configurations differ widely. Digital transformation is an opportunity to create a dashboard that can travel with you far into the future, even as the horizon changes — in this case, to wherever you decide to put workloads and digital assets. But wherever you go, make sure your security controls go with you. You should have the same level of confidence in the cloud as you did on-premise, and the same visibility, if not better.

Seize control of your cloud security today by ensuring visibility and exercising control. Find out how easy FireMon makes it to gain control of cloud security.
