Tuesday, 13 October 2020

10 Security Changes Every Sys Admin Should Implement


Every quarter I encounter a company that has been taken advantage of by a serious security event, typically ransomware or phishing, but sometimes something more complicated. In many cases enterprise organizations have 95% of their datacenter and end user computing environments compromised. Not surprisingly, in examining these events I’ve consistently noticed common themes in how they could have been mitigated. I’ve also noticed that the “security” companies and the money spent on security remediation often do not significantly improve the position of an organization. If, however, you take the steps below, you’ll find yourself in a much better position to dictate the security terms, rather than have them dictated to you.


Change 1: Implement a Zero-Trust End User Computing Environment
The majority of datacenter compromises happen because an individual end user computing device is compromised. That device is used to move from one end user computing device to another, then to the datacenter, until critical credentials are used to pwn the network. This is made possible because of how we’ve constructed networks for the last 20 years. The traditional network was built around the idea of creating a wall on the outside, then allowing largely free communication inside. This was the model because all the applications lived in the datacenter, with the exception of very highly regulated external app vendors, like FIS or Fiserv. We’ve now moved to a model where applications are mostly in the cloud, not hosted in the datacenter. Further, the assumption that “because a device is inside our network it is secure” is obsolete (or was never true).

This is your legacy environment. Everything talks to everything. The only way in (or so it’s believed) is through the corporate firewalls. Unfortunately, every device is talking through the firewalls. The assumption is that if a device is on the network, the device is “safe” and it’s allowed to communicate with other parties on the network.
This is your legacy environment during a compromise. In this case, you can see the movement from one end user computing device to another, then to a server, to the identity source, to the data… and out.
In the modern environment we see a different picture: the implementation of a zero-trust modern desktop environment. The end user computing device is Azure AD joined, Intune managed, controlled via conditional access, and not on the same corporate network as anything else. In fact, the devices aren’t even allowed to talk to each other.
The end goal of implementing a zero-trust modern desktop environment is to prevent ease of movement during a compromise. This change alone would have mitigated most of the ransomware attacks in the last year, simply because it makes it much harder to move around the network and compromise server environments.
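The access decision in a zero-trust model can be sketched as a small policy function. This is an illustrative sketch only, not the actual Intune or Azure AD API; the field names are hypothetical stand-ins for the signals those services evaluate.

```python
from dataclasses import dataclass

@dataclass
class Device:
    azure_ad_joined: bool    # hypothetical fields for illustration only
    intune_compliant: bool
    risk_level: str          # "low", "medium", "high"

def allow_access(device: Device, mfa_completed: bool) -> bool:
    """Zero-trust: every request must prove device health and user identity.
    Note that network location is never part of the decision."""
    if device.risk_level == "high":
        return False
    return device.azure_ad_joined and device.intune_compliant and mfa_completed

# A healthy, managed device with MFA gets in; an unmanaged legacy device does not.
managed = Device(azure_ad_joined=True, intune_compliant=True, risk_level="low")
legacy = Device(azure_ad_joined=False, intune_compliant=False, risk_level="low")
print(allow_access(managed, mfa_completed=True))  # True
print(allow_access(legacy, mfa_completed=True))   # False
```

The key design point: being "on the corporate network" appears nowhere in the decision, which is exactly what breaks the lateral-movement pattern described above.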
Another view of this here, from Microsoft’s own internal implementation:
Change 2: Implement Micro-Network Segmentation in the Datacenter.
In conjunction with mitigating movement between end user computing and the datacenter applications, the same activity is important for the datacenter design. We’ve traditionally allowed threats to move between application servers at will. Why? Because micro-network segmentation is hard to implement given the highly delineated responsibilities of our teams. Fortunately, the catalyst of the movement to the cloud is changing this, because of the collapse of skills into common cloud engineering teams, SREs, and DevOps.
Similar to the previous pictures, here is one that shows the movement between applications and each other during a compromise.
The above picture is obviously not a good practice. Why didn’t we implement micro-network segmentation earlier? The diagram below shows the different groups necessary to work together to make this better.
This shows the contrast between a vertical and horizontal network design, moving toward micro-network segmentation of every application.
Here’s another look at it:
If you combine this change with the movement of end user computing you’ll end up with a much more architecturally difficult environment to compromise. The change to a network design that leverages micro-network segmentation is much easier when you leverage infrastructure-as-code, app-as-code, config-as-code techniques, because it forces teams to come together. The catalyst to implement these changes is the move to the cloud, as it drives the modernization of teams and the change of how the environment is managed.
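The default-deny posture behind micro-network segmentation can be sketched as an explicit allow-list of flows between application tiers. The tier names and ports below are hypothetical examples, not a prescription; in practice this lives in NSG rules, firewall policy, or infrastructure-as-code.

```python
# Micro-segmentation sketch: anything not explicitly allowed is denied.
# Tier names and ports are hypothetical examples.
ALLOWED_FLOWS = {
    ("web-frontend", "app-api"): {443},
    ("app-api", "app-db"): {1433},
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow must appear on the allow-list to pass."""
    return port in ALLOWED_FLOWS.get((src, dst), set())

print(is_allowed("web-frontend", "app-api", 443))  # True: a declared flow
print(is_allowed("web-frontend", "app-db", 1433))  # False: no direct path to data
print(is_allowed("app-api", "web-frontend", 443))  # False: direction matters too
```

Expressing the allow-list as data is what makes it a natural fit for the infrastructure-as-code approach the paragraph above describes: the segmentation policy is reviewed, versioned, and deployed like any other code.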


Change 3: Protect Cloud Properties with Conditional Access
The cloud applications your teams are accessing are protected not by the network your end user computing devices are on, but by the identity of the user. Understanding the weakness of typical passwords, we realize how much we need to advance to a stronger model. The term conditional access describes the combination of identity, multi-factor authentication, device health, and ecosystem awareness to protect the applications.
The identity needs to be protected in conjunction with the health and risk profile of the device, especially in the zero-trust model we’ve talked about above. This is combined with protections like MFA or blocking the user.

The identity of the individual is the new control plane. It lets you simplify access for the company, the apps, and the user, while making security stronger.
An interesting data point: fewer than 10% of Azure environments have MFA enabled for administrator accounts. Let that sink in. Fewer than 10% have protection beyond a username and password for their critical cloud administrators. The same situation exists in Office 365. Think about how the identity of your executive assistant, payroll assistant, or accounts payable clerk can be used to compromise your organization via phishing. You need to assume that you are compromised… leverage conditional access to guard the identities and protect against the clear and present danger.
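Closing the admin MFA gap is largely a matter of declaring a policy. The sketch below builds a policy document whose structure loosely follows the Microsoft Graph conditionalAccessPolicy resource; verify the exact field names against the current Graph documentation before applying anything to a real tenant, and treat the role name here as illustrative.

```python
import json

# Sketch of a conditional access policy requiring MFA for administrators.
# The shape loosely mirrors the Microsoft Graph conditionalAccessPolicy
# resource; confirm field names against current Graph docs before use.
policy = {
    "displayName": "Require MFA for administrators",
    "state": "enabled",
    "conditions": {
        "applications": {"includeApplications": ["All"]},
        "users": {"includeRoles": ["Global Administrator"]},  # illustrative role
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
print(json.dumps(policy, indent=2))
```

The point is less the specific syntax than the model: the grant control travels with the identity and the role, so the protection follows the administrator wherever they sign in from.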

Change 4: Have a clear identity provisioning story
What is your system of record for identity? Don’t say Active Directory. Who is accountable for the individuals that have access to your products and application platforms? They include employees, customers, and partners. Your company needs clear articulation of the system of record and process owner for your most important platforms. What determines if an employee continues to be active in your identity environment or not? In most cases it’s pretty weakly defined and even more weakly managed. Let’s fix it.
Let’s first consider the employee or contractor story. The process for provisioning should start with HR, and whether an individual is an active employee or not should be governed by an HR system, including employment (or contracting) start and end dates. The digital platforms then inherit from the HR system based on role and access.
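The HR-as-system-of-record idea can be sketched as a simple reconciliation: the directory never decides who is active; it only reflects the HR record. The field names below are hypothetical.

```python
from datetime import date

# Sketch of HR-driven provisioning: the HR record, not Active Directory,
# is the system of record. Field names are hypothetical examples.
def is_active(hr_record: dict, today: date) -> bool:
    start, end = hr_record["start_date"], hr_record.get("end_date")
    return start <= today and (end is None or today <= end)

def accounts_to_disable(hr_records, directory_accounts, today):
    """Any directory account without an active HR record gets de-provisioned."""
    active = {r["employee_id"] for r in hr_records if is_active(r, today)}
    return [a for a in directory_accounts if a not in active]

hr = [
    {"employee_id": "e1", "start_date": date(2019, 1, 1), "end_date": None},
    {"employee_id": "e2", "start_date": date(2018, 5, 1), "end_date": date(2020, 6, 30)},
]
print(accounts_to_disable(hr, ["e1", "e2"], date(2020, 10, 13)))  # ['e2']
```

Run as a scheduled reconciliation, this is the "weakly managed" gap the paragraph above describes, closed: a contract end date in HR automatically becomes a disabled account downstream.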
The second story is the customer story, where the product owner is responsible for who is granted access to the platform. That said, there may be millions of customers, so the real control is given to the customers themselves to manage their identity. To help customers, a smart organization will enable its identity layer with not just username and password, but MFA and conditional access based on risk profile. Thankfully, much of this is commodity now with platforms like Azure AD B2C and B2B… rather than developing the identity environment itself, a company can rely on major platforms like Microsoft to contain this user environment.

The smart organizations here realize that well-thought-out user provisioning and de-provisioning is the blocking and tackling of running security for the organization. Enabling this with conditional access controls is expected, not a nice-to-have.

Change 5: End to End Kill Chain
The majority of companies I meet for the first time admit that they use less than 10% of the capabilities in their security stack well. Even more important, they’ve purchased many tools that don’t work well together, so instead of spending time improving their security position, they are spending time integrating, dealing with conflicts, or arguing about platforms. I firmly believe that the time for buying many different non-integrated tools is over. Teams working together to bring automation and immediate response to the security platform will deliver much more tangible results for the security of any environment. For instance, if a compromised device can immediately tell Office 365 that it should be blocked, is immediately cut off from the environment, and then shares anonymized data with all Azure AD customers, I’d say that’s a huge win. If your security strategy is based on one tool capturing incidents, putting them in an incident management system, and then waiting for someone to read and respond… you’re already too far behind.
Here is a best practice kill chain. You can see how at every step the compromised machine is stopped. The machine cuts itself off, the other machine blocks it, Office 365 stops it, and non-identifying information is shared with other organizations to protect them too.
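The automated kill chain above can be sketched as a sequence of function calls rather than tickets waiting in a queue. The handler names below are hypothetical stand-ins for whatever your EDR, identity, and collaboration platforms actually expose; the point is that no step waits for a human.

```python
# Sketch of an end-to-end automated response. Each action name is a
# hypothetical stand-in for a real platform API call.
def respond_to_compromise(device_id, user_id, indicator_hash, actions_log):
    actions_log.append(("isolate_device", device_id))        # EDR cuts the device off
    actions_log.append(("revoke_sessions", user_id))         # identity provider kills tokens
    actions_log.append(("block_in_office365", user_id))      # mail/collab access blocked
    actions_log.append(("share_indicator", indicator_hash))  # anonymized IOC shared outward
    return actions_log

log = respond_to_compromise("dev-42", "alice", "ab3f9c", [])
print(len(log))  # 4 automated steps, zero human hand-offs
```

Contrast this with the incident-queue model the paragraph criticizes: every append here happens in seconds, while a ticket-driven process measures the same steps in hours or days.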
A related platform that was recently released at RSA from the Microsoft Office 365 security team is Insider Risk Management. Check it out, especially for understanding what you need to prevent before it happens.

Change 6: Modern Product Security
People aren’t building new products on Windows domain-joined virtual servers anymore. Yes, many of the current security threats attack that surface, which is why changes 1 – 5 are so important. That said, we need to protect the applications being built now, which represent the way we’re presenting our companies in the digital market. The modern platforms are built on concepts like containers, serverless, identity, and SaaS distributed architectures. These architectures are being used for internal and external applications. Your security strategy should treat every application like one built to serve an external audience, and should account for the modern structures surrounding it.
You can see here the various layers of protection a modern app should have before it ever gives information to an internal or external user. Most apps traditionally built for an internal audience do not have even close to this number of layers and that isn’t a good thing. The lack of protection for our legacy apps represents a serious failing in how we’ve built architectures historically. A few aspects of this:
  • Cloud protection with risk inputs
  • App firewall characteristics
  • Identity as a platform and conditional access
  • Layers of exposure in your application or services
  • Functions and exposure model
  • Data behind the application
  • Integrations into other apps and data
  • Back door access that exposes the platform
This is represented below:

Some of the considerations in modern container security include:
  • Cluster design
  • Container network design
  • Cluster RBAC
  • Security for deployment tech
  • Secrets management
  • Secure images & evaluation
  • Runtime privileges
  • Pod security
  • Governance applied through ARM
  • Secure coding and access models
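A few of the items above, particularly runtime privileges and pod security, can be enforced as an admission check before anything is deployed. The sketch below uses a simplified spec shape modeled on a Kubernetes container securityContext; the rules are illustrative examples, not a complete policy.

```python
# Sketch of a container admission check. The spec shape mimics a
# Kubernetes securityContext, simplified for illustration.
def violations(container_spec: dict) -> list:
    problems = []
    sc = container_spec.get("securityContext", {})
    if sc.get("privileged", False):
        problems.append("privileged container")
    if not sc.get("runAsNonRoot", False):
        problems.append("may run as root")
    if sc.get("allowPrivilegeEscalation", True):
        problems.append("privilege escalation allowed")
    return problems

good = {"securityContext": {"privileged": False, "runAsNonRoot": True,
                            "allowPrivilegeEscalation": False}}
bad = {"securityContext": {"privileged": True}}
print(violations(good))  # []
print(violations(bad))   # three findings
```

Note that the defaults are chosen to fail closed: a spec that says nothing about escalation or root is flagged, which is the right posture for a security gate.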
In the serverless space there are similarly serious advantages, but also considerations of its own.
Remember that these scenarios include truly driving secure code that runs on secure platforms. Also pair these modern platforms with modern ways of responding to the threats attacking them and trying to gain access to your customer data.

The construction of a modern product is going to continue to evolve and move further into even more distributed architectures that leverage multi-party contracts and relationships, such as in Blockchain. The security organization needs to prepare itself for this reality.


Change 7: Policy Enforcement Based Security
With the shift of the IT organization from delivery to enablement the security organization needs to be prepared to deal with an even broader set of potential stakeholders. Many of the ways that the security organization has looked to implement policies won’t work in the modern provisioning ecosystem (some will say they’ve never worked). The IT organization is shifting to implement controls through a governable platform, but one that facilitates sufficient control. If you examine the maturity curve below, you’ll see in the middle the point where the business has implemented controls that enable the business on a cloud platform. If you aren’t there, you need to get there in short order.
The appropriate operationalized cloud is one that leans on best practices of infrastructure/app/configuration-as-code as the mechanism of deployment, allowing for governance of the deployed solution. In addition, at the point of deployment the governance rules are applied and block configurations that do not align with security policies.
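Applying governance rules at the point of deployment can be sketched as a policy gate in the pipeline: every resource in the infrastructure-as-code template is evaluated, and a non-empty violation list blocks the deploy. The resource shapes and rules below are hypothetical examples of the pattern, not a real policy engine.

```python
# Sketch of deployment-time policy enforcement. Resource shapes and rules
# are hypothetical examples of policy-as-code.
POLICIES = [
    ("storage must enforce HTTPS",
     lambda r: r["type"] != "storage" or bool(r.get("https_only"))),
    ("no public IP on databases",
     lambda r: r["type"] != "database" or not r.get("public_ip")),
]

def gate(resources):
    """Return policy violations; an empty list means the deploy may proceed."""
    return [(r["name"], name) for r in resources
            for name, check in POLICIES if not check(r)]

ok = [{"name": "logs", "type": "storage", "https_only": True}]
risky = [{"name": "db1", "type": "database", "public_ip": True}]
print(gate(ok))     # []: deployment proceeds
print(gate(risky))  # [('db1', 'no public IP on databases')]: blocked
```

Because the policies are code, the security organization governs by reviewing and extending the rule set, while application owners stay free to deploy anything that passes, which is exactly the middle of the maturity curve described above.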
The implementation of the goal state facilitates the necessary oversight from the governing security and IT organizations, but lets the application owner have the opportunity to properly direct and build applications that suit the needs of their customers. The security organization is then aiding the business in responding to security incidents based on information sources from governance, policy, compliance, and intrusion, but implementing as much of the response as possible through automation, as human-centric responses are typically too slow.


Change 8: Think Content Not Storage Location
The next change is about content. Take one of your critical financial documents… how have you secured it? Have you placed it in a file share or a document library with constrained access? What happens when that document is moved out of the secure location? In most scenarios the content is now available to whoever wants to use it. Now consider a modern solution that wraps content based on what it is, who is accessing it, where they are accessing it from, and the health of the system they are using to access it. Technologies like Microsoft Information Protection can provide this capability to a productivity environment… even finding the documents so you don’t have to.

Consider the following diagram, with content leaving the location, but still protected vs. the unprotected document.
You can see in the above that even though we’ve transferred the file to an individual using the content from an unmanaged location, we’re still applying who, what, where, and device health to file access. This is critical to leverage, given the mobility of content and the understanding that the location is always temporary.
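The "who, what, where, and health" evaluation that travels with the document can be sketched as a label-based decision function. The labels and rules are hypothetical; in a real deployment they come from your sensitivity labeling scheme and the platform enforces them, not the application.

```python
# Sketch of content-centric protection: the decision follows the document's
# sensitivity label, not its storage location. Labels are hypothetical.
def can_open(label: str, user_in_finance: bool, device_healthy: bool,
             location_trusted: bool) -> bool:
    if label == "public":
        return True
    if label == "confidential-finance":
        # who, what, where, and device health are all evaluated at open time
        return user_in_finance and device_healthy and location_trusted
    return False  # unknown labels fail closed

# The same file copied to a USB stick or a personal mailbox is still governed:
print(can_open("confidential-finance", True, True, True))   # True
print(can_open("confidential-finance", True, False, True))  # False: unhealthy device
```

Notice that no argument says where the file is stored; that is the whole point of thinking content, not location.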

Change 9: Data Security
We are experiencing an unprecedented opportunity to leverage data to better our companies, customers, and employees. In the same vein, however, there is tremendous risk if we don’t implement appropriate data governance around what we expose and to whom. A few trends I’m seeing in this space that I think are important to consider:
  • Understand the security categories of your data
  • Understand how that secure data is used and where
  • Understand how you are scoping access to the critical data
  • Expose data in sets centric to the users who need it, not the entire lake
  • Understand where controls can be placed over exposed data
  • Understand the data controls for the platform itself
  • Implement overall data controls for exposure
  • Filter user experiences based on role and function
  • Facilitate an understanding of how data was acquired
  • Be able to audit access to critical data via platform monitoring
  • Understand the modern data architecture for access
  • Understand conditional access controls to data access layers
There certainly are more, but to prepare yourself in this space is to be positioned both to prevent a security incident and to understand one if it occurs. To be caught unprepared is to not understand what you are protecting, let alone be able to protect it. The cloud makes this harder in a sense, but also easier, because you can apply controls and audit in ways you couldn’t before.
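Several of the bullets above, exposing data in sets centric to the users who need it and filtering experiences by role, reduce to one pattern: never hand out the whole lake. The column groupings and roles below are hypothetical examples.

```python
# Sketch of role-scoped data exposure: each role sees only the columns it
# needs, never the entire lake. Roles and columns are hypothetical.
ROLE_COLUMNS = {
    "payroll": {"employee_id", "salary"},
    "marketing": {"employee_id", "department"},
}

def scoped_view(rows, role):
    """Project each record down to the columns granted to the role."""
    cols = ROLE_COLUMNS.get(role, set())  # unknown roles see nothing
    return [{k: v for k, v in row.items() if k in cols} for row in rows]

rows = [{"employee_id": 1, "salary": 90000, "department": "sales"}]
print(scoped_view(rows, "marketing"))  # [{'employee_id': 1, 'department': 'sales'}]
print(scoped_view(rows, "intern"))     # [{}]: no grant, no data
```

Pair a projection like this with platform-level auditing and you cover both halves of the preparation the paragraph describes: preventing over-exposure, and being able to explain exactly who could see what if an incident occurs.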

Change 10: Risk Based Management
By this I don’t mean applying an audit methodology. I mean starting with the end in mind. What do you want to prevent? What is the most likely threat? How do I prevent that threat from occurring, rather than wondering afterward what I should have done? This is the root of what a CISO needs to be thinking about daily. I believe the bold CISOs are looking at the themes we identified above and assertively working to adopt technology that reduces the attack surface and architecturally mitigates threats, rather than just applying minor changes. Still operating a legacy end user computing environment like most of the world? Change it. Still not doing micro-network segmentation and moving to the cloud? Change it. Still not doing conditional access for your Office 365 environment? Change it. Now is the time. Take a good, hard look at your business, understand the most likely threats and where your critical data resides, and take steps to prevent disaster before it happens. Yes, we want to innovate, and we should, but if you don’t take steps to protect yourself in a serious and intentional way, you won’t make the difference necessary. Attacking change and the most likely risks isn’t about training, or an external audit, or a NIST-based security program. It’s about a basic understanding of where risks exist and the tenacity to address them.
