Thursday, 15 October 2020

Internet Explorer can't open while User Account Control is turned off in Windows Server


Sometimes the fixes you find by searching Google don't address the exact issue you're facing. I ran into this problem myself and searched for a solution.
I found multiple suggested fixes on the internet, such as enabling UAC in Control Panel or enabling UAC via the registry, but sometimes the issue is still not resolved after making those changes.

This issue can be resolved with the simple method below, without enabling UAC through the registry or the Control Panel slider.

Go to Internet Options and click on the Programs tab.
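If you prefer to jump straight to that tab, you can open it from PowerShell. This is just a convenience sketch; the trailing number is the zero-based tab index of the Internet Options dialog, which is usually 5 for Programs but may differ between Windows builds.

    # Open the Internet Options dialog directly on the Programs tab
    # (adjust the tab index if your build orders the tabs differently)
    Start-Process -FilePath 'control.exe' -ArgumentList 'inetcpl.cpl,,5'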
                                      

Under Programs, find the Opening Internet Explorer section. Under "Choose how you open links", make sure the "Always in Internet Explorer" option is selected, and also select the "Open Internet Explorer tiles on the desktop" option.



Once all the settings match the image above, try opening Internet Explorer again.

Hopefully these settings resolve your issue.


Tuesday, 13 October 2020

10 Security Changes Every Sys Admin Should Implement


Every quarter there is a company that has been taken advantage of by a serious security event, typically ransomware or phishing, but sometimes something more complicated. In many cases enterprise organizations have 95% of their datacenter and end user computing environments compromised. Not surprisingly, in examining these events I've consistently noticed common themes in how they could have been mitigated. I've also noticed that the "security" companies and the money spent on security remediation often are not significantly improving the position of an organization. If you take the steps below, however, you'll find yourself in a much better position to dictate the security terms, rather than have them dictated to you.


Change 1: Implement a Zero-Trust End User Computing Environment
The majority of datacenter compromises happen because an individual end user computing device is compromised. That device is used to move from one end user computing device to another, then to the datacenter, until critical credentials are used to pwn the network. This is made possible because of how we've constructed networks for the last 20 years. The traditional network was built around the idea of creating a wall on the outside, then allowing largely free communications inside. This was the model because all the applications lived in the datacenter, with the exception of very highly regulated external app vendors like FIS or Fiserv. We've now moved to a model where applications are mostly in the cloud, not hosted in the datacenter. Further, the assumption that "because a device is inside our network it is secure" is obsolete (or was never true).

This is your legacy environment. Everything talks to everything. The only way in (or so it's believed) is through the corporate firewalls. Unfortunately, every device is talking through the firewalls. The assumption is that if a device is on the network, the device is "safe" and it's allowed to communicate with other parties on the network.
This is your legacy environment during a compromise. In this case, you can see the movement from one end user computing device to another, to a server, to the identity source, to the data… and out.
In the modern environment we see a different picture. You see the implementation of a zero trust modern desktop environment. The end user computing device is Azure AD joined, Intune managed, controlled via conditional access, and not on the same corporate network as anything else. In fact, they aren’t even allowed to talk with each other.
The end goal of implementing a zero-trust modern desktop environment is to prevent the ease of movement in a compromise. This change alone would have mitigated most of the ransomware attacks in the last year, simply because it makes it much harder to move around the network and compromise server environments.
Here is another view of this, from Microsoft's own internal implementation:
Change 2: Implement Micro-Network Segmentation in the Datacenter.
In conjunction with mitigating movement between end user computing and the datacenter applications, the same activity is important for the datacenter design. We've traditionally allowed threats to move between application servers at will. Why? Because micro-network segmentation is hard to implement when responsibility is so highly delineated across our teams. Fortunately, the catalyst of the movement to the cloud is changing this, because of the collapse of skills into common cloud engineering teams, SREs, and DevOps.
Similar to the previous pictures, here is one that shows the movement between applications during a compromise.
The above picture is obviously not a good practice. Why didn't we implement micro-network segmentation earlier? The diagram below shows the different groups that need to work together to make this better.
This shows the contrast between a vertical and horizontal network design, moving toward micro-network segmentation of every application.
Here’s another look at it
If you combine this change with the end user computing changes above, you'll end up with an environment that is architecturally much harder to compromise. The change to a network design that leverages micro-network segmentation is much easier when you leverage infrastructure-as-code, app-as-code, and config-as-code techniques, because it forces teams to come together. The catalyst to implement these changes is the move to the cloud, as it drives the modernization of teams and changes how the environment is managed.
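To make the infrastructure-as-code point concrete, here is a minimal sketch using the Az PowerShell module: a network security group that only allows a web tier to reach an app tier over HTTPS and denies everything else inbound. The resource names and address ranges are illustrative assumptions, not anything from a real environment.

    # Sketch: allow only the web tier (10.0.1.0/24) to reach the app tier (10.0.2.0/24) on 443
    # (resource names and subnet ranges are placeholders)
    $allowWebToApp = New-AzNetworkSecurityRuleConfig -Name 'allow-web-to-app-443' `
        -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
        -SourceAddressPrefix '10.0.1.0/24' -SourcePortRange '*' `
        -DestinationAddressPrefix '10.0.2.0/24' -DestinationPortRange 443

    $denyAllInbound = New-AzNetworkSecurityRuleConfig -Name 'deny-all-inbound' `
        -Access Deny -Protocol '*' -Direction Inbound -Priority 4096 `
        -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange '*'

    New-AzNetworkSecurityGroup -ResourceGroupName 'rg-app1' -Location 'eastus2' `
        -Name 'nsg-app1-apptier' -SecurityRules $allowWebToApp, $denyAllInbound

Because the rules live in code, they can be reviewed, versioned, and applied per application rather than negotiated one firewall change at a time.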


Change 3: Protect Cloud Properties with Conditional Access
The cloud applications your teams are accessing are protected not by the network your end user computing devices are on, but by the identity of the user. Understanding the weakness of typical passwords, we realize how much we need to advance this to a stronger model. The term conditional access describes a state of combining identity, multi-factor authentication, device health, and ecosystem awareness to protect the applications.
The identity needs to be protected in conjunction with the health and risk profile of the device, especially in the zero-trust model we’ve talked about above. This is combined with protections like MFA or blocking the user.

The identity of the individual is the new control plane, making it possible to simplify access for the company, the apps, and the user, while making security stronger.
An interesting data point: less than 10% of Azure environments have MFA enabled for administrator accounts. Let that sink in. Less than 10% have protection beyond the user name and password for their critical cloud administrators. The same situation exists in Office 365. Think about how the identity of your executive assistant, payroll assistant, or accounts payable clerk can be used to compromise your organization via phishing. You need to assume that you are compromised… leverage conditional access to guard the identities and protect against the clear and present danger.
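As a rough sketch of what that looks like in practice, here is a Conditional Access policy created with the Microsoft Graph PowerShell SDK that requires MFA for accounts holding the Global Administrator role. The role template ID and policy name are assumptions you should verify against your own tenant before enabling anything.

    # Sketch: require MFA for the Global Administrator role across all cloud apps
    # (role template ID and naming are illustrative; verify against your tenant)
    Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

    $policy = @{
        displayName   = 'Require MFA for admin roles'
        state         = 'enabled'
        conditions    = @{
            users        = @{ includeRoles = @('62e90394-69f5-4237-9190-012177145e10') }
            applications = @{ includeApplications = @('All') }
        }
        grantControls = @{
            operator        = 'OR'
            builtInControls = @('mfa')
        }
    }

    New-MgIdentityConditionalAccessPolicy -BodyParameter $policy

If you are unsure of the impact, start such a policy in report-only mode and enable it once the sign-in logs confirm it behaves as expected.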

Change 4: Have a clear identity provisioning story
What is your system of record for identity? Don't say Active Directory. Who is accountable for the individuals that have access to your products and application platforms? They include employees, customers, and partners. Your company needs a clear articulation of the system of record and process owner for your most important platforms. What determines whether an employee continues to be active in your identity environment? In most cases it's pretty weakly defined and even more weakly managed. Let's fix it.
Let's first consider the employee or contractor story. The provisioning process should start with HR, and whether an individual is an active employee should be governed by an HR system, including the employment (or contracting) start date and end date. The digital platforms then inherit from the HR system based on role and access.
The second story is the customer story, where the product owner is responsible for who is granted access to the platform. That said, there may be millions of customers, so the real control is given to the customers themselves to manage their identity. To help those customers, a smart organization will enable its identity layer with not just username and password, but MFA and conditional access based on risk profile. Thankfully, much of this is commodity now with platforms like Azure AD B2C and B2B… rather than needing to develop the identity environment themselves, a company can rely on major platforms like Microsoft to host this user environment.

The smart organizations here realize that well-thought-out user provisioning and de-provisioning is the blocking and tackling of running security for the organization. Enabling this with conditional access controls is expected, not a nice-to-have.

Change 5: End to End Kill Chain
The majority of companies I meet for the first time admit that they use less than 10% of the capabilities in their security stack well. Even more important, they've purchased many tools that don't work well together, so instead of spending time improving their security position, they spend time integrating, dealing with conflicts, or arguing about platforms. I firmly believe that the time for buying many different non-integrated tools is over. Teams working together to bring automation and immediate response to the security platform will deliver much more tangible results for the security of any environment. For instance, if a compromised device can immediately tell Office 365 that it should not be allowed on the network, the environment then immediately cuts the device off, and non-identifying data is shared with all Azure AD customers, I'd say that's a huge win. If your security strategy is based on one tool capturing incidents, putting them in an incident management system, then waiting for someone to read them and respond… you're already too far behind.
Here is a best practice kill chain. You can see how, at every step, the compromised machine is stopped. The machine cuts itself off, the other machine blocks it, Office 365 stops it, and non-identifying information is shared with other organizations to protect them too.
A related platform that was recently released at RSA from the Microsoft Office 365 security team is Insider Risk Management. Check it out, especially for understanding what you need to prevent before it happens.

Change 6: Modern Product Security
People aren’t building new products on Windows domain joined virtual servers anymore. Yes, many of the current security threats attack that surface, which is why changes 1 – 5 are so important. That said, we need to protect the applications being built now, which represent the way we’re presenting our companies in the digital market. The modern platforms are built on concepts like containers, serverless, identity, and SaaS distributed architectures. These architectures are being used for internal and external applications. Your security strategy should treat every application like an application you are building to serve an external audience and understand the modern structures surrounding this.
You can see here the various layers of protection a modern app should have before it ever gives information to an internal or external user. Most apps traditionally built for an internal audience do not have even close to this number of layers, and that isn't a good thing. The lack of protection for our legacy apps represents a serious failing in how we've built architectures historically. A few aspects of this:
  • Cloud protection with risk inputs
  • App firewall characteristics
  • Identity as a platform and conditional access
  • Layers of exposure in your application or services
  • Functions and exposure model
  • Data behind the application
  • Integrations into other apps and data
  • Back door access that exposes the platform
This is represented below:

Some of the considerations in modern container security include:
  • Cluster design
  • Container network design
  • Cluster RBAC
  • Security for deployment tech
  • Secrets management
  • Secure images & evaluation
  • Runtime privileges
  • Pod security
  • Governance applied through ARM
  • Secure coding and access models
The serverless space similarly offers serious advantages, but also brings its own considerations.
Remember that these scenarios still depend on truly secure code running on secure platforms, and that these modern platforms should be paired with modern ways of responding to the threats attacking them and trying to gain access to your customer data.

The construction of a modern product is going to continue to evolve and move further into even more distributed architectures that leverage multi-party contracts and relationships, such as in blockchain. The security organization needs to prepare itself for this reality.


Change 7: Policy Enforcement Based Security
With the shift of the IT organization from delivery to enablement, the security organization needs to be prepared to deal with an even broader set of potential stakeholders. Many of the ways the security organization has looked to implement policies won't work in the modern provisioning ecosystem (some will say they never worked). The IT organization is shifting to implement controls through a governable platform that still provides sufficient control. If you examine the maturity curve below, you'll see in the middle the point where the business has implemented controls that enable the business on a cloud platform. If you aren't there, you need to get there in short order.
The appropriate operationalized cloud is one that leans on best practices of infrastructure/app/configuration-as-code as the mechanism of deployment, allowing for governance of the deployed solution. In addition, at the point of deployment the governance rules are applied and block configurations that do not align with security policies.
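As one hedged illustration of enforcing policy at the platform rather than by manual review, here is a sketch using the Az PowerShell module to assign a built-in Azure Policy definition at subscription scope. The policy display name, subscription ID, and assignment names are placeholders, and depending on your Az.Resources version the definition's display name may surface as $_.DisplayName rather than $_.Properties.DisplayName.

    # Sketch: assign a built-in policy so non-compliant deployments are blocked at deployment time
    # (display name and subscription ID below are placeholders)
    $definition = Get-AzPolicyDefinition -Builtin |
        Where-Object { $_.Properties.DisplayName -eq 'Secure transfer to storage accounts should be enabled' }

    New-AzPolicyAssignment -Name 'require-https-storage' `
        -DisplayName 'Storage accounts must require secure transfer' `
        -PolicyDefinition $definition `
        -Scope '/subscriptions/<subscription-id>'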
The implementation of the goal state facilitates the necessary oversight from the governing security and IT organizations, but lets the application owner have the opportunity to properly direct and build applications that suit the needs of their customers. The security organization is then aiding the business in responding to security incidents based on information sources from governance, policy, compliance, and intrusion, but implementing as much of the response as possible through automation, as human-centric responses are typically too slow.


Change 8: Think Content Not Storage Location
The next change is about content. Take one of your critical financial documents… how have you secured it? Have you placed it in a file share or a document library with constrained access? What happens when that document is moved out of the secure location? In most scenarios the content is now available to whoever wants to use it. Now consider a modern solution that wraps content based on what it is, who is accessing it, where they are accessing it from, and the health of the system they are using to access it. Technologies like Microsoft Information Protection can provide this capability to a productivity environment… even finding the documents so you don't have to.

Consider the following diagram, with content leaving the location, but still protected vs. the unprotected document.
You can see in the above that even though we've transferred the file to an individual using the content from an unmanaged location, we're still applying who, what, where, and device health to file access. This is critical to leverage with the mobility of content and the understanding that the location is always temporary.
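If you use the Azure Information Protection unified labeling client, applying a label to a file from PowerShell looks roughly like the sketch below. Treat this as an assumption-laden example: the file path and label GUID are placeholders, and you would look up your organization's real label IDs (for example via Get-AIPFileStatus or the compliance portal) before using it.

    # Sketch: apply a sensitivity label so the protection travels with the document,
    # regardless of where the file is later copied (Path and LabelId are placeholders)
    Set-AIPFileLabel -Path 'C:\Finance\Q3-results.xlsx' -LabelId '00000000-0000-0000-0000-000000000000'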

Change 9: Data Security
We are experiencing an unprecedented opportunity to leverage data to better our companies, customers, and employees. In the same vein, however, we take on tremendous risk if we don't implement appropriate data governance around what we expose and to whom. A few trends I'm seeing in this space that I think are important to consider:
  • Understand the security categories of your data
  • Understand how that secure data is used and where
  • Understand how you are scoping access to the critical data
  • Expose data in sets centric to the users who need it, not the entire lake
  • Understand where controls can be placed over exposed data
  • Understand the data controls for the platform itself
  • Implement overall data controls for exposure
  • Filter user experiences based on role and function
  • Facilitate an understanding of how data was acquired
  • Be able to audit access to critical data via platform monitoring
  • Understand the modern data architecture for access
  • Understand conditional access controls to data access layers
There certainly are more, but to prepare yourself in this space is to be positioned to deal both with preventing a security incident and understanding one if it occurs. To be caught unprepared is to put yourself in a position to not understand what you are protecting, let alone trying to protect it. The cloud makes this harder in a sense, but also easier because you can apply controls and audit what you couldn’t do well before.

Change 10: Risk Based Management
By this I don't mean applying an audit methodology. I mean starting with the end in mind. What do you want to prevent? What is the most likely threat? How do I prevent that threat from occurring, rather than wondering afterward what I should have done? This is the root of what a CISO needs to be thinking about daily. I believe the bold CISOs are looking at the themes identified above and assertively working to adopt technology that reduces the attack surface and architecturally mitigates threats, rather than just applying minor changes. Still operating a legacy end user computing environment like most of the world? Change it. Still not doing micro-network segmentation and moving to the cloud? Change it. Still not doing conditional access for your Office 365 environment? Change it. Now is the time. Take a good, hard look at your business, understand the most likely threats and where your critical data resides, and take steps to prevent a significant disaster before it happens. Yes, we want to innovate, and we should, but if you don't take steps to protect yourself in a serious and intentional way, you won't make the difference necessary. Attacking change and the most likely risks isn't about training, or an external audit, or a NIST-based security program. It's about a basic understanding of where risks exist and the tenacity to address them.

Monday, 5 October 2020

Understanding the PowerShell Error Variable

Figure 1 – Terminating Error Output

As with any programming language, code will have errors and troubleshooting those problems can be difficult. Thankfully, PowerShell has a rich error object and several powerful tools to help debug your code.

With PowerShell 7, these tools become even more useful and error handling even easier. As the language evolves and becomes used in more places than ever, being able to quickly and efficiently troubleshoot a problem will prove invaluable to integrating the language into common workflows.

Terminating Errors

As you can see below, the text "This should never be shown" is not shown, as the terminating error stops code execution. The throw statement always produces a terminating error.
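The screenshot from Figure 1 isn't reproduced here, but the example behind it looks roughly like this:

    # A terminating error stops execution, so the second line never runs
    throw 'Something has gone wrong'
    Write-Host 'This should never be shown'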

Non-Terminating Errors

It is more difficult to arbitrarily generate a non-terminating error, but one easy way is to use the Get-ChildItem cmdlet and ask it to find a nonexistent directory. As you can see, the output of the command Write-Host "This text will show!" does in fact appear.
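Reconstructed from the description above, the example looks roughly like this (the directory name matches the missing path used later in this post):

    # Get-ChildItem emits a non-terminating error for a missing path,
    # so execution continues to the next line
    Get-ChildItem -Path 'D:\missing_dir'
    Write-Host "This text will show!"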

 
Figure 2 – Non-Terminating Error Output


You can turn most non-terminating errors into terminating errors by modifying an individual cmdlet’s ErrorAction to Stop. For example, Get-ChildItem "missing_dir" -ErrorAction 'Stop'
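Because the error is now terminating, it can also be caught; a minimal sketch:

    try {
        # -ErrorAction Stop promotes the non-terminating error to a terminating one
        Get-ChildItem -Path 'D:\missing_dir' -ErrorAction 'Stop'
    }
    catch {
        Write-Host "Caught: $($_.Exception.Message)"
    }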

Error Views

You might notice that in the previous output there are two different views of the error information. Figure 1 shows the NormalView of the $ErrorView preference variable. This was the standard, traditional view until PowerShell 7. Starting with PowerShell 7, the default view has changed to what you see in Figure 2, the ConciseView. It dispenses with much of the decoration around the output, but as you might be able to tell, some information is not made available.
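You can switch between the views at any time by setting the preference variable:

    # Show full error details (the pre-PowerShell 7 default)
    $ErrorView = 'NormalView'
    Get-ChildItem -Path 'D:\missing_dir'

    # Switch back to the trimmed-down PowerShell 7 default
    $ErrorView = 'ConciseView'
    Get-ChildItem -Path 'D:\missing_dir'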
 

The Error Object Behind the Scenes

Underlying the data behind the error output is the $Error object that is populated by PowerShell when errors are thrown. To view this data, you can output the object and walk through the information. The traditional way to get the last error thrown is by calling $Error[0]. This uses array notation to reference the most recent error.


Figure 4 – $Error Object

If you happen to mistype this command, the resulting error becomes the new $Error[0] and pushes the error you wanted to inspect further down the collection, so be careful when referencing this object.

As you can see there is the same error as originally shown, but we want to view more of the data. By selecting all of the properties, we are able to see what’s available. As we will talk about in the next section, the Get-Error cmdlet provides a rich view of this data, but it’s important to understand what is going on underneath.
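A quick way to do that is to force every property of the record into view:

    # Show every property of the most recent error record
    $Error[0] | Select-Object -Property *

    # Format-List with -Force also expands the otherwise-summarized view
    $Error[0] | Format-List -Property * -Force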

            Figure 5 – Error Object Properties

By walking through each property, we can see how the information in the $Error object maps to what the Get-Error cmdlet displays.
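Drilling one level deeper into the exception itself looks like this:

    # The nested exception carries the Message, HResult, StackTrace, and related details
    $Error[0].Exception | Select-Object -Property *

    # Get-Member shows the exception type and its members
    $Error[0].Exception | Get-Member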

    Figure 6 – Error Object Exception Properties

The New Get-Error Cmdlet

That brings us to the next PowerShell 7 addition: the Get-Error cmdlet. To expand upon the ConciseView and show far more detail, we can run the Get-Error cmdlet and see the expanded details of the last error thrown.
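Running it is as simple as:

    # Show the expanded view of the most recent error
    Get-Error

    # Or show the last three errors
    Get-Error -Newest 3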

    Figure 3 – Get-Error Output

There is a lot of information shown here, so let’s break down what is useful.
 

Exception

  • Type – Basic Exception Information
  • ErrorRecord – Most of this information is from the $Error object itself. The TargetObject, CategoryInfo, and FullyQualifiedErrorId are all duplicated further in the Get-Error output. What is useful is the Exception data.
    • Type – An exception, but could be referencing the parent exception
    • Message – The human-readable error message
    • HResult – Traditional numerical error code that Windows has used since the early days of the operating system
  • ItemName – The same as the TargetObject shown later in the Get-Error output
  • SessionStateCategory – A series of values that errors fall into; this is an enum underneath
  • TargetSite – A set of information that exposes some of the internal PowerShell engine values and where the error itself is coming from
  • StackTrace – The actual method signature of where the error came from, which can help explain why an error was shown
  • Message – The human-readable error message
  • Source – This is the source of where the error is coming from
  • HResult – As discussed above, the traditional numerical error code from Windows

 

TargetObject

The object that the function, cmdlet, or code targets, in this case D:\missing_dir
 
 

CategoryInfo

A concatenated view of several different properties, breaking down to the below format:
<Error>: (<TargetObject>:<ObjectType>) [<Originating CmdLet>], <Exception Type>
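For the missing-directory example used throughout this post, that works out to something like:

    ObjectNotFound: (D:\missing_dir:String) [Get-ChildItem], ItemNotFoundException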
 
 

FullyQualifiedErrorId

The FullyQualifiedErrorId is the Message property of the exception object combined with the fully-qualified name of the class where the exception originated.
 
 

InvocationInfo

  • MyCommand – The originating cmdlet or function throwing the error
  • ScriptLineNumber – The location within the file or ScriptBlock where the error was thrown
  • OffsetInLine – The location within the line where the error was thrown
  • HistoryId – The ID of the command in Get-History that threw the error
  • Line – The command throwing the error
  • PositionMessage – Combined information for the error
  • InvocationName – The cmdlet or function throwing the error
  • CommandOrigin – In what context the error was thrown

 

ScriptStackTrace

Contained here is information on where in a script the error occurred. In this case, the error occurred on line 1, but this will reflect the line of the error in the given ScriptBlock.
 
 

Conclusion

Unlike many other programming languages, PowerShell provides a very rich error object to figure out what went wrong and help debug troublesome code. With PowerShell 7, deciphering errors is even easier with the introduction of the Get-Error cmdlet. Furthermore, the ConciseView of the $ErrorView preference will keep the command line free from clutter and make coding even easier!
 
