Neon Knight

Security and Network Consulting

Originally Posted: 04 August 2019

I have always thought of Zero Trust (ZT) as a nonsensical term, as it is simply not possible to operate any IT infrastructure without some level of trust. It is interesting to note that Gartner have also recently referred to Zero Trust as 'misnamed' and have developed their own framework, CARTA (Continuous Adaptive Risk and Trust Assessment). The fact that Gartner have taken this position nonetheless endorses the view that Zero Trust, in principle, can bring very real benefits.

The intention of this post is to explain what Zero Trust means and the benefits it can provide, while also highlighting some of the practical considerations for those looking to implement a Zero Trust initiative.

The term Zero Trust has very much become a buzzword, even though many in the security industry struggle to articulate what it actually means. Zero Trust was initially conceived by John Kindervag during his time at Forrester Research (note that he is now at Palo Alto Networks, continuing to promote it there). While a number of perspectives on Zero Trust exist, including plenty of vendor marketing spin, fundamentally it's about ensuring that trust relationships aren't exploited. It is not about making an untrusted or high-risk environment trusted, or about achieving a trusted state; those are common misconceptions I hear. It's about avoiding trust as a failure point.

Kindervag has argued that the root cause of virtually all intrusions is some violation of a trust relationship. This can mean:

  • A miscreant gaining access to an openly accessible system due to absent or insufficient network segmentation
  • A legitimate user exceeding their intended authority, i.e. privilege or access abuse
  • An attacker pivoting from a compromised system to its neighbours, i.e. east-west attack propagation
  • Compromised account credentials being used to facilitate any of the above scenarios.

Hence Zero Trust proposes a framework that assumes no level of trust in the design process. There is certainly good logic in that fundamental assumption, although it will come at a cost.

In practice, implementing Zero Trust or CARTA requires very tight filtering and deep inspection of traffic flows, coupled with a strong user identity function. Much of a Zero Trust deployment is based on network segmentation, utilising network security technology such as Next-Generation Firewalls to deeply inspect and enforce traffic flows crossing trust boundaries. Establishing a trusted identity for both users and administrators, through techniques such as Multi-Factor Authentication (MFA), is also a vital element.
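
To make that concrete, here's a minimal sketch in Python (purely illustrative; the rule fields, zone names and policy contents are my own assumptions, not any vendor's syntax) of what an identity-aware, default-deny decision looks like: a flow only passes if an explicit rule matches the zones, the application and the user's group, and any MFA requirement is satisfied.

    # Minimal sketch of an identity-aware, default-deny policy check.
    # All names (Rule, Flow, the POLICY contents) are illustrative only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        src_zone: str      # zone the traffic originates from
        dst_zone: str      # zone being accessed
        application: str   # application identified by inspection, not just a port
        user_group: str    # identity group that is allowed
        require_mfa: bool  # whether a verified MFA session is required

    @dataclass(frozen=True)
    class Flow:
        src_zone: str
        dst_zone: str
        application: str
        user_group: str
        mfa_verified: bool

    POLICY = [
        Rule("corp-users", "finance-enclave", "sap-gui", "finance-staff", True),
        Rule("corp-users", "finance-enclave", "https", "finance-staff", True),
    ]

    def permit(flow: Flow) -> bool:
        """Default deny: a flow is allowed only if an explicit rule matches."""
        for rule in POLICY:
            if (rule.src_zone == flow.src_zone
                    and rule.dst_zone == flow.dst_zone
                    and rule.application == flow.application
                    and rule.user_group == flow.user_group
                    and (flow.mfa_verified or not rule.require_mfa)):
                return True
        return False

    print(permit(Flow("corp-users", "finance-enclave", "https", "finance-staff", True)))  # True
    print(permit(Flow("corp-users", "finance-enclave", "ftp", "finance-staff", True)))    # False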

It's here where the practical considerations start. I have seen many organisations struggling with their existing, usually dated segmentation models and the tightly associated firewall rule bases. In the majority of cases this infrastructure has been in place for well over a decade and has been expanded as the application infrastructure has grown organically (usually through several generations of administrators). Being polite, most have not grown well. In many cases, these messes are both complex and costly to operate, often sucking valuable funds from stretched security budgets. In most cases they are no longer an effective solution, often only there to provide a false sense of security or a compliance tick-box.

Today most corporate IT infrastructures are large and support multiple business-critical applications with highly complex transaction flows and dependencies. Understanding this type of environment is a non-trivial task, let alone re-engineering it against near-continuous uptime requirements.

So, what can be achieved? How can organisations proceed?

I'd suggest the place to begin is by identifying the organisation's ten (or so) most valuable information assets, or alternatively a set of potential candidates. Start small and don't get too ambitious too quickly. Then monitor the traffic flows to those applications and understand them. A number of technologies and/or techniques can be used, including an application-aware Next-Generation Firewall located in front of the candidate systems or applications. Monitoring tools, including those which ingest NetFlow, can also be highly effective. I would strongly recommend that whatever solution is used, it is also integrated with the organisation's identity system, be that Active Directory or another system such as Cisco's Identity Services Engine (ISE). Linking identity into the monitoring provides not just application visibility, but the 'context' of 'who' is accessing the application. For example, why is a Building Management System accessing the Finance System?
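
As a rough illustration of the kind of visibility this gives, the sketch below (Python again; the flow record format, addresses and identity mappings are assumptions rather than output from any particular NetFlow collector or from ISE) tags each flow seen hitting a candidate application with the identity behind the source address and flags anything outside the expected set.

    # Sketch: enrich observed flows to candidate applications with identity and
    # flag unexpected access. All names, addresses and data sources are assumed.
    from typing import Optional

    CANDIDATE_APPS = {"10.1.20.15": "finance-app"}        # asset inventory (assumed)
    IP_TO_IDENTITY = {"10.9.8.7": "bms-service-account"}  # from an AD/ISE export (assumed)
    EXPECTED_IDENTITIES = {"finance-app": {"finance-staff", "finance-batch"}}

    def review_flow(record: dict) -> Optional[str]:
        """Return a finding for flows to monitored applications from unexpected identities."""
        app = CANDIDATE_APPS.get(record["dst_ip"])
        if app is None:
            return None  # not one of the candidate assets
        who = IP_TO_IDENTITY.get(record["src_ip"], "unknown")
        if who not in EXPECTED_IDENTITIES.get(app, set()):
            return f"{who} ({record['src_ip']}) reached {app} on port {record['dst_port']}"
        return None

    # e.g. the Building Management System talking to the Finance application:
    print(review_flow({"src_ip": "10.9.8.7", "dst_ip": "10.1.20.15", "dst_port": 1433}))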

Based on what is discovered, and assuming it isn't too onerous (which may be a big assumption), the applications can then be located within their own security zone, such as a Secure Enclave, with tight application-level, identity-aware filtering constructed both in and out. This is the heart of Zero Trust. The objective is not just to control what gets in, but also to ensure that data exfiltration from the key assets can't occur. For example, how often do we hear of credit card databases being easily FTP'ed out of an organisation? ZT typically recommends that additional inspection, such as IPS, network anti-virus and zero-day malware interception, is deployed at trust boundaries.
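
Below is a hedged sketch of what 'tight filtering in and out' can mean for an enclave: both directions default to deny, and egress is just as explicit as ingress, so bulk FTP out of the enclave simply has no rule to match (the zone and application names are invented for illustration).

    # Sketch of a bidirectional enclave policy; zone and application names are illustrative.
    ENCLAVE_POLICY = {
        "cardholder-data-enclave": {
            "ingress": {("corp-users", "payment-api")},  # (peer zone, application) pairs
            "egress":  {("payment-gateway", "https")},   # nothing else leaves, so no FTP out
        },
    }

    def allowed(enclave: str, direction: str, peer_zone: str, application: str) -> bool:
        """Default deny in both directions; only explicitly listed flows pass."""
        return (peer_zone, application) in ENCLAVE_POLICY[enclave][direction]

    assert allowed("cardholder-data-enclave", "egress", "payment-gateway", "https")
    assert not allowed("cardholder-data-enclave", "egress", "internet", "ftp")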

So far, so good.

If you are fortunate enough to have only a small number of key assets to protect, then you can rinse and repeat this approach (which, for the purposes of this post, is fairly oversimplified). Tightly protecting your organisation's crown jewels is a valuable initiative; if this can be achieved, you're in a far better position.

However, most environments I see are not this fortunate, and it's here where things start to get a whole lot harder.

Firstly, if you are embarking on a larger-scale ZT initiative, then it is essential to invest in an analytics solution which can provide visibility into the traffic flows between all systems, applications and workloads at scale. Maybe some ultra-well-organised organisations have achieved this on a spreadsheet, but in my experience these are very few. For those that have, it requires significant human resource, is always error prone, and the data changes frequently.
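
Whatever analytics solution is chosen, the underlying job is the same: reduce an enormous volume of flow records into a per-pair dependency summary that a human can actually reason about. The small sketch below shows that reduction (the input record format is my own assumption, not any product's schema).

    # Sketch: collapse raw flow records into (source app, destination app, application)
    # counts, the raw material for zoning decisions. Record format is assumed.
    from collections import defaultdict

    def summarise(flow_records):
        """Aggregate (src_app, dst_app, application) -> number of observed flows."""
        summary = defaultdict(int)
        for rec in flow_records:
            summary[(rec["src_app"], rec["dst_app"], rec["application"])] += 1
        return summary

    flows = [
        {"src_app": "web-tier", "dst_app": "finance-db", "application": "mssql"},
        {"src_app": "web-tier", "dst_app": "finance-db", "application": "mssql"},
        {"src_app": "bms", "dst_app": "finance-db", "application": "smb"},
    ]
    for (src, dst, app), count in sorted(summarise(flows).items()):
        print(f"{src} -> {dst} [{app}]: {count} flows")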

OK, so above I recommended identifying critical applications and placing them into security zones with tight filtering at trust boundaries. For larger and more complex environments, the fundamental principle is the same; the big differences are (1) expanding it across many more applications and (2) doing so at scale. It's here that tooling becomes essential to identify application dependencies and subsequently create an optimal zone structure. In my experience this is a key area of any ZT design, as the complexity of the trust boundary filtering will depend on the quality of the zone structure. You want to get this part right.
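
One simple heuristic for a first-cut zone structure, offered purely as a sketch rather than a prescribed method, is to group applications whose mutual traffic exceeds a threshold into the same candidate zone, effectively taking connected components over the 'chatty' dependencies. Real tooling is considerably more nuanced, but the sketch shows why the quality of the dependency data drives the quality of the zones.

    # Sketch: derive candidate zones by grouping applications joined by strong
    # dependencies (connected components). Thresholds and names are illustrative.
    from collections import defaultdict

    def candidate_zones(dependencies, min_flows=100):
        """dependencies: dict of (src_app, dst_app) -> observed flow count."""
        graph = defaultdict(set)
        for (src, dst), count in dependencies.items():
            if count >= min_flows:  # only strong dependencies bind apps together
                graph[src].add(dst)
                graph[dst].add(src)
        seen, zones = set(), []
        for app in graph:
            if app in seen:
                continue
            zone, stack = set(), [app]
            while stack:            # depth-first walk of one component
                node = stack.pop()
                if node in seen:
                    continue
                seen.add(node)
                zone.add(node)
                stack.extend(graph[node] - seen)
            zones.append(zone)
        return zones

    deps = {("web-tier", "app-tier"): 8000, ("app-tier", "finance-db"): 5000,
            ("bms", "finance-db"): 3}
    print(candidate_zones(deps))  # the BMS's 3 stray flows do not pull it into the finance zone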

A key design complexity is that just because two systems are talking to each other does not mean the conversation is always benign. Techniques such as 'Living off the Land' attacks are designed to mimic legitimate conversations. You can't just blindly build a set of access controls on this basis; human sanity checking is needed. It is this exact problem which has made it so hard for Machine Learning algorithms to identify nefarious activity: in a mass of good conversations, what does the small amount of bad look like?

Anyhow, this post has grown larger than I intended. I would like to discuss the use of ML algorithms in solving the optimal zoning and scale issues I noted above; I'll leave that for Part Two.
