I was thinking back over the last few years in Cyber Security and was wondering just how many billions of dollars have gone into this domain. I’m not sure it’s even possible to accurately calculate the figure, but it’s a staggering sum. And guess what, we are still regularly seeing widespread damage from Ransomware as well as massive scale breaches in the news.

A short while back Andrew Penn, Telstra CEO, wrote a ‘must read’ article describing how Cybersecurity should be viewed and managed at a Board Level. In my previous post I referred to Andrew’s excellent article as a ‘Top-Down’ perspective. I am going to again try to complement his article with a further ‘Bottom-Up’ perspective. I made a number of suggestions in my previous blog post on this topic. Here I want to emphasise a few additional key points which, I believe, should be understood at an executive level.

Over the past decade I have observed some key trends. Chief among them has been the substantial increase in complexity in just about every aspect of IT, including security. This is not helped by the fact that organisations have to architect and deploy increasingly sophisticated infrastructure from an increasingly long list of individual elements and conflicting, overlapping technologies - akin to an airline having to build its own planes from individual components.

In many cases, these systems have grown organically, through numerous staff changes, against project deadlines, and often with the mindset of “just get it working”.

In a world of complexity, if robust architectural approaches are not followed you will end up with a network or Information System architecture which resembles a ‘Furball’ the cat coughed up. Put another way - a highly complex, interconnected and monolithic mess. Such systems are not reliable, maintainable or securable. Usually the inherent problems will first manifest themselves as security issues - just like chinks in a set of armour. New and pervasive technologies like Cloud and IoT integrations will only continue to add to the problem space.

I use the ‘Furball’ analogy as I want to highlight the need for well architected Information Systems and the consequence of not doing so. Unravelling a Furball is at best a very expensive proposition, at worst, a point of no return. This whole industry is in desperate need of standardised architectural approaches which can be applied to common business and organisational situations more universally, as opposed to today’s “roll your own” approach. But that, along with the need for Security Automation, is a topic for another post.   

Achieving solid architectures to facilitate today’s business needs requires people with strong technical skills. Or as Gilfoyle from HBO’s incredibly funny series Silicon Valley so eloquently puts it (amongst other things) “it takes talent and sweat” (Just google “Silicon Valley, what Gilfoyle does”). I use this example as I want to highlight the need for serious investment in in-house technical security expertise and the people who can provide it.

A lot is being written about the shortage of skilled security professionals and how bad the problem is. In many cases I see this excuse used as a cop out. We are only going to find our way out of this whole sad and sorry mess when organisations start seriously investing in that in-house technical security expertise. Not outsourcing the problem, or moving responsibility somewhere else. Accepting it and developing key skills in-house. Not just developing that expertise, but ensuring clear bidirectional communication lines exist between those domain experts and executive management. Executive management should at least conceptually understand the challenges being encountered at the coalface, and likewise the technical staff must align with business goals and business risk minimisation needs. While it might sound obvious, I rarely see it working well in practice. So, I put this out as a focus area.

I have heard statements like “we doubled our security budget last year”. That is good, but it’s a relative statement. Was the initial budget anywhere near adequate? It’s not just about allocating more budget. It’s about working knowledgeably to achieve that solid architecture and then efficiently operationalising security in a manner that acceptably minimises cyber risks to the organisation’s information assets.

I have said this before, and will say it again. Be careful from where you take advice, particularly external advice. Just because a company has set up a Cybersecurity practice and has people with fancy titles does not mean they know what they are doing. There are a lot of new entrants charging a lot of money to provide mediocre advice. If they stuff it up, then sure, you can fire them, but it’s a moot point if you get fired too. Hence, I again make the case for investing in and developing your own people.

The current hot, sexy topics in Cybersecurity are things like Next Gen technologies, Threat Hunting, AI, ML and the like. At the same time, virtually all of the major breaches can be attributed to not having the basics in place or a breakdown of what should have been a fundamental process. I’m not saying sophisticated attacks don’t happen as they absolutely do. But in most cases the attackers don’t need to use them as there are far easier options.

So how do you go about it? It is critical to start with the basics… and that part is not actually that hard and it doesn’t require elite level talent. There are many good sources of information. If there was one place to start, have a look at the Australian Signals Directorate (ASD) ‘Essential 8’ and “Strategies To Mitigate Cyber Security Incidents”, or the NIST 800 framework.  In larger organisations, building a community where people can leverage and help each other is a hugely powerful approach when supported from executive levels. Something I always encourage.

 

 

A short while back Andrew Penn, Telstra CEO, wrote a ‘must read’ article describing how Cybersecurity should be viewed and managed at a Board Level. Let’s call Andrew’s excellent article a ‘Top-Down’ perspective. I am going to try to complement his article with my own, more ‘Bottom-Up’, perspective.

In my experience, what are the key reasons for a Cybersecurity failure? What can a board, C-Level and senior management do to prevent a high-profile failure, or to improve the situation?

Firstly, any corporate security initiative must start with support from the top. Without this, security initiatives are doomed. And I’m not talking about throwing good money after bad at security initiatives which are not producing results. It starts with leading from the top and instilling the right culture in the organisation. This is critical. I remember John Chambers, CEO of Cisco, once saying, “responsibility for security starts with me”. On the flip side, I remember one client where it was a standing joke that everyone knew the CFO’s five-character password, and the fact that he forbade the implementation of minimum password length and complexity standards because “they were too hard to remember”. Needless to say, no one in that organisation took security seriously.

In many senior management circles, I have heard the question – What are our peers doing? I have heard it asked in Australia, New York and several Asian countries. While this is an interesting question, that’s about it. When everyone is wondering about everyone else, it’s a circular situation. It is critical to understand your own information assets, their value, and the business impact if they were compromised. I cannot emphasise this enough. With these questions understood, ensure your organisation plots its own path forward. There is a massive problem in the information security business called “Status Quo” – just executing against a checklist is not sufficient in today’s dynamic business environment and rapidly changing threat landscape.

The WannaCry outbreak on 12 May 2017 is a clear example of a Cybersecurity failure on a massive scale. Microsoft released a ‘Critical’ patch on 14 March 2017. Organisations had nearly two full months to remediate the underlying vulnerability. What we saw instead was a huge number of systems, many performing critical functions, left exposed. Why? This type of event was not new!

To stay on top of Information and Cyber Security today, an adaptable, agile and innovative culture is required. Security is about People, Process and Technology, and it’s an organisation’s culture which underpins all three (more on these topics shortly). This culture must be established, driven and supported from the top. Yep, that’s probably a big ask; if so, just focus on getting it right in your security teams.

This leads us onto ‘People’ –  Getting the most from your people is probably one of the hardest tasks. However, a team staffed with skilled, proactive and innovative people, plugged into the external communities, can be invaluable.

Having spoken to a vast number of people in various capacities over the years, in my experience the above situation is uncommon (apart from large organisations that have dedicated teams for this purpose). Certainly I have seen some very clued-up groups, which is fantastic. More commonly, people understand the issues and risks but are resource constrained, making it difficult to act. Unfortunately, I have also seen many people in positions of responsibility who want to ‘put their heads in the sand’ or are downright wilfully negligent. Often this is because “it’s just too hard” or because dealing with the reality doesn’t align with their political agenda. These attitudes can be hugely dangerous.

Senior management and boards should actively enquire about the organisation’s Threat and Risk Management programs. In particular, how they identify and respond to Cybersecurity threats. The program should consider the company’s crown jewels and the business outcomes it wishes to avoid. When major system changes are made, or new ones commissioned, senior management should insist on a risk assessment and appropriate testing. For larger or high-profile projects, an outside organisation should be engaged to perform these assessments.

Reporting and metrics – In my experience, there is often a huge communication gap between the usually technical people at the coal face and business-oriented senior management. Bridging this gap can be difficult. However, good security metrics can provide a helpful mechanism. Appropriate metrics should be produced by the security teams or departments to give senior management and boards a picture of the effectiveness of the organisation’s security programs.

For example, a solid metrics approach could have articulated the number of critical systems missing critical patches ahead of the WannaCry outbreak. For many organisations, this one metric would have been a very loud alarm bell!

In security when nothing happens, it’s a good result. But being able to differentiate good luck from good management is key.

Process – In security, solid process is essential. But those processes need to be kept current and adapted as changes occur. Having an organisation full of people who blindly follow an out-of-date process is not a recipe for success.

Technology – I would make two points. First, it is essential that adequate funding is available to ensure current security technology is deployed. When an organisation makes an investment in a security technology, it is imperative that it is properly deployed and the intended outcome is achieved. I have seen plenty of organisations make sizable security technology investments which were either improperly deployed or not adequately leveraged. Secondly, in a fast-changing landscape, the solution to many security problems may be a new technology. It is important to monitor technology developments and make discretionary budget available to purchase a new technology if it can solve a problem or lower a risk.

In recent times there has been a trend of outsourcing IT problems. In other words, taking a hard problem and, to quote The Hitchhiker’s Guide to the Galaxy, making it “someone else’s problem”. Some BYOD and Cloud initiatives fall into this category. My perspective – if you can find areas of IT that are sufficiently commoditised and can be cost effectively outsourced, then go for it. But with that said, there are areas of IT, like protecting your crown jewels, that are high skill and require appropriate people on staff. I would advise against attempting to outsource these areas and would strongly recommend developing and supporting in-house capabilities. Once you lose key talent, it does not come back in a big hurry.

From a budgeting perspective, when applications or new systems are rolled out, the full lifecycle cost should be understood up front, including the cost of a secure initial deployment and the ongoing operational costs. Do not allow the security elements to be unfunded and allow the operational costs to fall onto some other department. Usually this means they get ignored.

Finally, be careful who you take advice from. There is no qualification or certification for a Cybersecurity professional (if we draw a comparison to a Chartered Engineer for example). There are plenty of people touting job titles of ‘Cybersecurity Consultant’ who have only recently entered this domain and have minimal experience.


 

In the last few years Cybersecurity has become a hot domain and as a result there has been a large influx of new people into the field. It is relatively easy to construct a Cybersecurity strategy. There are a significant number of places from which this type of material can be drawn and adapted to individual scenarios. I have seen a number of these strategies produced, of varying quality.

 

While a solid strategy is important, the far harder part of the problem is developing an ‘executable strategy’ and then implementing it. To achieve an effective execution and outcome, a deep understanding of the domain and its nuances is critical. Put another way -

 

‘What you want to achieve’ and ‘How you achieve it’ are two very different things!

 

I recently came across the Four Disciplines of Execution (Franklin Covey), also known as 4DX. I could immediately see how aspects of this approach could be applied to the execution of a security strategy. While there are four disciplines, it is the first two that can be most easily adapted to this domain, with the last two focusing on Accountability and the Leverage which can be gained from the preceding disciplines. I’ll discuss just the first two.

 

 

Focus on the Vitally Important (High Impact)

 

Cybersecurity and Information Security are complex fields. There are many specialised aspects, both technical and operational. While just about every technical security control or operational process will provide some benefit, not all will provide the same impact or are appropriate for all risk profiles. The key here is not just following the status quo. It’s about identifying the organisation’s most significant risks and applying the strategy and security controls which will provide the highest impact. In other words, what colour is your risk?

 

There are technologies which can provide the defender a huge advantage over the attacker. Cryptography is an example of one such technology. Although it is now commonplace, it is a technology which probably provides a million-to-one leverage in favour of the defender. I’m not suggesting this is a silver bullet, just that these sorts of 'force multiplying' technologies can move the odds in favour of the defender…. a lot!

 

 

Measurement and Metrics

 

Understanding both Leading and Effectiveness metrics is a key part of the 4DX strategy. 

 

Given today’s profile and media coverage of Cyber attacks, it amazes me how many organisations have no security visibility…. and this includes some large ones. To be able to understand your security posture, and get any sort of feedback on the effectiveness of a security strategy, you must have some level of security visibility. Unfortunately, it is commonplace for breach detection times to be in the months or years, or for breaches never to be detected at all. The sad part is that in most cases, evidence of those breaches is hiding in plain sight.

 

Measurement is always a key part of managing anything. If you have no ability to measure, then any form of ongoing improvement is difficult. The 4DX strategy has a focus on Leading Metrics. This is not to say that final results are not important, they are, but a focus on Leading Metrics enables a clear path to that end result through progressive improvement and demonstrates progress towards a goal. Having measures and metrics provides an ability to have conversations at the C-Level in ‘their language’, which in turn can yield better funding for security initiatives.

 

A path to success will vary based on many organisation-specific parameters such as the nature of the business, the information assets, the application architecture, risk profile, current maturity levels, etc. So measures and metrics should be crafted on a case-by-case basis.

 

Goal, Question, Metric (GQM) is a methodology originally developed back in the 70s for quantifying software quality. More recently, Carnegie Mellon University has updated this process to GQIM - Goal, Question, Indicator, Metric. These methodologies provide a repeatable process for developing effective metrics, including those used within Cybersecurity.

 

In a low maturity organisation, I would firstly recommend driving initiatives which establish, or improve, visibility capability. This may include monitoring parameters like password resets, privileged user account usage, IDS/IPS alerts and their severity, and blocked connections through firewalls.

 

Some potential leading measures or metrics focused around general network hygiene could be (a rough sketch of how a few of these might be computed follows the list):

  • Number of machines which are below current OS patch level.
  • Number of machines which are below current application patch levels.
  • Number of machines with critical vulnerabilities.
  • Number of machines which are generally out-of-compliance.
  • Number of users with unneeded administration privileges.
  • Usage of current and secure protocols - TLS, SSH, LDAPS, valid and strong certificates, etc.
  • Usage of risky applications - e.g. peer-to-peer file sharing, etc.
  • Number of users who have not completed security awareness training.
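
As a rough illustration only, here is how a few of these hygiene metrics might be pulled together from an asset inventory export. The file name, column names and ‘current’ build value are all assumptions for this sketch, not a standard schema.

```python
# Minimal sketch: compute a few hygiene metrics from a hypothetical asset
# inventory export. File name, column names and values are assumptions.
import pandas as pd

CURRENT_OS_BUILD = "10.0.19045"                  # assumed "current" OS build

inventory = pd.read_csv("asset_inventory.csv")   # hypothetical export

metrics = {
    "machines_below_os_patch_level":
        int((inventory["os_build"] != CURRENT_OS_BUILD).sum()),
    "machines_with_critical_vulnerabilities":
        int((inventory["critical_vuln_count"] > 0).sum()),
    "machines_out_of_compliance":
        int((inventory["compliance_status"] != "compliant").sum()),
    "users_with_admin_privileges":
        int((inventory["has_local_admin"] == 1).sum()),
}

for name, value in metrics.items():
    print(f"{name}: {value}")
```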

 

Improving these fundamentals will almost certainly lead to an improvement in the overall security posture, which in turn will likely result in improvements in effectiveness metrics.

 

If we look at operational security metrics, it’s all about time: finding breaches quickly, then responding and containing. As such, the following are key metrics which are now commonly used in more mature operational environments (a small sketch of how they can be derived from incident records follows the list):

  • Mean time to Detection
  • Mean time to Verify
  • Mean time to Containment
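
To make these concrete, here is a minimal sketch of deriving them from incident records, assuming each incident carries compromise, detection, verification and containment timestamps (the record structure is an assumption for illustration).

```python
# Sketch: mean time to detection / verification / containment from
# hypothetical incident records. Field names are assumptions.
from datetime import datetime
from statistics import mean

incidents = [
    {"compromised": "2024-03-01T02:00", "detected": "2024-03-03T09:30",
     "verified": "2024-03-03T11:00", "contained": "2024-03-03T18:45"},
    {"compromised": "2024-04-10T14:00", "detected": "2024-04-10T16:10",
     "verified": "2024-04-10T17:00", "contained": "2024-04-11T08:00"},
]

def hours_between(start, end):
    """Elapsed hours between two ISO-style timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

mttd = mean(hours_between(i["compromised"], i["detected"]) for i in incidents)
mttv = mean(hours_between(i["detected"], i["verified"]) for i in incidents)
mttc = mean(hours_between(i["verified"], i["contained"]) for i in incidents)

print(f"Mean time to Detection:   {mttd:.1f} h")
print(f"Mean time to Verify:      {mttv:.1f} h")
print(f"Mean time to Containment: {mttc:.1f} h")
```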

 

Continuing on, metrics such as ‘Botnet and Malware infections per employee’ provide a high-level measure of overall effectiveness. Metrics such as ‘Average cost per breach’ can quantify operational maturity in financial terms, as we know lower maturity organisations have exponentially higher costs than more mature ones, usually due to the need for emergency responses when things go bad.

 

Unfortunately, security is often only measured when nothing happens, and that can make justification and execution difficult. By utilising these techniques hopefully we can make it a more winnable game.

 

 

The concept of Security Zoning, also known as Segmentation, is one of the most important architectural foundations within modern network security design. Security Zoning was first introduced back in the mid 90s when firewalls started to hit the market. In those days, firewalls were usually deployed at the Internet perimeter and the deployment principles were fairly simple (Outside, Inside and DMZ).

 

Over the last 20 years, the pervasiveness of security zoning has increased significantly, moving from its original use at the perimeter to common use inside the organisation, such as within data centres, cloud infrastructure, or controlling access to high value assets. Unfortunately, many zoned architecture deployments are driven by the goal of meeting compliance requirements rather than by the goal of being a maximally effective security control.

 

The intention of this post is to show a new way of thinking about the security zoning design approach in an era of Big Data and Data Science. Security is a field that has many amazing and large data sets just waiting to be analysed.

 

Over the last decade we have seen huge growth in network size, speed, connectedness and application mix. Application architectures have both grown and become more mission critical at the same time. In response, the complexity of network security architectures, i.e. firewalls and the associated rule sets, has increased exponentially. Today, many deployments have become unmanageable. Either the operational costs have blown out or organisations have simply given up trying to engineer an effective implementation. I still see many organisations that try to manage their firewall rule sets in a spreadsheet. In most cases, this approach (IMHO) just does not work effectively any more.

 

If we had to boil the problem down, we are dealing with a 'management of complexity' issue. This is a problem which is ripe for the application of Big Data tools, Data Science and Machine Learning principles.

 

Big Data tools are able to ingest massive data sets and process those sets to uncover common sets of characteristics. Let's look at just two key potential data sources which could be leveraged to improve the design approach:

  • Endpoint information - A fingerprint of the endpoint to determine its open port and application profile and hence its potential role.
  • Network flow data - Conversations both within and external to the organisation. In other words, who talks to who, how much, and with which applications.

 

To obtain endpoint information, Nmap is a popular, though often hard to interpret, port scanning tool. Nmap can scan large IP address ranges and gather data on the targets, for example open ports, services running on open ports, versions of the service, etc. Feature extraction is a key part of an unsupervised machine learning process, and each of these can be considered a ‘feature’, with each endpoint having a value for each feature. For example, an endpoint with port 80 open, acting as a web server and running Apache.
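
As a hedged sketch of what that feature extraction might look like in practice, the snippet below parses Nmap XML output (e.g. from `nmap -sV -oX scan.xml 10.0.0.0/24`) into a simple per-endpoint feature dictionary. The chosen features and file name are illustrative only.

```python
# Sketch: turn Nmap XML output into per-endpoint feature vectors.
# Assumes a scan was run with:  nmap -sV -oX scan.xml 10.0.0.0/24
import xml.etree.ElementTree as ET

def extract_features(xml_path):
    """Return {ip: {"port_80": 1, "svc_http": 1, ...}} for each scanned host."""
    features = {}
    root = ET.parse(xml_path).getroot()
    for host in root.findall("host"):
        addr = host.find("address")
        if addr is None:
            continue
        ip = addr.get("addr")
        feats = {}
        for port in host.findall("./ports/port"):
            state = port.find("state")
            if state is None or state.get("state") != "open":
                continue
            feats[f"port_{port.get('portid')}"] = 1
            service = port.find("service")
            if service is not None and service.get("name"):
                feats[f"svc_{service.get('name')}"] = 1  # e.g. svc_http for a web server
        features[ip] = feats
    return features

if __name__ == "__main__":
    for ip, feats in extract_features("scan.xml").items():
        print(ip, feats)
```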

 

Machine Learning techniques can be used to process the large data sets which would be produced by an enterprise wide scan. Groups of endpoints with common, or closely matching feature value sets can be ‘clustered’ using one of a number of machine learning algorithms. In this case, clusters are distinct groups of samples (IP addresses) which have been grouped together. Different algorithms with different configurations group these samples in different ways with K-Means being one of the most commonly used algorithms.

 

Entry into the domain does not require a deep mathematical understanding (although it helps). Python based machine learning tool kits like Scikit-Learn provide an easy entry point.
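
Building on the feature-extraction sketch above, the snippet below shows roughly how those feature dictionaries could be clustered with Scikit-Learn. The choice of K-Means and the number of clusters are assumptions an analyst would need to tune.

```python
# Sketch: cluster endpoints by their open-port / service profile.
# Builds on the extract_features() helper sketched earlier.
from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import KMeans

features = extract_features("scan.xml")            # {ip: {feature: 1, ...}}
ips = list(features.keys())

vec = DictVectorizer(sparse=False)                 # turn dicts into a numeric matrix
X = vec.fit_transform([features[ip] for ip in ips])

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)   # 5 clusters is a guess
labels = kmeans.fit_predict(X)

for cluster_id in sorted(set(labels)):
    members = [ip for ip, label in zip(ips, labels) if label == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} endpoints, e.g. {members[:5]}")
```

In practice the interesting work is interpreting the clusters - for example, a cluster of endpoints all exposing 80/443 and running Apache is a strong candidate for a ‘web tier’ zone.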

 

Flow information can be output by many vendors' networking equipment, through probes, taps and host based agents. There are a number of tools which can ingest network flow information and place it in a NoSQL data store such as MongoDB, or in a columnar format such as Parquet.

 

With flow information providing detailed information on conversations, Graph Databases like Neo4j can be used to construct a relational map. That is, the relationships which exist between different endpoints on the network. Graph Databases can enable this capability in much the same way social media networks like LinkedIn and Facebook show relationships between people.
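
As a hedged sketch of that idea, the snippet below loads a couple of flow records into Neo4j using the official Python driver. The connection details and flow-record fields are assumptions for illustration, not a reference design.

```python
# Sketch: build a "who talks to whom" graph from flow records in Neo4j.
# Connection details and flow-record fields are illustrative assumptions.
from neo4j import GraphDatabase

flows = [
    {"src": "10.0.1.15", "dst": "10.0.2.8", "dst_port": 443, "bytes": 120394},
    {"src": "10.0.1.15", "dst": "10.0.2.9", "dst_port": 1433, "bytes": 98231},
]

CYPHER = """
MERGE (a:Host {ip: $src})
MERGE (b:Host {ip: $dst})
MERGE (a)-[r:TALKS_TO {port: $dst_port}]->(b)
ON CREATE SET r.bytes = $bytes
ON MATCH  SET r.bytes = r.bytes + $bytes
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for flow in flows:
        session.run(CYPHER, **flow)   # each flow becomes/updates a relationship
driver.close()
```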

 

Today, a variety of visualisation tools are available to see this information in a human friendly display format.

 

The real power will emerge when the two sources are combined. Understanding the function of the endpoints, combined with information about their relationships with other endpoints will be a very powerful capability in the design process.

 

I'm not suggesting this is the only answer, as many other potential data sources exist. Additionally, I’ll admit I have probably oversimplified the situation. However, my point is that by utilising just these two data sources, coupled with some now commonly available Data Science tools, a new and far more effective security zoning design approach can be created. My key goal is to hopefully spawn some new thinking, discussion and projects in this direction.

 

The present security state of many networks is a pretty sad situation. We are regularly seeing breach discovery times in excess of 200 days, with the discoveries often made by external parties. Those figures are based on the breaches that we know about. I would suggest those figures are just the tip of the iceberg.

This is a dreadful situation which simply says that many organisations either DO NOT HAVE sufficient ‘visibility’ into their internal infrastructure, or are not able to effectively process, correlate or analyse the data which does exist.

There are many people in the industry openly stating that the attackers have the advantage. I would not try and argue this point, but there is a lot that can be done. If we view security technologies from a Force Multiplier perspective, there are some technologies which provide only a marginal benefit (compliance activities perhaps.. IMHO), while others provide a very significant advantage to the defender.  

I believe that Security Analytics has the potential to have a profound effect on the security business and provide defenders a very significant advantage. Effective analytics providing detection capability should enable a reduction in those statistics from hundreds of days to hours or minutes.

In the last few years we have seen an explosion in Big Data technology with many Open Source tools now being freely available. The scene is young and changing rapidly. But there are many opportunities for people in Security roles to gain exposure to these technologies. While some investment is required, it is possible to enter this domain at low cost.

At present Security Analytics tools are in their infancy. There are a lot of security companies using the buzzwords of Data Science, Machine Learning (ML) and Artificial Intelligence (AI), with very little to no detail on how they are being used or what capabilities are achieved. In reality most are just performing Correlation and basic statistics. With that said, those activities in themselves are very worthwhile. Coupled with some good visualisations, there is a lot of value in doing just those two things.

To lift the hood on some of the terms used in the Security Analytics domain (a small anomaly-detection sketch follows the list):

  • Statistics – describing and summarising data numerically.
  • Data Mining - discovering and explaining patterns in large data sets.
  • Anomaly detection - detecting what is outside of normal.
  • Machine Learning – learning from and making predictions on data through the use of models.
  • Supervised Machine Learning – The initial input data (or training data) has a known label (or result) which can be learned. The model then learns from the training data until a defined level of error is achieved.
  • Unsupervised Machine Learning – The input data is not labelled and the model is prepared by deducing structures present in the input data.
  • Artificial Intelligence -  automatic ways of reasoning and reaching a conclusion by computers.
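
To make the distinction between basic statistics and anomaly detection concrete, here is a deliberately trivial sketch that flags days with unusually high failed-login counts using a z-score. The data and threshold are made up for illustration.

```python
# Sketch: trivial anomaly detection on daily failed-login counts
# using a z-score threshold. Data and threshold are illustrative.
from statistics import mean, stdev

daily_failed_logins = [42, 38, 51, 45, 40, 39, 410, 44, 47, 43]  # one obvious spike

mu = mean(daily_failed_logins)
sigma = stdev(daily_failed_logins)

for day, count in enumerate(daily_failed_logins, start=1):
    z = (count - mu) / sigma
    if z > 2.0:                 # flag anything more than 2 std devs above the mean
        print(f"day {day}: {count} failed logins looks anomalous (z={z:.1f})")
```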

Mathematical skills in Probability and Statistics, including Bayesian Models, as well as Linear Algebra are heavily used in these domains.

Today there are an increasing number of security data and telemetry sources available for analysis. These include various security logs from hosts, servers and network security devices such as firewalls, IDS/IPS alerts, flow information, packet captures, threat and intelligence feeds, etc. As network speeds and complexity have increased, so has the volume of the data. While there is a vast amount of security data available, identifying threats or intrusions within this data can still be a huge challenge.

From my recent research into this space, I can conclude Security Analytics is a hard and complex problem, with the necessary algorithms being literally rocket science. To build any sort of Security Analytics toolsets, it is essential that detailed security domain knowledge be coupled with a knowledge of Big Data and Data Science technologies. There are currently very few people who possess both skill sets, so forming small teams will be essential. While this is a big and somewhat complex field, this fact should not put people off starting. Like any new technology, there will be a learning curve.

Suggestions going forward - I always like to provide some actionable recommendations out of any discussion.

Before you can analyse the data, you need to have the data and easy access to it.

 

Establishing a Security Data Lake.

To address the storage of security data, some organisations are now creating a centralised repository known as a Security Data Lake. This should not be seen as an exercise in replacing SIEM technology, but as an augmentation to these systems. On this topic, I would refer people to an excellent free O’Reilly publication by Raffael Marty, located at:

http://www.oreilly.com/data/free/security-data-lake.csp

Data Lakes are often Hadoop clusters or some other NoSQL database, many of which are now freely available. Establishment of a Security Data Lake should be a starting point.
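
As a very small-scale sketch of the ‘land the data somewhere queryable’ step, the snippet below normalises a couple of hypothetical events and writes them to partitioned Parquet files with pandas and pyarrow. The schema is an assumption, not a standard.

```python
# Sketch: write normalised security events to a partitioned Parquet store,
# a very small-scale stand-in for a security data lake. Schema is assumed.
import pandas as pd

events = pd.DataFrame([
    {"ts": "2024-05-01T10:02:11", "source": "firewall", "src_ip": "203.0.113.7",
     "dst_ip": "10.0.1.20", "action": "deny"},
    {"ts": "2024-05-01T10:02:15", "source": "ids", "src_ip": "203.0.113.7",
     "dst_ip": "10.0.1.20", "action": "alert"},
])
events["ts"] = pd.to_datetime(events["ts"])
events["date"] = events["ts"].dt.date.astype(str)

# Partitioning by source and day keeps later queries (e.g. from Spark) cheap.
events.to_parquet("security_lake/", engine="pyarrow",
                  partition_cols=["source", "date"])
```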

 

Look to closely monitor your ten to twenty most critical servers.

There needs to be a starting point, and monitoring a set of key servers is an excellent and practical one. There are many statistics that can be monitored – root/admin logons, user usage statistics, password resets, user source addresses, port usage statistics, packet size distribution, and many others. Start by visualising this data and use it as an operational tool. Security Analytics will mature over time; getting started now provides operational experience that will only grow.
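
As a hedged example of turning one of those statistics into something countable, the snippet below tallies failed SSH logins per source address from a Linux auth log. The log location and message format are assumptions and vary by distribution.

```python
# Sketch: count failed SSH logins per source IP from a Linux auth log.
# Log location and message format vary by distribution (assumptions here).
import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

counts = Counter()
with open("/var/log/auth.log", errors="ignore") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1

for src_ip, n in counts.most_common(10):
    print(f"{src_ip}: {n} failed logins")
```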

Apache Metron ( http://metron.incubator.apache.org ) and PNDA ( http://pnda.io ) are two Open Source projects which could potentially be a starting point for your organisation. Both are worth a serious look.