Written by Anthony Kirkham | Category: Blog | Published: 09 September 2018
I have not written on this topic lately and thought it time to do an update. People may remember a couple of years ago I was very excited by the prospect of utilising Machine Learning (ML) and Big Data Analytics in solving security problems. While there are a number of Use Cases successfully using ML, solving many other security problems with machine learning is turning out to be very hard. I’ll come back to that part later, but let me start by providing an overview of what I’m seeing in the market and this technology domain.
My first observation is that we currently appear to be at ‘buzzword saturation’, particularly around the topic of Artificial Intelligence (AI) applied to security. I am seeing a lot of people, vendor marketing teams in particular, using the term AI very liberally. If we consider the Encyclopaedia Britannica definition - “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” - then I don’t believe any true AI security product exists today.
With that said, there have been some very significant advances using a number of related technologies in certain security applications. When vendors talk about using AI, in most cases it likely means they are using some form of ML or statistical analysis... and, done right, that can still be incredibly useful. Couple that with the fact that there are many freely available ML toolkits, including TensorFlow, Keras, PyTorch and scikit-learn just to name a few, and accessing the technology is not difficult.
The biggest mindset shift which has occurred in the security domain in the last 5 years is the acceptance that a purely preventative strategy is insufficient given the sophistication of many attacks. A preventative strategy needs to be complemented with a detection and response capability. It is here that these technologies can play an important role.
However, the difficulty with ML in many security applications is its reliance on large amounts of labelled data for the algorithms to 'learn'. For many applications, that labelled data doesn't currently exist on the scale that is required. While it has been used successfully in some areas, it is still very early days for most security application areas.
So, what are the key Use Cases?
The two most prominent uses of ML techniques are in Malware Classification and Spam Detection. Both have successfully utilised supervised ML because, in both cases, very large amounts of labelled training data have been available. By that I mean a human has previously classified the samples, a bit like an image recognition system is trained by feeding it a huge number of pictures of animals with the correct names attached as labels. In the case of malware classification, ML has worked very well because most new malware is an adaptation of some previous or current malware family, so the common attributes can be detected using ML approaches. Spam detection works on a similar principle.
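To make the supervised idea concrete, here is a minimal sketch using scikit-learn: a tiny, purely illustrative set of labelled messages trains a Naive Bayes spam classifier. Real systems obviously train on millions of labelled samples and far richer features.

```python
# A minimal sketch of supervised text classification with scikit-learn.
# The tiny inline dataset is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize, click this link now",      # spam
    "Cheap meds, limited time offer",             # spam
    "Meeting moved to 3pm, see agenda attached",  # ham
    "Can you review the quarterly report?",       # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Vectorise the text and fit a Naive Bayes classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Click now to claim your free prize"]))  # -> ['spam']
```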
There is a lot of promising work occurring in the area known as ‘automating the Level-One analyst’. A notable project in this area is the AI^2 project developed at MIT. For most organisations, the sheer volume of security log messages today is beyond what any human can process. The AI^2 system processes log data looking for anomalies and uses the input of human analysts to train the system. As more training data is fed into the system, its operation is increasingly fine-tuned to identify legitimate security events. While the system currently achieves only about 85% accuracy, it can still be highly effective in distilling mountains of data into more useful events that an analyst can investigate. The key element, however, is the need for the human analysts to train the system. So, don’t expect this or other systems to extrapolate new conclusions without being explicitly trained.
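The general analyst-in-the-loop pattern (a rough sketch of the idea, not the actual AI^2 implementation) looks something like the following, with random data standing in for log-derived feature vectors:

```python
# Sketch of analyst-in-the-loop triage: an unsupervised model surfaces the
# most anomalous events, an analyst labels them, and a supervised model is
# retrained on the accumulated labels. Synthetic data for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
events = rng.normal(size=(5000, 10))   # placeholder log-derived feature vectors

# 1. Unsupervised pass: rank events by anomaly score (lowest = most anomalous).
iso = IsolationForest(random_state=0).fit(events)
most_anomalous = np.argsort(iso.score_samples(events))[:20]

# 2. Analyst reviews the top 20 and labels them (simulated here).
analyst_labels = rng.integers(0, 2, size=20)   # 1 = real incident, 0 = benign

# 3. Supervised pass: labelled examples train a classifier that improves
#    as more analyst feedback accumulates over time.
clf = RandomForestClassifier(random_state=0)
clf.fit(events[most_anomalous], analyst_labels)
```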
Then there are unsupervised ML techniques. Most commonly this means Clustering. With unsupervised learning there is no knowledge of the categories of the data or even if it can be classified. While it is very successfully used in some specific toolsets such as DNS record analysis and processing Threat Feeds, at present I have not seen any high-impact solution purely based on unsupervised techniques. Going forward, I believe unsupervised ML will play an important role, but as a part of a larger system or combined with other ML techniques.
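For illustration, a minimal unsupervised clustering sketch might look like the following; the random feature matrix is a placeholder for whatever is actually being analysed (for example, per-domain features derived from DNS records):

```python
# A minimal sketch of unsupervised clustering: no labels are supplied,
# the algorithm simply groups similar samples.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
features = rng.normal(size=(1000, 6))   # hypothetical per-domain feature vectors

X = StandardScaler().fit_transform(features)
labels = DBSCAN(eps=0.9, min_samples=10).fit_predict(X)

# Label -1 marks samples DBSCAN could not place in any cluster; these
# outliers are often the most interesting items to investigate.
print(sorted(set(labels)))
```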
Neural Networks are a hot topic. These are systems designed to mimic the operation of the human brain. They are being used in many applications today, most notably in Image Recognition, Speech Recognition and Natural Language Processing. To perform accurately, these systems require labelled training data, often in huge quantities. Again, I have not seen any significant applications of Neural Networks specific to security at this point in time.
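As a rough illustration of what “labelled training data” means in practice, here is a minimal Keras sketch; the random features and labels are stand-ins for a real labelled dataset:

```python
# A minimal neural network classifier in Keras, shown only to illustrate
# that these models are trained on labelled examples. Random placeholder data.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20).astype("float32")   # feature vectors
y = np.random.randint(0, 2, size=(1000,))        # labels (e.g. malicious / benign)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
```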
A key personal interest area in this field is analysing network-based NetFlow data records to detect attacks. NetFlow is the networking equivalent of a Call Detail Record in the telephony world: you know who spoke to whom and for how long, but not the contents of each call. The approach is highly scalable, but detection generally depends on first learning a ‘known good’ network traffic profile, and that is much harder than it appears on the surface for many reasons. A key one is that virtually any network of any size will have something bad or anomalous happening at any point in time. Without this known-good baseline, identifying anything bad is very difficult.
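A simplified baselining sketch, assuming a generic flow export with source IP, byte count and timestamp columns (real NetFlow/IPFIX schemas vary by exporter), might look like this:

```python
# A simplified sketch of per-host volume baselining with pandas. Column and
# file names are assumptions about a generic flow export.
import pandas as pd

flows = pd.read_csv("flows.csv", parse_dates=["timestamp"])   # hypothetical export

# Aggregate bytes sent per source host per hour.
hourly = (flows.set_index("timestamp")
               .groupby("src_ip")["bytes"]
               .resample("1h").sum()
               .reset_index())

# Per-host baseline: mean and standard deviation of hourly volume.
baseline = hourly.groupby("src_ip")["bytes"].agg(["mean", "std"]).reset_index()
merged = hourly.merge(baseline, on="src_ip")
merged["zscore"] = (merged["bytes"] - merged["mean"]) / merged["std"]

# Hosts far above their own baseline are candidates for investigation, bearing
# in mind the caveat above: the "baseline" may already contain bad behaviour.
print(merged[merged["zscore"] > 3].sort_values("zscore", ascending=False).head())
```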
The action is not just happening with the good guys. We are starting to see evidence of ML-based tools being embedded in malware with the objective of maximising its impact. In the last month, we saw proof-of-concept code called ‘DeepLocker’ (as in Deep Learning) demonstrated at Black Hat USA. The code spies on the user and learns their behaviour, allowing the ransomware payload to be triggered by any of a variety of learned conditions. If this is a taste of what’s to come, the security community needs to prepare to face a new level of ML-powered attacks.
Where will it all go? Today, experts suggest that any task which can easily be performed by a human in about one second is a candidate for automation through AI techniques. There is a lot more to come in this space, I believe.
In conclusion, don’t expect AI to come to the rescue for a while yet. Human experts are essential to lead security operations and security projects. Given the current skills shortage, an investment in developing key people into, or maintaining them as, experts is an initiative that every business should take very seriously. Look after these people and complement them with an investment in these newer technologies which can make their job easier. For the foreseeable future, experts on staff are still today’s most vital asset.
Written by Anthony Kirkham | Category: Blog | Published: 10 January 2018
I was thinking back over the last few years in Cyber Security and was wondering just how many billions of dollars have gone into this domain. I’m not sure it’s even possible to accurately calculate the figure, but it’s a staggering sum. And guess what, we are still regularly seeing widespread damage from Ransomware as well as massive scale breaches in the news.
A short while back Andrew Penn, Telstra CEO, wrote a ‘must read’ article describing how Cybersecurity should be viewed and managed at a Board level. In my previous post I referred to Andrew’s excellent article as a ‘Top-Down’ perspective. I am again going to try to complement his article with a further ‘Bottom-Up’ perspective. I made a number of suggestions in my previous blog post on this topic. Here I want to emphasise a few additional key points which, I believe, should be understood at an executive level.
Over the past decade I have observed some key trends. A key one has been the substantial increase in complexity in just about every aspect of IT, including security. This is not helped by the fact that organisations have to architect and deploy increasingly sophisticated infrastructure with an increasingly long list of individual elements and conflicting, overlapping technologies - akin to an airline having to build its own planes from individual components.
In many cases, these systems have grown in an organic manner, through numerous staff changes, against project deadlines, and often with the mindset of “just get it working”.
In a world of complexity, if robust architectural approaches are not followed you will end up with a network or Information System architecture which resembles a ‘Furball’ the cat coughed up. Put another way - a highly complex, interconnected and monolithic mess. Such systems are not reliable, maintainable or securable. Usually the inherent problems will first manifest themselves as security issues, just like chinks in a set of armour. New and pervasive technologies like Cloud and IoT integrations will only continue to add to the problem space.
I use the ‘Furball’ analogy as I want to highlight the need for well architected Information Systems and the consequence of not doing so. Unravelling a Furball is at best a very expensive proposition, at worst, a point of no return. This whole industry is in desperate need of standardised architectural approaches which can be applied to common business and organisational situations more universally, as opposed to today’s “roll your own” approach. But that, along with the need for Security Automation, is a topic for another post.
Achieving solid architectures to facilitate today’s business needs requires people with strong technical skills. Or as Gilfoyle from HBO’s incredibly funny series Silicon Valley so eloquently puts it (amongst other things) “it takes talent and sweat” (Just google “Silicon Valley, what Gilfoyle does”). I use this example as I want to highlight the need for serious investment in in-house technical security expertise and the people who can provide it.
A lot is being written about the shortage of skilled security professionals and how bad the problem is. In many cases I see this excuse used as a cop-out. We are only going to find our way out of this whole sad and sorry mess when organisations start seriously investing in that in-house technical security expertise. Not outsourcing the problem, or moving responsibility somewhere else. Accepting it and developing key skills in-house. Not just developing that expertise, but ensuring clear bidirectional communication lines exist between those domain experts and executive management. Executive management should at least conceptually understand the challenges being encountered at the coalface and, likewise, the technical staff must align with business goals and business risk minimisation needs. While it might sound obvious, I rarely see it working well in practice. So, I put this out as a focus area.
I have heard statements like “we doubled our security budget last year”. That is good, but it’s a relative statement. Was the initial budget anywhere near adequate? It’s not just about allocating more budget. It’s about working knowledgeably to achieve that solid architecture and then efficiently operationalising security in a manner that acceptably minimises cyber risks to the organisation’s information assets.
I have said this before, and will say it again: be careful from where you take advice, particularly external advice. Just because a company has set up a Cybersecurity practice and has people with fancy titles does not mean they know what they are doing. There are a lot of new entrants charging a lot of money to provide mediocre advice. If they stuff it up, then sure, you can fire them, but it’s a moot point if you get fired too. Hence, I again make the case for investing in and developing your own people.
The current hot, sexy topics in Cybersecurity are things like Next Gen technologies, Threat Hunting, AI, ML and the like. At the same time, virtually all of the major breaches can be attributed to not having the basics in place or a breakdown of what should have been a fundamental process. I’m not saying sophisticated attacks don’t happen as they absolutely do. But in most cases the attackers don’t need to use them as there are far easier options.
So how do you go about it? It is critical to start with the basics… and that part is not actually that hard, nor does it require elite-level talent. There are many good sources of information. If there was one place to start, have a look at the Australian Signals Directorate (ASD) ‘Essential 8’ and “Strategies to Mitigate Cyber Security Incidents”, or the NIST 800 framework. In larger organisations, building a community where people can leverage and help each other is a hugely powerful approach when supported from executive levels. Something I always encourage.
Written by Super User | Category: Blog | Published: 16 May 2017
A short while back Andrew Penn, Telstra CEO, wrote a ‘must read’ article describing how Cybersecurity should be viewed and managed at a Board Level. Let’s call Andrew’s excellent article a ‘Top-Down’ perspective. I am going to try to complement his article with my own perspective, which is more a ‘Bottom-Up’ perspective.
In my experience, what are the key reasons for a Cybersecurity failure? What can a board, C-Level executives and senior management do to prevent a high-profile failure, or do to improve the situation?
Firstly, any corporate security initiative must start with support from the top. Without this, security initiatives are doomed. And I’m not talking about throwing good money after bad at security initiatives which are not producing results. It starts with leading from the top and instilling the right culture in the organisation. This is critical. I remember John Chambers, CEO of Cisco, once saying, “responsibility for security starts with me”. On the flip side, I remember one client where it was a standing joke that everyone knew the CFO’s five-character password, and the fact that he forbade the implementation of minimum password size and complexity standards because “they were too hard to remember”. Needless to say, no one in that organisation took security seriously.
In many senior management circles, I have heard the question – what are our peers doing? I have heard it asked in Australia, New York and several Asian countries. While this is an interesting question, that’s about it. When everyone is wondering about everyone else, it’s a circular situation. It is critical to understand your own information assets, their value, and the business impact if they were compromised. I cannot emphasise this enough. With these questions understood, ensure your organisation plots its own path forward. There is a massive problem in the information security business called “Status Quo” – just executing against a checklist is not sufficient in today’s dynamic business environment and rapidly changing threat landscape.
The WannaCry outbreak on 12 May 2017 is a clear example of a Cybersecurity failure on a massive scale. Microsoft released a ‘Critical’ patch on 14 March 2017, so organisations had nearly two full months to remediate the underlying vulnerability. What we saw was a huge number of systems, many performing critical functions, left exposed. Why? This was not a new type of event!
To stay on top of Information and Cyber Security today, an adaptable, agile and innovative culture is required. Security is about People, Process and Technology, and it is an organisation’s culture which underpins all three (more on these topics shortly). This culture must be established, driven and supported from the top. Yep, that’s probably a big ask; if so, just focus on getting it right in your security teams.
This leads us onto ‘People’ – Getting the most from your people is probably one of the hardest tasks. However, a team staffed with skilled, proactive and innovative people, plugged into the external communities, can be invaluable.
Having spoken to a vast number of people in various capacities over the years, in my experience the above situation is uncommon (apart from large organisations which have dedicated teams for this purpose). Certainly, I have seen some very clue-full groups, which is fantastic; more commonly, people understand the issues and risks but are resource constrained, making it difficult to act. Unfortunately, I have also seen many people in positions of responsibility who want to ‘put their heads in the sand’ or are downright wilfully negligent. Often this is because “it’s just too hard” or because dealing with the reality doesn’t align with their political agenda. These attitudes can be hugely dangerous.
Senior management and boards should actively enquire about the organisation’s Threat and Risk Management programs, in particular how they identify and respond to Cybersecurity threats. The program should consider the company’s crown jewels and the business outcomes it wishes to avoid. When major system changes are made, or new systems commissioned, senior management should insist on a risk assessment and appropriate testing. For larger or high-profile projects, an outside organisation should be engaged to perform these assessments.
Reporting and metrics – In my experience, there is often a huge communication gap between the usually technical people at the coal face and the business-oriented senior management. Bridging this gap can be difficult. However, the use of good security metrics can provide a helpful mechanism. Appropriate metrics should be produced by the security teams or departments to give senior management and boards a picture of the effectiveness of the organisation’s security programs.
For example, a solid metrics approach could have articulated the number of critical systems missing critical patches ahead of the WannaCry outbreak. For many organisations, this one metric would have been a very loud alarm bell!
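As an illustration only, producing that one metric could be as simple as the following sketch, assuming a hypothetical asset inventory export with criticality and missing-patch columns:

```python
# A hypothetical sketch of producing a single patching metric from an asset
# inventory export; the file and column names are assumptions.
import pandas as pd

assets = pd.read_csv("asset_inventory.csv")   # one row per host

critical = assets[assets["criticality"] == "high"]
missing = critical[critical["missing_critical_patches"] > 0]

print(f"{len(missing)} of {len(critical)} critical systems are missing critical patches")
```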
In security when nothing happens, it’s a good result. But being able to differentiate good luck from good management is key.
Process – In security, solid process is essential. But those processes need to be kept current and adapted as changes occur. Having an organisation full of people who blindly follow an out-of-date process is not a recipe for success.
Technology – I would make two points. Firstly, it is essential that adequate funding is available to ensure current security technology is deployed, and when an organisation makes an investment in a security technology, it is imperative that it is properly deployed and the intended outcome is achieved. I have seen plenty of organisations make sizable security technology investments which were either improperly deployed or not adequately leveraged. Secondly, in a fast-changing landscape, the solution to many security problems may be a new technology. It is important to monitor technology developments and make discretionary budget available to purchase a new technology if it can solve a problem or lower a risk.
In recent times there has been a trend of outsourcing IT problems. In other words, taking a hard problem and, to quote The Hitchhiker’s Guide to the Galaxy, making it “someone else’s problem”. Some BYOD and Cloud initiatives fall into this category. My perspective: if you can find areas of IT that are sufficiently commoditised and can be cost effectively outsourced, then go for it. But with that said, there are areas of IT, like protecting your crown jewels, that are high skill and require appropriate people on staff. I would advise against attempting to outsource these areas and would strongly recommend developing and supporting in-house capabilities. Once you lose key talent, it does not come back in a big hurry.
From a budgeting perspective, when applications or new systems are rolled out, the full lifecycle cost should be understood up front, including the cost of a secure initial deployment and the ongoing operational costs. Do not allow the security elements to be unfunded and allow the operational costs to fall onto some other department. Usually this means they get ignored.
Finally, be careful who you take advice from. There is no qualification or certification for a Cybersecurity professional (if we draw a comparison to a Chartered Engineer for example). There are plenty of people touting job titles of ‘Cybersecurity Consultant’ who have only recently entered this domain and have minimal experience.
Written by Anthony Kirkham | Category: Blog | Published: 11 March 2017
In the last few years Cybersecurity has become a hot domain and as a result there has been a large influx of new people into the field. It is relatively easy to construct a Cybersecurity strategy. There are a significant number of places from which this type of material can be drawn and adapted to individual scenarios. I have seen a number of these strategies produced, of varying quality.
While a solid strategy is important, the far harder part of the problem is developing an ‘executable strategy’ and then implementing it. To achieve an effective execution and outcome a deep understanding of the domain and its nuances is critical. Put another way -
‘What you want to achieve’ and ‘How you achieve it’ are two very different things!
I recently came across the Four Disciplines of Execution (Franklin Covey), also known as 4DX. I could immediately see how aspects of this approach could be applied to the execution of a security strategy. While there are four disciplines, it is the first two that can be most easily adapted to this domain, with the last two focusing on Accountability and the Leverage which can be gained from the preceding disciplines. I’ll discuss just the first two.
Focus on the Vitally Important (High Impact)
Cybersecurity and Information Security are complex fields. There are many specialised aspects, both technical and operational. While just about every technical security control or operational process will provide some benefit, not all will provide the same impact or be appropriate for all risk profiles. The key here is not just following the status quo. It’s about identifying the organisation’s most significant risks and applying a strategy and the security controls which will provide the highest impact. In other words, what colour is your risk?
There are technologies which can provide the defender a huge advantage over the attacker. Cryptography is an example of one such technology. Although it is now commonplace, it is a technology which probably provides a million-to-one leverage in favour of the defender. I’m not suggesting this is a silver bullet, just that these sorts of ‘force multiplying’ technologies can move the odds in favour of the defender… a lot!
Measurement and Metrics
Understanding both Leading and Effectiveness metrics is a key part of the 4DX strategy.
Given today’s profile and media coverage of Cyber attacks, it amazes me how many organisations have no security visibility… and this includes some large ones. To be able to understand your security posture, and get any sort of feedback on the effectiveness of a security strategy, you must have some level of security visibility. Unfortunately, it is commonplace for breach detection times to be measured in months, years, or never. The sad part is that in most cases, evidence of those breaches is hiding in plain sight.
Measurement is always a key part of managing anything. If you have no ability to measure, then any form of ongoing improvement is difficult. The 4DX strategy has a focus on Leading Metrics. This is not to say that final results are not important, they are, but a focus on Leading Metrics enables a clear path to that end result through progressive improvement and demonstrates progress towards a goal. Having measures and metrics provides an ability to have conversations at the C-Level in ‘their language’, which in turn can yield better funding for security initiatives.
A path to success will vary based on the many organisationally unique parameters such as the nature of the business, the information assets, the application architecture, risk profile, current maturity levels, etc. So measures and metrics should be crafted on a case-by-case basis.
Goal, Question, Metric (GQM) is a methodology originally developed back in the 70s for quantifying software quality. More recently, Carnegie Mellon University has updated this process to GQIM - Goal, Question, Indicator, Metric. These methodologies provide a repeatable process for developing effective metrics, including those used within Cybersecurity.
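As an illustrative worked example of my own (not taken from the CMU material), a single GQIM line of reasoning for patching might be captured like this:

```python
# An illustrative GQIM decomposition for patch management, showing how a goal
# flows down through questions and indicators to concrete metrics.
gqim_example = {
    "goal": "Reduce exposure to known vulnerabilities",
    "question": "Are critical patches applied within the agreed window?",
    "indicators": [
        "Patch deployment records per host",
        "Vulnerability scan results per host",
    ],
    "metrics": [
        "Percentage of hosts patched within 14 days of a critical release",
        "Number of hosts with critical vulnerabilities older than 30 days",
    ],
}
```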
In a low maturity organisation, I would firstly recommend driving initiatives which establish or improve visibility capability. This may include monitoring parameters like password resets, privileged user account usage, IDS/IPS alerts and their severity, and blocked connections through firewalls.
Some potential leading measures or metrics focused around general network hygiene could be (a rough sketch of computing a few of these follows the list):
- Number of machines which are below current OS patch level.
- Number of machines which are below current application patch levels.
- Number of machines with critical vulnerabilities.
- Number of machines which are generally out-of-compliance.
- Number of users with unneeded administration privileges.
- Usage of current and secure protocols - TLS, SSH, LDAPS, valid and strong certificates, etc.
- Usage of risky applications - e.g. peer-to-peer file shares, etc.
- Number of users who have not completed security awareness training.
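A rough sketch of computing a few of these from a hypothetical vulnerability-scan or inventory export (the file and column names are assumptions) could look like this:

```python
# Sketch of deriving hygiene metrics from a per-host scan/inventory export.
# Column names are assumptions about a hypothetical export format.
import pandas as pd

hosts = pd.read_csv("scan_results.csv")   # one row per host

metrics = {
    "hosts below current OS patch level": int((~hosts["os_patch_current"]).sum()),
    "hosts with critical vulnerabilities": int((hosts["critical_vuln_count"] > 0).sum()),
    "hosts out of compliance": int((~hosts["compliant"]).sum()),
    "users with unneeded admin privileges": int(hosts["unneeded_admin_accounts"].sum()),
}
for name, value in metrics.items():
    print(f"{name}: {value}")
```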
Improving these fundamentals will almost certainly lead to an improvement in the overall security posture, which in turn will likely result in improvements in effectiveness metrics.
If we look at operational security metrics, it’s all about time: finding breaches quickly, then responding and containing. As such, the following are key metrics which are now commonly used in more mature operational environments (a minimal sketch of computing them follows the list):
- Mean time to Detection
- Mean time to Verify
- Mean time to Containment
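As a minimal illustration, these timing metrics can be derived from incident records along the following lines; the layout of one row per incident with a timestamp per milestone is an assumption:

```python
# Sketch of computing mean-time metrics from hypothetical incident records.
import pandas as pd

incidents = pd.read_csv(
    "incidents.csv",
    parse_dates=["occurred_at", "detected_at", "verified_at", "contained_at"],
)

mttd = (incidents["detected_at"] - incidents["occurred_at"]).mean()
mttv = (incidents["verified_at"] - incidents["detected_at"]).mean()
mttc = (incidents["contained_at"] - incidents["verified_at"]).mean()

print(f"Mean time to detection:   {mttd}")
print(f"Mean time to verify:      {mttv}")
print(f"Mean time to containment: {mttc}")
```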
Continuing on, metrics such as ‘Botnet and Malware infections per employee’ provide a high-level measure of overall effectiveness. Metrics such as ‘Average cost per breach’ can quantify operational maturity in financial terms, as we know lower maturity organisations have exponentially higher costs than more mature ones, usually due to the need for emergency responses when things go bad.
Unfortunately, security is often measured by nothing happening, and that can make justification and execution difficult. By utilising these techniques, hopefully we can make it a more winnable game.
Written by Anthony Kirkham | Category: Blog | Published: 25 November 2016
The concept of Security Zoning, also known as Segmentation, is one of the most important architectural foundations within modern network security design. Security Zoning was first introduced back in the mid 90s when firewalls started to hit the market. In those days, firewalls were usually deployed at the Internet Perimeter and the deployment principles were fairly simple (Outside, Inside and DMZ).
Over the last 20 years, the pervasiveness of security zoning has increased significantly, moving from its original use at the perimeter to common use inside the organisation, such as within data centres, cloud infrastructure, or controlling access to high value assets. Unfortunately, many zoned architecture deployments are driven by the goal of meeting compliance requirements rather than by the goal of being a maximally effective security control.
The intention of this post is to show a new way of thinking about the security zoning design approach in an era of Big Data and Data Science. Security is a field that has many amazing and large data sets just waiting to be analysed.
Over the last decade we have seen huge growth in network size, speed, connectedness and application mix. Application architectures have grown and become more mission critical at the same time. In response, the complexity of network security architectures, i.e. firewalls and the associated rule sets, has increased exponentially. Today, many deployments have become unmanageable. Either the operational costs have blown out or organisations have simply given up trying to engineer an effective implementation. I still see many organisations that try to manage their firewall rule sets in a spreadsheet. In most cases, this approach (IMHO) just does not work effectively any more.
If we had to boil the problem down, we are dealing with a ‘management of complexity’ issue. This is a problem which is ripe for the application of Big Data tools, Data Science and Machine Learning principles.
Big Data tools are able to ingest massive data sets and process those sets to uncover common sets of characteristics. Let's look at just two key potential data sources which could be leveraged to improve the design approach:
- Endpoint information - A fingerprint of the endpoint to determine its open port and application profile and hence its potential role.
- Network flow data - Conversations both within and external to the organisation. In other words, who talks to who, how much, and with which applications.
To obtain Endpoint Information, NMAP is a popular, though often hard to interpret, port scanning tool. NMAP can scan large IP address ranges and gather data on the targets, for example open ports, services running on those ports, versions of the services, etc. Feature extraction is a key part of an unsupervised machine learning process: each of these attributes can be considered a ‘feature’, with each endpoint having a value for each feature. For example, an endpoint with port 80 open, acting as a web server and running Apache.
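As a rough sketch, the XML output of a service scan (e.g. nmap -sV -oX scan.xml against an address range) can be turned into per-endpoint feature rows with nothing more than the Python standard library; only a handful of features are extracted here, and a real pipeline would pull many more:

```python
# Sketch: parse Nmap XML output into per-endpoint feature rows.
import xml.etree.ElementTree as ET

def extract_features(xml_path):
    rows = []
    for host in ET.parse(xml_path).getroot().iter("host"):
        addr = host.find("address").get("addr")
        open_ports, services = [], []
        for port in host.iter("port"):
            state = port.find("state")
            if state is not None and state.get("state") == "open":
                open_ports.append(int(port.get("portid")))
                svc = port.find("service")
                if svc is not None:
                    services.append(svc.get("name"))
        rows.append({"ip": addr, "open_ports": open_ports, "services": services})
    return rows

endpoints = extract_features("scan.xml")   # hypothetical scan output
```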
Machine Learning techniques can be used to process the large data sets which would be produced by an enterprise wide scan. Groups of endpoints with common, or closely matching feature value sets can be ‘clustered’ using one of a number of machine learning algorithms. In this case, clusters are distinct groups of samples (IP addresses) which have been grouped together. Different algorithms with different configurations group these samples in different ways with K-Means being one of the most commonly used algorithms.
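A minimal K-Means sketch using scikit-learn might look like the following, where each open port becomes a binary feature; the toy endpoint data and the choice of two clusters are purely illustrative:

```python
# Sketch: cluster endpoints by open-port profile with K-Means.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.cluster import KMeans

endpoints = [   # e.g. the output of the scan-parsing sketch above (toy data here)
    {"ip": "10.0.0.5", "open_ports": [22, 80, 443]},
    {"ip": "10.0.0.6", "open_ports": [22, 80, 443]},
    {"ip": "10.0.0.7", "open_ports": [22, 1433, 3389]},
    {"ip": "10.0.0.8", "open_ports": [22, 1433, 3389]},
]

# Each distinct port becomes a binary feature column.
port_matrix = MultiLabelBinarizer().fit_transform([e["open_ports"] for e in endpoints])
kmeans = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(port_matrix)

for endpoint, cluster in zip(endpoints, kmeans.labels_):
    print(endpoint["ip"], "-> candidate zone", cluster)
```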
Entry into the domain does not require a deep mathematical understanding (although it helps). Python based machine learning tool kits like Scikit-Learn provide an easy entry point.
Flow Information can be output by many vendors' networking equipment, as well as through probes, taps and host based agents. There are a number of tools which can ingest network flow information and place it in a NoSQL data store such as MongoDB, or in a columnar storage format such as Parquet.
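For example, a minimal pymongo sketch for loading and summarising flow records in MongoDB (the record fields are assumptions about a generic exporter) could look like this:

```python
# Sketch: store flow records in MongoDB and aggregate bytes per conversation.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
flows = client["netflow"]["flows"]

records = [
    {"src": "10.0.0.5", "dst": "10.0.1.20", "dst_port": 443,
     "protocol": "tcp", "bytes": 18234, "start": "2016-11-20T10:15:00Z"},
    {"src": "10.0.0.6", "dst": "10.0.1.20", "dst_port": 1433,
     "protocol": "tcp", "bytes": 90211, "start": "2016-11-20T10:16:00Z"},
]
flows.insert_many(records)

# Simple aggregation: total bytes per source/destination pair.
pipeline = [{"$group": {"_id": {"src": "$src", "dst": "$dst"},
                        "total_bytes": {"$sum": "$bytes"}}}]
for row in flows.aggregate(pipeline):
    print(row)
```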
With flow information providing detailed information on conversations, Graph Databases like Neo4j can be used to construct a relationship map; that is, the relationships which exist between different endpoints on the network. Graph Databases can enable this capability in much the same way social media networks like LinkedIn and Facebook show relationships between people.
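A rough sketch of building such a relationship map with the Neo4j Python driver might look like the following; the connection details and the flow record shape are assumptions:

```python
# Sketch: build a "who talks to whom" graph in Neo4j from flow records.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_conversation(tx, src, dst, dst_port, total_bytes):
    # MERGE creates the hosts and the relationship only if they don't already exist.
    tx.run(
        "MERGE (a:Host {ip: $src}) "
        "MERGE (b:Host {ip: $dst}) "
        "MERGE (a)-[r:TALKS_TO {port: $dst_port}]->(b) "
        "SET r.bytes = coalesce(r.bytes, 0) + $total_bytes",
        src=src, dst=dst, dst_port=dst_port, total_bytes=total_bytes,
    )

sample_flows = [{"src": "10.0.0.5", "dst": "10.0.1.20", "dst_port": 443, "bytes": 18234}]

with driver.session() as session:
    for flow in sample_flows:
        session.execute_write(add_conversation, flow["src"], flow["dst"],
                              flow["dst_port"], flow["bytes"])
```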
Today, a variety of visualisation tools are available to see this information in a human friendly display format.
The real power will emerge when the two sources are combined. Understanding the function of the endpoints, combined with information about their relationships with other endpoints will be a very powerful capability in the design process.
I'm not suggesting this is the only answer, as many other potential data sources exist. Additionally, I’ll admit I have probably oversimplified the situation. However, my point is that by utilising just these two data sources, coupled with some now commonly available Data Science tools, a new and far more effective security zoning design approach can be created. My key goal is to hopefully spawn some new thinking, discussion and projects in this direction.