I have always thought of Zero Trust (ZT) as a nonsensical term, as it is simply not possible to operate any IT infrastructure without some level of trust. It is interesting to note that Gartner have also recently referred to Zero Trust as ‘misnamed’ and have developed their own framework – CARTA, which stands for Continuous Adaptive Risk and Trust Assessment. The fact that Gartner have taken this position reinforces that Zero Trust, in principle, can bring very real benefits.

The intention of this post is to explain what Zero Trust means and the benefits it can provide, while also highlighting some of the practical considerations for those looking to implement a Zero Trust initiative.

The term Zero Trust has very much become a buzzword, even though many in the security industry struggle to articulate what it actually means. Zero Trust was initially conceived by John Kindervag during his time at Forrester Research (note that he is now at Palo Alto Networks, continuing to promote it there). While a number of perspectives on Zero Trust exist, including plenty of vendor marketing spin, fundamentally it’s about ensuring that trust relationships aren’t exploited. It is not about making an untrusted or high-risk environment trusted, or about achieving a trusted state – two common misconceptions I hear. It’s about avoiding trust as a failure point.

Kindervag has argued that the root cause of virtually all intrusions is some violation of a trust relationship. This can mean: 

  • A miscreant gaining access to an openly accessible system through no or insufficient network segmentation 
  • A legitimate user exceeding their intended authority, i.e. privilege or access abuse 
  • An attacker pivoting from one compromised system to another, i.e. east-west attack propagation
  • Compromised account credentials being used to easily facilitate the above scenarios. 

Hence Zero Trust proposes a framework which assumes no level of trust in the design process. There is certainly good logic in that fundamental assumption, although it will come at a cost.

In practice, implementing Zero Trust or CARTA requires very tight filtering and deep inspection of traffic flows, coupled with a strong user identity function. Much of the deployment of Zero Trust is based on network segmentation, utilising network security technology such as Next Generation Firewalls to deeply inspect and enforce traffic flows crossing trust boundaries. Establishing a trusted identity for both users and administrators through techniques such as Multi-Factor Authentication (MFA) is also a vital element.
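
To make the identity-aware filtering idea concrete, here is a rough Python sketch of the kind of default-deny, application-level policy decision a trust boundary enforces. The groups, applications and policy entries are invented for illustration – in reality this logic lives in the Next Generation Firewall or access proxy and is driven by the identity system, not hand-rolled code.

```python
# Minimal sketch of an identity-aware, application-level policy check at a
# trust boundary. Policy entries and group names are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user: str          # authenticated identity (post-MFA)
    group: str         # group membership from the identity system
    application: str   # application-layer identification, not just port/IP
    action: str        # e.g. "read", "admin"

# Default-deny: only flows explicitly described here are permitted.
POLICY = [
    {"group": "finance-users", "application": "finance-web", "action": "read"},
    {"group": "finance-admins", "application": "finance-web", "action": "admin"},
]

def is_allowed(req: Request) -> bool:
    """Return True only if an explicit policy entry matches; otherwise deny."""
    return any(
        req.group == p["group"]
        and req.application == p["application"]
        and req.action == p["action"]
        for p in POLICY
    )

print(is_allowed(Request("alice", "finance-users", "finance-web", "read")))    # True
print(is_allowed(Request("bms-svc", "building-mgmt", "finance-web", "read")))  # False (default deny)
```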

It’s here that the practical considerations start. I have seen many organisations struggling with their existing, usually dated segmentation models and the firewall rule bases tightly associated with them. In the majority of cases this infrastructure has been in place for a decade or longer and has been expanded as the application infrastructure has grown organically (usually through several generations of administrators). Being polite, most have not grown well. In many cases, these messes are both complex and costly to operate, often sucking valuable funds from stretched security budgets. In most cases they are no longer an effective solution, often only there to provide a false sense of security or a compliance tick-box.

Today most corporate IT infrastructures are large and support multiple business-critical applications with highly complex transaction flows and dependencies. Understanding this type of environment is a non-trivial task, let alone re-engineering it against near-continuous uptime requirements.

So, what can be achieved? How can organisations proceed?

I’d suggest the place to begin is by identifying the organisation’s ten (or so) most valuable information assets, or alternatively a set of potential candidates. Start small and don’t get too ambitious too quickly. Then monitor the traffic flows to those applications and understand them. A number of technologies and/or techniques can be used, including an application-aware Next-Generation Firewall located in front of the candidate systems or applications. Monitoring tools, including those which ingest NetFlow, can also be highly effective. Whatever solution is used, I would strongly recommend it also be integrated with the organisation’s identity system, be that Active Directory or other systems such as Cisco’s Identity Services Engine (ISE). Linking identity into the monitoring provides not just application visibility, but ‘context’ of ‘who’ is accessing the application. For example, why is a Building Management System accessing the Finance System?
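
As a rough illustration of what identity context adds to flow monitoring, the Python sketch below tags observed flows with a source ‘role’ and flags any conversation that has no business justification. The addresses, role mapping and expectations are entirely made up – in practice the identity feed would come from Active Directory, ISE or similar.

```python
# Sketch: enriching observed flows with identity context and flagging
# conversations that have no business justification (e.g. a Building
# Management System talking to the Finance system). All values illustrative.

observed_flows = [
    {"src": "10.1.1.20", "dst": "10.9.9.10", "dst_app": "finance-db", "bytes": 120_000},
    {"src": "10.2.2.30", "dst": "10.9.9.10", "dst_app": "finance-db", "bytes": 4_000},
]

# Identity/context for source addresses, e.g. fed from AD or Cisco ISE.
source_context = {
    "10.1.1.20": "finance-app-server",
    "10.2.2.30": "building-management-system",
}

# Which source roles are expected to reach which applications.
expected = {"finance-db": {"finance-app-server"}}

for flow in observed_flows:
    who = source_context.get(flow["src"], "unknown")
    if who not in expected.get(flow["dst_app"], set()):
        print(f"INVESTIGATE: {who} ({flow['src']}) -> {flow['dst_app']} "
              f"({flow['bytes']} bytes)")
```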

Based on what is discovered, and assuming it isn’t too onerous (which may be a big assumption), the applications can then be placed within their own security zone, such as a Secure Enclave, with tight application-level, identity-aware filtering constructed both in and out. This is the heart of Zero Trust. The objective is not just to control what gets in, but also to ensure that data exfiltration from the key assets can’t occur. For example, how often do we hear of credit card databases being easily FTP’ed out of an organisation? ZT typically recommends that additional inspection, such as IPS, network AV and zero-day malware interception, is deployed at trust boundaries.
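
The egress side can be sketched the same way. The snippet below (illustrative addresses and thresholds only) flags anything leaving the enclave that is not explicitly approved, or that is suspiciously large – the FTP’ed credit card database scenario.

```python
# Sketch: watching traffic leaving a secure enclave. Anything outbound to a
# destination that is not explicitly approved, or an unusually large outbound
# transfer, is flagged. Destinations and the threshold are illustrative.

APPROVED_EGRESS = {"10.9.9.50"}        # e.g. an approved integration endpoint
EXFIL_BYTES_THRESHOLD = 50_000_000     # flag anything over ~50 MB outbound

outbound_flows = [
    {"src": "10.9.9.10", "dst": "10.9.9.50", "bytes": 8_000},
    {"src": "10.9.9.10", "dst": "203.0.113.77", "bytes": 750_000_000},
]

for flow in outbound_flows:
    if flow["dst"] not in APPROVED_EGRESS:
        print(f"BLOCK/ALERT: unapproved egress {flow['src']} -> {flow['dst']}")
    elif flow["bytes"] > EXFIL_BYTES_THRESHOLD:
        print(f"ALERT: large outbound transfer {flow['src']} -> {flow['dst']} "
              f"({flow['bytes']} bytes)")
```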

So far, so good.

If you are fortunate enough to have only a small number of key assets to protect, then you can rinse and repeat this approach (which, for the purpose of this post, is fairly oversimplified). Tightly protecting your organisation’s crown jewels is a valuable initiative; if this can be achieved, you’re in a far better position.

However, most environments I see are not this fortunate, and it’s here that things start to get a whole lot harder.

Firstly, if you are embarking on a larger-scale ZT initiative, it is essential to invest in an analytics solution which can provide visibility into the traffic flows between all systems, applications and workloads at scale. Maybe some ultra-well-organised organisations have achieved this on a spreadsheet, but in my experience these are very few. Of those that have, the effort requires significant human resource, is always error prone, and the data changes frequently.

OK, so above I recommended identifying critical applications and placing them into security zones with tight filtering at trust boundaries. For larger and more complex environments, it’s the same fundamental principle – the big differences being (1) the expanded scope and (2) the scale. It’s here that the tooling becomes essential to identify application dependencies and subsequently create an optimal zone structure. In my experience this is a key area of any ZT design, as the complexity of the trust boundary filtering will depend on the quality of the zone structure. You want to get this part right.
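
As a hint of what that tooling does under the hood, here is a small Python sketch that turns observed flows into a weighted graph and uses a community-detection algorithm (via the networkx library) to propose candidate zones. The flow data is invented, and the output is only a starting point for human review, not a finished zone design.

```python
# Sketch: deriving candidate security zones from observed flows by treating
# systems as graph nodes and conversations as weighted edges, then grouping
# densely connected systems with a community-detection algorithm.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# (source, destination, bytes transferred) observed over the monitoring window
flows = [
    ("web-1", "app-1", 5_000_000), ("app-1", "db-1", 9_000_000),
    ("web-2", "app-2", 4_000_000), ("app-2", "db-2", 7_000_000),
    ("app-1", "db-2", 10_000),     # a weak cross-dependency worth reviewing
]

G = nx.Graph()
for src, dst, volume in flows:
    previous = G.get_edge_data(src, dst, {"weight": 0})["weight"]
    G.add_edge(src, dst, weight=previous + volume)

# Densely connected groups become candidate zones / secure enclaves.
for i, zone in enumerate(greedy_modularity_communities(G, weight="weight"), start=1):
    print(f"Candidate zone {i}: {sorted(zone)}")
```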

A key design complexity is that just because two systems are talking to each other does not mean that the conversation is always benign. Techniques such as ‘Living off the Land’ attacks are designed to mimic legitimate conversations. You can’t just blindly build a set of access controls on this basis; human sanity checking is needed. It is this exact problem which has made it so hard for Machine Learning algorithms to identify nefarious activity – within a mass of good conversations, what does the small amount of bad look like?

Anyhow, this post has grown larger than I intended. I would like to discuss the use of ML algorithms in solving the Optimal Zoning and Scale issues I noted above. I’ll leave that for a Part Two.

 

In the last week we have seen a spectacular report out of Bloomberg in relation to malicious hardware implants within Supermicro server motherboards. The implications of this report are potentially huge. However, the technical details disclosed are minimal and a large number of unanswered questions exist.

Personally, I subscribe to the adage of “where there’s smoke, there must be at least some amount of fire”. Subsequent reports have claimed that it was not just Supermicro motherboards affected, but that the problem could be far more widespread affecting other vendors as well. With all that said, I acknowledge that this whole situation has not been substantiated and there is a chance it could be inaccurate, grossly exaggerated, or completely false. However, for the purpose of this post let’s put that debate aside and assume that the reports are correct.

The first point I would like to make is that hardware inspection is a highly specialised field and there are currently very few vendor organisations either experienced or equipped to perform this work. This means that for the vast majority of organisations, hardware inspection is not going to be a viable option.

So, I want to discuss options for network monitoring and visibility. But first the problem.

From the information available to date, it has been suggested that the malicious modifications have been made to the management controller of server motherboards: Cisco calls this the IMC (Integrated Management Controller), Dell calls it the DRAC (Dell Remote Access Controller) and HPE calls it iLO (Integrated Lights-Out). Each of these is essentially a small computer that controls the computer. There is a long history of these devices being notoriously insecure. Furthermore, these management controllers have access to just about every aspect of the server’s hardware, giving them more control over the hardware than the operating system itself.

Compromised hardware only takes the attacker so far. At some point the malicious hardware will need to communicate over the network to a Command and Control (C&C) server. Depending on the nature of the implant, malicious communication attempts could originate from the compromised management controller interface, from the server’s operating system, from hosted virtual machines, or any/all of the above.

In my mind, this situation makes a further compelling case for the deployment of network monitoring and analytics. The key issue now is the ability to detect malicious traffic and respond quickly in the event of such an occurrence, whether that attack stems from a hardware implant or from unrelated but still malicious activity.

What are my recommendations and the options?

A critical point is that server management controller interfaces should not be routable to the internet. I recommend that they are segmented and are only allowed to communicate with the minimum number of workstations needed to support the operation of those devices. i.e. how many people really need access to the management controller? If you can isolate the kill chain at this point, it is highly likely an attacker won’t be able to gain control and further progress an attack. If server management ports must connect to something on the big bad internet, then enable it very selectively. 

At a network level, I would recommend the deployment of a Sinkhole on the management network. A sinkhole is a part of the network that attracts all traffic which has no other legitimate destination. Sinkholes are infrequently deployed but incredibly useful for attracting all sorts of traffic which could be either the result of misconfiguration or, of key interest here, malicious. Once traffic is routed into a sinkhole there are many tools which can be used for analysis. I realise that’s a bit light on technical detail, but I will aim to publish a subsequent blog post on Sinkholes in the next week or so.
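
Pending that post, here is a deliberately minimal Python sketch of one thing you might run inside a sinkhole: a listener that accepts any TCP connection routed its way and simply records who attempted it. The port and logging approach are illustrative – real sinkhole analysis would typically use purpose-built tooling.

```python
# Minimal sinkhole listener sketch: anything arriving here had no legitimate
# destination, so every connection attempt is worth recording.

import socket
from datetime import datetime, timezone

LISTEN_ADDR = ("0.0.0.0", 8080)  # the sinkhole host; port is arbitrary here

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen()
    print(f"Sinkhole listener on {LISTEN_ADDR}")
    while True:
        conn, (src_ip, src_port) = server.accept()
        with conn:
            # Log the attempt and drop the connection.
            print(f"{datetime.now(timezone.utc).isoformat()} "
                  f"connection attempt from {src_ip}:{src_port}")
```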

Let’s now talk about available network based monitoring options. 

When we start talking about monitoring options, the first call-out is firewall logs. Assuming the management network is segmented, then whatever firewall is in place will be capable of connection logging. There are many examples where detailed evidence of an attack has been collected in the firewall logs. If you aren’t collecting and archiving your firewall logs, then this is recommendation one. And if you think I’m ‘stating the bleeding obvious’ – you would not believe the number of organisations who fall into the category of ‘people who should know better’ yet don’t do this. If this is your organisation and it gets compromised, any forensic investigation will be both exponentially harder and exponentially more expensive! Ignore this advice at your peril.

NetFlow – I have been a huge fan of NetFlow as a security tool for many years. NetFlow is the networking equivalent of a telephony Call Detail Record (CDR). At an IP level it records who spoke to whom, how much, and for how long. Like firewall logs, flow records can be exported, collected, analysed and archived. A key point is that for security applications, you must use full-flow NetFlow to capture all conversations at a point in the network, as opposed to sampled NetFlow.
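
To illustrate the CDR analogy, the short Python sketch below collapses raw flow records into per-conversation totals – who spoke to whom, how much, and for how long. The record format is simplified; real NetFlow would arrive via a collector.

```python
# Sketch: reducing raw flow records to a CDR-like summary per conversation.
# The record format is simplified and the values are illustrative.

from collections import defaultdict

raw_flows = [
    {"src": "10.0.0.5", "dst": "192.0.2.10", "bytes": 1_200, "duration_s": 4},
    {"src": "10.0.0.5", "dst": "192.0.2.10", "bytes": 800_000, "duration_s": 90},
    {"src": "10.0.0.7", "dst": "198.51.100.2", "bytes": 640, "duration_s": 1},
]

summary = defaultdict(lambda: {"bytes": 0, "duration_s": 0, "flows": 0})
for f in raw_flows:
    key = (f["src"], f["dst"])
    summary[key]["bytes"] += f["bytes"]
    summary[key]["duration_s"] += f["duration_s"]
    summary[key]["flows"] += 1

for (src, dst), stats in summary.items():
    print(f"{src} -> {dst}: {stats['flows']} flows, "
          f"{stats['bytes']} bytes, {stats['duration_s']}s total")
```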

Analysis tools – Many tools, both commercial and open source, are available for both log and NetFlow analysis. I won’t call out any commercial options or discuss SIEMs, but the Elastic Stack (formerly the ELK Stack) is a very robust and widely deployed open source option. If you have nothing, the Elastic Stack is a good place to start.
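
For a flavour of how simple the ingest side can be, here is a sketch using the official Elasticsearch Python client, assuming a node is reachable on localhost:9200 and that an index called firewall-logs suits your setup. In practice Beats or Logstash would normally do this shipping for you.

```python
# Sketch: pushing a parsed firewall log entry into Elasticsearch so it becomes
# searchable and chartable in Kibana. Host, index name and fields are
# illustrative assumptions.

from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

log_entry = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "action": "deny",
    "src_ip": "10.1.1.20",
    "dst_ip": "203.0.113.77",
    "dst_port": 21,
}

# Index the event into the firewall-logs index.
es.index(index="firewall-logs", document=log_entry)
```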

Full Packet Capture – This approach captures all traffic that passes through some point in the network. I’m not going to elaborate on it too much as it’s a costly approach and generally reserved for serious organisations. However, I will mention one approach I have seen some organisations deploy: a full packet capture card that collects in the order of a day’s to a week’s worth of data in a circular buffer. In the event of an incident being detected, the available full packet capture can be copied and stored for investigation.

What are we looking for?

If we use a Cyber Kill Chain as a foundation, then we want to look for evidence of each of the attack stages within the attack lifecycle. This can range from beacons to a C&C server, to the download of additional malware or, most importantly, achievement of the ultimate objective: exfiltration of data (at which point you’re probably pretty screwed). And we must also assume that any potential attack traffic is going to be encrypted. That is another, more in-depth topic, but let’s just say it makes monitoring the contents of a traffic stream very difficult.

Geo-location – A widely available feature which can provide a very quick indication of the termination country of a connection’s remote endpoint. So, if you were to see a connection from inside your network to a suspicious country, that’s something that requires investigation.
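
A quick sketch of what that looks like in practice, using the geoip2 library and a MaxMind GeoLite2 database (which must be downloaded separately – the file path and the ‘watched country’ list below are illustrative):

```python
# Sketch: tagging a remote endpoint with its termination country.
# GeoLite2-Country.mmdb must be obtained from MaxMind separately.

import geoip2.database

SUSPICIOUS_COUNTRIES = {"XX"}  # ISO codes your organisation chooses to watch

with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader:
    remote_ip = "203.0.113.77"
    country = reader.country(remote_ip).country
    print(f"{remote_ip} terminates in {country.name} ({country.iso_code})")
    if country.iso_code in SUSPICIOUS_COUNTRIES:
        print("INVESTIGATE: connection to a watched country")
```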

Threat Intelligence – The sheer number of active threats, including malicious destinations on the Internet, is well beyond the ability of the vast majority of organisations to track. This is where the use of Threat Intelligence comes in. If anything inside your organisation (and that includes cloud infrastructure) speaks to a known malicious internet endpoint, then you want to know about it. Threat Intelligence comes in a variety of forms, including both open-source and commercial feeds. The key objective is to correlate the information received from a reputable feed (or feeds) with the traffic ingressing and egressing your network. The goal of Threat Intelligence usage is to quickly identify a malicious event within what will typically be a mountain of network traffic.
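
Conceptually the correlation is simple, as the sketch below shows – the indicator values and events are invented, and a real deployment would refresh feeds continuously and match at far greater scale.

```python
# Sketch: correlating observed connections against known-bad indicators from
# a threat intelligence feed. All indicators and events are illustrative.

known_bad_ips = {"198.51.100.23", "203.0.113.200"}   # from your TI feed(s)
known_bad_domains = {"evil-c2.example"}

observed = [
    {"src": "10.1.1.20", "dst": "198.51.100.23", "dns": None},
    {"src": "10.1.1.30", "dst": "93.184.216.34", "dns": "example.com"},
]

for event in observed:
    if event["dst"] in known_bad_ips or event["dns"] in known_bad_domains:
        print(f"ALERT: {event['src']} contacted known-bad "
              f"{event['dns'] or event['dst']}")
```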

Threat Intelligence works on the assumption that someone has seen an attack previously. So if this is a unique or first time attack, it probably won’t help, but there will be a lot of cases that have been seen before making it a valuable tool.

As the primary focus of this post is compromised server hardware, monitoring the communication habits of management controller ports should be a key focus. If these devices start trying to talk to unexplained destinations (including trying to resolve unexplained destinations), then prompt investigation is required.
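
Putting the pieces together for this specific problem, here is an illustrative Python sketch that flags any traffic (or DNS lookup) from the management controller subnet that isn’t on a short allow-list. The subnet, allow-list and events are made up for the example.

```python
# Sketch: watching what the management-controller (BMC) subnet talks to.
# These interfaces should only ever reach a handful of management hosts, so
# anything else -- including unexpected DNS lookups -- warrants investigation.

import ipaddress

BMC_SUBNET = ipaddress.ip_network("10.20.0.0/24")        # management controllers
ALLOWED_DESTINATIONS = {"10.20.0.250", "10.20.0.251"}    # jump hosts / mgmt tools

events = [
    {"src": "10.20.0.15", "dst": "10.20.0.250", "type": "flow"},
    {"src": "10.20.0.15", "dst": "203.0.113.9", "type": "flow"},
    {"src": "10.20.0.16", "dst": "10.20.0.53", "type": "dns", "query": "update.example.net"},
]

for e in events:
    if ipaddress.ip_address(e["src"]) in BMC_SUBNET:
        if e["type"] == "dns":
            print(f"INVESTIGATE: BMC {e['src']} resolving {e['query']}")
        elif e["dst"] not in ALLOWED_DESTINATIONS:
            print(f"INVESTIGATE: BMC {e['src']} contacting {e['dst']}")
```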

 


I have not written on this topic lately and thought it time to do an update. People may remember a couple of years ago I was very excited by the prospect of utilising Machine Learning (ML) and Big Data Analytics in solving security problems. While there are a number of Use Cases successfully using ML, solving many other security problems with machine learning is turning out to be very hard. I’ll come back to that part later, but let me start by providing an overview of what I’m seeing in the market and this technology domain. 

 

My first observation is that we currently appear to be at ‘buzzword saturation’, particularly around the topic of Artificial Intelligence (AI) applied to security. I am seeing a lot of people, vendor marketing people in particular, using the term AI very liberally. If we consider the Encyclopaedia Britannica definition – “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” – then I don’t believe any true AI security product exists today.

 

With that said, there have been some very significant advances using a number of related technologies in certain security applications. When vendors talk about using AI, in most cases it likely means they are using some form of ML or statistical analysis… and, done right, that can still be incredibly useful. Couple that with the fact that there are many freely available ML toolsets – TensorFlow, Keras, PyTorch and scikit-learn, just to name a few – and accessing the technology is not difficult.

 

The biggest mindset shift which has occurred in the security domain in the last 5 years is the acceptance that a purely preventative strategy is insufficient given the sophistication of many attacks. A preventative strategy needs to be complemented with a detection and response capability.  It is here that these technologies can play an important role.

 

However, the difficulty with ML in many security applications is its reliance on large amounts of labelled data for the algorithms to 'learn'. For many applications, that labelled data doesn't currently exist on the scale that is required. While it has been used successfully in some areas, it is still very early days for most security application areas.

 

So, what are the key Use Cases?

 

The two most prominent uses of ML techniques are in malware classification and spam detection. Both of these have successfully utilised supervised ML because, in both cases, very large amounts of labelled training data have been available. By that I mean a human has previously classified the samples, a bit like an image recognition system is trained by feeding it a huge number of pictures of animals with the correct names attached as the labels. In the case of malware classification, ML has worked very well as most new malware is usually an adaptation of some previous or current malware family, so the common attributes can be detected using ML approaches. Spam detection works on a similar principle.
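
For a feel of the mechanics, here is a minimal supervised spam classifier using scikit-learn. The handful of hand-labelled messages stand in for the enormous labelled corpora real systems are trained on.

```python
# Sketch: supervised spam classification. Human-provided labels teach the
# model the attributes that distinguish spam from legitimate mail.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Claim your free prize now, click here",
    "Cheap pills, limited offer, act today",
    "Meeting moved to 3pm, see updated agenda",
    "Please review the attached quarterly report",
]
labels = ["spam", "spam", "ham", "ham"]   # the human-provided labels

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["You have won a free prize, click the link"]))  # likely 'spam'
print(model.predict(["Agenda for tomorrow's project meeting"]))      # likely 'ham'
```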

 

There is a lot of promising work occurring in the area known as ‘automating the Level-One analyst’. A notable project in this area is the AI^2 project developed at MIT. For most organisations, the sheer volume of security log messages today is beyond what any human can process. The AI^2 system processes log data looking for anomalies and uses the input of human analysts to train the system. As more training data is fed into the system, its operation is further fine-tuned to identify legitimate security events. While the system currently achieves only about 85% accuracy, it can be highly effective in distilling mountains of data into a smaller set of useful events that an analyst can investigate. The key element, however, is the need for the human analysts to train the system. So, don’t expect this or other systems to extrapolate new conclusions without being explicitly trained.

 

Then there are unsupervised ML techniques. Most commonly this means Clustering. With unsupervised learning there is no knowledge of the categories of the data or even if it can be classified. While it is very successfully used in some specific toolsets such as DNS record analysis and processing Threat Feeds, at present I have not seen any high-impact solution purely based on unsupervised techniques. Going forward, I believe unsupervised ML will play an important role, but as a part of a larger system or combined with other ML techniques.
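
A small illustration of the idea, using k-means from scikit-learn on some invented per-host traffic features: the algorithm groups hosts that behave similarly, but deciding whether any group is actually ‘bad’ remains a human (or downstream system) task.

```python
# Sketch: unsupervised clustering of simple per-host traffic features with
# k-means. No labels are involved; features and k are illustrative.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# [flows per hour, unique destinations, bytes out (MB)] per host
features = np.array([
    [120, 15, 40], [110, 14, 35], [130, 18, 50],    # typical workstations
    [900, 300, 5], [950, 320, 4],                    # scanning-like behaviour
    [100, 3, 900],                                   # one host uploading heavily
])

scaled = StandardScaler().fit_transform(features)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(clusters)   # hosts grouped purely by behavioural similarity
```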

 

Neural Networks are a hot topic. These are systems designed to mimic the operation of the human brain. They are being used in many applications today, most notably in image recognition, speech recognition and natural language processing. For these systems to operate accurately, they require labelled training data, often in huge quantities. Again, I have not seen any significant applications of Neural Networks specific to security at this point in time.

 

A key personal interest area in this field is analysing network-based NetFlow data records to detect attacks. NetFlow is the networking equivalent of a Call Detail Record in the telephony world: you know who spoke to whom and for how long, but not the contents of each call. The approach is highly scalable. However, learning a ‘known good’ network traffic profile is much harder than it appears on the surface, for many reasons. A key one is that virtually any network of any size will have something bad or anomalous happening at any point in time. Without this known-good baseline, identifying anything bad is very difficult.
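
One common way of attacking the baseline problem is to train an anomaly detector on a window of observed traffic and flag outliers, as in the Isolation Forest sketch below (features and values invented). Note that it quietly assumes the training window is mostly clean – which, as just noted, is rarely guaranteed.

```python
# Sketch: train an anomaly detector on per-flow features from a 'baseline'
# window, then flag new flows that don't fit. All values are illustrative.

import numpy as np
from sklearn.ensemble import IsolationForest

# [duration (s), bytes sent, bytes received, destination port] per flow
baseline_flows = np.array([
    [2, 1_200, 15_000, 443], [3, 900, 22_000, 443], [1, 700, 9_000, 443],
    [5, 2_000, 40_000, 443], [2, 1_100, 12_000, 80],
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline_flows)

new_flows = np.array([
    [2, 1_000, 14_000, 443],          # looks like the baseline
    [3600, 800_000_000, 5_000, 22],   # long, huge outbound transfer
])
print(detector.predict(new_flows))    # 1 = consistent with baseline, -1 = anomaly
```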

 

The action is not just happening with the good guys. We are starting to see evidence of ML-based tools being embedded in malware with the objective of maximising their impact. In the last month, we saw proof-of-concept code called ‘DeepLocker’ (as in Deep Learning) demonstrated at Black Hat USA. The code spies on the user and learns their behaviour, allowing the ransomware to be triggered by any of a variety of learned conditions. If this is a taste of what’s to come, the security community needs to prepare to face a new level of ML-powered attacks.

 

Where will it all go? Today, experts suggest that any task that can easily be performed by a human in about one second is a candidate for automation through AI techniques. A lot more to come in this space, I believe.

 

In conclusion, don’t expect AI to come to the rescue for a while yet. Human experts are essential to lead security operations and security projects. Given the current skills shortage, an investment in developing those key people into, or maintaining them as, experts is an initiative that every business should take very seriously. Look after these people and complement them with an investment in the newer technologies which can make their job easier. For the foreseeable future, experts on staff remain the most vital asset.

 

I was thinking back over the last few years in Cyber Security and wondering just how many billions of dollars have gone into this domain. I’m not sure it’s even possible to accurately calculate the figure, but it’s a staggering sum. And guess what, we are still regularly seeing widespread damage from Ransomware as well as massive-scale breaches in the news.

A short while back Andrew Penn, Telstra CEO, wrote a ‘must read’ article describing how Cybersecurity should be viewed and managed at a Board level. In my previous post I referred to Andrew’s excellent article as a ‘Top-Down’ perspective. I am going to again try to complement his article with a further ‘Bottom-Up’ perspective. I made a number of suggestions in my previous blog post on this topic; here I want to emphasise a few additional key points which, I believe, should be understood at an executive level.

Over the past decade I have observed some key trends. A key one has been the substantial increase in complexity in just about every aspect of IT, including security. This is not helped by the fact that organisations have to architect and deploy increasingly sophisticated infrastructure from an increasingly long list of individual elements and conflicting, overlapping technologies – akin to an airline having to build its own planes from individual components.

In many cases, these systems have grown in an organic manner, through numerous staff changes, against project deadlines, and in many cases with the mindset of “just get it working”.

In a world of complexity, if robust architectural approaches are not followed you will end up with a network or information system architecture which resembles a ‘Furball’ the cat coughed up. Put another way – a highly complex, interconnected and monolithic mess. Such systems are not reliable, maintainable or securable. Usually the inherent problems will first manifest themselves as security issues, just like chinks in a set of armour. New and pervasive technologies like Cloud and IoT integrations will only continue to add to the problem space.

I use the ‘Furball’ analogy as I want to highlight the need for well architected Information Systems and the consequence of not doing so. Unravelling a Furball is at best a very expensive proposition, at worst, a point of no return. This whole industry is in desperate need of standardised architectural approaches which can be applied to common business and organisational situations more universally, as opposed to today’s “roll your own” approach. But that, along with the need for Security Automation, is a topic for another post.   

Achieving solid architectures to facilitate today’s business needs requires people with strong technical skills. Or as Gilfoyle from HBO’s incredibly funny series Silicon Valley so eloquently puts it (amongst other things) “it takes talent and sweat” (Just google “Silicon Valley, what Gilfoyle does”). I use this example as I want to highlight the need for serious investment in in-house technical security expertise and the people who can provide it.

A lot is being written about the shortage of skilled security professionals and how bad the problem is. In many cases I see this excuse used as a cop-out. We are only going to find our way out of this whole sad and sorry mess when organisations start seriously investing in that in-house technical security expertise. Not outsourcing the problem, or moving responsibility somewhere else. Accepting it and developing key skills in-house. Not just developing that expertise, but ensuring clear bidirectional communication lines exist between those domain experts and executive management. Executive management should at least conceptually understand the challenges being encountered at the coalface and, likewise, the technical staff must align with business goals and business risk minimisation needs. While it might sound obvious, I rarely see it working well in practice. So, I put this out as a focus area.

I have heard statements like “we doubled our security budget last year”. That is good, but it’s a relative statement. Was the initial budget anywhere near adequate? It’s not just about allocating more budget. It’s about working knowledgeably to achieve that solid architecture and then efficiently operationalising security in a manner that acceptably minimises cyber risks to the organisation’s information assets.

I have said this before, and will say it again: be careful from where you take advice, particularly external advice. Just because a company has set up a Cybersecurity practice and has people with fancy titles does not mean they know what they are doing. There are a lot of new entrants charging a lot of money to provide mediocre advice. If they stuff it up, then sure, you can fire them, but it’s a moot point if you get fired too. Hence, I again make the case for investing in and developing your own people.

The current hot, sexy topics in Cybersecurity are things like Next Gen technologies, Threat Hunting, AI, ML and the like. At the same time, virtually all of the major breaches can be attributed to not having the basics in place or a breakdown of what should have been a fundamental process. I’m not saying sophisticated attacks don’t happen as they absolutely do. But in most cases the attackers don’t need to use them as there are far easier options.

So how do you go about it? It is critical to start with the basics… and that part is not actually that hard and it doesn’t require elite level talent. There are many good sources of information. If there was one place to start, have a look at the Australian Signals Directorate (ASD) ‘Essential 8’ and “Strategies To Mitigate Cyber Security Incidents”, or the NIST 800 framework.  In larger organisations, building a community where people can leverage and help each other is a hugely powerful approach when supported from executive levels. Something I always encourage.

 

 

A short while back Andrew Penn, Telstra CEO, wrote a ‘must read’ article describing how Cybersecurity should be viewed and managed at a Board level. Let’s call Andrew’s excellent article a ‘Top-Down’ perspective. I am going to try to complement his article with my own, more ‘Bottom-Up’ perspective.

In my experience, what are the key reasons for a Cybersecurity failure? What can a board, C-level and senior management do to prevent a high-profile failure, or do to improve the situation?

Firstly, any corporate security initiative must start with support from the top. Without this, security initiatives are doomed. And I’m not talking about throwing good money after bad at security initiatives which are not producing results. It starts with leading from the top and instilling the right culture in the organisation. This is critical. I remember John Chambers, CEO of Cisco, once said, “responsibility for security starts with me”. On the flip side, I remember one client where it was a standing joke that everyone knew the CFO’s five-character password, and the fact that he forbade the implementation of minimum password size and complexity standards because “they were too hard to remember”. Needless to say, no one in that organisation took security seriously.

In many senior management circles, I have heard the question – what are our peers doing? I have heard it asked in Australia, New York and several Asian countries. While this is an interesting question, that’s about it. When everyone is wondering about everyone else, it’s a circular situation. It is critical to understand your own information assets, their value, and the business impact if they were compromised. I cannot emphasise this enough. With these questions understood, ensure your organisation plots its own path forward. There is a massive problem in the information security business called “status quo” – just executing against a checklist is not sufficient in today’s dynamic business environment and rapidly changing threat landscape.

The WannaCry outbreak on 12 May 2017 is a clear example of a Cybersecurity failure on a massive scale. Microsoft released a ‘Critical’ patch on 14 March 2017, so organisations had nearly two full months to remediate the underlying vulnerability. What we saw was a huge number of systems, many performing critical functions, left exposed. Why? WannaCry was not a new event!

To stay on top of information and cyber security today, an adaptable, agile and innovative culture is required. Security is about people, process and technology, and it’s an organisation’s culture which underpins all three (more on these topics shortly). This culture must be established, driven and supported from the top. Yep, that’s probably a big ask; if so, just focus on getting it right in your security teams.

This leads us onto ‘People’ –  Getting the most from your people is probably one of the hardest tasks. However, a team staffed with skilled, proactive and innovative people, plugged into the external communities, can be invaluable.

Having spoken to a vast number of people in various capacities over the years, in my experience the above situation is uncommon (apart from large organisations which have dedicated teams for this purpose). Certainly I have seen very clued-up groups, which is fantastic; more commonly, people understand the issues and risks but are resource-constrained, making it difficult to act. Unfortunately, I have also seen many people in positions of responsibility who want to ‘put their heads in the sand’ or are downright wilfully negligent. Often this is because “it’s just too hard” or dealing with the reality doesn’t align with their political agenda. These attitudes can be hugely dangerous.

Senior management and boards should actively enquire about the organisation’s threat and risk management programs – in particular, how they identify and respond to Cybersecurity threats. The program should consider the company’s crown jewels and the business outcomes it wishes to avoid. When major system changes are made, or new systems commissioned, senior management should insist on a risk assessment and appropriate testing. For larger or high-profile projects, an outside organisation should be engaged to perform these assessments.

Reporting and metrics – In my experience, there is often a huge communication gap between the usually technical people at the coalface and the business-oriented senior management. Bridging this gap can be difficult. However, good security metrics can provide a helpful mechanism. Appropriate metrics should be produced by the security teams or departments to give senior management and boards a picture of the effectiveness of the organisation’s security programs.

For example, a solid metrics approach could have articulated the number of critical systems missing critical patches ahead of the WannaCry outbreak. For many organisations, this one metric would have been a very loud alarm bell!

In security when nothing happens, it’s a good result. But being able to differentiate good luck from good management is key.

Process – In security, solid process is essential. But those processes need to be kept current and adapted as changes occur. Having an organisation full of people who blindly follow an out-of-date process is not a recipe for success.

Technology – I would make two points. Firstly, it is essential that adequate funding is available to ensure current security technology is deployed, and when an organisation makes an investment in a security technology, it is imperative that it is properly deployed and the intended outcome is achieved. I have seen plenty of organisations make sizable security technology investments which were either improperly deployed or not adequately leveraged. Secondly, in a fast-changing landscape, the solution to many security problems may be a new technology. It is important to monitor technology developments and make discretionary budget available to purchase a new technology if it can solve a problem or lower a risk.

In recent times there has been a trend of outsourcing IT problems – in other words, taking a hard problem and, to quote The Hitchhiker’s Guide to the Galaxy, making it “somebody else’s problem”. Some BYOD and Cloud initiatives fall into this category. My perspective – if you can find areas of IT that are sufficiently commoditised and can be cost-effectively outsourced, then go for it. But with that said, there are areas of IT, like protecting your crown jewels, that are high skill and require appropriate people on staff. I would advise against attempting to outsource these areas and would strongly recommend developing and supporting in-house capabilities. Once you lose key talent, it does not come back in a big hurry.

From a budgeting perspective, when applications or new systems are rolled out, the full lifecycle cost should be understood up front, including the cost of a secure initial deployment and the ongoing operational costs. Do not allow the security elements to go unfunded or the operational costs to fall onto some other department; usually this means they get ignored.

Finally, be careful who you take advice from. There is no chartered qualification or mandatory certification for a Cybersecurity professional (if we draw a comparison to a Chartered Engineer, for example). There are plenty of people touting job titles of ‘Cybersecurity Consultant’ who have only recently entered this domain and have minimal experience.
