Patching Is Not Always Easy

I don’t go a day without a company e-mailing me about a product that will help with patch management, or claiming it will somehow resolve all the GDPR issues that may arise in the near future. Social media is awash with vendors and people telling us that we should deploy a patch immediately – that somehow we don’t ‘get it’, that patches have to be deployed the microsecond they are released or we are terrible at our jobs.

Let us take a moment to consider the other side – my side.

The environment I work in has many systems which are what we would term ‘critical’, and by this I mean that if we screw up somebody can (worst case) die. I know many businesses have critical systems, but it’s a whole new ball game when the lives of others rest on you not (to be blunt) fucking up. Trust me when I say a coroner will not award you any points for having patched a system when that patch potentially caused or contributed to a fatality. To give another example, imagine being a cancer patient in need of life-saving therapy, only to turn up to an appointment and find you cannot be treated because a software patch broke the system. I could provide a dozen more examples but I believe the point has been made – it just isn’t as easy as some people like to make out.

To give a recent example, Microsoft released an update which broke search functionality in Outlook. It took weeks for a fix to arrive, and during that time our service desk had to send regular reminders to users that we could do nothing about it short of removing the security update. Had we done that we would no longer have been patched and therefore at risk – of course the users were not concerned with a marginal risk (mitigated by defence in depth); they only cared about the immediate and very real inability to use everyday functions in their mail client. And this is a relatively small-scale example when you consider how a ‘bad’ patch has caused global issues for certain vendors/providers.

Now you have a few ways of approaching patch management –

  • Don’t patch at all
  • Partial patching
  • Full patching

Let us address our three options being mindful that the second and third are somewhat nuanced in terms of how you interpret them.

 

Don’t Patch

I would wholeheartedly agree that this is a poor choice to make unless you have other mechanisms in place to protect the system(s) in question – and typically systems do not have the additional layers and processes in place to properly protect them from exploitation. The unfortunate truth is that there are vendors who will deploy a solution into a business and state that their product must not be patched, or that the customer does so at their own risk. If we look at the healthcare industry, vendors often provide equipment as part of a medical device solution. This will have been reviewed, vetted and validated to a specific design – patching alters this and voids the validation, with the result that organisations cannot patch such solutions. There are organisations who do not patch at all, but I feel most enterprises will fall into the remaining two categories.

 

Partial Patching

I would suggest most organisations fit within this category. Partial patching could mean patching only certain systems or devices, only certain software, or a combination of the two. This is certainly an improvement over no patching but can still leave you exposed. Typically organisations will (assuming they are Microsoft based) look to deploy the monthly updates Microsoft releases – whether that means all of them or just the security-related ones is yet another choice in all this. If one is lucky enough to have a test environment/deployment group then it will be targeted first, in the hope that any and all issues will become manifest and be resolved at that stage. Some organisations/SMBs will take the approach of waiting two weeks, letting somebody else figure out what a patch breaks, and then either deploying or holding back. Yet we now find ourselves being told that there is no time to wait and that patches should be deployed immediately – which makes any testing impossible if one is to adhere to immediate deployment.
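The ‘stage it, soak it, then roll it out’ approach described above can be sketched as a simple ring-based deferral policy. This is a minimal illustration only – the ring names and soak periods below are my own assumptions for the example, not anything a vendor prescribes:

```python
from datetime import date

# Hypothetical deployment rings: each ring waits a set number of days
# after a patch's release before the patch is approved for it.
# Names and delays are illustrative, not a standard.
RINGS = {
    "test": 0,        # pilot/test group gets the patch immediately
    "early": 7,       # early adopters after a week of pilot soak time
    "broad": 14,      # the bulk of the estate after two weeks
    "critical": 30,   # the most critical systems wait longest
}

def approved_rings(release: date, today: date) -> list[str]:
    """Return the rings for which a patch released on `release`
    has cleared its soak period by `today`."""
    age = (today - release).days
    return [ring for ring, delay in RINGS.items() if age >= delay]

release = date(2024, 1, 9)  # e.g. a Patch Tuesday release
print(approved_rings(release, date(2024, 1, 9)))   # only "test"
print(approved_rings(release, date(2024, 1, 23)))  # test, early, broad
```

The point of the sketch is that ‘deploy immediately’ and ‘wait and see’ are not binary – the delay is a per-ring policy decision, and the critical systems discussed above simply sit in the slowest ring (or outside the policy altogether).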

 

Full Patching

Our final option is the complete and total patching of all systems and the software on them. While this may be seen as the best option, there is a great deal of risk associated with it. I cannot say for absolute certain, but I would hazard a guess that more organisations have been affected by a bad patch, or the repercussions of patching, than have been exploited as dramatically as by, say, WannaCry. This is not to say it is a bad choice, simply that there are far-reaching consequences when it goes wrong.

 

Solution?

I think we would all agree the ideal scenario is one in which additional layers of protection buy us sufficient time to test and debug updates in an isolated user group. While the commentators might shout loudly for us to deploy updates the moment they are available, we must balance that against the risk involved in any sort of ‘mission critical’ environment. It is very easy to point the finger at IT departments and tell them what a bad job they are doing – and yes, it is true some definitely let the side down. That being said, you don’t always know what internal challenges they face. I know first hand how difficult it can be to do all the things auditors/experts say we should – when the budget, manpower or managerial will is not there, it doesn’t matter how much you shout at those of us at the coalface; it simply won’t happen.

Let us use the immunisation of humans against various infections as an analogy.

In an ideal world every single person would be vaccinated against every type of infection. Smallpox has been declared eradicated thanks to our efforts – we patched the problem out of existence. Unfortunately we can’t do this for every infection: we may not have a vaccine, it may be cost-prohibitive, or some other issue may make wide-scale use difficult. There are also people who cannot receive vaccines because they are immunocompromised (or have some other contraindication), and for them the risk is too great. To protect those people we take other measures – and this aligns neatly with the problems I am describing around critical systems that cannot simply be patched.

There is so much more to cyber security than just patch management and this post is in no way meant to detract from that. I would simply like to remind some people that we on the front line do not get to make all the decisions.

Vendors must also be mindful that not patching can no longer be considered appropriate (if it ever was). I understand the QA process takes time and may require independent verification, and this brings us back to the need for a layered approach – expect layers to fail or be compromised and act accordingly. While naming and shaming will always occur, it is also critical that we take the time to understand why an organisation may not have chosen, or been able, to deploy countermeasures (patching or otherwise) that could have prevented or limited the impact of a breach. I know many good people who cannot do all that they desire because, as mentioned above, the budget/manpower is lacking. All too often I hear people comment that organisations just need to invest more money or hire more people – sure, that’s easy if you actually have money lying around waiting for you, but most of us are not that lucky.

 


This post could drag on as I rant away and express all my frustrations. I don’t want to do that; what I want is to start a conversation, or perhaps make others pause to think about the challenges some of us face. As always I am happy to discuss further, either in the comments section below or through Twitter – if you have a story or opinion to share, let it be heard.
