What’s Old Is New Again: How Data Security Has Come Full Circle
Exploring the concept of "Policy-Driven Rights" as a security classification
Around the turn of the millennium, when Y2K was being fretted over, when Windows 98 represented the latest and greatest in OS technology and when now-ancient tech like Myspace and the iPod weren’t even glimmers in their respective inventors’ eyes, I was a young(er) IT professional working on an interesting project we called "Policy-Driven Rights" (PDR).
PDR was a way to use security classification labels – ‘sensitive’, ‘secret’ and ‘top secret’, for example – within data to dynamically present the right information to the right people at any given point in time. The idea was relatively simple. If you need to know something, you need to use something authoritative (e.g. smartcard, token, password) to demonstrate to the system that you need to know it, and only then will it happily show the information to you.
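The need-to-know half of that idea can be sketched in a few lines: treat the classification labels as an ordered scale, and only surface an item when the authenticated user's clearance meets or exceeds its label. This is a minimal illustration of the concept, not the original PDR implementation; the label names come from the article, everything else is assumed.

```python
from enum import IntEnum

class Classification(IntEnum):
    """Ordered security labels: a higher value means more restricted."""
    SENSITIVE = 1
    SECRET = 2
    TOP_SECRET = 3

def can_view(user_clearance: Classification, item_label: Classification) -> bool:
    """Need-to-know: the user sees an item only if their clearance
    meets or exceeds the item's classification label."""
    return user_clearance >= item_label

def visible_items(user_clearance, items):
    """Dynamically present only the items this user is cleared to see."""
    return [(label, data) for label, data in items
            if can_view(user_clearance, label)]

records = [
    (Classification.SENSITIVE, "quarterly report"),
    (Classification.TOP_SECRET, "launch codes"),
]

# A SECRET-cleared user sees the SENSITIVE record but not the TOP_SECRET one.
print(visible_items(Classification.SECRET, records))
```

The authentication step (smartcard, token, password) sits in front of this check; once the system trusts who you are, the labels decide what you see.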
So here I was, way back when writing information management software with an element of ‘need-to-know’.
The defence industry loved it. My work was perfect for both the military and private players in the government complex. However, to some degree it was over the top, and I hit a few stumbling blocks in taking it mainstream – partially due to the inherent challenges surrounding data classification, and partially due to the heavy reliance on data encryption as the core methodology for enabling need-to-know alongside need-to-share. One private mining organisation did take up the technology, but it remained largely untouched in the commercial space. What I discovered was that private players liked the idea of data privacy but did not like the heavy lifting that came with enabling it, nor the vendor lock-in that comes with encrypting data in a single ecosystem.
Little did I know at the time that, unlike Myspace and the iPod, my little project of two decades ago would never be more relevant than it is today.
Historically, data security has been all about layers – a concept known as ‘defence in depth’. Let’s take a network as an example. First, you have the network firewall and other protections monitoring “the outside”, which form the first line of defence. The next level inwards is the firewalls of specific applications – host intrusion prevention systems (HIPS), user authentication and the like. You also have anti-virus protections in multiple places. It’s like the medieval protection of a castle. If the moat is bridged, the castle walls come into play. If the walls are breached, doors and gates are locked internally. If one element fails, hopefully, the next element will hold up.
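Defence in depth can be pictured as a chain of independent checks: a request must pass every layer, and a missing or failed layer blocks access rather than granting it. This is an illustrative sketch only – the layer names are assumptions standing in for real firewall, HIPS and authentication systems.

```python
# Each layer is an independent predicate; any one of them can deny access.
# Missing information fails closed (denies), mirroring a locked inner gate.

def network_firewall(req) -> bool:
    """First line of defence: is the traffic source allowed at all?"""
    return req.get("source_allowed", False)

def host_ips(req) -> bool:
    """Host intrusion prevention: block anything that looks malicious."""
    return not req.get("looks_malicious", True)

def user_auth(req) -> bool:
    """Innermost layer: are the user's credentials valid?"""
    return req.get("credentials_valid", False)

LAYERS = [network_firewall, host_ips, user_auth]

def admit(request) -> bool:
    """Grant access only if every defensive layer holds."""
    return all(layer(request) for layer in LAYERS)
```

If the "moat" (firewall) is bridged, the request still has to get past the "walls" and the "gates"; one compromised layer does not open the data.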
But no matter how deep their defensive strategies ran, medieval castles would still fall. And while defence in depth has proven to be an incredibly effective strategy, times have changed, and digital security has recently been forced to look beyond the 10th-century defensive playbook. So security experts looked to the turn of this century instead.
“Let’s pretend,” said security experts, “that every piece of data is on the internet.” We know that to be patently false – organisational intranets, standalone desktops and all manner of other computers and networks remain unconnected to the World Wide Web. But this mindset shift was to play a key role in creating a more robust type of data security.
If we pretend that the world is one network, if we pretend that anyone can access anything, we realise that defence in depth simply isn’t enough anymore. We need the data to be inherently secure and self-protecting. We need to be confident that it can only be seen by the people who should see it, no matter what external security measures are applied to it.
And so we return to the turn of the millennium, where yours truly was directly applying security classification labels to data to ensure only the correct eyes saw it.
But while my original concept of self-defending data appeared at first glance to be a robust solution, many roadblocks remained. How do you create a system that guards against the human errors and apathy that exist outside of the defence industry? And how do you get past the roadblock of proprietary data, particularly with the likes of GDPR putting security and individual privacy into ever sharper focus?
When I first came up with the PDR concept, I had this idea of categorising by ‘need-to-know’ and ‘need-to-share’. In terms of ‘need-to-share’, I posited that I should be able to share anything without worrying about infringing on security or privacy, because if the information system is self-protecting it will know exactly what the recipient’s ‘need-to-know’ is. But how do you do that without becoming proprietary or creating vendor lock-in?
Unfortunately, this was a puzzle I didn't solve.
All these years later, I’m now the Chief Technology Officer at Objective Corporation and we’ve done something very interesting. When you combine Inform (Objective’s information management product) with Connect (Objective’s external collaboration tool), all of the data classifications, caveats and security controls native to the Inform Document and Records platform are transparently applied when you share that data with anyone.
If I share a container of data with an external party and it has a piece of information that isn’t allowed to be shared, the system will block that piece from being shared. This is done transparently, without extraneous interaction or the external party’s knowledge.
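The share-time behaviour described above amounts to filtering a container against the recipient's clearance before anything leaves the system. To be clear, the sketch below is a hypothetical illustration of that idea, not Objective's actual Inform or Connect API; the field names and numeric labels are assumptions.

```python
def share_container(container, recipient_clearance):
    """Transparently filter a container being shared externally.

    Items whose classification label exceeds the recipient's clearance
    are silently withheld - the recipient is never told they exist.
    (Hypothetical sketch, not Objective's actual API.)
    """
    return [item for item in container if item["label"] <= recipient_clearance]

# Labels: 1 = sensitive, 2 = secret, 3 = top secret (assumed scale).
container = [
    {"label": 1, "name": "project brief"},
    {"label": 3, "name": "restricted annex"},
]

# An external party cleared to level 2 receives only the project brief;
# the restricted annex is blocked without any extra interaction.
print(share_container(container, recipient_clearance=2))
```

The key design point is that the blocking happens inside the sharing operation itself, so neither the sender nor the recipient has to do anything for the policy to hold.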
This type of system solves many of the hardest challenges when combining need-to-know and need-to-share. The information system enables global discovery while ensuring data privacy as a core capability of the Inform platform. In addition, when combined with Connect, the information system transparently enables need-to-share while seamlessly supporting need-to-know, all without the vendor lock-in that single ecosystem data encryption creates.
GDPR states that data should be secure by default, and only be visible to those who need it. But it also says that an individual has the right to control their private data and be forgotten. However, encryption makes that difficult, as the person in charge of ensuring the data is ‘forgotten’ may not have the ability to discover and destroy encrypted content.
Objective’s work is a huge step forward and has echoes of the work I started so many years ago.
Technological and regulatory challenges aside, self-protecting data faces some very human roadblocks. As with any new technology, this entire system turns into a steaming pile of rubbish if users don’t get it, and thus don’t leverage it.
User education will perhaps be the defining element of taking self-protecting data mainstream. When your ISO 27001 information security management consultant says: “we’re going to implement our updated Information Security policy,” before handing out fifty 15-page documents in a language that is barely recognisable as English, what are the chances that an employee will comply?
Like all new security protocols, self-protecting data technology needs to be drip-fed to the masses, its importance needs to be continually reinforced, and everything about it needs to be conveyed succinctly and in plain English.
For better or worse, new security measures often aren’t just roadblocks to potential threats – they are perceived as roadblocks by employees who simply want to do their work. If an employee doesn’t see the value of the new measure, they’ll do what people do when faced with a roadblock – they’ll go around it.
While the technology itself has a lot going for it, it means nothing without buy-in. The hope is that the technology will be as automated as possible, but there will always be a human element, and that is any security system’s weakest link. If self-protecting data technology is to succeed, communication is vital to bridging the stereotypical disconnect between the security professional and the consumer or end user.
Twenty years ago, I couldn’t have predicted that the PDR concepts I was working on would come into sharp focus decades later. It’s a real point of pride. Indeed, in their unique ways, Windows 98, Myspace and the iPod were precursors to the best of modern tech. Now it’s time for information privacy to be a first-class citizen, and decades-old concepts are once again providing a path to follow.