While Preparing For The White Walkers, Hackers Breach The HBO Wall
Is your corporate data at risk? Does a dragon cook his fish before he eats it?
The HBO data breach saga continues: New reports indicate that hackers may still have access to HBO’s networks, social media platforms, and heretofore unreleased content. With these same hackers releasing (or threatening to release) scripts of popular programming like “Ballers” and “Game of Thrones,” executives in every industry should take a moment to reflect on their own data security posture and what a leak of this magnitude could mean for their companies, employees, and key stakeholders.
We consistently discuss the topic of securing sensitive data in our blogs because it continues to be one of the most critical risks for companies conducting business in the digital age. (Most recently we covered the protection of sensitive data in legal discovery, and the intersection of the GDPR and sensitive data security.) This data security risk is industry-agnostic—no business is immune. “Breaches” may occur maliciously or inadvertently, and they happen with increasing frequency. The types of data “breached” range from the obvious—account numbers, social security numbers, personally identifiable information, personal health information—to the less obvious, such as trade secrets, source code, voter demographic and analytics information, and scripts and videos like the ones exposed in the HBO breach.
This breadth of exposure is often missed by the standard data protection practices of IT and InfoSec. While the “obvious” data types receive a lot of focus from corporations, security professionals, and government regulators, the less obvious types reveal how far we still have to go in grappling with data security and management in this chaotic, dynamic digital age. Tracking and trapping this type of data is both costly and complicated, so not surprisingly, many organizations spend their time “admiring the problem” rather than driving solutions.
The HBO and other similar breaches demonstrate that, despite the increased focus on data security, it still isn’t enough. Even with massive information security budgets and robust processes and protocols for protecting data, breaches and exposures happen all too frequently. The use of data for countless purposes, combined with people’s desire and willingness to consume it, creates a nearly impossible landscape for controlling the flow of data and information. Despite scientific advances and economic motivators to devise systems and controls, we’re left paraphrasing Jeff Goldblum in Jurassic Park: “data, um, finds a way.”
While the challenge remains, there are now methods and mechanisms for both responding to a breach and proactively protecting yourself from future breaches. And from a process standpoint, the approaches aren’t all that different. First, when responding to a breach, the organization needs to determine the data sources and populations that are at risk. These could range from complex data management systems used to maintain customer records to simple user laptops or network shares. Next, the organization needs to better understand the content of the data that was exposed. This can be done through a lightweight metadata analysis complemented with a series of deeper-dive content analytics on various samples of data. Did this data source contain sensitive data elements and, if yes, which types? Is this type (or types) of sensitive information typically expected in this data source? Shoring up these types of data exhaust or data de-structuring scenarios puts information governance policy into practice.
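The two-pass approach described above—a lightweight metadata sweep followed by sampled content analytics—can be sketched in a few lines of Python. This is a minimal illustration, not production tooling: the patterns and function names are our own assumptions, and a real program would rely on vetted detection libraries or dedicated DLP software rather than ad-hoc regular expressions.

```python
import re
from pathlib import Path

# Hypothetical sensitive-data patterns for illustration only; real
# deployments would use vetted detection rules, not ad-hoc regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def metadata_scan(root):
    """Lightweight first pass: collect file metadata without reading content."""
    records = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            stat = path.stat()
            records.append({
                "path": str(path),
                "size_bytes": stat.st_size,
                "extension": path.suffix.lower(),
            })
    return records

def content_sample(path, max_bytes=65536):
    """Deeper dive: sample the start of a file and count sensitive elements."""
    try:
        text = Path(path).read_text(errors="ignore")[:max_bytes]
    except OSError:
        return {}
    return {name: len(pattern.findall(text))
            for name, pattern in PATTERNS.items()
            if pattern.search(text)}
```

The metadata pass answers “where might risk live?” cheaply across a whole share; the sampling pass answers “is sensitive data actually here, and of which types?” only for the sources that warrant the extra cost.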
Using that same methodology, organizations can proactively protect sensitive data of all types. A lightweight metadata scan can help identify possible sources of sensitive information and highlight the riskiest data sets. By focusing first on the high-risk populations and running through the same data scanning, reviewing, and sampling techniques, organizations can identify data requiring deletion, encryption, segregation, or other types of remediation and take appropriate action. Aggregating the output of these scans allows senior-level management to view high-risk applications or data sources and, after repeated scans, continuously monitor progress on risk reduction. Even within a broad organization, this approach allows for flexibility in both the definition of “sensitive data” and the means of remediation.
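The aggregation step—rolling per-source findings up into a risk ranking that management can track across repeated scans—might look like the following sketch. The weights, source names, and scoring scheme are illustrative assumptions; each organization would tune them to its own definition of “sensitive data.”

```python
from collections import defaultdict

# Hypothetical per-type risk weights; tune to your own policy.
WEIGHTS = {"ssn": 10, "credit_card": 8, "email": 1}

def risk_score(findings):
    """Score one data source from its per-type sensitive-data hit counts."""
    return sum(WEIGHTS.get(data_type, 1) * count
               for data_type, count in findings.items())

def rank_sources(scan_results):
    """scan_results: {source_name: {data_type: hit_count}}.
    Returns sources sorted highest-risk first, for the management view."""
    totals = {src: risk_score(findings)
              for src, findings in scan_results.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

def risk_trend(history):
    """history: chronological list of {source: score} snapshots from
    repeated scans. Returns each source's net change, so progress on
    risk reduction can be monitored over time (negative = improving)."""
    series = defaultdict(list)
    for snapshot in history:
        for src, score in snapshot.items():
            series[src].append(score)
    return {src: scores[-1] - scores[0] for src, scores in series.items()}
```

Ranking focuses remediation effort on the worst offenders first, while the trend view gives leadership a simple, repeatable metric for whether deletion, encryption, and segregation efforts are actually shrinking the risk surface.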