Information Sharing: The Wrong Information in the Wrong Hands Could Birth a Threat that Simply Can't be Handled…
My previous column generated a lot of feedback, mostly around those topics that I specifically avoided: feasibility, responsibility, and liability. While I offered a more capitalist solution — to use private information feeds and any number of available threat detection and cyber G2 products — the debate continues to return to the social nature of information sharing. More specifically, to the faults in that nature.
I’m a fairly positive person, but I have to agree with the skeptics on this one. The industries that define “critical infrastructure” are, for good reason, very secretive. When talking about automation systems, you’re also dealing with the incontrovertible fact that knowing about an issue isn’t always good enough. You’re talking about systems with legitimate lifecycle concerns and devices with real patching constraints. Knowing that you have a bug, a vulnerability, or some other weakness doesn’t always mean you can do a thing about it.
Resource: How to Deal with Today’s Multi-Platform, Multi-Vendor Patch Management Mess
Any system that operates in real time, is responsible for producing a good (i.e., it contributes directly to the bottom line), and represents a huge capital investment is going to demand that production be constant and the equipment be used for decades, so these concerns aren’t going to change. That means the “if it works, don’t touch it” mentality that continues to thwart many aspects of cyber security, information sharing included, isn’t going anywhere. It’s also why the trust required to implement a successful information sharing scheme is unlikely to blossom overnight. The wrong information in the wrong hands could birth a threat that simply can’t be handled, so lips remain sealed.
So, the operators aren’t likely to participate unless they’re forced to. If they are forced to, there will be justifiable concerns about responsibility: what can, can’t, should, and shouldn’t be disclosed, and what can and should be done with the information? And you can’t live in a litigious country like mine without also considering liability. Will the government share incidents that could impact national security? If an incident is disclosed and it results in another incident, who is responsible? What if an incident isn’t disclosed, and the silence leads to another incident? Until everyone agrees on all of this, it will take a lot more than an executive order, or even a law, to make information sharing a functional reality. Thankfully, as I said before, I’m going to avoid these issues and focus on the fun part of the debate: the innovative products and technologies that could render the whole debate moot.
Resource: The Busy IT Professional’s Guide to Vulnerability Management
