Big Brother is Watching, But That’s OK Within Limits

How Can a Company Protect Its Information and Operations Without Running Afoul of Data Privacy Laws and the Concerns of Its Customers?

We are all familiar with the literary and movie versions of the “big brother is watching you” scenario. It usually begins under the umbrella of protecting society from the bad guys and ends with a police state that violates the rights of everyday citizens. Some privacy advocates and regulators fear that security tools and methods that track a person’s computing activities or location will lead to such a dystopian future. Security advocates counter that these tools serve the greater good. Reality lies somewhere in the middle, and maintaining that balance is the challenge.

The tension between security and privacy is not new. Security tools by their very nature need to monitor and inspect users’ and machines’ computing activities. A policeman wearing a blindfold and earplugs won’t be very effective. In the public and private domains, there are laws that protect our rights from an unlawful invasion of privacy. Without probable cause, police can’t just enter your home to see if by some chance the law is being broken.

Our work environments lie somewhere between private and public, and have additional interests to protect beyond our own.

Between the security and logging required by some industry regulations and the risk of insider threats, most mid-sized and large enterprises have a broad suite of tools to monitor, log and analyze the activities taking place on their networks and computers. For example, SEC regulations require brokerages to capture and store all broker email, chat and voice communication for six years on non-rewriteable storage. Another example is the behavioral analytics technology that companies deploy to identify and mitigate insider threats, which requires capturing and analyzing user behavior and other vectors for anomalies and correlations.
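As a concrete illustration of what “non-rewriteable storage” can look like in practice, the sketch below archives a message to object storage with a compliance-mode retention lock, so the record cannot be altered or deleted before the retention date. This is a minimal sketch assuming AWS S3 Object Lock; the bucket name and key scheme are hypothetical, and the bucket must have been created with Object Lock enabled.

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

def archive_communication(message_id: str, body: bytes) -> None:
    """Write a broker communication to WORM storage (illustrative sketch).

    COMPLIANCE mode means the object cannot be overwritten or deleted
    before the retention date, even by the account root user.
    """
    retain_until = datetime.now(timezone.utc) + timedelta(days=6 * 365)  # six-year retention
    s3.put_object(
        Bucket="broker-comms-archive",   # hypothetical bucket name
        Key=f"email/{message_id}.eml",   # hypothetical key scheme
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```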

Most would say that a company has a right to protect its assets and operations, and certainly to comply with industry regulations.

The bigger challenge facing companies is how all that security data is used. Where does it reside? How is it protected? Who has access to it? Does it apply to my personal communication? Does it apply to my personal device when it is connected to the corporate network and email? Add in the variety and complexity of laws that differ from country to country and even state to state, and you need a law degree to manage a security operations center. For example, some security tools collect personally identifiable information (PII) that must be stored and protected in accordance with the appropriate regional and industry regulations. Another example: some states, like Illinois and California, have eavesdropping laws requiring consent from all parties to a conversation before it can be recorded, calling into question the monitoring of emails from outside parties.

So how does a company protect its information and operations without running afoul of data privacy laws and the concerns of its customers?

In a case of “physician, heal thyself,” the answer is very much the same feature set the CISO requires of any other enterprise technology the business implements. Although those working in the security sector are granted a higher level of trust, that does not allow for extraneous exposure or handling of personal data. It is understood that to catch and stop bad actors, their identity and activities will need to be exposed to some audience at some point.

Most regulations have allowances for this once a level of suspicion has been established. Until then, however, the principle of least privilege applies and privacy needs to be maintained, with appropriate roles and segregation of duties. First-level analysts validating machine-provided investigation targets can work with minimal access to personally identifiable information. Once an incident is validated, escalation procedures move it to higher-level investigators with more complete access to such information.

A common technical method of achieving this mode of operation is through role-based access control and data masking, also known as “pseudonymization.” Data masking replaces the name and other identifying information with a consistent artificial identifier, like an ID number.
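A minimal sketch of this kind of masking, assuming a keyed HMAC as the pseudonym generator (the key handling shown is simplified for illustration):

```python
import hashlib
import hmac
import os

# Derive a stable artificial identifier from a user identity. The same
# person always maps to the same token, but the name never appears in
# analyst-facing views. In practice the masking key would live in a
# secrets manager, not an environment variable (simplified here).
MASKING_KEY = os.environ.get("MASKING_KEY", "demo-key-change-me").encode("utf-8")

def pseudonymize(identity: str) -> str:
    digest = hmac.new(MASKING_KEY, identity.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"User-{digest[:8]}"

print(pseudonymize("Steven Grossman"))  # consistent token, e.g. "User-a1b2c3d4"
```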

Combining data masking with role-based access control allows lower-level review that hides personally identifiable information. Once validated, the incident is escalated to a smaller group of higher-level responders who have access to the personal information, which is critical for remediation and legal proceedings.

For example, when a level-one analyst is looking at a suspected insider threat identified by a user and entity behavioral analytics (UEBA) tool, instead of being identified as “Steven Grossman,” the subject is shown as “User-12345.” After the level-one analyst reviews the incident and validates that it is real, it can be escalated to the human resources team, which has more complete access to the underlying personal data. Segregation of duties also needs to take place at the infrastructure level, where administrators with broad access to maintain servers and databases should not have unfettered, unlogged access to personal data that could be misused or violate a person’s privacy and/or data privacy regulations.
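A sketch of how that gating might be wired up, with hypothetical role names, an in-memory stand-in for what would really be a protected lookup table or vault, and the audit logging described above:

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("pii-access")

# Roles permitted to resolve a pseudonym back to a real identity
# (hypothetical names; a real deployment would pull these from IAM).
UNMASK_ROLES = {"l2_investigator", "hr_case_manager"}

# Pseudonym -> identity mapping; stand-in for a protected table or vault.
PSEUDONYM_TABLE = {"User-12345": "Steven Grossman"}

def resolve_identity(token: str, requester: str, role: str) -> str:
    """Unmask a pseudonym, logging every attempt for later review."""
    if role not in UNMASK_ROLES:
        audit_log.warning("DENIED unmask of %s by %s (role=%s)", token, requester, role)
        raise PermissionError("role not authorized to view personal data")
    audit_log.info("UNMASKED %s by %s (role=%s)", token, requester, role)
    return PSEUDONYM_TABLE[token]

# A level-one analyst is refused; an escalation role succeeds and is logged.
try:
    resolve_identity("User-12345", "analyst7", "l1_analyst")
except PermissionError:
    pass
print(resolve_identity("User-12345", "investigator2", "l2_investigator"))
```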

Another growing concern around behavioral analytics is the fear of profiling, where algorithms determine on their own who is a threat, and act without human input.  It’s a fear of the “machines taking over,” along with whatever biases are built into the algorithm.

Whether or not this fear is well founded, at this point in time it is important to keep a human in the loop to validate insider threats. By all accounts, machine learning and artificial intelligence are still in their early days, and for analyzing behavior they are best suited to accelerate and amplify the capabilities of human analysts. Some insider threat cases are open and shut, while others are clear false positives. For everything in between, machine-based analytics can do 80 percent of the job, but the other 20 percent is best handled by a human analyst. Software should make that human analysis quick and intuitive, but a human should make the final determination.
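One way to picture that split is the routing sketch below, with illustrative (not tuned) score thresholds:

```python
def triage(alert_id: str, risk_score: float) -> str:
    """Route a UEBA alert: the machine handles the obvious ends of the
    score range, and the wide middle band goes to a human analyst, who
    makes the final determination. Thresholds are illustrative."""
    if risk_score < 0.2:
        return f"{alert_id}: auto-closed as likely false positive"
    if risk_score > 0.9:
        return f"{alert_id}: fast-tracked to a responder for confirmation"
    return f"{alert_id}: queued for analyst review"  # the judgment calls

print(triage("alert-001", 0.05))
print(triage("alert-002", 0.55))
print(triage("alert-003", 0.95))
```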

We are not in George Orwell’s 1984 and the machines are not taking over, though in the corporate environment big brother is, and should be, keeping an eye out to protect the company’s interests. It is up to those of us in the industry, and to regulators, to ensure that the tools and processes we use to catch the bad guys have the right protective measures and don’t cross the line, violating our rights in the process.
