Psycho-Analytics Could Help Detect Future Malicious Behavior

The insider threat is perhaps the most difficult security risk to detect and contain — and concern is escalating to such an extent that a new bill, H.R.666 – Department of Homeland Security Insider Threat and Mitigation Act of 2017, passed through Congress unamended in January 2017.

The bill text requires the Department of Homeland Security (DHS) to establish an Insider Threat Program, including training and education, and to “conduct risk mitigation activities for insider threats.” What it does not do, however, is explain what those ‘mitigation activities’ should comprise.

One difficulty is that the insider is not a uniform threat. It includes the remote attacker who becomes an insider by using legitimate but stolen credentials, the naive employee, the opportunistic employee, and the malicious insider. Of these, the malicious insider is the most intractable concern.

Traditional security controls, such as access control and data loss prevention (DLP), have only limited effect. In recent years they have been supplemented by user behavior analytics (UBA), which uses machine learning to detect anomalous user behavior within the network.

“Behavioral analytics is the only way to… get real insight into insider threat,” explains Nir Polak, CEO of Exabeam. “UBA tells you when someone is doing something that is unusual and risky, on an individual basis and compared to peers. UBA cuts through the noise to give real insight – any agencies looking to get a handle on insider threat should be looking closely at UBA.”

Humphrey Christian, VP of Product Management, at Bay Dynamics, advocates a combination of UBA and risk management. “A threat is not a threat if it’s targeting an asset that carries minimal value to the organization. An unusual behavior is also not a threat if it was business justified, such as it was approved by the employee’s manager,” he told SecurityWeek. “Once an unusual behavior is identified, the application owner who governs the application at risk, must qualify if he indeed gave the employee access to the asset. If the answer is ‘no’, then that alert should be sent to the top of the investigation pile.”
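Christian's triage logic can be sketched in a few lines. The following Python sketch is illustrative only (the `Alert` fields and the value scale are hypothetical, not drawn from any Bay Dynamics product): an alert is dropped when the targeted asset carries minimal value or the access was business justified, and the remainder are queued with the highest-value asset first.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    asset_value: int          # 0-10, as rated by the asset/application owner (hypothetical scale)
    business_justified: bool  # did the application owner confirm the access was approved?

def triage(alerts):
    """Order alerts the way Christian describes: discard those that target
    low-value assets or were business justified; escalate the rest,
    highest-value asset first."""
    actionable = [a for a in alerts
                  if a.asset_value > 0 and not a.business_justified]
    return sorted(actionable, key=lambda a: a.asset_value, reverse=True)

alerts = [
    Alert("alice", asset_value=9, business_justified=False),
    Alert("bob",   asset_value=2, business_justified=True),
    Alert("carol", asset_value=5, business_justified=False),
]
queue = triage(alerts)
print([a.user for a in queue])  # alice first: high-value asset, no approval
```

The point of the filter is the one Christian makes: anomaly detection alone produces noise, and it is the combination with asset value and business justification that yields an investigation queue.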

This week a new paper published by the Intelligence and National Security Alliance (INSA) proposes that user behavior analytics should go a step further and incorporate psycho-analytics set against accepted behavioral models. These models are not just a baseline of acceptable behavior on the network; they incorporate the psychological effect of life events both inside and outside the workplace. The intent is not merely to respond to anomalous behavior that has already happened, but to get ahead of the curve and predict malicious behavior before it happens.

The INSA paper starts from the observation that employees don't just wake up one morning and decide to be malicious. Malicious behavior is invariably the culmination of progressive dissatisfaction, which can stem from events both within and outside the workplace. INSA's thesis is that clues to this progressive dissatisfaction could and should be detected by technology: machine learning (ML) and artificial intelligence (AI).

This early detection would allow managers to intervene and perhaps help a struggling employee and prevent a serious security event.

Early signs of unhappiness within the workplace can be relatively easy to detect when they manifest as ‘counterproductive work behaviors’ (CWBs). INSA suggests that there are three key insights “that are key to detecting and mitigating employees at risk for committing damaging insider acts.” CWBs do not occur in isolation; they usually escalate; and they are seldom spontaneous.

Successful insider threat mitigation can occur when early non-harmful CWBs can be detected before they escalate.

Using existing studies, such as the Diagnostic and Statistical Manual of Mental Disorders Vol. 5 (DSM-5), INSA provides a table of stressors and potentially linked CWBs. For example, emotional stress at the minor level could lead to repeated tardiness; at a more serious level it could lead to bullying co-workers and unsafe (dangerous) behavior. INSA’s argument is that while individual CWBs might be missed by managers and HR, patterns — and any escalation of stress indicators — could be detected by ML algorithms. This type of user behavior analytics goes beyond anomalous network activity and seeks to recognize stressed user behavior that could lead to anomalous network activity before it happens.
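INSA's escalation insight lends itself to a simple trend test. A minimal sketch, assuming a hypothetical severity scale for observed CWBs (the names and weights below are illustrative, loosely echoing the stressor/CWB table, not taken from INSA or DSM-5): fit a least-squares slope to severity over time and treat a clearly positive slope as a pattern worth human review.

```python
# Hypothetical severity weights for observed CWBs (illustrative only).
CWB_SEVERITY = {
    "tardiness": 1,
    "missed_deadline": 2,
    "policy_violation": 3,
    "bullying": 4,
    "unsafe_behavior": 5,
}

def escalation_slope(events):
    """Least-squares slope of CWB severity over time.
    `events` is a list of (week_number, cwb_name) observations; a clearly
    positive slope suggests escalating behavior that merits a manager's
    (human) attention, not automated action."""
    xs = [week for week, _ in events]
    ys = [CWB_SEVERITY[name] for _, name in events]
    n = len(events)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

history = [(1, "tardiness"), (3, "tardiness"),
           (5, "missed_deadline"), (8, "policy_violation")]
print(escalation_slope(history) > 0)  # True: severity is trending upward
```

This captures the paper's second insight in miniature: no single event in `history` would alarm a manager, but the upward trend across them is exactly the pattern an algorithm can surface.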

But it still suffers from one weakness: stressors that affect the user's work can occur entirely outside the workplace, such as divorce, financial losses, or family illness. Here INSA proposes a more radical approach, one that would work both inside and outside the workplace.

“In particular,” it suggests, “sophisticated psycholinguistic tools and text analytics can monitor an employee’s communications to identify life stressors and emotions and help detect potential issues early in the transformation process.”

The idea is to monitor and analyze users’ communications, which could include tweets and blogs. The analytics would look for both positive and negative words. An example is given. “I love food … with … together we … in … very … happy.” This sequence could easily appear in a single tweet; but the use of ‘with’, ‘together’, and ‘in’ would suggest an inclusive and agreeable temperament. 
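A word-counting sketch shows the basic mechanics of this kind of psycholinguistic scoring. The tiny lexicons below are placeholders (real tools such as LIWC use thousands of categorized words); the function simply measures what fraction of a text's words fall into positive, negative, and inclusive categories.

```python
import re

# Tiny illustrative lexicons -- placeholders for the thousands of
# categorized words a real psycholinguistic tool would use.
POSITIVE = {"love", "happy", "together"}
NEGATIVE = {"hate", "angry", "alone"}
INCLUSIVE = {"with", "together", "we", "in"}  # inclusion/social markers

def affect_profile(text):
    """Return the fraction of words falling into each affect category."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return {
        "positive": sum(w in POSITIVE for w in words) / total,
        "negative": sum(w in NEGATIVE for w in words) / total,
        "inclusive": sum(w in INCLUSIVE for w in words) / total,
    }

tweet = "I love food with friends; together we cook in a very happy kitchen"
profile = affect_profile(tweet)
print(profile["inclusive"] > profile["negative"])  # True: inclusive tone dominates
```

Tracked over weeks of communications, shifts in these ratios (rising negative affect, falling inclusive language) are the kind of signal INSA suggests could flag life stressors early.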

In fairness to doubters, INSA has done itself no favors with the misuse of a second example. Here Chelsea (formerly Bradley) Manning is quoted. “A second blog post,” says INSA, “substantiates that Life Event and identifies an additional one, ‘Relationship End/Divorce’ with two mentions for each Life Event.” The implication is that psycholinguistic analysis of this post would have highlighted the stressors in Manning’s life and warned employers of the potential for malicious activity. The problem, however, is that the quoted section comes not from a Manning blog post before the event, but from the chat logs of her conversations with Adrian Lamo in May 2010 (see Wired), after WikiLeaks had started publishing the documents. The linguistic analysis in this case might have helped explain Manning’s actions, but could do nothing to forewarn the authorities.

The point, however, is that psycholinguistic analysis has the potential to highlight emotional status and, over time, flag individuals with an escalating likelihood of developing first minor and ultimately major CWBs. The difficulty is that it really is kind of creepy. That creepiness is acknowledged by INSA. “Use of these tools entails extreme care to assure individuals’ civil or privacy rights are not violated,” it says. “Only authorized information should be gathered in accordance with predefined policies and legal oversight and only used for clearly defined objectives. At no point should random queries or ‘What If’ scenarios be employed to examine specific individuals without predicate and then seek to identify anomalous bad behavior.”

Users’ decreasing expectation of privacy suggests that psycholinguistic analysis aimed at identifying potential malicious insiders before they actually become malicious will, sooner or later, become acceptable. In the meantime, however, it should be used with extreme caution and with the clear, unambiguous, informed consent of users. What INSA is advocating is an example of what law enforcement agencies have sought for many years: the ability to predict rather than merely respond to bad behavior.


Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.