
If Loose Lips Sink Ships, What Do They Do to Enterprise Security?

(USS Carl Vinson – Image Credit: United States Navy)

Over the past month, the movement of the aircraft carrier USS Carl Vinson and her carrier battle group escorts (described as an “armada” by President Trump) toward the Korean peninsula has become a strange and twisted tale. The sometimes concurring, sometimes conflicting statements about the ship’s schedule from the Department of Defense and the president leave us wondering whether they were intentional disinformation, a misunderstanding or simply a premature communication of an intended ship’s movement.

Having served aboard the USS Carl Vinson in the late 1990s, I can assure you that the World War II slogan “loose lips sink ships” is still very much a part of Navy life. As part of operational security (OPSEC) practices, crew members aren’t allowed to share precise dates or locations of ship movements with their families.

Yes, users are the weakest link in security, and we’ve all heard of them falling victim to phishing attacks or leaving a laptop on a bus. But some users share information that seems innocuous yet can be used in social engineering attacks, which are easier, lower risk and less costly than many technical exploits. Let’s look at a few examples of not-so-obvious information sharing.

Out-of-office notifications

An email “out of office” auto-reply that includes details of when a user will return from vacation can help an attacker gain the confidence of a co-worker and coax information out of them. Posing as a colleague, the attacker could convince the backup contact named in the out-of-office message that they are under a deadline to complete a report and need information before the vacationing employee returns.

This exact scenario has been demonstrated by penetration testers such as Kevin Mitnick, who has commented during his trade show presentations that “this con is based on impersonating a user’s circle of trust.”

From a policy perspective, consider sending out-of-office notifications only to internal recipients. The policy may need to be narrowed further, applying only to employees with access to sensitive information while leaving employees in other roles, such as sales or direct customer interaction, unrestricted.
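As a rough illustration of how such a policy could be enforced at a mail gateway, here is a minimal Python sketch. It is not tied to any specific mail product; the INTERNAL_DOMAINS list, the function names and the use of the standard Auto-Submitted header (RFC 3834) to recognize auto-replies are assumptions made for the example.

```python
# Minimal sketch of an auto-reply policy filter (hypothetical; not tied to a
# specific mail gateway). It suppresses out-of-office auto-replies addressed
# to recipients outside the organization's domains.
from email.message import EmailMessage

INTERNAL_DOMAINS = {"example.com", "corp.example.com"}  # assumed internal domains

def is_internal(address: str) -> bool:
    """Treat an address as internal if its domain is on the allow list."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in INTERNAL_DOMAINS

def should_deliver_auto_reply(reply: EmailMessage, recipient: str) -> bool:
    """Deliver automatic replies (flagged via the RFC 3834 header) to internal recipients only."""
    if reply.get("Auto-Submitted", "no").lower() == "no":
        return True  # not an auto-reply; deliver normally
    return is_internal(recipient)

# Example: the same auto-reply is delivered internally but dropped externally.
reply = EmailMessage()
reply["Auto-Submitted"] = "auto-replied"
reply.set_content("I am out of the office and will respond on my return.")

print(should_deliver_auto_reply(reply, "colleague@example.com"))  # True
print(should_deliver_auto_reply(reply, "stranger@attacker.net"))  # False
```

In practice, most enterprise mail platforms expose an equivalent setting that restricts automatic replies to internal senders, so no custom code is needed; the sketch simply shows the logic behind the policy.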


Social media

We put a lot of personal information on social media simply because the profile template asks us for it. Information about your role, job title, past projects, company history and skills is standard and often publicly accessible. While this information may not be confidential from a corporate perspective, it is a gold mine for con artists. Like out-of-office notifications, it can feed a social engineering attack, lending the attacker the credibility needed to gain access to a user’s circle of trust.

While the social media genie is unlikely to return to the bottle, there are privacy settings that can help limit information sharing. If your organization has a social media team, work with them on setting policies and educating your users on the potential risks.

Sharing with press and vendors

Many enterprises have policies against sharing specific security controls and policies outside of the company. From past experience working with customers, I can attest to the difficulty of publicizing success stories, and for good reason. But it can be human nature to show off too much when the cameras are rolling.

For example, a crew filming a “top secret” Super Bowl security center in February 2014 exposed the Wi-Fi network’s credentials. In 2015, a French television network, while reporting on its own security incident, actually filmed a staffer in its offices with usernames and passwords written down and visible in the background. And a cybersecurity startup exposed a California hospital’s network in demonstrations without permission.

Security professionals are probably not going to be on the invitation list for external media events. But they can train communications staff on what to watch for, especially what ends up in the background of publicly available footage and materials.

Counter-intelligence operations

While recent reports indicate that the Carl Vinson to Korea story was not an intentional ruse, it certainly wouldn’t be the first example of disinformation from a government. The parallel in IT security is next-gen honeypots.

Honeypots have been used for many years to distract attackers with attractive but fabricated information, but the next generation of these technologies is more sophisticated. They keep attackers engaged with automated reactions that allow the security team to ascertain the real objectives and methods of an attack. This yields intelligence that can be used to adapt defenses: addressing vulnerabilities, creating blacklists, or even identifying an insider threat.
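To make the basic idea concrete, here is a minimal Python sketch of a low-interaction honeypot: a decoy TCP listener that serves nothing real and simply records who connects and what they send. It illustrates only the underlying concept; the automated deception platforms described above add realistic decoy content and engagement logic. The port number and log file name are arbitrary choices for the example.

```python
# Minimal low-interaction honeypot sketch: a decoy TCP listener that records
# connection attempts and the first bytes sent. Illustrative only; the port
# and log file are arbitrary choices for this example.
import socket
from datetime import datetime, timezone

LISTEN_PORT = 2222           # decoy port, e.g. mimicking an alternate SSH service
LOG_FILE = "honeypot.log"

def log_event(peer, data):
    """Append the source address and the probe's first bytes to a log file."""
    stamp = datetime.now(timezone.utc).isoformat()
    with open(LOG_FILE, "a") as log:
        log.write(f"{stamp} {peer[0]}:{peer[1]} {data!r}\n")

def run():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        print(f"Decoy listener on port {LISTEN_PORT}; logging to {LOG_FILE}")
        while True:
            conn, peer = server.accept()
            with conn:
                conn.settimeout(5)
                try:
                    data = conn.recv(1024)   # capture the attacker's first move
                except socket.timeout:
                    data = b""
                log_event(peer, data)        # feed the log to blocklists or alerts

if __name__ == "__main__":
    run()
```

The resulting log of source addresses and payloads is exactly the kind of input that can feed the blacklists and alerts mentioned above, while a legitimate user has no reason to ever touch the decoy.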

These are just a few examples of inadvertently shared sensitive information and the corresponding OPSEC policies to consider. When identifying where risk exists in your organization, don’t overlook the everyday sharing of information by users. Attackers are looking for soft targets, and old-fashioned confidence schemes married to easily accessible information can make their lives plain sailing.
