

Hacker Conversations: Casey Ellis, Hacker and Ringmaster at Bugcrowd

SecurityWeek interviews Casey Ellis, founder, chairman and CTO at Bugcrowd, best known for operating bug bounty programs for organizations.


In this edition of Hacker Conversations, SecurityWeek talks to Casey Ellis, founder, chairman and CTO at Bugcrowd – and hacker. Bugcrowd provides a crowdsourced ethical hacking cybersecurity platform, best known for operating bug bounty programs on behalf of individual organizations.

What is a hacker?

“A hacker,” says Ellis, “is someone who takes the assumptions of a system and tips them upside down to see what falls out. Hackers will learn how a system works, to the extent they can manipulate it into doing things it was never originally intended to do.” That desire is almost a default condition. “When I see a new technology, the first thing I often do is try to get it to misbehave.”

There are several elements to this definition. For example, it is not computer-specific – it could apply to almost any engineering technology. Here we are solely concerned with the computer hacker variety.

Most importantly, however, the act of hacking is amoral; it is driven by curiosity rather than a desire to do bad things. Hacking itself is neither moral (a good action) nor immoral (a bad action), and the term ‘hacker’ simply describes someone who likes to deconstruct a system and then reconstruct it with additional or different outcomes.

Casey Ellis, founder, chairman and CTO at Bugcrowd

It is the use made of these outcomes, for moral or immoral purposes, that forces us to divide hackers into two camps: the ethical hacker (Whitehat) and malicious hacker (Blackhat). The ethical hacker finds ways in which the system can be manipulated so the developer can prevent the malicious hacker from finding and abusing the same manipulations for his or her own benefit (usually financial or political).

Both schools of hacker have the same skill set. The question then is, why do some become immoral while others remain strictly moral; and yet others flip between the two? This is what we sought to discover in conversation with Casey Ellis. 

Ethical (Whitehat) vs malicious (Blackhat)

The motivating factors separating the ethical hacker from the unethical are many and varied. They may stem from a personal moral compass; the vagaries of, and conflicts within, national and international law; the hacker’s economic and cultural background; and social pressures arising from and amplified by neurodivergence – or, indeed, a unique combination of several of these factors.

“No one wakes up one day and decides they want to become a drug dealer, or they want to be a stick-up kid. Those decisions are made after a series of events have happened in one’s life,” said actor Michael K. Williams in the Guardian in 2014. The same reasoning could be applied to most malicious hackers.

However, while there may be an element of choice between being an ethical or unethical hacker, most hackers cannot stop being hackers. “I think most people that self-identify as a hacker, they know that they kind of can’t turn that off – it’s just a thing that their brain does,” said Ellis.


Moral compass

The accepted meaning of moral compass is clear: an innate or learned ability to understand the difference between what is right and what is wrong, and to act accordingly. It is the most common (and perhaps the easiest) answer given by ethical hackers when asked why they are ethical. The difficulty comes over ‘right’ and ‘wrong’. This distinction is effectively a subjective majority opinion governed by the current society. It may differ between different societies, or even between micro niche societies within one society.

Nevertheless, it is often used by ethical hackers to describe a firm belief that being malicious is bad.

A moral compass is not fixed for life and is more influenced by nurture than nature. Ellis, as an ethical hacker, believes his own moral compass developed early in life, from his family upbringing. Basically, good parenting. But outside influences can affect most people as they progress through life.

“You’ve got young people with this incredible power and skill in what they can achieve. That skill outpacing the growth and development of a moral compass is not uncommon. So how do you make sure they don’t accidentally trip over into a life of crime?” It’s something he’s proud of in Bugcrowd: “I love that we’ve actually diverted people from a life of crime because we give them a Whitehat outlet for their skills.”

The law

The influence of hackers on the law, and of the law on hackers, should not be underestimated. In the UK, the Computer Misuse Act was a direct response to a ‘hack’ by Robert Schifreen and Steve Gold (two non-malicious young men). They accessed an early form of electronic mailbox on British Telecom’s Prestel service – the mailbox belonging to the Duke of Edinburgh – primarily to prove it could be done. They were eventually arrested, prosecuted, found guilty and then released on appeal: hacking was not against the law because there was no law against hacking. Hence the subsequent Computer Misuse Act of 1990.

The US has its Computer Fraud and Abuse Act (CFAA) of 1986, which prohibits accessing a computer without authorization, or in excess of authorization. Technically, it makes independent system research, for whatever reason, illegal. So, under US law, an ethical hacker is automatically a malicious hacker – and this lack of distinction between the two has inevitably adversely affected the development of a moral compass in young hackers.

The effect of the CFAA was eased only as recently as May 2022, with new charging rules published by the DoJ: “The policy for the first time directs that good-faith security research should not be charged. Good faith security research means accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.”

Prior to this rule, says Ellis, “Doing anything to a computer without authorization was a felony crime. Even vulnerability research was technically a crime.” So ethical hackers had to be prepared to break the law for good purposes, a law which technically equated a Whitehat with a Blackhat.

Social and cultural background

The social and cultural background of young hackers is also instrumental in their development. Cultural is easiest to consider: one country’s freedom fighter is another country’s terrorist. It’s a form of relativity – perception is governed by your starting point.

“Good and bad can get a bit fuzzy,” says Ellis. “One of my favorite questions I like to throw into conversations is, ‘Do you consider the NSA and GCHQ to be Blackhat or Whitehat organizations?’”

The influence of social background is more complex: it has contributed to the development of both Blackhat and Whitehat hackers. Nevertheless, there are many cases in which social background can be considered a contributing factor in criminality.

As a natural hacker grows up, he or she is faced with the need to make a living. In some parts of the world, even in so-called ‘advanced’ societies, it is sometimes easier to make a living through crime than it is through ‘legitimate’ employment. 

“There are areas of the world,” explains Ellis, “with such an established infrastructure around the criminal enterprise that it’s easy to get a job in crime. It’s almost a case of jumping on LinkedIn, responding to a job offer, and becoming a criminal.” In some of these areas, it is easier to work in crime than it is to get lawful employment – and a hacker has a skill set attractive to criminals.

Some areas of eastern Europe have a reputation for producing hackers. Ellis has a separate theory for this. “There’s a depth of technical prowess that exists in that part of the world – and my theory is it’s really a product of the Cold War. You have all these parents being put through state-funded astrophysics and science and engineering courses as part of the USSR’s war effort.”

But then the Cold War ended. The parents had nothing to do, but they did have kids. “So, you’ve got all this knowledge and intelligence and critical system thinking being dumped into that part of the world, and then suddenly, it’s got no outlet. I think, to me, that explains a big part of why there’s so much talent in that part of the world.”

But neither social nor cultural background is enough to explain the existence of hackers, nor their delineation into ethical or unethical hacking.

The influence of neurodiversity

The incidence of neurodiversity among hackers is interesting. There is ample anecdotal evidence to suggest a higher ratio of neurodivergence among hackers than among ‘normal people’ (affectionately known as ‘normies’) – but no scientific evidence. As the name suggests, neurodivergence implies a difference in the way the brain operates between divergents and normies.

There are two categories of neurodivergence with particular relevance to hacking skills: ADHD and ASD (which now encompasses what was formerly known as Asperger’s syndrome). Ellis has ADHD. Daniel Kelley (featured earlier in this Hacker Conversations series) has ASD. Both conditions vary widely in degree, and there are both similarities and differences between them. Both can hyperfocus, but ADHD tends toward a comparatively extroverted personality, while ASD tends toward introversion and weaker social skills.

“Systems are usually built by neurotypical people and used by neurotypical people,” says Ellis. “So, having a neurodivergent come in and say, ‘Hey, here’s the thing you missed’ makes sense. It’s there in the name – they’re thinking in a different way.” But at the same time, Ellis rejects the idea that neurodivergence is a pre-requisite for hacking – the insatiable curiosity and desire to deconstruct and reconstruct differently is more important.

The common element between the two forms of neurodivergence is the ability to hyperfocus, often for hours on end. “When I got my diagnosis,” said Ellis, “I thought, yeah that makes total sense because my mind flips between things very, very quickly. But if I line up and hyperfocus on getting something done, I’m pretty much unstoppable at that point.” Ellis learned that, by understanding his ADHD, the condition became a superpower rather than a disability. That helps, but does not create, a hacker.

ASD has a different effect. The lack of social skills in an uncompensated ASD youngster can sometimes constrain that person to a more solitary life, often alone with a computer. If that condition is supplemented by high intelligence, exploring the world through and with the computer becomes natural. Under these conditions, the combination of hyperfocus, an unformed moral compass, an ill-defined legal definition of malicious hacking, and the prevalent social background can all combine into directing that person onto the wrong path.

Again, there is no evidence to show that this does happen, but there are plenty of examples to show that it can happen.

The fence

Hacking is not binary, fixed as either ethical or unethical. There may be a fence between the two sides, but that fence has gates, and the potential to move from one side of the fence to the other – even if temporarily – exists.

Ellis cites his perception of two examples. The first is the Uber hack that ultimately led to the prosecution of Uber’s CISO, Joe Sullivan. The perpetrators, suggests Ellis, “were basically kids on an internet safari where they came into possession of some pretty valuable data.”

They were neither ethical nor unethical at that point – just kids having fun. But when they came to that fence, they made the wrong choice, “and decided to go down the path of trying to get money for what they had found.”

His second example is the more recent Optus breach in Australia. It starts with the same type of internet safari as the Uber incident: hunters just looking around the internet, rattling cages and seeing what fell out. “So, strictly speaking, probably not legal, but they’re not necessarily causing any harm in the process,” said Ellis. “They’re just looking for bugs.”

Then they found an unsecured API within Optus that allowed them to enumerate the entire customer database. They did this and found themselves at the fence – and like the Uber hackers, they chose the wrong side.
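
The flaw described here is a classic enumeration (or insecure direct object reference) problem: an endpoint that returns a customer record for any identifier presented to it, with no authentication. As a purely hypothetical sketch – the URL, endpoint and field names below are invented for illustration and bear no relation to the actual Optus API – this is roughly how a good-faith tester might confirm such an exposure on a tiny sample of identifiers before reporting it:

```python
# Hypothetical illustration only: checking whether an API hands back records
# for sequential IDs without requiring any credentials (an enumeration/IDOR bug).
# The base URL and endpoint are invented; they are not the real Optus API.
import requests

BASE_URL = "https://api.example.com/v1/customers/{customer_id}"  # hypothetical

def check_unauthenticated_enumeration(sample_ids):
    """Return the IDs that yield data with no authentication at all."""
    exposed = []
    for customer_id in sample_ids:
        resp = requests.get(BASE_URL.format(customer_id=customer_id), timeout=10)
        # A properly secured endpoint should answer 401/403 here; a 200 carrying
        # personal data indicates the enumeration problem described above.
        if resp.status_code == 200:
            exposed.append(customer_id)
    return exposed

if __name__ == "__main__":
    # Probe a handful of sequential IDs rather than the whole keyspace.
    print(check_unauthenticated_enumeration(range(1000, 1005)))
```

The fix is equally unglamorous: require authentication on the endpoint and authorize each lookup against the caller’s own identity, rather than trusting whatever identifier appears in the URL.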

But they didn’t stay on the wrong side. “They tapped out, and said, ‘That’s it, we’re not doing this anymore,’” explained Ellis. “They even posted a message saying if there had been a bug bounty program or a clear way to communicate the vulnerabilities to Optus, none of this would have happened. We would have just told you guys you’ve got a problem so you can fix it.”

Ellis, Bugcrowd and the fence

Ellis had two primary motivations for founding Bugcrowd: a commercial enterprise built around a unique security platform, and help for hackers who come to that fence.

Commercially, the platform is a countermeasure to the recognized asymmetry of malicious cyberattacks. A small team of defenders must defend against multiple diverse attackers coming from all angles, all the time. It only takes one of these attackers to cause a breach.

Bugcrowd doesn’t reverse this asymmetry, but it improves defense. It provides its own small, diverse army of highly skilled ethical hackers delivering results-based, continuous pentesting.

For hackers who may come to that fence, it provides a monetary incentive to choose the right side – an ethical and not unrewarded outlet for their skills. The principle is simple: hacking is an amoral skill set, neither moral nor immoral. Society provides many incentives for hackers to choose the immoral side of the fence. Bugcrowd, and organizations like it, attempt to redress the balance and help hackers choose the moral side.

In Ellis’ own words, with the right help, “Hackers should be viewed as part of the internet’s immune system.”

Related: Hacker Conversations: Youssef Sammouda, Bug Bounty Hunter

Related: Inside the Mind of the Hacker: The Speed and Efficiency of Hackers in Adopting New Technologies

Related: Hackers Receive $500,000 in One Week via Bugcrowd

Related: Bugcrowd Raises $30 Million in Series D Funding Round

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
