Industry Reactions to Oracle CSO Rant: Feedback Friday

Mary Ann Davidson, Oracle’s Chief Security Officer, published a blog post titled “No, You Really Can’t” on Monday asking customers to stop performing their own security tests on the company’s products.


Davidson pointed out that customers and security researchers who reverse engineer Oracle’s products in an effort to find vulnerabilities are violating the company’s license agreement. The CSO believes that security hygiene — encrypting sensitive data, patching, updating, and properly configuring software — is more important for an organization’s cyber defense than finding zero-day vulnerabilities in the products they’re using.


Davidson also targeted bug bounty programs, saying that it’s much more efficient to hire an employee to do ethical hacking or to develop new security tools than to pay money to third parties. She noted that in Oracle’s case external researchers find only 3 percent of security holes, while 87 percent are identified by the company’s employees.

“Now is a good time to reiterate that I’m not beating people up over this merely because of the license agreement. More like, ‘I do not need you to analyze the code since we already do that, it’s our job to do that, we are pretty good at it, we can – unlike a third party or a tool – actually analyze the code to determine what’s happening and at any rate most of these tools have a close to 100% false positive rate so please do not waste our time on reporting little green men in our code’,” Davidson said.

Davidson’s blog post caused uproar in the industry and was quickly pulled down by Oracle. Soon after, the company issued the following statement: “The security of our products and services has always been critically important to Oracle. Oracle has a robust program of product security assurance and works with third party researchers and customers to jointly ensure that applications built with Oracle technology are secure. We removed the post as it does not reflect our beliefs or our relationship with our customers.”

SecurityWeek contacted several industry professionals, including researchers who have reported vulnerabilities to Oracle, to see what they think about Davidson’s blog post.

And the feedback begins…

Alexander Polyakov, CTO of ERPScan:


“Well, despite my great respect for Oracle, I have been discovering vulnerabilities in Oracle applications since 2007. I alone have found 30+ issues in Oracle applications, and our team has probably found twice as many in total. And it was not a big deal, seriously. Oracle’s software was one of the first systems I broke into as an intern pentester. It was very easy to find those vulnerabilities, as easy as hacking websites. You can’t imagine that a top vendor can have such simple vulnerabilities and, more importantly, can act like that.

Do you think that you need advanced reverse engineering skills, or need to be a nerd from Mr. Robot, to find a vulnerability? Frankly speaking, no, you don’t. It’s much easier than you may think. For most of the issues I’ve found, I did not use any reverse engineering tools; I just tried to enter data in a field where nobody expected that type of data. Simple? Yes! And it works! What does that mean? It means that to find most of the vulnerabilities in Oracle products (and there are about 3,000 of them, by the way) you don’t need to be a hacker, you just need good testing skills. In scientific terms, this is boundary testing, and failing such tests means something even worse than software with vulnerabilities. It’s software without proper Quality Assurance! Ok, big vendor, if you don’t care about security, is QA also unimportant?”

Adam Willard, Senior Software Security Engineer at Foreground Security:

“The statements made by Oracle’s CSO do not take into account the dramatic security gains we can make with the help of responsible researchers and a well-informed customer base. They [the blog’s statements] are predicated upon the notions that the vendor knows best and will miss nothing. I do not disagree that reverse engineering is a violation of the terms of service, but terms of service are about as useful a deterrent as speed limit signs. With regard to Common Criteria and the like, I’d expect a CSO to know that having a certification doesn’t ensure security. Finally, the statement ‘A customer can’t produce a patch for the problem – only the vendor can do that’ is not 100% true. While the customer can’t create a patch for the software, the customer can implement compensating controls and instrumentation if there is a known vulnerability.

In my experience, the Oracle teams responsible for product and global security have always responded in a reasonable amount of time and have always been professional. The development teams have investigated issues and developed appropriate fixes if a patch wasn’t already in the pipeline. As competent as they are, however, it may not be possible for all projects to have the same access to every tool and testing resource available before the code is released. I have worked on software where the flagship product had access to certain products for static and dynamic analysis, pen testing, and rigorous quality control processes, but lesser-known products just didn’t have the same revenue stream to ensure the same coverage.

I understand that the volume of submissions could be overwhelming; however, doesn’t this also suggest that the security community is interested in ensuring that Oracle products are secure and that their customers are protected? If the internal teams found all the bugs, why are we still finding them and why was the product released prior to this type of testing? How does the security researcher know that a vulnerability was already discovered? Should that deter the researcher from reporting it responsibly? If researchers and auditors are essentially providing free source code analysis, maybe Oracle should capitalize on this as other vendors have rather than saying, ‘Thanks but no thanks.’” 

Benjamin Kunz Mejri, founder and CEO of Vulnerability Lab:

“Reverse engineering is a significant part of our industry. Revoking or denying access to a useful resource like reverse engineering blocks a company’s creativity and room to operate. Thanks to reverse engineering of Oracle’s products, and by the Oracle Corporation itself, a lot of further projects and plans have been realized. I can understand the concerns of a frustrated CSO who has to deal with that many security issues found through reverse engineering, but you should never exclude the private sector.

As long as you want users to use your software, you have to deal with their feedback and findings as well. To date I have reported a lot of issues found through reverse engineering techniques to Oracle, but I have never received a mail reply like that, and hopefully I never will. The statement from the Oracle executive is very unproductive and shows a company losing its position when things overheat.”

Dr. Chenxi Wang, VP of Cloud Security and Strategy at CipherCloud:

“I sympathize with Mary Ann. Today’s CSOs face constant landmines, pressure, and higher stakes. At a place like Oracle, whose software products must support many different versions and platforms, testing is a tremendous undertaking.

That said, telling customers to stop testing the software they purchased is the wrong answer. It sets a bad precedent with the software supply chain, and lends excuses for suppliers to be lazy. Software consumers need transparency and a foundation to build trust with their suppliers. Threatening breach of TOS doesn’t help build that trust. In addition, when in history did obscurity contribute to security?

At the end of the day, customers face mounting pressures to protect their systems and their own customers. Tech firms should work with customers and whitehat researchers whose collective efforts strengthen security, not against them.”

Casey Ellis, CEO and co-founder of Bugcrowd:

“Cybercriminals and nation-state actors (who are the primary users of exploits in Oracle’s software) aren’t going to honor Mary Ann’s request, nor will they heed Oracle’s EULA. When the crowd contains the smartest folks around the table (e.g. David Litchfield @dlitchfield, a notoriously excellent Oracle security researcher), the last thing you want to do is silence them.

What I believe Oracle is actually trying to do is find a way to control the flow of information from the crowd, identifying and handling the signal and being able to ignore the noise (and those who do the #nagbounty thing). I understand what Mary Ann was trying to achieve with this post, but what they really need is a way to manage that flow of information without simply saying “Don’t… Just don’t.” All that being said, I’m really glad that they took the post down. It means that, in spite of the tone of the post, she’s listening to the community (which had a unanimously bad reaction).”

Amit Sethi, Principal Consultant at Cigital:

“Attackers don’t worry about EULAs or play by the rules. They will reverse engineer your software whether you like it or not. The question is whether you want people who are on your side doing the same and reporting security problems to you.

Davidson admitted that 3% of vulnerabilities in their software are found by security researchers and 10% are found by customers. Many of these vulnerabilities were likely found by violating EULAs and might have stopped actual attacks. If they are seeing a high number of false positives in reports from customers, there are better ways of dealing with that than by telling customers to stop reverse engineering their code.

Davidson’s post also seems to confuse intellectual property protection with finding security issues. She states ‘The point of our prohibition against reverse engineering is intellectual property protection, not “how can we cleverly prevent customers from finding security vulnerabilities – bwahahahaha – so we never have to fix them – bwahahahaha”.’ People who want to steal Oracle’s intellectual property are not going to report security issues to them. Asking people who are reporting security issues to stop doing what they are doing does absolutely nothing to protect their intellectual property.”

Steve Lowing, Director of Product Management at Promisec:

“My take is that while there is always a sense of pride in an engineer’s hard work that you never want to dismiss, it does not excuse the fact that security, and in particular understanding new vulnerabilities, is and always should be a “team sport”. Reading this blog post, the first thing I am left with is the classic “not invented here” syndrome, which unfortunately plagues many companies. This attitude is clearly not something you want in a security leader, who should hold as a tenet that *all* security issues are dealt with equally, fairly and in a timely manner. Hiding behind a claim of a reverse engineering violation is disgraceful. Terms and conditions around reverse engineering are meant to protect against theft of IP, not to prevent improving the IP by shining a light on an errant security hole or bug.

That said, the message that security hygiene is important holds some weight. There is evidence suggesting that ensuring the latest patches are in place is a must: simply read the latest Verizon breach report, which concludes that over 90% of last year’s hacks were the result of only 10 vulnerabilities, all of which have had patches available for over a year! Clearly, basic hygiene would have a dramatic impact, so on that point I would agree.

However, there is also evidence that proactively looking for vulnerabilities as a means to reduce and manage risk helps keep the bad guys at bay by allowing an IT organization to prioritize what to fix first. This includes existing as well as new, emerging vulnerabilities. Ethical hacking is not just for compliance anymore; many companies are doing it because it’s another way to get ahead of the hackers. Oracle sells many products that are often used as building blocks for other products. It can’t afford to operate like a walled garden and needs to work with its customers to ensure it continues to offer the best products that are also the most secure in their category. Relying only on internal resources misses the bigger picture and the value that comes with it.”

Jeff Williams, CTO of Contrast Security:

“Should people be allowed to reverse engineer software that they have licensed in order to verify its security?

From a public policy perspective, it just doesn’t seem reasonable to prohibit people from verifying the security of the software they trust their business to. I want to run my tools on it, and have my experts dig into it. However, if people are simply running static or dynamic analysis tools on Oracle software and submitting the results, it creates insane amounts of work for their security team. There are more accurate tools, but the problem is still the same. Does Oracle have an obligation to investigate every report of a possible vulnerability in their software? Or is it reasonable for them to ask for a working exploit?

Is Oracle right that we should trust them and their process?

Here I think it’s easy to criticize Oracle’s position. While I’m sure that they do have security processes in place, it’s not nearly enough to satisfy me that I should trust my business to their software. Their history of security vulnerabilities is not particularly reassuring either. As I’ve written before, I think it’s important for software organizations to disclose their security story, including evidence that they’ve implemented and verified all the different aspects of that story. A high-level marketing piece isn’t even close.”

Steve Durbin, Managing Director, Information Security Forum: 

“As our digital environment becomes more complex, more volatile and less predictable, our core systems will become more reliant on software with inherent flaws or software that was properly written but is being used in ways that were never intended. The notion that as organizations we should blindly hand over responsibility to third parties and trust in their ability to provide flawless code, timely patching and complex testing against systems that may be outside their control is counter to the industry practice of resilience. In the digital world, responsibility for cyber security and ensuring resilient systems must begin and end with the organization using those systems. You cannot outsource this responsibility.


So, whilst we have an expectation that basic security hygiene tests have been completed, and are inherent in our software by design, that does not mean that we should not take proactive steps to guard against zero-day vulnerabilities emerging in our systems. We cannot blindly trust a third party to do this on our behalf – as Heartbleed and Shellshock have demonstrated. Security hygiene is step one, but proactive cyber resilience management is where organizations must be headed. Security is all about the management of risk, after all.”

Chris Wysopal, CTO and CISO of Veracode:

“We now rely on software for everything – health, safety and well-being – and crafting a policy of ‘see something, say nothing’ puts us all at risk.


Application security is an enormous software supply chain issue for both enterprises and software vendors because we all rely on software provided by others. Vendors need to be responsive to their customers’ valid requests for assurance, and to security researchers who are trying to make the software we all consume better. Leaders in the industry – Google, Apple, Microsoft, Adobe – all encourage third-party code audits and bug bounty programs as a valuable extension of their own security processes.

Discouraging customers from reporting vulnerabilities, or telling them they are violating license agreements by reverse engineering code, is an attempt to turn back the progress made to improve software security.”

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.
