
Sophos Blasts Cylance’s Competitive Testing Methods

Sophos Says Cylance Rigged Demo Test


Dan Schiappa, a VP at Sophos, has published what amounts to a stinging rebuke of Cylance’s product comparison methods. The incident in question was a comparative test that pitted Cylance against several rivals, including Sophos. Cylance came out on top, but a Sophos customer in the audience reportedly asked to see how the Sophos product was configured and found that its default settings had been disabled. When those settings were re-enabled and the tests re-run, Sophos was said to have beaten Cylance.

Sophos does not use the word ‘cheating’. Nevertheless, suggestions of unfair environment manipulation are not new. What is unusual here is that Sophos is effectively and publicly inviting Cylance to join an open debate — or even to take legal action against Sophos.

“When the playing field is leveled, and Cylance’s product comes under real scrutiny,” wrote Schiappa, “the company cries foul, puts the fear of lawsuits into the minds of its partners, and accuses others of ‘smoke and mirrors’ tactics.”

Such behavior, which we could euphemistically describe as ‘gaming the opportunities’, is not new. As long ago as 1993, Sarah Tanner (probably better known as Alan Solomon) wrote ‘A Reader’s Guide to Reviews’. Its purpose was to explain how tests can be manipulated, and why readers should not automatically believe everything they read.

“The main weapons at your disposal,” says the article, “are the choice of what features to review and what to ignore and the weights given to the features you do cover. By a careful use of this, even GrottyScan can be the Editor’s Choice.” It then lists 26 different methods that can be used to tilt a comparison towards the tester’s own product. For example, if your own product is slow, “Do your timing test on a disk full of viruses. That way, WonderScan will be slowed down by the screen display and other things it has to do when it finds a virus, whereas GrottyScan won’t be slowed down, as it won’t have found many viruses.”
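
The timing trick is easy to model. Below is a minimal sketch; the scanner names and all numbers are hypothetical, invented purely for illustration and not taken from the article. On a corpus full of viruses, the product that detects more pays a per-detection overhead for alert display and logging, and so ‘loses’ a naive speed test.

```python
# Minimal sketch of the timing bias described above. All names and
# numbers are hypothetical and illustrative only.

SCAN_COST = 0.0001       # assumed per-file scan time, in seconds
DETECT_OVERHEAD = 0.002  # assumed per-detection cost: alert display, logging

def simulated_scan_time(detection_rate: float, n_files: int = 10_000) -> float:
    """Model elapsed time: every file costs SCAN_COST, and each file the
    scanner actually detects adds DETECT_OVERHEAD on top."""
    detections = int(detection_rate * n_files)
    return n_files * SCAN_COST + detections * DETECT_OVERHEAD

# On a corpus that is mostly infected, the better detector 'loses' the race:
# 'WonderScan' catches 95% of the files, 'GrottyScan' only 20%.
print(f"WonderScan: {simulated_scan_time(0.95):.1f}s")  # ~20.0s
print(f"GrottyScan: {simulated_scan_time(0.20):.1f}s")  # ~5.0s
```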

Much has changed in the intervening 23 years. Product testing and comparisons are far more sophisticated, but so is the ‘gaming’. In April 2015, Qihoo was accused of submitting products specially configured to perform well in tests, while consumers received differently configured versions. “On requesting an explanation from Qihoo 360 for their actions,” reported AV-Comparatives, AV-TEST and Virus Bulletin, “the firm confirmed that some settings had been adjusted for testing, including enabling detection of types of files such as keygens and cracked software, and directing cloud lookups to servers located closer to the test labs. After several requests for specific information on the use of third-party engines, it was eventually confirmed that the engine configuration submitted for testing differed from that available by default to users.”

The basic problem is that comparative product testing is very difficult and very expensive. It is easy, by intent or by accident, to introduce bias in favor of one product over another. These difficulties persuaded the AV industry to form the Anti-Malware Testing Standards Organization (AMTSO) to develop and promulgate transparent and accurate testing procedures.

AMTSO has its critics. It can easily be seen as an organization of the AV industry, for the AV industry. But the work it has done is solid. Most of the major testing laboratories follow AMTSO standards, which has the added advantage that no two AMTSO members are likely to accuse each other of gaming tests conducted to those standards. If gaming does happen, as in the Qihoo incident, it will be discovered and made public.


As malware expert David Harley commented in an email to SecurityWeek, “If the tester is actually the vendor, there’s obvious scope for abuse. Vendors are and should be engaged in internal comparative testing, but I’d encourage the public to be skeptical where a vendor makes those results public. Not to assume malfeasance, but to examine claims and methodology to the best of their ability.”

The moral, from both the Sophos experience with Cylance and the long history of gamed test results, is that the public should be wary of vendors’ own tests and place more faith in independent tests conducted under AMTSO oversight.

Related: VirusTotal Policy Change Rocks Anti-Malware Industry

Related: Palo Alto Networks, NSS Labs Spar Over NGFW Test Results


