The A-Z of professional Vulnerability Management: A – is for Authenticated Scanning
Choices, choices…
Imagine you have the choice between opening a box and looking inside, or shaking and prodding it from the outside to guess what it may contain. Imagine further that, if you are unable to guess the contents of the box correctly, something bad may happen: something damning, damaging or dangerous. Which of the two choices would you take?
Life, of course, rarely throws us such easy choices to solve our problems, so we are not used to them. But in the case of Vulnerability Assessment, this choice is entirely yours, and it really is as simple as outlined.
What I am referring to is Authenticated Vulnerability Scanning, also often called credentialed or trusted scanning. Whether this takes the form of a username/password pairing, an active and legitimate session token, a certificate, or even an SNMP community string (if you are not using SNMPv3 yet, shame on you), the principle remains the same. In an authenticated or trusted scan, rather than scanning ports, services and applications externally and attempting to deduce and guess what is running and vulnerable, native authentication and remote administrative functions are used to give the scanner the same system or application access as a legitimate user or administrator.
The inside view…
The difference in approach is vast. Unauthenticated, a vulnerability scanner must connect to and interrogate each open port and service, and it first has to determine the target's identity just to be able to send the right requests. Then, based entirely on the information and methods available to an anonymous user, it has to reliably identify the exact operating system and applications that are running, including their versions and configuration.
Now, most applications are not intended or designed to allow this: best practice recommends obscuring or disabling any information output or feature that would hand an attacker precisely this information, and many developers and vendors enforce this in any case.
Other applications simply do not provide that type of information, or do not expose any remotely detectable services at all. Add to that library and third-party dependencies, as well as the back-porting of patches common in many Linux distributions, which fixes flaws without incrementing the advertised version, and it quickly becomes apparent that trying to determine existing vulnerabilities and missing patches this way is like a doctor trying to diagnose a patient by the color of their tongue. Sure, a few diseases and ailments may be found like this, but the majority of serious issues cannot be deduced.
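To make that concrete, here is a minimal sketch of roughly what an unauthenticated scanner has to work with: whatever the service chooses to announce over the wire. The host and port below are placeholders, and a back-ported security fix would not change this banner at all.

```python
import socket

# Hypothetical target; replace with a host you are authorized to scan.
HOST, PORT = "192.0.2.10", 22

# Grab whatever the service announces on connect (an SSH banner, for example).
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    banner = sock.recv(1024).decode(errors="replace").strip()

# Typically something like "SSH-2.0-OpenSSH_7.4"; the underlying package may
# carry dozens of back-ported fixes that this string gives no hint of.
print(banner)
```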
Contrast this with an authenticated scan. Given a user with sufficient privileges, the scanner will instead remotely log on to the system and issue a series of commands that any administrator will immediately recognize. What applications are installed? Which exact versions? Which DLLs or other types of libraries are in use, or have been compiled or enabled? Instead of needlessly scanning thousands of closed ports and then attempting to guess what may be listening, a "netstat" or similar local command can be executed to provide an exact list including the associated processes. Rather than brute-forcing hundreds of usernames and passwords, accounts can be enumerated directly and precisely, and we can even go so far as to verify policy settings such as password complexity and expiration.
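As a rough illustration only (no particular product works exactly this way), the following sketch shows the kind of local queries a credentialed scan boils down to on a Linux host. The specific commands are assumptions and will differ by platform and scanner.

```python
import subprocess

def run(cmd):
    """Run a local command and return its stdout, or an empty string on failure."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return ""

# Installed packages with their exact versions (back-ported patches included).
packages = run(["dpkg-query", "-W", "-f", "${Package} ${Version}\\n"])

# Listening sockets with owning processes: no port guessing required.
listeners = run(["ss", "-tlnp"])

# Local accounts, enumerated directly rather than brute-forced.
accounts = run(["cut", "-d:", "-f1", "/etc/passwd"])

for label, output in (("packages", packages), ("listeners", listeners), ("accounts", accounts)):
    print(label, len(output.splitlines()))
```

A real scanner wraps queries like these (over SSH, WMI or an agent) in thousands of checks, but the principle is exactly as described above: ask the system directly instead of guessing from the outside.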
In the case of application vulnerability assessment, the benefits are just as great, and in some cases even more pronounced. Assessing a web application without legitimate credentials, for example, will in many cases leave large sections of the application unassessed. Considering that most such sites keep the majority of their active functionality in the backend, administrative, or user sections, the result is a far from complete view of the website.
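As a simple illustration, compare an anonymous request with a logged-in session against the same page. The URL, form fields and account below are hypothetical placeholders, assuming a conventional form-based login.

```python
import os
import requests

BASE = "https://app.example.com"  # hypothetical application

# Anonymous request: typically a redirect to the login page, so the scanner
# never sees the functionality behind it.
anon = requests.get(f"{BASE}/account/settings", allow_redirects=False, timeout=10)
print("anonymous:", anon.status_code)

# Authenticated session: log in first, then the same page (and everything
# linked from it) becomes reachable for crawling and testing.
session = requests.Session()
session.post(
    f"{BASE}/login",
    data={"username": "audit", "password": os.environ.get("AUDIT_PASSWORD", "")},
    timeout=10,
)
auth = session.get(f"{BASE}/account/settings", timeout=10)
print("authenticated:", auth.status_code)
```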
The right tool for the job…
That is not to say that there is no merit in an unauthenticated scan. It has many uses, not least verifying the results of a trusted scan (for example, to find backdoors hidden by a rootkit or similar), as well as showing what an attacker would see without credentials, so that those risk vectors can also be assessed and potentially mitigated. But this is another component of the overall vulnerability management lifecycle, not a full replacement for a trusted vulnerability, patch and configuration scan.
You have to differentiate between the act of testing your defenses and security measures, and the act of vulnerability management. For the latter, you want an administrator's viewpoint, so that you can identify and assess any and all vulnerabilities and security hazards; for that, you need full access and an unrestricted view. For the former, you are checking whether all risks have indeed been mitigated and the precautions have been implemented properly and sufficiently; for that, you want to be hindered by the same limitations an attacker faces before they have escalated privileges and built out their position. One is the hacker's view, the other the security professional's.
Oh, but the overheads…
Nothing in life is free, and authenticated scanning also has an associated cost: it requires a user account or some other form of legitimate credentials on every device or system to be assessed this way.
The silly argument that adding a user for audits provides another attack vector is, of course, hogwash. If that were the case, any administrator, or indeed any user, would be a further attack vector and should be avoided. The credentials for this audit account should be stored securely in whatever solution you use to execute the assessments, and not used for anything or anyone else. This also ensures a clean audit trail, with a unique user that can be tracked and monitored. Because these credentials are only ever used in this automated fashion, they are more than likely safer than those used by humans.
Lastly, if you do not trust your vulnerability assessment solution to store these safely, you would be mad to trust it with your actual vulnerability data, right?
If you are unable to roll out account additions or modifications to accommodate this, you may have a bigger building site that needs your attention first. A well-organized, well-designed and well-operated network is a prerequisite for efficient vulnerability management.
Vulnerability management is not a replacement for that, and the process will never fully meet expectations and requirements if that foundation is missing. If you are unable to deploy software, for example, you are more than likely unable to deploy patches and other fixes at scale; and if you do not have centralized, or at least streamlined, user and group management, the chances that your security policies are enforced as written are slim, or at the very least cannot be verified.
More importantly, it is simply not possible to reliably assess a system without authenticated access (and before anyone becomes a smartass: agent-based approaches have authenticated access, in the form of the agent). Period. This is not a matter of conjecture or opinion. If you doubt it, I can only suggest jumping into a lab, running a trusted and an untrusted scan, and looking at the differences in the results. The benefits absolutely outweigh the overheads.
Coming back to the choice I posed earlier, it is not really a choice at all. Unauthenticated scanning alone is not an option: if you want to do vulnerability assessment properly and to the fullest of its potential, you have no other choice than to scan authenticated.