A group of academic researchers has devised a method for reconstructing text exposed via participants’ eyeglasses and other reflective objects during video conferences.
Zoom and other video conferencing tools, widely adopted over the past couple of years as a result of the Covid-19 pandemic, can give attackers access to information unintentionally reflected in objects such as eyeglasses, the researchers say.
“Using mathematical modeling and human subjects experiments, this research explores the extent to which emerging webcams might leak recognizable textual and graphical information gleaming from eyeglass reflections captured by webcams,” the academics note in their research paper.
According to the researchers, evolving webcam technology may enable optical attacks that rely on multi-frame super-resolution techniques to reconstruct the reflected content.
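The core idea behind such reconstruction is straightforward to sketch: several noisy, low-resolution frames of the same reflection are registered to a common reference and then fused, which suppresses sensor noise and can recover detail that no single frame contains. The Python/OpenCV sketch below is a minimal illustration of that principle under assumed conditions (grayscale crops of the eyeglass region, a translation-only motion model, a fixed upscale factor); it is not the researchers' pipeline.

```python
# Minimal multi-frame super-resolution sketch: upscale, register and fuse
# several crops of the same reflection. Illustrative only.
import cv2
import numpy as np

def fuse_frames(frames, scale=4):
    """Align grayscale crops of the same reflection and fuse them on a finer grid."""
    # Upsample every crop onto the target high-resolution grid first.
    ups = [cv2.resize(f, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC).astype(np.float32)
           for f in frames]
    ref = ups[0]
    h, w = ref.shape
    acc = ref.copy()
    count = 1
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    for img in ups[1:]:
        warp = np.eye(2, 3, dtype=np.float32)  # translation-only motion model (assumption)
        try:
            cv2.findTransformECC(ref, img, warp, cv2.MOTION_TRANSLATION,
                                 criteria, None, 5)
        except cv2.error:
            continue  # drop frames that fail to register
        aligned = cv2.warpAffine(img, warp, (w, h),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        acc += aligned
        count += 1
    fused = acc / count
    return np.clip(fused, 0, 255).astype(np.uint8)

# Usage sketch: crop the eyeglass region from consecutive webcam frames,
# convert to grayscale, then fuse.
# crops = [cv2.cvtColor(f[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY) for f in webcam_frames]
# enhanced = fuse_frames(crops)
```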
Dubbed the ‘webcam peeking attack’, the threat model the academics devised shows that text with heights as small as 10 mm can be reconstructed and recognized with over 75% accuracy from video captured by a 720p webcam.
“We further apply this threat model to web textual contents with varying attacker capabilities to find thresholds at which text becomes recognizable. Our user study with 20 participants suggests present-day 720p webcams are sufficient for adversaries to reconstruct textual content on big-font websites,” the researchers note.
According to the academics, attackers can also use webcam peeking to identify the websites their victims are visiting. Moreover, they believe that 4K webcams will allow attackers to easily reconstruct most header text on popular websites.
To address the risk posed by webcam peeking attacks, the researchers propose both near- and long-term mitigations, including software that blurs the eyeglass area of the video stream. Some video conferencing solutions already offer blurring capabilities, albeit not fine-tuned for this purpose.
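As an illustration of what such a software filter could look like, the sketch below uses OpenCV's stock Haar cascade for eyes with glasses to locate the eyeglass region in each outgoing frame and applies a heavy Gaussian blur to it. The detector choice, the padding and the blur strength are illustrative assumptions, not the researchers' implementation; a production filter would need a more robust glasses detector.

```python
# Illustrative mitigation sketch: blur detected eye/glasses regions in each
# outgoing frame before it reaches the conferencing software.
import cv2

# Stock OpenCV Haar cascade trained for eyes behind glasses.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye_tree_eyeglasses.xml")

def blur_eyeglasses(frame, pad=15, kernel=(51, 51)):
    """Return a copy of the BGR frame with detected eye/glasses regions blurred."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out = frame.copy()
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.1, 5):
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        x1 = min(x + w + pad, frame.shape[1])
        y1 = min(y + h + pad, frame.shape[0])
        # A heavy Gaussian blur destroys fine reflected text while keeping
        # the overall appearance of the glasses.
        out[y0:y1, x0:x1] = cv2.GaussianBlur(out[y0:y1, x0:x1], kernel, 0)
    return out

# Usage sketch with a webcam capture loop:
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     safe_frame = blur_eyeglasses(frame)
```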
However, because the degree of potential information leakage varies from person to person, depending mainly on the quality of the reflections, a single set of protection settings cannot be recommended or enforced for everyone, the researchers say.
The webcam peeking attack model relied on human recognition to evaluate the limits of reflection recognition, but the academics believe a more sophisticated machine learning model could improve attack performance, although machine learning would likely face its own set of issues, mainly due to varying personal environment conditions.
Related: Researchers: Wi-Fi Probe Requests Expose User Data
Related: Academics Devise New Speculative Execution Attack Against Apple M1 Chips
Related: Academics Devise Side-Channel Attack Targeting Multi-GPU Systems
