Microsoft’s Windows Recall: Cutting-Edge Search Tech or Creepy Overreach?

SecurityWeek editor-at-large Ryan Naraine examines the broad tension between tech innovation and privacy rights at a time when ChatGPT-like bots and generative-AI apps are starting to dominate the landscape. 

The growing concern over privacy rights and intrusive AI technologies is back on the front burner, thanks to a new Windows Recall feature from Microsoft that uses AI to create a searchable digital memory of everything ever done on a Windows computer.

Windows Recall, turned on by default on Microsoft’s new Copilot+ PCs, features shiny new technology that lets Windows users search across time to find and re-engage with content. “Just describe how you remember it and Recall will retrieve the moment you saw it. Any photo, link, or message can be a fresh point to continue from,” Microsoft boasted.

For many, it gets a bit creepy with the next line in Microsoft’s documentation:

“As you use your PC, Recall takes snapshots of your screen. Snapshots are taken every five seconds while content on the screen is different from the previous snapshot. Your snapshots are then locally stored and locally analyzed on your PC.”
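In rough terms, that is a capture-and-deduplicate loop: grab the screen on a timer, fingerprint the frame, and persist it only when it differs from the last saved frame. The sketch below illustrates the pattern in Python; the Pillow calls, exact-hash comparison, and snapshot paths are illustrative assumptions, not Recall’s actual implementation.

    import hashlib
    import os
    import time

    from PIL import ImageGrab  # pip install pillow

    SNAPSHOT_INTERVAL_SECONDS = 5   # cadence described in Microsoft's documentation
    SNAPSHOT_DIR = "snapshots"      # hypothetical local store

    os.makedirs(SNAPSHOT_DIR, exist_ok=True)
    last_digest = None

    while True:
        frame = ImageGrab.grab()  # capture the current screen contents
        # Exact byte-level hash; a real system would likely use a
        # perceptual comparison rather than strict equality.
        digest = hashlib.sha256(frame.tobytes()).hexdigest()
        if digest != last_digest:  # persist only when the screen has changed
            frame.save(os.path.join(SNAPSHOT_DIR, f"{int(time.time())}.png"))
            last_digest = digest
        time.sleep(SNAPSHOT_INTERVAL_SECONDS)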

The local, AI-powered analysis of those snapshots is the magic that lets Windows users search for content, including both images and text, using natural language. “Trying to remember the name of the Korean restaurant your friend Alice mentioned? Just ask Recall and it retrieves both text and visual matches for your search, automatically sorted by how closely the results match your search. Recall can even take you back to the exact location of the item you saw,” Microsoft explained.
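The retrieval side resembles the now-familiar local semantic search pattern: extract text from each snapshot (for example, via OCR), embed snapshots and queries as vectors with a small on-device model, and rank by similarity. The following is a minimal sketch assuming the open-source sentence-transformers library and made-up snapshot text; Microsoft has not published Recall’s actual pipeline.

    from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

    # A small embedding model that runs entirely on the local machine.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Hypothetical text previously extracted (e.g., via OCR) from stored snapshots.
    snapshot_texts = {
        "2024-05-20_120315.png": "Bulgogi Brothers Korean BBQ - book a table",
        "2024-05-20_120440.png": "Quarterly budget spreadsheet - Q2 totals",
    }

    def search(query: str, top_k: int = 3):
        """Rank snapshots by cosine similarity between the query and snapshot text."""
        names = list(snapshot_texts)
        docs = model.encode([snapshot_texts[n] for n in names], normalize_embeddings=True)
        q = model.encode(query, normalize_embeddings=True)
        scores = docs @ q  # dot product of unit vectors equals cosine similarity
        return sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)[:top_k]

    print(search("that Korean restaurant Alice mentioned"))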

In mainstream media appearances, Microsoft boss Satya Nadella hyped the use of semantic search technology “over all your history” to even “recreate moments from your past” and responded to security concerns by noting that the screenshots are locally stored and locally analyzed without Microsoft’s involvement.

Redmond is publicly documenting several privacy-themed controls for both consumers and business users, noting that tools are in place to exclude specific websites or applications from the constant screenshot capture. The company insists the technology won’t save any content from private browsing sessions and will not store any DRM-protected material.

However, the company acknowledged there is a risk that usernames or passwords will be exposed.

“Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry,” the company explained.

More importantly, Microsoft argues that the crucial mitigation for any privacy or security concerns is that Windows Recall runs locally on the edge device. 

“No internet or cloud connections are required or used to save and analyze snapshots. Your snapshots aren’t sent to Microsoft. Recall AI processing occurs locally, and your snapshots are securely stored on your local device only,” the company declared.

“Snapshots are encrypted by Device Encryption or BitLocker, which are enabled by default on Windows 11. Recall doesn’t share snapshots with other users that are signed into Windows on the same device. Microsoft can’t access or view the snapshots.”

Despite these assurances, this technology feels like creepy overreach. The constant tracking of user activity, and the capture and storage of screenshots of everything, is unsettling.

On social media, security experts warn that errors and misconfigurations could inadvertently cause data to be sent to Microsoft’s servers.  

“This is one of those problems you can predict,” said a prominent CISO at a publicly traded company. “It feels very invasive and I’m not sure the productivity trade-offs are worth it. We’ll see how the market reacts.”

Security experts also warn that advanced attackers with access to infected machines will find value in the Windows Recall feature, with one incident response pro calling it “the perfect gold mine.”

On social media, the feature was roundly criticized, with many wary of the intrusive recording, the risk of user data being harvested by info-stealer malware, and the possibility that Microsoft may benefit from mining the stored information.

Others noted that domestic abusers could exploit the feature to spy on spouses or lift usernames and passwords from the screenshots in the image timeline.

It feels very much like the AI-powered future of computing, and the killer apps coming down the pike, will invariably disrupt the way we think about data privacy and security.

A tool like Windows Recall will be very valuable to consumers and businesses looking to find documents, web pages or images in a shiny GUI with a prominent search box and a nifty, scrollable timeline bar.

But it does come with legitimate privacy and security concerns at a time when Microsoft itself is grappling with multiple security crises. The world’s largest software maker is still responding to multiple embarrassing hacks that led to a scathing US government CSRB report, a major corporate restructuring, and a new “security-before-features” pledge from the company’s leadership.

The push-and-pull over Windows Recall highlights the broader tension between tech innovation and privacy rights at a time when ChatGPT-like bots and generative-AI apps are starting to dominate the landscape. 

Despite Microsoft’s assurances about local storage and encryption, there’s a fundamental mistrust of how AI-driven tools are being used and trained to generate profits for already-rich companies. The recent controversy over Slack scraping customer data, by default and without user opt-in, to develop new AI and ML models only adds to the suspicion.

The future of AI tools like Windows Recall will depend on Big Tech’s ability to address these concerns transparently and effectively, setting a precedent for the responsible deployment of AI technology.

A lot rests on Microsoft’s shoulders at a crucial time. Privacy rights advocates are paying very close attention.

Related: Google Cites ‘Monoculture’ Risks in Response to Microsoft CSRB Report

Related: Microsoft Overhauls Cybersecurity Strategy After Scathing CSRB Report

Related: Microsoft’s Security Chickens Have Come Home to Roost

Related: Microsoft Hires New CISO in Major Security Shakeup

Written By

Ryan Naraine is Editor-at-Large at SecurityWeek and host of the popular Security Conversations podcast series. He is a security community engagement expert who has built programs at major global brands, including Intel Corp., Bishop Fox and GReAT. Ryan is a founding director of the Security Tinkerers non-profit, an advisor to early-stage entrepreneurs, and a regular speaker at security conferences around the world.
