Terms of Use: User Privacy and the Algorithms Behind Social Media

At What Point Do “Likes” and “Dislikes” Also Become Personal Information Along With the Rest of Our Digital Footprints?

We’ve talked about how life in a contactless world is leading to a new wellspring of digital data. The definition of what’s “personally identifiable information” could and should undergo more examination as our digital vapor trail begins to represent our lives in increasing detail.  

The threats posed by this phenomenon aren’t limited to malicious actors launching more sophisticated phishing scams. Increasingly, it isn’t just people using that data to influence us, it’s robots: unthinking algorithms on e-commerce sites, search engines and social media continually categorize our behavior, to the point where it seems they can read our minds.

From the algorithm’s point of view, this is a natural and helpful way to deliver content that people are looking for. But there is a dawning realization of the potential danger posed by these bits of code, written by humans to steer other humans. 

The concept itself isn’t new. In 2017 the Pew Research Center wrote about the effects of algorithms, their benefits and their risks, introducing the study with this declarative abstract: 

“Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment.”

This is an interesting statement to look back on today, having watched many of those darker possibilities become reality around the world. With severe political and cultural polarization proliferating across the U.S., Europe and elsewhere, are algorithms to blame? Or are we? Most important: What, if anything, can be done?

There’s no question that services like TikTok, Facebook, YouTube, Google, Amazon and many others have a rich data set on each account holder. These days we know much more about what these services possess and how people can be categorized based on the information they willingly offer social media platforms. We know marketers can target specific groups of people directly thanks to access to that data. 

The algorithms on those platforms are working behind the scenes, all the time. They’re not good or bad in and of themselves. They’re just small virtual agents constantly nudging people in directions their behavior suggests they’d want to move in. Algorithms are designed to be predictive, but at the same time their very design creates the effect of leading people down rabbit holes. Another concern is that the way algorithms work is deeply influenced by the humans who create them. 
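
To make that feedback loop concrete, here’s a minimal sketch in Python. The catalog, the topics and the scoring below are all hypothetical and vastly simpler than any real platform’s recommender, but the shape of the loop is the same: optimize for predicted engagement, and what a user sees naturally narrows.

```python
# A toy illustration of an engagement-driven feedback loop.
# Catalog, topics and scoring are all hypothetical; real platform
# recommenders are far more complex and proprietary.
from collections import Counter

CATALOG = {
    "video_a": {"politics", "outrage"},
    "video_b": {"politics", "policy"},
    "video_c": {"cooking", "travel"},
    "video_d": {"politics", "outrage", "conspiracy"},
}

def recommend(history: list[str]) -> str:
    """Pick the unwatched item sharing the most topics with past views."""
    interests = Counter(t for item in history for t in CATALOG[item])
    candidates = [i for i in CATALOG if i not in history]
    return max(candidates, key=lambda i: sum(interests[t] for t in CATALOG[i]))

history = ["video_a"]
for _ in range(2):
    history.append(recommend(history))  # each click shapes the next prediction

print(history)  # ['video_a', 'video_d', 'video_b'] -- more of the same, then more
```

The detail that matters is the last step: each recommendation is appended to the very history the next prediction is drawn from, which is the rabbit-hole dynamic in miniature.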

As a consumer, just saying you like something or not doesn’t seem like you’re giving much away, but it really adds up. Not all likes are equal when it comes to revealing your affinities, and after years of likes and dislikes many of these services now have very specific avenues to target you and predict what you want to consume. 
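
To get a feel for why, consider a rough, made-up weighting. The figures below are invented purely for illustration; the idea, weighting a signal by how rare it is, is similar in spirit to inverse document frequency in text retrieval.

```python
# Hypothetical illustration of why "not all likes are equal":
# a like on something niche narrows you down far more than a like
# on something nearly everyone enjoys.
import math

# Fraction of all users who liked each page (made-up figures).
popularity = {
    "funny_cat_videos": 0.60,      # liked by most users: weak signal
    "local_hiking_club": 0.02,     # niche: strong signal
    "rare_disease_support": 0.001, # very rare: highly identifying
}

for page, share in popularity.items():
    weight = math.log(1 / share)  # rarer likes carry more information
    print(f"{page:22} informativeness ~ {weight:.2f}")
```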

So at what point do likes and dislikes also become personal information along with the rest of our digital footprints? 

Classifying clicks as PII may be the only possible tool we have to begin mitigating the influence that social media algorithms have over individuals. Algorithmic bias may be a tough sell conceptually for some lawmakers. The technology itself is too esoteric for the vast majority of people to understand, and it’s often cloaked in secrecy as corporate IP anyway.

The difficulty of regulating this arena is clear from the abject lack of regulation. We are living in a technological Wild West where there is no oversight over what an algorithm can or should do. There’s no equivalent to the FDA working both to protect the intellectual property of a company’s algorithm and to ensure that the public isn’t being harmed by it.

What we’ve learned so far from intelligence agencies is that government-backed entities are building apps and getting them installed on millions of mobile phones, creating the potential for massive disruption. We know from the news that shadow agencies have sold data on millions of people for use in an election campaign and may now be setting up new social platforms targeting specific sets of users.

With the technology itself essentially out of reach for any meaningful intervention, protective measures will only have teeth if we focus not on how data is collected and used, but on what kinds of data can be collected and used in the first place. We also must increase transparency and put more control into the hands of users themselves.

This is where the EU has gotten much closer to giving individuals control over the influence of algorithms, through its GDPR legislation. Ultimately, individuals won’t likely be protected from manipulation by algorithms because some killer technology was developed to protect them, but rather thanks to strong privacy laws that grant them finely tuned control over the entire range of their PII. People should be able to see easily what information has been collected about them, with the power and the right to be forgotten by those databases. 
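
As a sketch of what those rights look like at the code level, consider this minimal, entirely hypothetical data store exposing the two operations GDPR mandates: access (Article 15) and erasure (Article 17). Real implementations span many systems, but the contract is the same.

```python
# A minimal sketch of the two GDPR rights discussed here: access
# (seeing what has been collected about you) and erasure (the right
# to be forgotten). The store and identifiers are hypothetical.
import json

class UserDataStore:
    def __init__(self) -> None:
        self._records: dict[str, dict] = {}  # user_id -> collected data

    def record(self, user_id: str, key: str, value) -> None:
        """Collect a piece of data about a user."""
        self._records.setdefault(user_id, {})[key] = value

    def export(self, user_id: str) -> str:
        """Right of access: return everything held on this user."""
        return json.dumps(self._records.get(user_id, {}), indent=2)

    def erase(self, user_id: str) -> bool:
        """Right to erasure: drop the user's records entirely."""
        return self._records.pop(user_id, None) is not None

store = UserDataStore()
store.record("u123", "likes", ["local_hiking_club"])
print(store.export("u123"))  # the user can inspect what is held
store.erase("u123")          # ...and have it forgotten
```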

Under such a model, companies could still innovate and create great algorithms that adhere to regulations like GDPR. But users would be able to dictate the ways they want to be marketed to.
