
Terms of Use: User Privacy and the Algorithms Behind Social Media

At What Point do “Likes” and “Dislikes” Also Become Personal Information Along With the Rest of Our Digital Footprints?

We’ve talked about how life in a contactless world is generating a new wellspring of digital data. The definition of “personally identifiable information” could and should undergo fresh examination as our digital vapor trail comes to represent our lives in ever greater detail.

The threats posed by this phenomenon aren’t limited to malicious actors launching more sophisticated phishing scams. Increasingly, it isn’t just people using that data to influence us; it’s robots. Unthinking algorithms on e-commerce sites, search engines and social media continually categorize our behavior so precisely that they can seem to read our minds.

From the algorithm’s point of view, this is a natural and helpful way to deliver content that people are looking for. But there is a dawning realization of the potential danger posed by these bits of code, written by humans to steer other humans. 

The concept itself isn’t new. In 2017 the Pew Research Center wrote about the effects of algorithms, their benefits and their risks, introducing the study with this declarative abstract: 

“Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment.”

This is an interesting statement to look back on today, having watched many of those darker possibilities become reality around the world. With severe political and cultural polarization proliferating across the U.S., Europe and elsewhere, are algorithms to blame? Or are we? Most important: What, if anything, can be done?

There’s no question that services like TikTok, Facebook, YouTube, Google, Amazon and many others have a rich data set on each account holder. These days we know much more about what these services possess and how people can be categorized based on the information they willingly offer social media platforms. We know marketers can target specific groups of people directly thanks to access to that data. 

The algorithms on those platforms are working behind the scenes, all the time. They’re not good or bad in and of themselves. They’re just small virtual agents constantly nudging people in directions their behavior suggests they’d want to move in. Algorithms are designed to be predictive, but at the same time their very design creates the effect of leading people down rabbit holes. Another concern is that the way algorithms work is deeply influenced by the humans who create them. 
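To make that feedback loop concrete, here is a minimal, hypothetical sketch of like-driven recommendation in Python. The toy likes matrix and the item-item scoring below are invented for illustration; real platform ranking systems are vastly more complex, but the reinforcing loop is the same: the algorithm scores highest whatever most resembles what you already liked.

```python
# A minimal sketch of like-driven recommendation (item-item collaborative
# filtering), using an invented toy data set for illustration only.
import numpy as np

# Rows = users, columns = items; 1 = liked, 0 = no signal (hypothetical data).
likes = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
], dtype=float)

def recommend(user_idx: int, likes: np.ndarray) -> np.ndarray:
    """Score unseen items by how often they were liked alongside this user's likes."""
    co = likes.T @ likes                   # item-item co-like counts
    np.fill_diagonal(co, 0.0)              # an item shouldn't recommend itself
    scores = co @ likes[user_idx]          # weight by this user's own likes
    scores[likes[user_idx] > 0] = -np.inf  # hide items already liked
    return scores

# Each round, the top-scored item resembles what the user already liked,
# so acting on the recommendation reinforces the same signal: the rabbit hole.
print(recommend(0, likes))
```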


As a consumer, registering that you like or dislike something doesn’t feel like giving much away, but those signals add up. Not all likes are equal when it comes to revealing your affinities, and after years of likes and dislikes many of these services now have very specific avenues to target you and predict what you want to consume.
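One hypothetical way to see why not all likes are equal is to borrow the inverse-document-frequency weighting used in text retrieval: a like you share with millions of people narrows you down very little, while a like you share with a few hundred says a great deal. The item names and counts below are invented for illustration.

```python
# A minimal sketch: rarer likes reveal more about you (IDF-style weighting).
import math

total_users = 1_000_000
likes_per_item = {                       # hypothetical like counts
    "viral_cat_video": 400_000,
    "niche_political_podcast": 2_000,
    "rare_medical_support_group": 150,
}

for item, n_likes in likes_per_item.items():
    # The fewer users who liked an item, the more a like narrows you down.
    weight = math.log(total_users / n_likes)
    print(f"{item}: like informativeness = {weight:.2f}")
```

The rare, sensitive like scores roughly ten times higher than the viral one, which is exactly why accumulated likes start to look like personal information.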

So at what point do likes and dislikes also become personal information along with the rest of our digital footprints? 

Classifying clicks as PII may be the only possible tool we have to begin mitigating the influence that social media algorithms have over individuals. Algorithmic bias may be a tough sell conceptually for some lawmakers. The technology itself is too esoteric for the vast majority of people to understand, and it’s often cloaked in secrecy as corporate IP anyway.

The difficulty of regulating this arena is clear from the abject lack of regulation. We are living in a technological Wild West with no oversight over what an algorithm can or should do. There’s no equivalent of the FDA working both to protect the intellectual property of a company’s algorithm and to ensure that the public isn’t being violated by it.

What we’ve learned so far from intelligence agencies is that government-backed entities are creating apps and getting them installed on millions of mobile phones, creating the potential for massive disruption. We know from the news that shadow agencies have sold data on millions of people for use in an election campaign and may now be setting up new social platforms targeting specific sets of users.  

With the technology itself essentially out of reach for meaningful intervention, protective measures with any teeth must focus not on how data is collected and used, but on what kinds of data can be collected and used. We also must increase transparency and put more control into the hands of users themselves.

This is where the EU, through its GDPR legislation, has gotten much closer to giving individuals control over the influence of algorithms. Ultimately, individuals won’t be protected from manipulation by algorithms because some killer technology was developed to shield them, but because strong privacy laws grant them finely tuned control over the entire range of their PII. People should be able to see easily what information has been collected about them, with the power and the right to be forgotten by those databases.
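In code terms, those two user rights map to GDPR’s right of access (Article 15) and right to erasure (Article 17). The following is a minimal sketch of what they imply for a service’s data layer; the class and method names are invented for illustration, not any real library’s API.

```python
# A minimal sketch of GDPR-style access and erasure at the data layer;
# UserDataStore and its methods are hypothetical names for illustration.
from typing import Any

class UserDataStore:
    def __init__(self) -> None:
        self._records: dict[str, dict[str, Any]] = {}

    def record(self, user_id: str, key: str, value: Any) -> None:
        # Every stored signal, likes included, is attributed to a user.
        self._records.setdefault(user_id, {})[key] = value

    def export(self, user_id: str) -> dict[str, Any]:
        """Right of access: show the user everything held about them."""
        return dict(self._records.get(user_id, {}))

    def erase(self, user_id: str) -> None:
        """Right to erasure: the 'right to be forgotten' in one call."""
        self._records.pop(user_id, None)

store = UserDataStore()
store.record("alice", "liked:niche_podcast", True)
print(store.export("alice"))   # user sees exactly what was collected
store.erase("alice")
print(store.export("alice"))   # {} -- the profile is gone
```

The point of the sketch is architectural: if every stored signal is attributed to a user from the start, export and erasure become single, auditable operations rather than afterthoughts.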

Under such a model, companies could still innovate and create great algorithms that adhere to regulations like GDPR. But users would be able to dictate the ways they want to be marketed to.
