All the Arguments You Need: To Convince People They Should Care About Digital Privacy
In our All the Arguments You Need series, we take on mindsets standing in the way of progress and rebut them with facts and logic.
In the past few weeks, The Social Dilemma has brought to light deep-seated, structural issues with how we exist and interact with others online, a majority of which result from private tech companies deploying surveillance for their own profit. Tech ethicists have for years tried to make people care about this problem, by stressing all humans’ right to privacy, and how we need to ensure it stays protected from data-hungry algorithms.
Despite this constant push, most people online today either do not, or do not know how to, protect their digital privacy. In ignoring this crucial step, they often present faulty arguments to negate the need for digital privacy.
I’m not doing anything wrong. I have nothing to hide.
Privacy is a basic human right. It’s not necessarily about hiding a wrong. Even if a person isn’t doing anything illicit or illegal, they may still want to keep aspects of their life private, such as their intimate relationships online, political beliefs, and consumer habits. Digital surveillance is not just deployed to keep a moralistic check on internet users; it’s used to manipulate people online by tracking their behavior, which reports show can continue even after the user has shut off certain apps or systems.
When thinking of digital privacy, it’s best to turn the lens outward — instead of analyzing what the individual has done wrong, think about what the corporate or government interests on the other side can exploit for political or consumerist manipulation. People’s behavior online is being compiled and analyzed without their consent, and used in bulk to make a product out of their psychology and sell that data to the highest bidder.
They’ve already got all of my data. What do I have to lose?
This is an essentially defeatist argument, akin to saying ‘tech corporations have already harmed me, so what more could they do?’ The answer is — quite a bit. Surveillance and data mining are not a one-time sweep in which tech companies collect users’ data; they are an ongoing, routine violation of privacy that tracks each decision online to build an ever more complete profile of an individual. This information is then sold to advertisers and governments, enabling them to manipulate your behavior in increments so small, every second, that the manipulation is almost impossible to detect.
It’s in every person’s interest to protect themselves from such privacy violations, because there is still privacy left to lose, and a future self left to safeguard.
What’s the worst they can do — show me ads? Whatever, I’ll watch them.
Digital surveillance is about more than being targeted by advertisers. The information corporations collect is not regulated or safeguarded, even when it is handed over willingly. This leaves entire databases open to hackers, who have so far stolen millions of dollars via unprotected credit card numbers. This data is also used to build profiles of individual users and predict their future behavior using algorithms. These algorithms often reflect society’s classism, racism, and sexism — flagging minority groups and neighborhoods when predicting crime, for example, or downgrading women’s resumés when sifting through candidates for a job. Unchecked surveillance makes strengthening these algorithms possible, which in turn damages society in ways ranging from infinitesimally small behavioral changes over time to putting entire communities in danger.
Depending on the country a person is in, unhampered digital surveillance can also threaten freedom of expression and the free exchange of ideas online — concepts central to democracy. For example, after the first NSA leaks in 2013, a PEN American Center survey found that 28% of writers either completely avoided or curtailed their social media use; 24% avoided controversial conversations over phone and email; and 16% avoided writing about sensitive issues altogether. Since 2013, as more and more has come to light about the pervasiveness of digital surveillance, this chilling effect on speech has grown. Imagine not being able to log on and speak your mind, or to follow people on social media you can rely on to tell the truth. If more people don’t invest their time and energy in digital privacy, self-censorship will keep increasing, ultimately making it impossible for people to speak up in public.
This self-censorship also manifests in health emergencies, such as Covid-19. With surveillance ramped up in the Covid-19 era without any safeguards for privacy, we’re already seeing the illness stigmatized, often leading to incidents of violence. Another consequence is the epidemic of anti-vax sentiment and the anti-vaxxer communities people have been able to build online, after being marketed conspiratorial, anti-science information by algorithms that deemed them susceptible. Such targeted surveillance and subsequent spoonfeeding of false information has fueled infodemics that hamper the fight against Covid-19 every single day, from the individual level all the way to policy-making.
It’s important to remember that online, we’re constantly being sold things, be it consumer products, information, or ideas. Increasingly, what we see is modeled on how we’ve behaved — and been surveilled — in the past, as if we’re voluntarily letting someone else build our own personal echo chamber. Ensuring digital privacy lets people exercise control over not just their lives, but also their digital selves.
But I don’t have a problem with them taking my data if it makes the tech I use more efficient.
The problem is not users voluntarily and consensually giving tech companies data. The problem arises when sophisticated data collection and surveillance occur without the users’ knowledge or consent, which makes digital surveillance a sneaky, unethical, and possibly dangerous practice.
Amazon’s voice assistant Alexa, for example, may play songs on command and fetch information instantly, but she’s also gathering data on your activities, your conversations, even your sex life, without your consent. The private workings of your home can then be shared with law enforcement if requested, without you having any control over your private information. Alexa can also listen in on your medical information, which can then be used to craft ads luring you into buying costly therapies and products. This can have severe mental health consequences, such as when women who had miscarriages were shown baby product ads around their would-have-been due dates.
There is a way for us to keep enjoying the convenience of technology while safeguarding our rights, experiences, agency, and mental health in the process. In the end, it’s important to change our collective perspective on digital privacy — it’s not ultimately about privacy or security. These are tools to ensure a larger, more fundamental goal: freedom.