Apple’s Siri Was ‘Accidentally’ Recording Conversations Without People’s Consent
Previous research has also found security vulnerabilities in other AI-based virtual assistants that leave users’ personal data prone to hacking.
Hey Siri, you’re not as innocuous as people take you to be. That seems to be the takeaway after an Apple software bug led to the virtual assistant feature recording people’s interactions without their consent.
Last week, Apple acknowledged this in its latest iOS 15 update, noting the AI-based virtual assistant had recorded people’s conversations even if they had opted out. “The bug automatically enabled the Improve Siri & Dictation setting that gives Apple permission to record, store, and review your conversations with Siri,” a report by ZDNet noted. Later, while issuing an apology, the company said it had corrected the bug for “many” users, though not all of them.
The statement leaves much unanswered: it doesn’t clarify how many phones were affected, or when. “Without transparency, there’s no way to tell who may have had their conversations recorded and listened to by Apple employees despite asking to avoid exactly that outcome,” as The Verge noted.
Arguably, there are limits to what people know and agree to while interacting with these virtual assistants. The understanding is that assistants like Alexa (Amazon’s AI-based voice technology), Siri, and Google Assistant may create recordings of user requests, but that no one is actively listening to those recordings. Still, the notion that people’s conversations with others can also be recorded without their consent is chilling.
Technology and AI experts have previously argued in favor of these big tech companies listening to our requests, so as to fine-tune the glitches in voice-based technology. This is what Amazon’s FAQ on Alexa says: “The more data we use to train these systems, the better Alexa works, and training Alexa with voice recordings from a diverse range of customers helps ensure Alexa works well for everyone.” In other words, the only way to improve voice-based technology, according to some experts, is to have humans listen in on private interactions.
This isn’t “illegal,” so to speak, since these companies bury the caveats in their products’ terms of service. As of 2020, 60% of Indian users were estimated to be using voice assistants on their smartphones for a myriad of tasks: listening to music, setting alarms, asking questions.
Among other concerns, the lack of user knowledge and consent around these voice assistants is particularly deleterious, mostly because many people don’t realize their data is being monitored in this way and to this extent, and are further unaware of the implications, according to a Bloomberg investigation. Florian Schaub, an assistant professor at the University of Michigan who has studied people’s privacy perceptions, argued that people tend to personify their devices, which makes them even less suspecting. So when they ask Alexa or Siri an innocuous question, they’re not really thinking about the scope of their action. Realizing that a person is listening in on these conversations, however, can feel intrusive and violating; once made aware, people were more likely to simply disconnect from these systems.
This raises a host of concerns about users’ privacy, the extent of data retention, and how that data is harnessed by different stakeholders.
“VAs work based on users’ voices – it is their main feature. All the above-mentioned VAs activate upon hearing a particular activation keyword. Although some of the policies claim that the cloud servers do not store data/voice unless the activation word is detected, there is a constant exchange of voice & related data between their cloud servers and the VA device. This is especially concerning in cases of false activation when data may be getting stored without actual knowledge,” the Internet Freedom Foundation (IFF) noted. In one bizarre incident, Alexa mistakenly sent a private conversation between someone and their partner to a coworker. In another, Alexa told a 10-year-old to touch a coin to the prongs of a half-inserted plug; the voice assistant had gleaned the answer from an internet trend (called the “penny challenge”) and suggested it as an activity. Voice assistants personalized by recordings of our conversations can thus become not just unsettling but actively unsafe.
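To make the IFF’s point concrete, here is a minimal, hypothetical sketch in Python of how a wake-word loop like the one it describes might behave. Every name in it (detect_wake_word, send_to_cloud, the buffer size and confidence threshold) is an illustrative assumption, not any vendor’s actual implementation.

```python
# A minimal, hypothetical sketch of a wake-word loop; not any vendor's code.
import collections
import random

BUFFER_CHUNKS = 32          # rolling audio kept around the trigger
CONFIDENCE_THRESHOLD = 0.8  # cutoff for the on-device detector

def detect_wake_word(chunk) -> float:
    """Stand-in for the on-device model that scores how likely a chunk
    of audio contains the activation keyword. Real detectors are
    proprietary; random scores here imitate their fallibility."""
    return random.random() * 0.85

def send_to_cloud(audio) -> None:
    """Stand-in for the upload that follows an activation."""
    print(f"uploading {len(audio)} chunks of audio to the server")

def listen(microphone_chunks) -> None:
    # A rolling buffer ensures the request following the wake word is
    # captured from its very start -- so the device is always listening.
    ring = collections.deque(maxlen=BUFFER_CHUNKS)
    for chunk in microphone_chunks:
        ring.append(chunk)
        # A *false activation* is this branch firing on speech that merely
        # sounds like the keyword: the buffered audio then leaves the
        # device without the speaker realizing it.
        if detect_wake_word(chunk) >= CONFIDENCE_THRESHOLD:
            send_to_cloud(list(ring))
            ring.clear()

listen(b"\x00" * 320 for _ in range(500))  # simulated microphone stream
```

The point of the sketch is that whether audio leaves the device hinges entirely on a statistical detector the user can neither see nor audit, which is why false activations can store conversations “without actual knowledge.”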
The implications for surveillance and privacy are also disquieting. This data can be used to track users’ interests, or be exploited by governments and law enforcement agencies in ways that amount to human rights violations. It can also leave users vulnerable to malware attacks: in 2020, cybersecurity researchers discovered a bug in Alexa that allowed them to access all of a user’s voice interactions. The flaw could have given hackers access to personal information, including bank details, home addresses, and conversation history.
We’re living in an age where something or someone is constantly listening to us. As computer scientist Mark Weiser said, “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”
Saumya Kalia is an Associate Editor at The Swaddle. Her journalism and writing explore issues of social justice, digital sub-cultures, media ecosystem, literature, and memory as they cut across socio-cultural periods. You can reach her at @Saumya_Kalia.