In recent years, the issue of user privacy has become more critical than ever before. With the rise of social media and other online platforms, companies are collecting vast amounts of user data, which can be used for various purposes. While some of these purposes may be benign, such as improving the user experience or providing targeted advertising, others may be more nefarious, such as selling user data to third parties or engaging in targeted surveillance.
There are now many apps that are activated by code words, usually called "wake words" (or "marker words"). These words can covertly activate the listening function on your device without any visible sign. The trigger can be not only "OK, Google" or "Hey, Siri" but also other, completely unrelated words or sounds.
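The mechanism behind accidental activation can be illustrated with a toy sketch. Real assistants run on-device acoustic models, not text comparison, so the function below is purely a hypothetical example (the name `is_wake_word` and the threshold are assumptions, not any vendor's actual algorithm); it only shows why a detector tolerant enough to catch sloppy pronunciation will also fire on phrases that merely resemble the wake word.

```python
from difflib import SequenceMatcher

# Hypothetical illustration only: production wake-word detectors use
# neural acoustic models on audio, not string similarity on text.
# This toy matcher just demonstrates the false-trigger trade-off.
def is_wake_word(transcript: str, wake: str = "hey siri",
                 threshold: float = 0.6) -> bool:
    """Return True if the transcript is 'close enough' to the wake word."""
    score = SequenceMatcher(None, wake, transcript.lower().strip()).ratio()
    return score >= threshold

print(is_wake_word("hey siri"))       # the exact phrase activates
print(is_wake_word("hey seriously"))  # a similar-sounding phrase also activates
print(is_wake_word("good morning"))   # an unrelated phrase does not
```

The trade-off is the point: set the threshold high and the assistant misses real commands; set it low enough to tolerate accents and noise, and near-miss phrases start a recording.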
You may have noticed Instagram advertising something you recently discussed with friends, sometimes almost in real time and without your phone in your hand. If so, you have good reason to suspect you're being bugged.
So, who's eavesdropping on us?
Facebook reportedly hired hundreds of third-party contractors to transcribe users' voice messages, but stopped the practice in July 2019 after it was made public. The contractors were not always told why they were listening to certain conversations and did not know how the messages had been obtained. Facebook never informed its users that their personal voice messages might be heard by unauthorized individuals.
It has been reported that contractors who test Apple's Siri voice assistant for accuracy may be listening in on users' private conversations. Siri can be triggered by more than just the phrase "Hey, Siri": similar-sounding words, background noise, or even hand movements can set it off. As a result, Siri has been inadvertently activated during private conversations, leading to recordings of personal information, including conversations between doctors and patients and discussions of business deals. These recordings are often accompanied by data that can reveal the user's location or personal contacts. Apple representatives say they are working to address these concerns and protect users' personal information.
Over one thousand Amazon contractors listen to voice recordings made in the homes and offices of Echo owners. These contractors sign non-disclosure agreements and are not allowed to discuss the program publicly. They work nine-hour shifts and analyze up to 1,000 recordings per shift; even if they have concerns about what they hear, they are bound by the non-disclosure policy. Amazon claims to take the security and privacy of its customers' personal information seriously and says employees have no access to information that could directly identify a person or account. Notably, users can disable the use of their voice recordings for the development of new features in Alexa's privacy settings.
Google employs experts to listen to the voice commands users give its voice assistant. These recordings are made after the assistant hears the phrase "Ok, Google", whether on a smartphone running Google Assistant or on a Google Home smart speaker. Google shares snippets of these recordings with linguists around the world to improve the assistant, but claims they have access to no more than 0.2% of all user commands. The company prohibits employees from transcribing background conversations or other extraneous sounds. Nevertheless, in June 2019 it was reported that a significant leak of user audio had occurred: over a thousand recordings, including personal conversations between parents and children, home addresses, and work calls, were exposed. Some were recorded accidentally when the assistant activated by mistake. Google attributed the leak to the actions of a single linguist and said it was investigating the matter.
Despite the concerns that these data collection practices raise, companies often argue that they are necessary to improve user experience and provide more personalized services.
However, many users remain skeptical of these claims and are increasingly concerned about the potential for abuse. For example, data breaches can expose user data to hackers and other malicious actors, potentially putting users at risk of identity theft and other forms of cybercrime. Additionally, governments and other organizations may use user data to engage in targeted surveillance, raising concerns about civil liberties and individual privacy.
In response to these concerns, governments and regulatory bodies have taken steps to regulate the collection and use of user data. In the European Union, the General Data Protection Regulation (GDPR) has strengthened data privacy laws and given users greater control over their data. In the United States, the California Consumer Privacy Act (CCPA) similarly seeks to protect user privacy by requiring companies to disclose what data they collect and allowing users to opt out of data sharing.
Despite these efforts, however, user privacy remains a contentious issue. As technology advances, companies will undoubtedly find new ways to collect and use user data, raising fresh concerns about privacy and security. It is therefore crucial that users remain vigilant and informed about the data collection practices of the companies they interact with, and that governments and regulatory bodies continue to monitor and regulate these practices to protect user privacy.