012: AI IS SPYING ON YOU: The Data Theft You Didn't See Coming
This episode of I Call Bullshit examines the data collection practices of AI-powered devices and platforms: what they collect, how that data can be misused, why claims of data anonymization are often misleading, and where regulation currently stands. It closes with practical advice for using AI tools like ChatGPT while limiting your exposure.
Key Topics Discussed:
- Understanding AI Data Collection:
- How AI assistants (Alexa, Siri, ChatGPT, Google Assistant) collect user data through voice recordings, text inputs, and usage patterns.
- [1] Amazon. "Alexa and Your Privacy." Amazon.com. https://www.amazon.com/alexaprivacysettings/ (Note: Specific details about data storage and usage are often updated; refer to the latest policy).
- [3] OpenAI. "How Your Data is Used to Improve Models." OpenAI Help Center. https://help.openai.com/en/articles/6345527-how-your-data-is-used-to-improve-models (Note: OpenAI's data usage policies are subject to change; refer to the latest documentation).
- The role of smart devices (smart speakers, thermostats, doorbells, TVs) in continuous data gathering, including environmental data, usage patterns, and even biometric information.
- [2] Consumer Reports. "Smart TVs Are Watching You." ConsumerReports.org. https://www.consumerreports.org/privacy/smart-tvs-are-watching-you/ (Note: Specific functionalities and privacy implications vary by manufacturer and model).
- Data Usage and Misuse:
- How collected data is used to train AI models, personalize user experiences, and target advertising.
- [5] O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016. (Discusses the broader implications of data-driven algorithms and their potential for bias and manipulation).
- The potential for data breaches and misuse, referencing incidents like the DeepSeek data leak.
- [6] Wikipedia. "DeepSeek." https://en.wikipedia.org/wiki/DeepSeek (Background on the company; verify specific details of any reported data leaks or security incidents with reputable cybersecurity news outlets and reports).
- The risk of bias in AI algorithms due to biased training data, leading to discriminatory outcomes.
- [7] Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018. (Examines how search engine algorithms can perpetuate and amplify racial bias).
- [8] Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019. (A comprehensive analysis of the economic and social implications of surveillance capitalism).
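The bias point above can be made concrete with a toy sketch (entirely made-up data, not from the episode): a naive model that simply learns historical approval rates will faithfully reproduce whatever bias is baked into those historical decisions.

```python
# Toy illustration of biased training data producing biased outcomes.
# The historical records below are hypothetical.
from collections import defaultdict

# Hypothetical past decisions: (group, qualified, approved).
# Group B applicants were historically approved less often,
# even when qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

def train(rows):
    """'Learn' each group's historical approval rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, _qualified, approved in rows:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

model = train(history)
# Equally qualified applicants now score differently purely by group:
print(model["A"])  # learned approval rate for group A
print(model["B"])  # learned approval rate for group B
```

The model never sees the group labels as "sensitive"; it simply encodes the historical pattern, which is the mechanism O'Neil and Noble describe at scale.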
- The ChatGPT Black Box:
- Analysis of OpenAI's privacy policy regarding data collection, usage, and sharing.
- [26] OpenAI. "Privacy Policy." https://openai.com/policies/privacy-policy (Refer to the latest version of OpenAI's privacy policy for the most up-to-date information).
- Discussion on whether user data is sold and how it is used for model training and service improvement.
- Advice for consumers on how to use AI tools like ChatGPT while minimizing privacy risks.
- [31] OpenAI. "How Your Data is Used to Improve Models." OpenAI Help Center. https://help.openai.com/en/articles/6345527-how-your-data-is-used-to-improve-models (Refer to the latest guidance from OpenAI on data usage controls).
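One concrete form the consumer advice above can take is scrubbing obvious personal identifiers from a prompt before it ever leaves your machine. The sketch below is a minimal, illustrative redactor (the patterns and example text are assumptions, not exhaustive, and not from the episode).

```python
import re

# A few common PII patterns (illustrative only; real PII detection
# is much harder than a handful of regexes).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "My email is jane.doe@example.com and my SSN is 123-45-6789."
print(redact(prompt))
# -> My email is [EMAIL] and my SSN is [SSN].
```

Pairing local redaction like this with the provider's own data-usage controls (see [31]) reduces how much of your personal information ends up in a third party's logs or training pipeline.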
- The Illusion of Anonymity:
- The concept of the "data mosaic effect" and how seemingly anonymous data points can be aggregated to re-identify individuals.
- [22] Ohm, Paul. "Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization." UCLA Law Review, Vol. 57, 2010, pp. 1701-1777.
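The mosaic effect described above can be sketched in a few lines (all data here is hypothetical): two datasets that each look harmless on their own, one "anonymized" and one public, re-identify a person the moment they are joined on shared quasi-identifiers. This mirrors the linkage studies Ohm discusses.

```python
# Toy re-identification via linkage on quasi-identifiers.
# All records below are invented for illustration.

# Dataset 1: "anonymized" medical records (no names).
medical = [
    {"zip": "02138", "birth": "1945-07-21", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1972-03-02", "sex": "M", "diagnosis": "asthma"},
]

# Dataset 2: a public roll with names but no medical data.
voters = [
    {"name": "Jane Smith", "zip": "02138", "birth": "1945-07-21", "sex": "F"},
    {"name": "John Jones", "zip": "02139", "birth": "1972-03-02", "sex": "M"},
]

# Fields present in both datasets -- the "mosaic" tiles.
QUASI_IDS = ("zip", "birth", "sex")

def reidentify(anon, public):
    """Join the anonymized records to named records on quasi-identifiers."""
    index = {tuple(p[k] for k in QUASI_IDS): p["name"] for p in public}
    matches = []
    for rec in anon:
        key = tuple(rec[k] for k in QUASI_IDS)
        if key in index:
            matches.append({"name": index[key], **rec})
    return matches

for match in reidentify(medical, voters):
    print(match["name"], "->", match["diagnosis"])
```

Neither dataset contains a name-to-diagnosis link by itself; the join recreates it, which is why stripping names alone is not anonymization.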
Producer & Host:
Kristina Braly
contact[at]kristinabraly.com
instagram: @icallbswithkb
website: icallbullshit.co