EP 117 Pauline Norstrom CEO Anekanta "Fact-Checking is Dead: Can We Still Trust Social Media?"
Podcast Summary: Security Circle Podcast with Pauline Norstrom
Overview
In this episode of the Security Circle Podcast, host Yolanda welcomes back Pauline Norstrom, CEO of Anekanta, for a second appearance. The discussion centers on artificial intelligence (AI), misinformation, cybersecurity, and the broader ethical implications of technology.
Key Topics Discussed
- The Impact of Meta Removing Fact-Checking on Facebook
  - Meta's decision to remove fact-checking raises concerns about misinformation spreading unchecked.
  - Facebook previously introduced fact-checking after the Cambridge Analytica scandal, which involved unauthorized access to 87 million users' data for political influence.
  - The potential return of large-scale misinformation campaigns is a key concern.
- The Role of AI in Social Media and Misinformation
  - AI is at the core of social media operations, influencing what users see.
  - The removal of moderation could allow algorithms to amplify harmful content.
  - AI's ability to manipulate user sentiment and engagement raises ethical issues.
- The Future of Social Media and Trust in Tech
  - Increasing numbers of people are leaving platforms like Facebook and X (Twitter) due to toxicity and lack of control.
  - Without trust and proper regulation, platforms may lose advertising revenue, making unchecked AI a commercial risk as well as a social one.
- The Risks of AI-Generated News and Fake Information
  - Example: Apple's AI-generated news summaries produced false headlines, further eroding public trust in media.
  - AI models can "hallucinate," generating false information from incorrect or biased training data.
  - Responsibility should lie with companies to ensure accurate AI outputs.
- AI Regulation and the Business Risk Factor
  - The lack of AI regulation in the UK is causing uncertainty for businesses.
  - The UK has voluntary AI guidelines but no strict legal framework.
  - Businesses are hesitant to adopt AI fully due to liability risks.
- AI in Business vs. Public Use
  - AI adoption in regulated industries (healthcare, finance, law) requires human oversight.
  - The Air Canada chatbot case highlights liability issues when AI provides misleading advice.
  - AI should be a tool for enhancement, not a replacement for human decision-making.
- The Online Safety Bill and Protecting Children
  - The UK's Online Safety Bill is 10 years overdue and lacks enforceability.
  - AI-driven social media exposes children to harmful content.
  - Ethical concerns arise around uncontrolled AI algorithms influencing young users.
- The Future of AI and Its Ethical Challenges
  - AI can be beneficial if used correctly, but over-reliance on it is dangerous.
  - Businesses and governments must establish clear accountability for AI decisions.
  - Pauline argues that AI is just maths, and humans must critically assess its outputs.
Final Thoughts
Pauline Norstrom emphasizes that AI should be seen as a tool to enhance intelligence rather than replace human expertise. The conversation underscores the need for critical thinking, regulation, and ethical AI deployment to prevent harm while maximizing AI's benefits.
Security Circle ⭕️ is an IFPOD production for IFPO the International Foundation of Protection Officers