Sovereign AI: Using LLMs Without Sacrificing Privacy - The Sovereign Computing Show (SOV013)
Content provided by ATL BitLab.
AI assistants like ChatGPT and Claude are powerful tools, but they come with significant privacy trade-offs. In this episode, Jordan Bravo and Stephen DeLorme explore practical approaches to using AI without surrendering your data to big tech companies. They compare privacy-focused third-party services that use confidential computing (like Maple) and local storage options (like Venice.AI) before diving into running open-source models entirely on your own hardware with tools like Ollama, GPT4All, and LM Studio. They also reveal how your Smart TV might take screenshots of what you're watching through Automatic Content Recognition (ACR) and share steps to disable this intrusive tracking.

Show Notes: https://atlbitlab.com/podcast/sovereign-ai-using-llms-without-sacrificing-privacy

00:00 Introduction to The Sovereign Computing Show
00:42 ATL BitLab Sponsorship Information
01:45 Welcome and Show Contact Information
02:09 Smart TVs and Automatic Content Recognition (ACR)
03:58 How ACR Surveillance Works in Smart TVs
05:23 The Creepy Reality of TV Screenshot Tracking
08:33 Solutions for Smart TV Privacy Concerns
10:47 Unplugging Your Smart TV from the Internet
11:51 Main Topic: Using AI and LLMs Privately
12:44 Understanding LLMs vs. Other Generative AI
14:51 The Privacy Problem with Major LLM Providers
16:44 Private Third-Party AI Providers
16:44 Maple and Confidential Computing
22:32 Venice.AI with Local Storage
27:28 Kagi AI's Privacy Trade-offs
30:49 The Privacy Spectrum of AI Services
33:38 Self-Hosting LLMs and Local Models
34:22 Ollama for Running Local Models
37:25 Running Models Without Internet Connection
38:43 OpenWebUI for Graphical Interface
41:35 GPT4All for User-Friendly Local AI
43:03 LM Studio with Integrated Interface
44:55 Hardware Limitations for Local LLMs
46:15 Local Image Generation
46:47 Stable Diffusion Web UI
48:09 ComfyUI for Artist-Friendly Workflows
51:50 ATL BitLab AI Meetup Information
53:11 Conclusion and Contact Information
53:40 Show Outro and Support Details