Testing LLMs for trust and safety
We all get a few chuckles when autocorrect gets something wrong, but there's a lot of time-saving and face-saving value in autocorrect. And do we trust it? Yes, we do, even with its errors. Maybe you use ChatGPT to improve your productivity: ask it a cool question and maybe get a decent answer. That's fine; after all, it's just between you and ChatGPT. But what if you're a software company leveraging these technologies? You could be putting generative AI output in front of your users.
On this episode of the Georgian Impact Podcast, it is time to talk about GenAI and trust. Angeline Yasodhara, an Applied Research Scientist at Georgian, is here to discuss the new world of GenAI.
You'll Hear About:
- The differences between closed and open-source large language models (LLMs), and the advantages and disadvantages of each.
- Limitations and biases inherent in LLMs due to their training on Internet data.
- Treating LLMs as untrusted users and restricting the data they can access to minimize potential risks (see the sketch after this list).
- The continuous learning process of LLMs through reinforcement learning from human feedback (RLHF).
- Ethical issues and biases associated with LLMs, and the challenges of fostering creativity while avoiding misinformation.
- Collaboration between AI and security teams to identify and mitigate potential risks associated with LLM applications.
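The "untrusted user" framing lends itself to a concrete pattern. Below is a minimal Python sketch of that idea; the names (ALLOWED_TABLES, fetch_context, generate, answer) and the tag-stripping step are hypothetical illustrations, not Georgian's actual tooling. The idea: gate every data access the model can trigger behind an allow-list, and sanitize model output before it reaches your users, exactly as you would with any untrusted input.

```python
# Hypothetical sketch of "treat the LLM as an untrusted user".
# All names and the fake generate() call are illustrative assumptions.
import re

# Only expose data the current end user is already allowed to see.
ALLOWED_TABLES = {"public_docs", "product_faq"}

def fetch_context(table: str) -> str:
    """Gate every data access the model can trigger behind an allow-list."""
    if table not in ALLOWED_TABLES:
        raise PermissionError(f"LLM may not read from {table!r}")
    return f"...rows from {table}..."  # stand-in for a real query

def generate(prompt: str) -> str:
    """Stand-in for a real LLM call (closed API or open-source model)."""
    return "Here is an answer. <script>alert('xss')</script>"

def answer(user_question: str) -> str:
    context = fetch_context("product_faq")
    raw = generate(f"Context: {context}\nQuestion: {user_question}")
    # Strip HTML tags from the output before rendering it to users,
    # the same way you would sanitize any untrusted input.
    return re.sub(r"<[^>]+>", "", raw)

if __name__ == "__main__":
    print(answer("How do I reset my password?"))
```

The point is symmetry: the model's reads are permissioned like a user's, and its output is sanitized like a user's input.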
Who is Angeline Yasodhara?
Angeline Yasodhara is an Applied Research Scientist at Georgian, where she collaborates with companies to help accelerate their AI products. With expertise in the ethical and security implications of LLMs, she provides valuable insights into the advantages and challenges of closed vs. open-source LLMs.