#103 Improving Sampling Algorithms & Prior Elicitation, with Arto Klami

1:14:39
 

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!


Changing perspective is often a great way to solve burning research problems. Riemannian spaces offer just such a change of perspective, as Arto Klami, an Associate Professor of computer science at the University of Helsinki and a member of the Finnish Center for Artificial Intelligence, will tell us in this episode.

He explains the concept of Riemannian spaces, their application in inference algorithms, how they can help with sampling Bayesian models, and their similarity to normalizing flows, which we discussed in episode 98.

Arto also introduces PreliZ, a tool for prior elicitation, and highlights its benefits in simplifying the process of setting priors, thus improving the accuracy of our models.

When Arto is not solving mathematical equations, you’ll find him cycling or around a good board game.

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser and Julio.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Takeaways:

- Riemannian spaces offer a way to improve computational efficiency and accuracy in Bayesian inference by considering the curvature of the posterior distribution.

- Riemannian spaces can be used in Laplace approximation and Markov chain Monte Carlo algorithms to better model the posterior distribution and explore challenging areas of the parameter space (a minimal Laplace-approximation sketch follows this list).

- Normalizing flows are a complementary approach to Riemannian spaces, using non-linear transformations to warp the parameter space and improve sampling efficiency (see the change-of-variables sketch after this list).

- Evaluating the performance of Bayesian inference algorithms in challenging cases is a current research challenge, and more work is needed to establish benchmarks and compare different methods.

- PreliZ is a package for prior elicitation in Bayesian modeling that facilitates communication with users through visualizations of predictive and parameter distributions (a short usage snippet follows this list).

- Careful prior specification is important, and tools like PreliZ make the process easier and more reproducible.

- Teaching Bayesian machine learning is challenging due to the combination of statistical and programming concepts, but it is possible to teach the basic reasoning behind Bayesian methods to a diverse group of students.

- The integration of Bayesian approaches in data science workflows is becoming more accepted, especially in industries that already use deep learning techniques.

- The future of Bayesian methods in AI research may involve the development of AI assistants for Bayesian modeling and probabilistic reasoning.
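
To ground the Laplace-approximation takeaway, here is a minimal sketch in Python. It is not code from the episode, and the toy one-parameter model is an assumption made purely for illustration: scipy finds the posterior mode, and the inverse Hessian of the negative log posterior at that mode (the local curvature that Riemannian methods generalize into a metric) serves as the variance of a Gaussian approximation.

```python
# Minimal Laplace-approximation sketch (illustrative toy model, not from the episode).
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta):
    # Toy unnormalized posterior: unit-variance Gaussian likelihood and prior.
    data = np.array([1.2, 0.8, 1.5])
    log_lik = -0.5 * np.sum((data - theta[0]) ** 2)
    log_prior = -0.5 * theta[0] ** 2
    return -(log_lik + log_prior)

res = minimize(neg_log_posterior, x0=np.zeros(1))
map_estimate = res.x

# Curvature at the mode via a central finite difference (one-dimensional case).
eps = 1e-5
hess = (neg_log_posterior(map_estimate + eps)
        - 2 * neg_log_posterior(map_estimate)
        + neg_log_posterior(map_estimate - eps)) / eps**2
var = 1.0 / hess  # inverse Hessian acts as the Gaussian variance

print(f"Laplace approximation: N(mean={map_estimate[0]:.3f}, var={var:.3f})")
```

For this conjugate toy model the exact posterior is known (mean 0.875, variance 0.25), which makes it easy to check that the approximation lands where it should.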
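
The normalizing-flows takeaway rests on the change-of-variables formula: an invertible map warps samples from a simple base density, and the log-determinant of its Jacobian corrects the density. Here is a tiny numpy illustration; the particular warp f is an arbitrary assumption, chosen only because it is smooth and strictly increasing.

```python
# Change-of-variables sketch behind normalizing flows (illustrative).
# With x = f(z) and z ~ N(0, 1):  log p(x) = log p(z) - log|f'(z)|.
import numpy as np

def f(z):        # a simple strictly increasing, hence invertible, warp
    return z + 0.5 * np.tanh(z)

def f_prime(z):  # its derivative (the 1D Jacobian), always > 1 here
    return 1.0 + 0.5 / np.cosh(z) ** 2

rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)         # base samples
x = f(z)                                # warped samples

log_p_z = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
log_p_x = log_p_z - np.log(f_prime(z))  # density of x at the warped points
```

A flow used for inference stacks many such learned warps so that the transformed space is easier for a sampler to explore, which is the sense in which flows and Riemannian metrics are complementary.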
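
Finally, to make the PreliZ takeaway concrete, here is a short sketch of the kind of elicitation it supports. It is based on the PreliZ documentation at the time of writing, so treat the exact function names and signatures as assumptions to verify against the current release.

```python
# Prior elicitation sketch with PreliZ (API assumed from released versions;
# check the current PreliZ docs before relying on it).
import preliz as pz

# Elicit the maximum-entropy Gamma that puts 90% of its mass between 1 and 10.
# maxent fills in the distribution's parameters and plots the result.
dist = pz.Gamma()
pz.maxent(dist, lower=1, upper=10, mass=0.9)

print(dist)      # the fitted Gamma parameters
dist.plot_pdf()  # visualize the elicited prior
```

Statements like "90% of the mass between 1 and 10" are much easier for a domain expert to provide than raw distribution parameters, which is exactly the communication gain described above.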

Chapters:

00:00 Introduction and Background

02:05 Arto's Work and Background

06:05 Introduction to Bayesian Inference

12:46 Riemannian Spaces in Bayesian Inference

27:24 Availability of Riemannian-based Algorithms

30:20 Practical Applications and Evaluation

37:33 Introduction to PreliZ

38:03 Prior Elicitation

39:01 Predictive Elicitation Techniques

39:30 PreliZ: Interface with Users

40:27 PreliZ: General Purpose Tool

41:55 Getting Started with PreliZ

42:45 Challenges of Setting Priors

45:10 Reproducibility and Transparency in Priors

46:07 Integration of Bayesian Approaches in Data Science Workflows

55:11 Teaching Bayesian Machine Learning

01:06:13 The Future of Bayesian Methods with AI Research

01:10:16 Solving the Prior Elicitation Problem

Links from the show:


Transcript

This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

 