Content provided by SCCE. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by SCCE or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ppacc.player.fm/legal.
Colton Kopcik and Phoebe Roth on AI and the False Claims Act [Podcast]
By Adam Turteltaub

There's always a "but" when it comes to AI. It has great potential, but there is always the risk of bad things happening. In the case of the False Claims Act and healthcare, that's very much true.

In a recent article for Compliance Today, "AI and the False Claims Act: Navigating compliance in the age of automation," Phoebe Roth and Colton Kopcik of Day Pitney warn that the same "but" applies to medical coding.

AI and coding seem like a match made in heaven. There is enormous potential for ensuring that bills get processed quickly and all the proper charges are made. But, of course, plenty of risks come with it. First and foremost, a lack of human oversight can allow small errors to multiply quickly, especially if the AI model was trained on biased historical data or follows patterns of mis-billing. False claims can then quickly spiral out of control, leading to expensive refunds and settlements. Other areas of risk include telehealth and remote care fraud, especially at a time of increased government scrutiny of medically unnecessary services and improper billing.

So what should you do? When embracing AI, they warn, it is prudent to ensure that the algorithm is always up to date on the latest regulatory changes. Whether the AI was built in-house or by a vendor, be sure there is a plan in place to monitor for changes and make accurate, real-time adjustments.

Having an AI steering committee is also a good idea. Be sure to include IT, coders, clinical staff, compliance, and others.

Finally, turn the staff into your front line of defense. Help them stay alert for potential issues so that you can head off problems before they become big ones.

Listen in to learn other ways to manage the "buts" of AI. This podcast is for educational purposes only and does not constitute legal advice.
The Compliance Perspectives Podcast is sponsored by Athennian, a leading provider of entity management and governance software. Get started at www.athennian.com.