The Urgent Problem of Regulating AI in Medicine
<p>However, as I have reported, the app was also engaging in “race-norming” and amplifying <a href="https://www.dailydot.com/debug/docs-gpt-doximity-ai-healthcare/" rel="noopener ugc nofollow" target="_blank">race-based medical inaccuracies</a> that could be dangerous to patients who are Black. Although doctors could use it to answer a variety of questions and perform tasks that would affect medical care, the chatbot itself is not classified as a medical device, because doctors aren’t technically supposed to input medically sensitive information (though several doctors and researchers have stated that many still do). As a result, companies are free to develop and release these applications without going through a regulatory process that verifies the apps actually work as intended.</p>
<p>Still, many companies are developing chatbots and generative artificial intelligence models for integration into health care settings, from <a href="https://www.medscape.com/viewarticle/995229?form=fpf" rel="noopener ugc nofollow" target="_blank">medical scribes</a> to <a href="https://www.scientificamerican.com/article/ai-chatbots-can-diagnose-medical-conditions-at-home-how-good-are-they/" rel="noopener ugc nofollow" target="_blank">diagnostic chatbots</a>, raising wide-ranging concerns over AI regulation and liability. Stanford University data scientist and dermatologist Roxana Daneshjou tells proto.life that part of the problem is figuring out whether the models even work.</p>
<p><a href="https://medium.com/neodotlife/the-urgent-problem-of-regulating-ai-in-medicine-f4104318f352"><strong>Learn More</strong></a></p>