
Feb 09 2023

3 Questions: Leo Anthony Celi on ChatGPT and medicine

Launched in November 2022, ChatGPT is a chatbot that can not only engage in human-like conversation, but also provide accurate answers to questions in a wide range of knowledge domains. The chatbot, created by the firm OpenAI, is based on a family of “large language models” — algorithms that can recognize, predict, and generate text based on patterns they identify in datasets containing hundreds of millions of words.

In a study appearing in PLOS Digital Health this week, researchers report that ChatGPT performed at or near the passing threshold of the U.S. Medical Licensing Exam (USMLE) — a comprehensive, three-part exam that doctors must pass before practicing medicine in the United States. In an editorial accompanying the paper, Leo Anthony Celi, a principal research scientist at MIT’s Institute for Medical Engineering and Science, a practicing physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School, and his co-authors argue that ChatGPT’s success on this exam should be a wake-up call for the medical community.

Q: What do you think the success of ChatGPT on the USMLE reveals about the nature of medical education and the evaluation of students?

A: The framing of medical knowledge as something that can be encapsulated into multiple choice questions creates a cognitive framing of false certainty. Medical knowledge is often taught as fixed model representations of health and disease. Treatment effects are presented as stable over time despite constantly changing practice patterns. Mechanistic models are passed on from teachers to students with little emphasis on how robustly those models were derived, the uncertainties that persist around them, and how they must be recalibrated to reflect advances worthy of incorporation into practice.

ChatGPT passed an examination that rewards memorizing the components of a system rather than analyzing how it works, how it fails, how it was created, and how it is maintained. Its success demonstrates some of the shortcomings in how we train and evaluate medical students. Critical thinking requires an appreciation that ground truths in medicine continually shift, and more importantly, an understanding of how and why they shift.

Q: What steps do you think the medical community should take to modify how students are taught and evaluated?

A: Learning is about leveraging the current body of knowledge, understanding its gaps, and seeking to fill those gaps. It requires being comfortable with and being able to probe the uncertainties. We fail as teachers by not teaching students how to understand the gaps in the current body of knowledge. We fail them when we preach certainty over curiosity, and hubris over humility.

Medical education also requires being aware of the biases in the way medical knowledge is created and validated. These biases are best addressed by optimizing the cognitive diversity within the community. More than ever, there is a need to inspire cross-disciplinary collaborative learning and problem-solving. Medical students need data science skills that will allow every clinician to contribute to, continually assess, and recalibrate medical knowledge.

Q: Do you see any upside to ChatGPT’s success in this exam? Are there beneficial ways that ChatGPT and other forms of AI can contribute to the practice of medicine?

A: There is no question that large language models (LLMs) such as ChatGPT are very powerful tools in sifting through content beyond the capabilities of experts, or even groups of experts, and extracting knowledge. However, we will need to address the problem of data bias before we can leverage LLMs and other artificial intelligence technologies. The body of knowledge that LLMs train on, both medical and beyond, is dominated by content and research from well-funded institutions in high-income countries. It is not representative of most of the world.

We have also learned that even mechanistic models of health and disease may be biased. These inputs are fed to encoders and transformers that are oblivious to these biases. Ground truths in medicine are continuously shifting, and currently, there is no way to determine when ground truths have drifted. LLMs do not evaluate the quality and the bias of the content they are being trained on. Neither do they provide the level of uncertainty around their output. But the perfect should not be the enemy of the good. There is tremendous opportunity to improve the way health care providers currently make clinical decisions, which we know are tainted with unconscious bias. I have no doubt AI will deliver its promise once we have optimized the data input.


3 Questions: Leo Anthony Celi on ChatGPT and medicine Republished from Source https://news.mit.edu/2023/3-questions-leo-anthony-celi-chatgpt-and-medicine-0209 via https://news.mit.edu/rss/topic/artificial-intelligence2


Written by Anne Trafton, MIT News Office
