Updates for the Academic Medicine Community on Creating Safe A.I. Systems with Eric Nalisnick

Eric Nalisnick

This week’s Faculty Factory Podcast takes an in-depth look at building safe artificial intelligence systems, with important news and notes for the academic medicine community.

We’re excited to be joined by first-time guest Eric Nalisnick, an Assistant Professor in the Department of Computer Science at Johns Hopkins University, for this timely discussion.

Alongside thoughts on the current state of incorporating the human element into these systems, one thing remains abundantly clear after listening to today’s discussion: when left unchecked, these A.I. systems are unreliable for work that allows no margin for error (e.g., medical practice, tax returns).

Large language models like ChatGPT are effective for low-stakes tasks such as brainstorming and bouncing ideas around to stimulate creativity or encourage alternative ways of thinking.

With the ongoing and rapidly growing integration of artificial intelligence in the medical, research, and education fields, maintaining safety and ethical standards, and ensuring that the human touch is not lost, are central themes in today’s interview.

“Integration and efficiency are something I hope we will see from A.I. systems, as opposed to more erosion of the human aspect,” Nalisnick said optimistically in the closing moments of our podcast.

If you enjoyed today’s podcast or found it useful, consider listening to previous Faculty Factory interviews related to the topics Eric discussed with us: