
Can ChatGPT Help Manage Gynecologic Cancers?

ChatGPT can accurately answer common questions pertaining to genetic testing and counseling for gynecologic cancers but stumbles when asked to provide the appropriate treatment option, according to a pair of studies presented at the Society of Gynecologic Oncology 2024 Annual Meeting on Women's Cancer.

"To be able to provide answers to nuanced questions, especially genetic counseling, which has to be tailored to a specific patient's genetic profile, that's definitely impressive," said Jharna M. Patel, MD, of NYU Langone in New York.

The tool has the potential to answer patients' common questions, reducing anxiety and keeping them informed, she told Medscape Medical News.

"However, I do think more work needs to be done in this field for us to really put it into clinical practice. I wouldn't say that it's ready to be employed." 

In the first study, Patel and colleagues fed ChatGPT (version 3.5) 40 questions developed in consultation with gynecologic oncologists and in line with professional society guidelines.

Attending gynecologic oncologists rated the answers on the following scale: 1) correct and comprehensive, 2) correct but not comprehensive, 3) some correct and some incorrect, and 4) completely incorrect. 

Of the 40 questions, ChatGPT provided correct and comprehensive answers to 33 (82.5%), correct but not comprehensive answers to 6 (15%), and partially incorrect answers to 1 (2.5%). None of the answers were scored as completely incorrect.

The genetic counseling category of questions (eg, How do you know if someone has an inherited or family cancer syndrome?) performed best: ChatGPT answered all 20 questions in this category correctly and comprehensively.

ChatGPT performed nearly as well in the specific genetic-disorders category. For example, 15 of 17 (88.2%) answers on BRCA1/2 gene testing were correct and comprehensive, while two of three (66.6%) were correct on Lynch syndrome, the most common form of hereditary colorectal cancer.

"We think we can further improve on these results by continuing to train the AI tool on more data and by learning to ask better sets of questions," senior study author Marina Stasenko, MD, with NYU Grossman School of Medicine, New York, added in a news release. "The goal is to deploy this in the clinic when ready but only as an assistant to human providers."

ChatGPT Stumbles on Treatment Recommendations

The other study — comparing treatment recommendations for gynecologic cancer made by ChatGPT versus clinicians — found less impressive results.

This study involved 114 patients (median age, 63 years) with gynecologic cancers (48 uterine, 43 ovarian, 9 other, 8 vulvar, and 6 cervical) discussed at an academic institution's weekly tumor board conference over a 4-month period.

The researchers fed ChatGPT (version 3.5) prompts with patient clinical information, surgical outcome, and pathologic/molecular results and directed the tool to provide the single best treatment option for the clinical scenario.

The majority of treatment recommendations provided by ChatGPT did not agree with final tumor board decisions. 

Among all 114 cases, the agreement between ChatGPT and the tumor board on treatment was just 46%, reported Eric Rios-Doria, MD, from University of Washington Medical Center, Seattle, in his presentation. 

Agreement also differed by disease site: 44% for uterine cancer, 49% for ovarian, 83% for cervical, 37.5% for vulvar, and 22% for other.

Overall, there was a "statistically fair level of agreement," said Rios-Doria, but "most cases differed in recommendations." ChatGPT's output also lacked treatment details.
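For context, a "fair" level of agreement typically refers to the conventional Cohen's kappa benchmarks (roughly 0.21-0.40), which adjust raw percent agreement for agreement expected by chance. The sketch below is purely illustrative, using hypothetical treatment labels rather than the study's actual data, and shows how raw agreement near 50% can still translate to only "fair" chance-corrected agreement:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of cases where the two raters give the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(a)
    po = percent_agreement(a, b)                     # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical recommendations: tumor board vs ChatGPT (not from the study)
board   = ["chemo", "surgery", "radiation", "observation",
           "chemo", "surgery", "radiation", "observation"]
chatgpt = ["chemo", "surgery", "radiation", "observation",
           "surgery", "chemo", "observation", "radiation"]

print(round(percent_agreement(board, chatgpt), 2))  # 0.5
print(round(cohens_kappa(board, chatgpt), 2))       # 0.33 -> "fair" by convention
```

With these made-up labels, raw agreement is 50% but kappa is 0.33, falling in the conventional "fair" band, similar in spirit to the 46% agreement the study reported.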

Generative AI may provide a framework for clinicians when determining treatment options for gynecologic cancer, but "human evaluation is needed," he concluded. 

Information and Answers, Yes; Judgment, No 

With increasing use of ChatGPT, it's important to realize that the AI tool may not have the most recent information and can reflect biases in its training data, said study discussant Benjamin Matthews, MD, with Johns Hopkins University, Baltimore, Maryland.

And these two studies, he noted, asked ChatGPT "very different questions." 

The study by Patel and colleagues showed that the tool can accurately provide "information and answers to genetic counseling questions encountered in clinical practice." 

In contrast, the study by Rios-Doria asked whether ChatGPT can "make a judgment" about appropriate cancer treatment. The answer was: "Not enough to take the place of a human tumor board." 

Neither study had commercial funding. Patel, Rios-Doria, and Matthews had no relevant disclosures.
