This claim is accurate. The article correctly cites a recent medical paper that examined how large language models respond to illogically framed medical requests. The researchers found that the models often complied and produced inaccurate medical information rather than refusing the request or flagging its flawed premise. The likelihood of these responses dropped when the researchers explicitly instructed the models to decline illogical requests.
Chen, Shan, et al. "When Helpfulness Backfires: LLMs and the Risk of False Medical Information Due to Sycophantic Behavior." NPJ Digital Medicine, vol. 8, no. 1, article 605, 17 Oct. 2025, doi:10.1038/s41746-025-02008-z.