- cross-posted to:
- world@lemmy.world
Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.
Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.
“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.
That depends on what you mean by “know.” It generates text from a large bank of hopefully relevant training data, and the relevance of the answer depends on how much overlap there is between your query and what the model was trained on. There are different models with different focuses, so pick your model based on what your query is like.
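To make that “overlap with the training data” point concrete, here is a minimal sketch (assuming Python and the Hugging Face `transformers` library, which nobody in this thread actually names; the models and prompt are arbitrary examples) showing that the same prompt run through different models gives different, unverified continuations:

```python
# Minimal sketch, assuming Python + Hugging Face `transformers` (not mentioned in the thread).
# The same prompt sent to two different models yields different, unverified continuations,
# because each model can only echo patterns from whatever data it was trained on.
from transformers import pipeline

prompt = "The recommended daily dose of vitamin D for adults is"

for model_name in ["distilgpt2", "gpt2"]:  # example models, chosen arbitrarily
    generator = pipeline("text-generation", model=model_name)
    result = generator(prompt, max_new_tokens=20, do_sample=False)
    print(f"{model_name}: {result[0]['generated_text']}")
```

Neither output is fact-checked against anything; that is the whole point of the article above.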
And yeah, one big issue is the confident tone. If users are aware of its limitations, it’s fine; I certainly wouldn’t put my kids in front of one without training them on what it can and can’t be relied on to do. It’s a tool, so users need to know how it’s intended to be used to get value from it.
My use case is distilling a broad idea into specific things to do a deeper search for, and I use traditional tools for that deeper search. For that it works really well.
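For what it’s worth, that “broad idea → specific search terms” step can be scripted. Here’s a rough sketch assuming Python and the `openai` client library (my choice of tooling, not something mentioned above; the model name is also just an example), where the model only proposes queries and the actual lookup still happens in a regular search engine:

```python
# Rough sketch, assuming Python and the `openai` client library (the model name is an
# arbitrary example). The LLM only proposes search queries; the actual fact-finding
# is left to a traditional search engine or database the user trusts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

broad_idea = "how reliable are AI chatbots for health questions"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, swap for whatever you use
    messages=[{
        "role": "user",
        "content": f"List 5 specific search queries I could use to research: {broad_idea}. "
                   "One per line, no commentary.",
    }],
)

# Clean up the model's list and hand it off to a conventional search tool,
# rather than trusting the model's own answers.
queries = [line.strip("-• ").strip()
           for line in response.choices[0].message.content.splitlines()
           if line.strip()]

for q in queries:
    print(q)
```

The point is the division of labour: the chatbot narrows the topic, the deeper search and the judging of sources stay with traditional tools.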