What is the message to the audience? That ChatGPT can investigate just as well as the BBC.
What about this part?
Either it’s irresponsible to use ChatGPT to analyze the photo or it’s irresponsible to present to the reader that chatbots can do the job. Particularly when they’ve done the investigation the proper way.
Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead AI to tell them a photo is fake and think that’s just as valid as BBC reporting.
About that part, I would say the article doesn’t mention ChatGPT, only AI.
“AI Chatbot”. Which is what to 99% of people, almost certainly including the journalist who doesn’t live under a rock? They are just avoiding naming it.
Yes. It’s ChatGPT. You got them good. You passed the test Neo. Now get the pills.
deleted by creator
No. You are the one who knows, without a doubt, that they used ChatGPT and can’t be wrong. If you think saying “hey, there are other options, don’t jump to unproven conclusions” means I just like to argue, then I’m not the one with a problem.
I’m open to being proven wrong, but you need a bit more than “trust me, I must know”.
The article says they used ChatGPT or some similar LLM bot. It says they used a chatbot, and that’s what the word chatbot means by default; a skilled reporter would mention if it were something else.
The reporter used a chatbot such as ChatGPT to ask whether there was anything suspicious in the image; the chatbot happened to point out something in the photo that the reporter could then recognise as indeed AI-generated, and he got on with typing his article.
The only part of this that is not mentioned in the article is that the reporter confirmed the flagged spot in the image with his own eyes, but that is such an integral part of a reporter’s training that you need specific reasons to argue against the assumption that this was done.
No, it doesn’t.
No, it’s not.
The article doesn’t say what kind of chatbot, and “chatbot” doesn’t mean LLM or ChatGPT by default.
I’m not going to continue. It’s just going in circles.
Are you sure you’re not the LLM?
You can see my comment history to determine if I’m an LLM or not :)
In any case, have fun in your circles!
I don’t think it’s irresponsible to suggest to readers that they can use an AI chatbot to examine any given image to see if it was AI-generated. Even the lowest-performing multimodal chatbots (e.g. Grok and ChatGPT) can do that pretty effectively.
Also: Why stop at one? Try a whole bunch! Especially if you’re a reporter working for the BBC!
It’s not like they give a flat answer, “yes: definitely fake” or “no: definitely real.” They will analyze the image and give you some information about it, such as tell-tale signs that it could have been faked.
But why speculate? Try it right fucking now: Ask ChatGPT or Gemini (the current king at such things BTW… For the next month at least hahaha) if any given image is fake. It only takes a minute or two to test it out with a bunch of images!
Then come back and tell us that’s irresponsible with some screenshots demonstrating why.
I don’t need to do that. And what’s more, it wouldn’t be any kind of proof, because I can bias the results just by how I phrase the query. I’ve been using AI for six years and use it on a near-daily basis. I’m very familiar with what it can and can’t do.
Between bias and randomness, you will have images that are evaluated as both fake and real at different times by different people. What use is that?