I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.
In its current stage, no. But it’s come a long way in a short time, and I don’t think we’re so far from having machines that pass the Turing test 100%. But rather than being a proof of consciousness, all this really shows is that you can’t judge consciousness from the outside looking in. We know it’s a big illusion just because its entire development has been focused on building that illusion. When it says it feels something, or cares deeply about something, it’s saying that because that’s the kind of thing a human would say.
Because all the development has been focused on fakery rather than on understanding and replicating consciousness, we’re close to the point where we can have a fake consciousness that would fool anyone. It’s a worrying prospect, and not just because I won’t become immortal by having a machine imitate my behaviour. There are bad actors working to exploit this situation. Elon Musk’s attempts to turn Grok into his own personally controlled overseer of truth and narrative seem to backfire in the most comical ways, but those are teething troubles, and in time this will turn into a very subtle and pervasive problem for humankind. The intrinsic fakeness of it is a concerning aspect. It’s like we’re getting a puppet show version of what AI could have been.
I don’t think we’re so far from having machines that pass the Turing test 100%.
The Turing test isn’t solved with technology; it’s solved with participants who are easier to fool, or more willing to read computer output as humanly legible. In the end, it can come down to social conventions far more than actual computing capacity.
Per the old Inglourious Basterds gag:
You can fail the Turing Test not because you’re a computer but because you’re a British computer.
Because all the development has been focused on fakery rather than understanding and replicating consciousness, we’re close to the point where we can have a fake consciousness that would fool anyone.
We’ve ingested a bunch of early-21st-century digital markers of English-language, Western-oriented human speech and replicated those patterns. But human behavior isn’t limited to Americans shitposting on Reddit, and American culture isn’t a static construct either. As the median real user and the median simulated user in the training data drift apart, the differences become more obvious.
Do we think the designers at OpenAI did a good enough job to keep catching up to the current zeitgeist?