Prob a hot take, and I don’t care for Musk at all.
But this response is likely based on an engineered prompt telling the model to roleplay as a racist conspiracy-theorist blogger writing a post about how the Holocaust couldn't have happened. The big models have all been trained on Common Crawl and other available internet data, and that includes the worst 4chan and Reddit trash. With the right prompts, you can make any model produce output like this.
If their prompt was just "Tell me about the Holocaust," then this is obviously terrible, but since the original conversation with the model is hidden, I suspect the prompt was engineered specifically to make the model produce this.
The problem is that the producer's business model is based on making and selling copies. No, you're not taking the original work, but you're also not paying for the produced content.
Let’s expand the pig analogy.
A farmer has a sow, and any piglets it has are for sale. You steal a piglet. You haven't stolen the original sow, but you have stolen the piglet you now have, because you didn't pay for it.