Title is a little sensational but this is a cool project for non-technical folks who may need a mini-internet or data archive for a wide variety of reasons:
“PrepperDisk is a mini internet box that comes preloaded with offline backups of Wikipedia, street maps, survivalist information, 90,000 WikiHow guides, iFixit repair guides, government website backups (including FEMA guides and National Institutes of Health backups), TED Talks about farming and survivalism, 60,000 ebooks and various other content. It’s part external hard drive, part local hotspot antenna—the box runs on a Raspberry Pi that allows up to 20 devices to connect to it over wifi or wired connections, and can store and run additional content that users store on it. It doesn’t store a lot of content (either 256GB or 512GB), but what makes it different from buying any external hard drive is that it comes preloaded with content for the apocalypse.”
AI opponents will spend their last hours manually slogging through 250GB of content rather than let a hallucination potentially misguide them.
I’m reminded of that AI-written book that misidentified poisonous mushrooms.
AI is worthless if you are unable to tell hallucination from fact.
But just like NP hard problems, verifying a given solution is much easier than coming up with one yourself.
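That asymmetry is easy to show concretely. Here's a toy sketch (my own illustration, not from the thread) using subset sum, a classic NP-hard problem: checking a claimed answer is a quick sum, while finding one may require searching every subset.

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Cheap check: does the proposed subset come from nums and hit the target?
    Runs in time linear in the certificate size (assuming distinct nums)."""
    return all(c in nums for c in certificate) and sum(certificate) == target

def solve(nums, target):
    """Expensive search: brute-force over all 2^n subsets until one sums to target."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None  # no subset of nums sums to target

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)         # exponential in the worst case
print(verify(nums, 9, cert))  # verifying the found certificate is trivial: True
```

The same dynamic is the commenter's point about the motorbike: letting the AI propose an answer and then checking it against the bike (or the manual) is far cheaper than deriving the answer from scratch.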
I’m reminded that AI is helping me restore an old motorbike I got for practically free, and the only fight we had was looking for the oil filter on the wrong side of the bike.
Right. So it told you objectively wrong information while the correct answer is freely available from technical documents that it ought to know how to read.
So imagine if that was something actually life threatening.
But it wasn’t life threatening. And still much faster than looking for the technical manual. And took me all of 30 seconds to realise it had misunderstood the photo I sent it.
Yeah, you have to take it for what it’s worth, and it’s worth a lot. Most of what it says is pretty close, and when close is good enough, go for it. When AI is telling you how to secure your brake hydraulic connectors and it doesn’t seem quite right - time for a 2nd opinion.
For sure, AIs/LLMs can be dangerous if you don’t also apply critical thinking, but that’s been true of the internet forever, and even before. The Anarchist Cookbook has recipes that will, at best, waste a bunch of soap and gasoline or have you scraping banana peels with a razor blade, or at worst, have you making chlorine gas in your basement. 4chan had a popular recipe for “peanut butter cookies” that would result in an oven fire, and instructions to drill a hole in your iPhone to use the headphone jack.
It’s much more important to protect and promote critical thinking skills than it is to try to shield everybody from misinformation and hallucinations.
My father had a 50-year career in education and spent it trying to promote critical thinking. He recently retired and feels that, over all 50 years, the educational system has made negative progress in improving critical thinking skills.
I’m reminded of the people who misidentify poisonous mushrooms each year and die from it.
I’m reminded of Huga Shrooma, the first man to misidentify poisonous mushrooms.
The man saved generations to come.
You’ll never win against AI haters. Nothing is perfectly accurate and even if LLMs are less accurate than average it does not diminish the use case potential.
If someone eats a deadly mushroom based on one research source then really that’s just natural selection at play lol
The interesting thing to me about LLMs is their potential for reduced bias. If you could manage to feed them unbiased training data (impossible), then you should get unbiased results.
Of course, they have already been feeding LLMs biased training data, leading to significantly biased results - but when the bias matches the owner’s agenda they present it as “unbiased and fair”.