

The safety designed into Rust is so foreign to the C family that I’m honestly not sure you can do that. Even “unsafe” Rust doesn’t completely switch off the enforced safety
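To make that concrete, here’s a minimal sketch (the vector and raw pointer are just placeholders) of what “unsafe” does and doesn’t unlock:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    v.push(4); // ordinary, safe mutation

    // `unsafe` only unlocks a short list of extra operations,
    // such as dereferencing a raw pointer:
    let p: *const i32 = &v[1];
    unsafe {
        println!("{}", *p);
    }

    // Even inside an unsafe block, this would still be a compile
    // error, because the borrow checker is never switched off:
    // let r = &v[0]; // shared borrow of v
    // v.push(5);     // error[E0502]: cannot borrow `v` as mutable
    // println!("{r}");
}
```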


Those two things aren’t being claimed by the same people.
There are people with functioning brains, who are aware that AI is shit at programming, and there are managers who have been sold a sales pitch and believe that they can replace half of their software engineers.
AI doesn’t actually need to be effective to cost a bunch of jobs, it just needs to have good salespeople. Those jobs will come back when the businesses which decided to rely on AI discover the hole they’ve dug for themselves. That might not be quick though, because there’s no rule saying that major businesses will have competent leaders with good foresight.


There are a million ways to back data up. Many of them are as simple as “copy it to removable media”, and don’t require any clever operating system features at all.
What removable media you can use depends on the quantity of data, and how long you need the backup to last. Maybe DVDs, or USB drives. If the data is valuable enough, you can also pay for cloud storage and upload it
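As a minimal sketch of the “copy it” approach (the paths here are hypothetical; point them at your own files and your own mount point):

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Hypothetical paths: a file worth keeping, and a mounted USB stick
    let src = "/home/me/documents/notes.txt";
    let dst = "/media/me/USB_BACKUP/notes.txt";

    // A backup really can be this simple: one file copy
    let bytes = fs::copy(src, dst)?;
    println!("copied {bytes} bytes to removable media");
    Ok(())
}
```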


Be cautious about trusting the AI-detection tools: they’re not much better than the AI they’re trying to detect, and just as prone to false positives and false negatives.
It’s also inherently an arms race. If a tool existed which could easily and reliably detect AI-generated content, the AI companies would just use it during training, and the models would quickly learn to defeat it. They also wouldn’t need to worry about their training data being contaminated by the output of existing AI, which is becoming a genuine problem right now


Oh, absolutely. It’s not something which should be encouraged, and against a well-designed modern system it probably isn’t possible (there must be some challenge-response NFC systems on the market).
I’m just saying it isn’t unambiguously “illegitimate”


That’s probably debatable, if they have permission. They probably shouldn’t have been given permission, but that’s a separate issue


It’s only true of badly designed bridges these days. Modern engineering tools can calculate the resonant frequencies, and engineers make certain that those are far away from the frequencies which humans or wind can create
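For rough intuition (a deliberately simplified single-degree-of-freedom model, not how a real bridge is actually analysed), the natural frequency of a mass-spring system is

$$f_n = \frac{1}{2\pi}\sqrt{\frac{k}{m}}$$

so raising the stiffness k or changing the mass m moves f_n away from excitation frequencies like pedestrian footfall (roughly 1 to 2 Hz).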


An experimental capability being kicked out of the kernel, so that it has to settle for being a kernel module or living in custom forks of the kernel, is absolutely a minor matter


This is a non-issue, being over-reported by people looking for clicks. A minor technical matter being handled by the person ultimately responsible for handling such things


Israel and Trump appear to be claiming to have defeated the Iranian air defense and achieved air supremacy over the Iranian capital.
If that’s true then Iran is in deep trouble, and inviting them to surrender wouldn’t be unreasonable. I very much doubt that it is true, but that’s what they seem to believe


It’s far harder to achieve mass manipulation of the ballot when it’s all being handled by a lot of human hands. If it’s managed by computers, then by finding a bug or other vulnerability in the software or database you could alter the whole election.
Meanwhile, to manipulate a paper-ballot, hand-counted election in the same way you’d need the cooperation of a huge number of people, and you’d need them all to keep their mouths shut. That’s far more difficult than defeating a computerised system


Honestly I think it’s misleading to describe it as being “defined” as 1, precisely because it makes it sound like someone was trying to squeeze the definition into a convenient shape.
I say, rather, that it naturally turns out to be that way because of the nature of the sequence. You can’t really choose anything else


X^0 and 0! aren’t actually special cases though; you can reach them logically from things which are obvious.
For X^0: you can get from X^(n) to X^(n-1) by dividing by X. That works for all n, so we can say for example that 2³ is 2⁴/2, which is 16/2 which is 8. Similarly, 2¹/2 is 2⁰, but it’s also obviously 1.
The argument for 0! is basically the same. 3! is 1x2x3, and to go to 2! you divide it by 3. You can go from 1! to 0! by dividing 1 by 1.
In both cases the only thing which is special about 1 is that any number divided by itself is 1, just like any number subtracted from itself is 0
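Written out compactly, both chains are the same one-step recurrence:

$$x^{n-1} = \frac{x^n}{x} \quad\Rightarrow\quad x^0 = \frac{x^1}{x} = 1 \quad (x \neq 0)$$

$$(n-1)! = \frac{n!}{n} \quad\Rightarrow\quad 0! = \frac{1!}{1} = 1$$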


Training LLMs on text which has been generated by an LLM is actually pretty problematic. The model can easily collapse, becoming completely useless. That’s why they always try and source really clean training data, which is becoming increasingly difficult


You’re still putting words in my mouth.
I never said they weren’t stealing the data
I didn’t comment on that at all, because it’s not relevant to the point I was actually making, which is that people treating the output of an LLM as if it were derived from any factual source at all is really problematic, because it isn’t.


You’re putting words in my mouth, and inventing arguments I never made.
I didn’t say anything about whether the training data is stolen or not. I also didn’t say a single word about intelligence, or originality.
I haven’t been tricked into using one piece of language over another, I’m a software engineer and know enough about how these systems actually work to reach my own conclusions.
There is not a database tucked away in the LLM anywhere which you could search through to find the phrases it was trained on; it simply doesn’t exist.
That isn’t to say it’s completely impossible for an LLM to spit out something which formed part of the training data, but it’s pretty rare. 99% of what it generates doesn’t come from anywhere in particular, and you wouldn’t find it in any of the sources which were fed to the model in training.


That simply isn’t true. There’s nothing in common between an LLM and a search engine, except insofar as the people developing the LLM had access to search engines, and may have used them while gathering training data


Except these AI systems aren’t search engines, and people treating them like they are is really dangerous


I couldn’t find the actual pinout for the 8-pin package, but the block diagrams make me think they’re power, ground, and six general-purpose pins which can all be GPIO. Other functions, like ADC, SPI and I2C (all of which it has), will be secondary or tertiary functions on those same pins, selected in software.
So the actual answer you’re looking for is basically that all of the pins are everything, and the pinout is almost entirely software defined
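Roughly what that looks like in firmware (a toy model, not this chip’s real API; the `Pin` type and mode names here are invented purely to show the idea):

```rust
// A toy stand-in for a pin-mux register: a real HAL would write
// these selections into a hardware pin-function register.
#[derive(Debug)]
enum PinMode { Gpio, Adc, I2cSda }

struct Pin { number: u8, mode: PinMode }

impl Pin {
    fn set_mode(&mut self, mode: PinMode) {
        // On real hardware this would update the pin-mux in silicon
        self.mode = mode;
        println!("pin {} is now {:?}", self.number, self.mode);
    }
}

fn main() {
    // One physical pin, re-purposed entirely in software:
    let mut pin3 = Pin { number: 3, mode: PinMode::Gpio };
    pin3.set_mode(PinMode::Adc);    // secondary function
    pin3.set_mode(PinMode::I2cSda); // tertiary function
}
```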


I imagine he means things like the Chromebook, rather than the Chromebook itself. Mass-market consumer hardware which comes with Linux by default