

It’s misleading.
IBM is very much into AI, as a modest, legally trained, economical tool. See: https://huggingface.co/ibm-granite
But this is the CEO saying “We aren’t drinking the Kool-Aid.” It’s shockingly reasonable.


That’s interesting.
I dunno if that’s any better. Compiler development is hard, and expensive.
I dunno what issue they have with LLVM, but it would have to be massive to justify building around it and then switching away to re-invent it.


And the discoverability pipeline is breaking.
No one reads old-school curators like RockPaperShotgun anymore. They’re barely afloat.
Generic algorithmic social media like YouTube tends to snowball a few games.
Forums are dead. Reddit is dystopian.
That leaves Steam’s algorithm, and a sea of sparsely seen solo reviewers. But there are billions of people oblivious to passion projects they’d love, and playing AAAs or gacha phone apps instead.


…The same Zig that ditched LLVM, to make their own compiler from scratch?
This is good. But also, this is sort of in character for Zig.


They’re pretty bad outside of English-Chinese actually.
Voice-to-voice is all relatively new, and it sucks if it’s not fully integrated (e.g. feeding a voice model plain text, so it loses the original tone, emotion, cadence, and so on).
And… honestly, the only models I can think of that’d be good at this are Chinese. Or Japanese finetunes of Chinese models. Amazon certainly has some stupid policy where they aren’t allowed to use them (even with zero security risk since they’re open weights).
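To make that concrete, here’s a rough sketch of the kind of non-integrated pipeline I mean (the model names and input file are placeholder picks, assuming openai-whisper, transformers, and pyttsx3 are installed). Everything past step 1 only ever sees flat text, so the original delivery is gone before a voice comes back out.

```python
# Sketch of a cascaded (non-integrated) voice-to-voice translation pipeline.
# Assumes openai-whisper, transformers, and pyttsx3 are installed;
# "input_ja.wav" and the translation model are placeholder choices.
import whisper
from transformers import pipeline
import pyttsx3

# 1) Speech -> text: tone, emotion, and cadence are discarded here.
asr = whisper.load_model("base")
text = asr.transcribe("input_ja.wav")["text"]

# 2) Text -> text: the translator only ever sees flat text.
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-en")
english = translate(text)[0]["translation_text"]

# 3) Text -> speech: the TTS engine invents its own delivery,
#    knowing nothing about how the original was spoken.
tts = pyttsx3.init()
tts.say(english)
tts.runAndWait()
```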


Honestly, even a dirt-cheap language model (with sound input) would tell you it’s garbage. It could itemize the problematic parts of the sub.
But they didn’t use that, because this isn’t machine learning. It’s Tech Bro AI.


All true, yep.
Still, the clocking advantage is there. Stuff like the N100 is also optimized for low cost, which means pushing higher clocks on smaller silicon. That’s even more dramatic for repurposed laptop hardware, which is much more heavily optimized for its idle state.


First thing: Lemmy is in need of content, and its users like recruiting. Hence you got 315 replies, heh.
Basically, if you aren’t a bigot, you don’t have to worry about what you say. You can be politically incorrect in any direction and not get a global/shadowban from the Fediverse.
Each instance has its own flavor and etiquette.


They are human. There’s nothing wrong with acknowledging that, while also reiterating that they basically shouldn’t be in that state.
Also, I think it’s important to draw a line between the “rich” (well-off working professionals like researchers, doctors, small entrepreneurs), and people with more wealth than many sovereign nations put together.


This is interesting, because “add ads” usually means margins are slim, and the product is in a race to the bottom.
If ChatGPT were the transcendent, priceless, premium service they’re hyping it as… why would it need ads?


Same with auto-overclocking mobos.
My ASRock sets VSoC to a silly high voltage with EXPO. Set that back down (and fiddle with some other settings/disable the IGP if you can), and it does help a ton.
…But I think AMD’s MCM chips just idle hotter. My older 4800HS uses dramatically less, even with the IGP on.


Yeah.
In general, ‘big’ CPUs have an advantage because they can run at much, much lower clockspeeds than Atoms, yet still be way faster. There are a few exceptions, like Ryzen 3000+ (excluding APUs), which idle notoriously hot thanks to the multi-die setup.


Eh, older RAM doesn’t use much. If its rated profile runs close to stock voltage, maybe just set it to stock voltage and bump the speed down a notch for stability; you keep most of the performance boost, and finishing tasks faster is a nice win on energy per task.
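Rough arithmetic on why the faster-finishing setup can still win on energy per task (numbers are made up, purely to show the shape of it):

```python
# Energy per task = average power draw * time to finish.
# A setup that draws slightly more power but finishes sooner (then idles)
# can use less total energy. The numbers below are invented for illustration.
configs = {
    "slower RAM": {"watts": 10.0, "seconds": 100.0},
    "one notch faster": {"watts": 11.0, "seconds": 88.0},
}
for name, c in configs.items():
    print(f"{name}: {c['watts'] * c['seconds']:.0f} J")
# slower RAM: 1000 J, one notch faster: 968 J
```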


Depends.
If you toss the GPU/wifi, disable audio, throttle the processor a ton, and set the OS to power saving, old PCs can be shockingly efficient.
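For the “throttle the processor / set the OS to power saving” part, this is roughly the idea on Linux (assumes the standard cpufreq sysfs interface and a driver that exposes a powersave governor; needs root, and paths vary by setup):

```python
# Rough sketch: force the powersave governor and clamp the max frequency to
# the minimum each core supports, via the cpufreq sysfs interface (Linux, root).
import glob

for gov_path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
    with open(gov_path, "w") as f:
        f.write("powersave")

for cpu in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq"):
    with open(f"{cpu}/cpuinfo_min_freq") as f:
        min_khz = f.read().strip()
    with open(f"{cpu}/scaling_max_freq", "w") as f:
        f.write(min_khz)
```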


The base Mac Mini is not super powerful. Physically, the silicon is comparable to AMD Strix Point, which you’d find in any AMD laptop.
I am not trying to rag on Apple here: their stuff is fine. It’s ridiculously power efficient. It’d be beyond excellent for a handheld like the Steam Deck, or a VR headset.
…But a plug in gaming console? That’s more ‘M4 Pro’ silicon. And what they charge for that speaks for itself.


Do other instances defederate with them, though?


M chips are super expensive. They’re optimized for low clockspeed/idle efficiency and pay through the nose for cutting-edge processes, whereas most gaming hardware is optimized for pure speed/$, with the smallest die area and cheapest memory possible, at the expense of power efficiency.
And honestly, keeping the CPU and GPU separate, linked over board traces, is more economical. “Unified memory” programming isn’t strictly needed for games at the moment.
And, practically, Apple demands very high margins. I just can’t see them pricing a console aggressively.


This is what basically anyone in the ML research/dev industry will tell you, heh.


I’m in a similar boat, though I’ve been present for some time.
Dbzer0 seems like the best “fit” for me, but practically I just want the instance that’s not defederated/blocking other instances.
…Not sure which that is. But I’d look at Piefed before Lemmy: they federate with each other, and Piefed seems more desirable feature-wise.


I mean, many forums are still alive.
The problem is engagement. Discord, YouTube, even Lemmy all ping you in your pocket and offer more “instant” dopamine hits than a forum or news site, hence they’ve sucked all the attention.
It works. I’m guilty of falling into it for sure, even when I keep telling myself I will change my information diet.