

Strix Halo (AI Max CPUs) are basically that.
But they’re still DDR5 hanging off a bus, manufactured in the same place as sticks, so that wouldn’t really affect the price.


I was just about to bang out that they must lose a lot of heat from the compression. But apparently not! That’s amazing.
I’m struggling to think of systems that would significantly outperform “75%+”. Chilled superconducting coils? Those are expensive, and would fail rather catastrophically.


https://www.openbible.info/topics/oil
Then Samuel took a flask of oil and poured it on his head and kissed him and said, “Has not the Lord anointed you to be prince over his people Israel? And you shall reign over the people of the Lord and you will save them from the hand of their surrounding enemies. And this shall be the sign to you that the Lord has anointed you to be prince over his heritage.
You have loved righteousness and hated wickedness. Therefore God, your God, has anointed you with the oil of gladness beyond your companions;
And they cast out many demons and anointed with oil many who were sick and healed them.
When my steps were washed with butter, and the rock poured out for me streams of oil!
The jar of flour shall not be spent, and the jug of oil shall not be empty, until the day that the Lord sends rain upon the earth.’” And she went and did as Elijah said. And she and he and her household ate for many days. The jar of flour was not spent, neither did the jug of oil become empty, according to the word of the Lord that he spoke by Elijah.
Oh my god. This explains so much about US politics.


A prudent strategy would be a 2nd account for orders, maybe?


Yeah :(
Still though, Intel has their own fabs, which aren’t really restricted by any of this and aren’t as easy to spin down as PCB production. So the CPU is likely to be the cheapest part of the whole system.


You don’t need LLMs for that. An iPhone is plenty powerful enough for image recognition and text classification.
That’s sorta the funny thing about AI. There’s tons of potential, but it’s just unimplemented. Even on PC, you pretty much have to have some Nvidia GPU and fight pip while setting up Python repos to get anything working.
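To be concrete about the “you don’t need LLMs” part, here’s the kind of thing I mean: a toy sketch with scikit-learn standing in for whatever on-device framework you’d actually use, with made-up texts and labels.

```python
# Toy sketch: text classification with no LLM anywhere, just TF-IDF plus a
# linear model. The texts and labels below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "want to grab lunch tomorrow?",
    "your package has shipped and is out for delivery",
    "50% off everything this weekend only",
    "invoice attached for march consulting",
]
labels = ["personal", "shipping", "promo", "finance"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["the courier says your order is on the way"]))
```

A real classifier would be trained on far more data (or be a small pretrained model), but the point stands: this kind of thing runs instantly on phone-class hardware.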


The last feature is the mildly interesting one, but in my experience it’s just not useful enough to do much, even with specific browsing finetunes or augmented APIs.
I guess shake to summarize is mildly interesting, but not really? I simply can’t trust it. And I can just paste the (much more concise) relevant text into a chat window and get a much better answer.


Does anyone even talk about what the “AI features” are?
Could I, like, recolor webpages? Automate uBlock filters? Detect SEO/AI slop? Create a price/feature table out of a shopping page?
See, this would all be neat like auto translate is neat.
But I’m not really interested in the 7 millionth barebones chatbot UI. I’m not interested in loading a whole freaking LLM to auto-name my tabs, or in some cutesy auto-navigation agent experiment that still only works like 20% of the time with a 600B LLM, or a shopping chatbot that doesn’t actually do anything, like Amazon’s or Perplexity’s.
That’s the weird thing about all this. I’m not against neat features, but “AI!” is not a feature, and everyone is right to assume it will be some spam because that’s what 99% of everything AI is. But it’s like every CEO on Earth has caught the same virus and thinks a product with “AI” in the name is the holy grail, regardless of functionality.


Again, they’re tools. Some of the most useful applications for LLMs I’ve worked on are never even seen by human eyes, like ranking documents, then ingesting them and filling out JSON in pipelines. Or as automated testers.
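Roughly what that pattern looks like, assuming a local OpenAI-compatible endpoint (llama.cpp, vLLM, whatever); the URL, model name, prompts, and schema here are all placeholders, not a real deployment:

```python
# Sketch of the "invisible pipeline worker" pattern: rank candidate documents,
# then pull fixed-schema JSON out of the winner. No human reads the raw output.
import json
from openai import OpenAI

# Hypothetical local server; swap in whatever endpoint/model you actually run.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

def rank(query: str, docs: list[str]) -> str:
    # Ask only for the index of the most relevant document.
    numbered = "\n".join(f"{i}: {d}" for i, d in enumerate(docs))
    reply = ask(f"Query: {query}\nDocs:\n{numbered}\nAnswer with the best index, digits only.")
    return docs[int(reply)]

def extract(doc: str) -> dict:
    # Ask for a fixed JSON shape, then parse it downstream.
    reply = ask('Fill this JSON from the text, output JSON only: {"vendor": "", "total": 0.0}\n\n' + doc)
    return json.loads(reply)

docs = ["Invoice #42 from Acme Corp, total due $310.00", "Notes from Tuesday's standup"]
print(extract(rank("unpaid invoices", docs)))
```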
Another is augmented diffusion. You can do crazy things with depth maps, areas, segmentation, mixed with hand sketching to “prompt” diffusion models without a single typed word. Or you can use them for touching up something hand painted, spot by spot.
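If that sounds abstract, the depth-map version looks roughly like this with off-the-shelf pieces (a diffusers ControlNet sketch; the model IDs are just the common public ones and may have moved):

```python
# Minimal sketch: "prompting" a diffusion model with a depth map instead of
# words, via a ControlNet. The depth map is something you sketched, rendered,
# or estimated yourself.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

depth = load_image("my_depth_map.png")  # hand-made or rendered structure map

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The text prompt can stay empty; the depth map carries the composition.
image = pipe(prompt="", image=depth, num_inference_steps=30).images[0]
image.save("out.png")
```

Swap the depth ControlNet for a segmentation or scribble one and you get the “areas and hand sketching” version of the same idea.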
You just need to put everything you’ve ever seen with ChatGPT and Copilot and the NotebookLM YouTube spam out of your head. Banging text into a box and “prompt engineering” is not AI. Chat-tuned decoder-only LLMs are just one tiny slice that a few Tech Bros turned into a pyramid scheme.


An OpenAI subscription does not count.
Otherwise, yeah… but it helps them less, proportionally. AAAs still have the fundamental issue of targeting huge audiences with bland games. Making them even more gigantic isn’t going to help much.
AAs and below can get closer to that “AAA” feel with their more focused projects.


Then most just won’t show up at the Game Awards, and devs will keep using Cursor or whatever they feel comfortable with in their IDE setup.
I’m all against AI slop, but you’re setting an unreasonably absolute standard. It’s like saying “I will never play any game that was developed in proximity to any closed-source software.” That is technically possible, but most people aren’t gonna do that, and it’s basically impossible on a larger team. Give them some slack on the requirement: it’s okay to develop on Windows or on Steam, just open the game’s source.
Similarly, let devs use basic tools. Ban slop from the end product.


Now my blood boils like everyone else’s when it comes to being forced to use AI at work, or when I hear the AI Voice on YouTube, or the forced AI updates to Windows and VS Code.
You don’t hate AI. You hate Big Tech Evangelism. You hate corporate enshittification, AI oligarchs, and the death of the internet being shoved down your throat.
…I think people get way too focused on the tool, and not on these awful entities wielding them while conning everyone. They’re the responsible party.
You’re using “AI” as a synonym for OpenAI, basically, but that’s not Joel Haver’s rotoscope filter at all. That’s niche machine learning.
As for the exponential cost, that’s another con. Sam Altman just wants people to give him money.
Look up what it takes to train (say) Z Image or GLM 4.6. It’s peanuts, and gets cheaper every month. And eventually everyone will realize this is all a race to the bottom, not the top… but it’s taking a little while :/


Yeah.
Maybe a technicality too. The rule said “no AI,” and E33 used AI.
I get their intent: keep AI slop games out. But in hindsight, making the restriction so absolute was probably unwise.


If we’re banning games over how they make concept art… I’m not sure how you expect to enforce that. How could you possibly audit that?
Are you putting coding tools in this bucket?


Then you’re going to get almost no games.
Or you’ll just get devs lying about using Cursor or whatever when they code.
If that’s the culture of the Game Awards, if they have to lie just to get on, that… doesn’t seem healthy.


That’s just not going to happen.
Nearly any game with more than a few people involved is going to have someone use Cursor code completion, or use an LLM for reference or something. They could pull in libraries with a little AI code in them, or use an Adobe filter they didn’t realize is technically GenAI, or commission an artist who uses a tiny bit in their workflow.
If the next Game Awards could somehow audit game sources and enforce that, it’d probably be a few solo dev games, and nothing else.
Not that AI Slop should be tolerated. But I’m not sure how it’s supposed to be enforced so strictly.


Oh, yes. Big publishers will try it on a huge scale. They can’t help themselves.
And they’re going to get sloppy results back. If they wanna footgun themselves, it’s their foot to shoot.
Some mid-sized devs may catch this “Tech Bro Syndrome” too, unfortunately.


I think AI is too dumb, and will always be too dumb, to replace good artists.
I think most game studios can’t afford full-time art houses across like 30 countries, nor should they want the kind of development abomination Ubisoft has set up. That’s what I’m referring to when I say “outsourced”: development that has just gotten too big, with too many people and too generic a target market. And yes, too many artists working on one game.
I think game artists should have a more intimate relationship with their studio, like they did with E33.
And it’d be nice for them to have tools to make more art than they do now, so they can make bigger, richer games, quicker, with less stress and less financial risk. And none of the enshittification that happens when their studio gets too big.


Well, the next question is “what do you do when you drive?” Cars and trucks are good at wildly different jobs.
So basically, what do you want your computer to be good at doing? That dictates your hardware purchase and the OS you will end up using.


CPU makers can’t really make system memory affordably, unfortunately. That’s why it’s separate in the first place :(
Intel has actually done this in the past, with a little eDRAM cache for their integrated graphics on some older 5000 series CPUs, like the 5775C. It topped out at 128MB.
AMD already does something similar with their X3D CPUs, albeit with SRAM… it tops out at 64MB.
They will sell you a bigger version, with IIRC 768MB of L3 cache, for many thousands of dollars.
Another issue is that CPU designs take many, many years to go from initial idea to manufacturing, along with truckloads of cash. So they couldn’t even respond to this shortage in 2026 if they wanted to.
Another is that AMD outsources their manufacturing anyway, though Intel doesn’t.