• 4 Posts
  • 506 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • Not everyone’s a big kb/mouse fan. My sister refuses to use one on the HTPC.

    Hence I think it had a real, if modest, niche: couch usage. Portable keyboards are really awkward and clunky on laps, and the Steam Controller is way better and more ergonomic than an integrated trackpad.

    Personally I think it was a smart business decision, because of this:

    “It doesn’t have 2 joysticks so I just buy an Xbox one instead.”

    No one’s going to buy a Steam-branded Xbox controller, but making it different gives people a reason to. And I think what killed it is that it wasn’t plug-and-play enough, e.g. it didn’t work out of the box with many games.




    A lot, but less than you’d think! Basically an RTX 3090/Threadripper system with a lot of RAM (192GB?)

    With this framework, specifically: https://github.com/ikawrakow/ik_llama.cpp?tab=readme-ov-file

    The “dense” part of the model can stay on the GPU while the experts can be offloaded to the CPU, and the whole thing can be quantized to ~3 bits average, instead of 8 bits like the full model.
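    For rough scale, the memory math behind that setup can be sketched like this (toy arithmetic only; real GGUF quants carry extra metadata overhead, and the KV cache is ignored):

    ```python
    # Back-of-the-envelope sizing for a ~671B-parameter MoE (DeepSeek R1-class)
    # quantized to ~3 bits per weight on average. Toy math for illustration.

    def quantized_size_gb(params_billions: float, bits_per_weight: float) -> float:
        """Approximate quantized model size in GB (1 GB = 1e9 bytes)."""
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    total_gb = quantized_size_gb(671, 3.0)   # whole model: experts live in system RAM
    active_gb = quantized_size_gb(37, 3.0)   # ~37B active params: the GPU-resident slice

    print(f"total ~{total_gb:.0f} GB, active ~{active_gb:.0f} GB")
    ```

    That works out to roughly 252 GB total and 14 GB active, which is why the experts have to spill into system RAM while the dense slice fits comfortably in 24GB of VRAM, and why squeezing into a 192GB-RAM box pushes the average below 3 bits (~2.5 bits lands near 210 GB).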


    That’s just a hack for personal use, though. The intended way to run it is on a couple of H100 boxes, and to serve it to many, many, many users at once. LLMs run more efficiently when they serve in parallel. Eg generating tokens for 4 users isn’t much slower than generating them for 2, and Deepseek explicitly architected it to be really fast at scale. It is “lightweight” in a sense.
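    The intuition for that parallel efficiency can be sketched with toy numbers (all hypothetical): single-user token generation is roughly memory-bandwidth bound, so one pass over the weights can feed several users’ tokens at once.

    ```python
    # Toy model of batched decoding throughput. Single-stream decode is limited
    # by how fast weights stream from memory; a batch reuses that same weight
    # traffic, so aggregate throughput scales ~linearly until compute becomes
    # the bottleneck. All numbers are made up for illustration.

    MEM_BANDWIDTH_GBS = 2000   # hypothetical accelerator memory bandwidth, GB/s
    ACTIVE_BYTES_GB = 37       # weight bytes read per decode step (~37B active params @ 1 byte)

    steps_per_sec = MEM_BANDWIDTH_GBS / ACTIVE_BYTES_GB  # weight-read-limited decode steps

    for batch in (1, 2, 4):
        # each step now yields `batch` tokens for roughly the same weight traffic
        print(f"batch={batch}: ~{steps_per_sec * batch:.0f} tokens/s aggregate")
    ```

    In this toy picture, serving 4 users costs nearly the same weight bandwidth as serving 1, which is the sense in which generating for more users “isn’t much slower.”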


    …But if you have a “sane” system, it’s indeed a bit large. The best I can run on my 24GB vram system are 32B - 49B dense models (like Qwen 3 or nemotron), or 70B mixture of experts (like the new Hunyuan 70B).


  • “DeepSeek, now that is a filtered LLM.”

    The web version has a strict filter that cuts it off. Not sure about API access, but raw Deepseek 671B is actually pretty open. Especially with the right prompting.

    There are also finetunes that specifically remove China-specific refusals. Note that Microsoft actually added safety training to “improve its risk profile”:

    https://huggingface.co/microsoft/MAI-DS-R1

    https://huggingface.co/perplexity-ai/r1-1776

    That’s the virtue of being an open-weights LLM. Over-filtering isn’t a fatal problem; you can tweak it to do whatever you want.


    “Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.”

    Instruct LLMs aren’t trained on raw data.

    It wouldn’t be talking like this if it was just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry picked “anti woke” data to placate Musk real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific overused phrases, common obsessions, going off-topic, and so on.


    …Not that I don’t agree with you in principle. Twitter is a terrible source for data, heh.






  • OK, while in principle this looks bad…

    This is (looking it up) like an experienced engineer’s salary in Peru, in line with some other professions.

    It’s reasonable to compensate a president rather than expect them to come in rich or connected enough not to need a salary, or to broker power for personal wealth, as long as other offices are reasonably compensated too.

    It avoids perverse incentives, doesn’t seem excessive, and TBH is probably a drop in the bucket of the Peruvian govt’s budget.





  • My last Android phone was a Razer Phone 2, SD845 circa 2018. Basically stock Android 9.

    And it was smooth as butter. It had a 120 Hz screen while my iPhone 16 is stuck at 60, and I can feel it. And it flew through some heavy web apps I use while the iPhone chugs and jumps around, even though the new SoC should objectively blow away even modern Android devices.

    It wasn’t always this way; iOS used to be (subjectively) so much faster that it’s not even funny, at least back when I had an iPhone 6S(?). Maybe there was an inflection point? Or maybe it’s only the case with “close to stock” Android stuff that isn’t loaded with bloat.





  • Yes, but it’s clearly a building block of Meta’s LLM training effort, and part of a pattern.

    One implication I didn’t mention, and don’t have hard proof I can point to, is garbage in garbage out. Meta let AI slop and human garbage proliferate on Facebook, squandering basically the biggest advantage (besides cash) they have. It’s often speculated that, as it turns out, Twitter and Facebook training data is kinda crap.

    …And they’re at it again. Zuckerberg pours cash into corporate trash and gets slop back. It’s an internal disaster, much like their own divisions.

    On the other side, it’s often thought that Chinese models are so good for their size/compute because they’re ahem getting data from the Chinese government, and don’t need to worry about legal issues.


  • The research community already knows this.

    Llama 4 (Meta’s flagship ‘AI’ project) was a bad release. That’s fine. This is iterative research; not every experiment works out.

    …But it was also a messy and dishonest one.

    The release was pushed early and full of bugs. They lied about its performance, especially at long context, going so far as to game Chatbot Arena with a finetune. Zuckerberg hyped the snot out of it, to the point I saw ads for it on Axios.

    Instead of Meta saying they’ll do better, they said they’re reorganizing their divisions to focus on ‘applications’ instead of fundamental research, aka exactly the wrong thing. They’ve hemorrhaged good researchers and kept AI bros, as far as I can tell from the outside.

    Every top LLM trainer has controversies. Just recently Qwen (Alibaba) closed off their top base models just to spite Deepseek, so they can’t distill them. Deepseek is almost certainly training on Google Gemini traces. Google hoards their best research for API models and has chased being sycophantic like ChatGPT. X’s Grok is a joke, and muddied by Musk’s constant lies about, for instance, open-sourcing it. Some great outfits like 01.AI (the Yi series) faded into the night.

    …But I haven’t seen self-destruction quite like Meta’s. Especially considering the ‘f you’ money and GPU farm they have. They’re still pushing interesting research now, but the trajectory is awful.


  • ChatGPT (last time I tried it) is extremely sycophantic, though. Its high default sampling settings also lead to totally unexpected/random turns.

    Google Gemini is now too.

    And they log and use your dark thoughts.

    I find that less sycophantic LLMs are way more helpful, hence I bounce between Nemotron 49B and a few 24B-32B finetunes (or task vectors for Gemma).

    …I guess what I’m saying is people should turn towards more specialized and “openly thinking” free tools, not something generic, corporate, and purposely overpleasing like ChatGPT or most default instruct tunes.