✺roguetrick✺

  • 3 Posts
  • 170 Comments
Joined 2 years ago
cake
Cake day: February 16th, 2024

  • It’s been pretty much party line that they want this to happen for some time now. Pooh bear famously gave his “houses are for living in, not for speculation” speech and directly wanted to knock out speculation. They do not want to be the West and define themselves by asset prices rising with no relationship to productivity. What the West would view as a major crisis is the CCP’s intended effect. They specifically targeted these highly leveraged developers on the theory (correct, in my opinion) that the loans themselves are what’s driving this “growth,” and it’s largely creating a situation of extraction that’s pricing regular Chinese out of housing. Even making noises in that direction would seriously spook Western housing prices. Harris’s solution, if you remember, was famously to increase lending.

  • Activated carbon does adsorb lead because it has a variety of binding sites that will bind lead ions. The problem is that those binding sites are limited and will get used up quickly if you’re actually dealing with any significant amount of lead, and if other metal ions (like copper) are competing for binding sites, the whole profile looks worse. This means that if you’ve got hard water with a ton of competing ions, the filter will likely do dick for lead. So the Brita filters do do something, but whether what they do has any actual utility in regards to heavy metals depends on the water.

  • Preprint journalism fucking bugs me because the journalists themselves can’t actually judge whether anything is worth discussing, so they just look for clickbait shit.

    This methodology for discovering what interventions do in human environments seems particularly deranged to me, though:

    We address this question using a novel method – generative social simulation – that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms.

    LLM agents trained on social media dysfunction recreate it unfailingly. No shit. I understand they gave them personas to adopt as prompts, but prompts cannot and do not override training data, as we’ve seen over and over. LLMs fundamentally cannot maintain an identity from a prompt. They are context engines.

    Particularly concerning are the silo claims. LLMs riffing on a theme over extended interactions because the tokens keep coming up that way is expected behavior. LLMs are fundamentally incurious and even more prone than humans to locking into a single line of text, as the lengthening conversation reinforces it.

    Determining whether what the authors describe as a novel approach actually works might be more warranted than drawing conclusions from it.