☆ Yσɠƚԋσʂ ☆

  • 2.84K Posts
  • 3.1K Comments
Joined 6 years ago
Cake day: January 18th, 2020

  • It’s truly inspiring to watch the political theater of Western imperialism, where the supposedly sovereign government in Kyiv has its peace terms drafted by Washington in consultation with Moscow, and delivered by the US Army Secretary, with an aggressive timeline for signature that would make any colonial administrator blush. The sidelining of the delusional Kellogg, who actually believed Ukraine could win, for an envoy who produced a plan that agrees with Russia’s territorial objectives, finally pulls back the curtain on the entire charade. The key victory for the US is in engineering a situation where the puppet must publicly express his constructive and honest gratitude for the privilege of surrendering. A textbook example of a proxy war, where the client state’s role is not to win, but to lose on a schedule and with the appropriate groveling gratitude to its masters.

  • That’s like asking what’s the difference between a chef who has memorized every recipe in the world and a chef who can actually cook. One is a database and the other has understanding.

    The LLM you’re describing is just a highly sophisticated autocomplete. It has read every book, so it can perfectly mimic the syntax of human thought, including the words, the emotional descriptions, and the moral arguments. It can put on a flawless textual facade. But it has no internal experience. It has never burned its hand on a stove, felt betrayal, or tried to build a chair and had it collapse underneath it.

    AGI implies a world model: an internal, causal understanding of how reality works, which we build through continuous interaction with it. If we get AGI, then it’s likely going to come from robotics. A robot learns that gravity is real, and that “heavy” isn’t an abstract concept but a physical property that changes how you move. It has to interact with its environment and develop a predictive model that allows it to accomplish its tasks effectively.

    This embodiment creates a feedback loop that LLMs completely lack: action -> consequence -> learning -> updated model. An LLM can infer from the past, but an AGI would reason about the future because it operates under the same fundamental rules we do. Your super-LLM is just a library of human ghosts. A real AGI would be another entity in the world.
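
    To make that loop concrete, here’s a toy Python sketch. Everything in it is hypothetical illustration: the “environment” is a one-dimensional world with a single made-up physics constant (TRUE_STEP), and the “world model” is one learned parameter the agent refines from prediction error. It’s the action -> consequence -> learning -> updated model cycle reduced to its skeleton, not a claim about how a real robot would implement it.

    ```python
    import random

    # Toy sketch of the embodied loop described above: the agent acts,
    # observes the real consequence, and updates its internal world model
    # from the prediction error. All names and constants are made up.

    TRUE_STEP = 0.4      # the environment's actual physics (unknown to the agent)
    LEARNING_RATE = 0.5

    def environment(position: float, action: float) -> float:
        """The real world: applies the action with its true dynamics."""
        return position + TRUE_STEP * action

    def main() -> None:
        predicted_step = 1.0   # the agent's initial (wrong) world model
        position = 0.0

        for step in range(10):
            action = random.uniform(-1.0, 1.0)

            # Predict the consequence using the current world model...
            predicted = position + predicted_step * action

            # ...act, and observe what actually happened.
            position = environment(position, action)

            # Learn: nudge the model in proportion to the prediction error.
            error = position - predicted
            if action != 0:
                predicted_step += LEARNING_RATE * error / action

            print(f"step {step}: model={predicted_step:.3f} (true: {TRUE_STEP})")

    if __name__ == "__main__":
        main()
    ```

    Run it and the learned parameter converges toward the true value. The point is just that the update comes from acting in the world and observing the consequence, not from reading descriptions of it.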