Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 309 Comments
Joined 2 years ago
Cake day: March 3rd, 2024




  • One issue is that AI in its various forms makes it far easier than before to use such a tool without understanding its limitations. Garbage in, garbage out still applies, but if the user can’t tell the difference, the garbage gets spread as quality work. This has led to the term “AI slop,” which has since morphed into a general “I don’t like this post” label.

    Another, bigger issue is the origin of the training data, which has unfortunately tainted good uses for these tools (when used within their limits, as stated before). I agree with this concern, but once LLMs and related AI became freely open to the public, that ship sailed. Even if a company could prove its AI was trained only on legitimately obtained data (which could make it more limited than the models already out there), would anyone believe them?

    A related issue is how the AI was trained (setting aside the source of the data). The very fact that LLMs were tuned to give agreeable, positive-sounding answers suggests the work long ago moved from a research project in search of AGI into a marketing ploy meant to make the best impression on an uninformed public for profit. This edges into “AI slop” territory: results that look good to the average user when they are not, though rather than slop it’s deception.





  • I wonder what the breakdown of discussions started by users vs. bots is. I can see your point, especially from a spam POV, but one purpose of this kind of bot is to pull from other sources and get a conversation going. If no one is interested, it just falls to the bottom. I often see posts complaining that Lemmy/fediverse isn’t as active as Reddit was/is, and yet without some of this automation it would be far deader, since people tend to reply to existing posts rather than start new ones. If a particular bot/community is flooding your feed, it can easily be blocked, or you can let the mods/admins know it needs adjustment.




  • It’s a version of the age-old question of how to keep someone from stealing your images while still being able to show them. No one can see an image without having already downloaded it. The best you can do is layer in things like watermarks so that cleaning the image into a “pure” version isn’t worth the trouble. Same with text: poison it so it’s less valuable without a lot of extra work.
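
    For what it’s worth, here’s a minimal sketch of the visible-watermark idea in Python with Pillow (the file names, tile spacing, and opacity are placeholders, not a hardened scheme):

    ```python
    # Tile a semi-transparent text mark across an image so a simple
    # crop or corner erase isn't enough to produce a clean copy.
    from PIL import Image, ImageDraw, ImageFont

    def watermark(src_path: str, out_path: str, mark: str = "(c) example") -> None:
        base = Image.open(src_path).convert("RGBA")
        # Transparent overlay the same size as the image.
        overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        font = ImageFont.load_default()
        step = 120  # spacing between repeated marks, in pixels
        for y in range(0, base.height, step):
            for x in range(0, base.width, step):
                draw.text((x, y), mark, font=font, fill=(255, 255, 255, 64))
        Image.alpha_composite(base, overlay).convert("RGB").save(out_path)

    watermark("photo.jpg", "photo_marked.jpg")
    ```

    The usual trade-off applies: denser tiling and higher opacity are harder to scrub out but uglier, which is exactly the “not worth the trouble” calculus above.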





  • As an Mbin user, I appreciate that he was in the right place at the right time, even if his coding wasn’t fully “ready” for the sudden task and he couldn’t continue the work himself. That he made it open source for others to take and run with made a huge difference. Glad he’s doing okay.




  • That’s a reasonable definition. It also pushes things closer to what we think we can do now, since by the same logic a slower AGI would equal a person, and a cluster of them working a single issue would be better than one. The G (general) is the key part that changes things, no matter the speed, and we’re not there. LLMs are general in many ways but lack the I to spark anything from it; they just simulate it by doing exactly what you describe: finding the best matches in their training data much faster, and sometimes appearing to have reasoned out the response.

    ASI differs by definition only in scale. We as humans can’t have any idea what an ASI would be like, other than far superior to a human for whatever reasons. If it’s only speed, that’s enough. It certainly could become more than just faster though, and that combined with speed… naysayers had better hope they’re right about the impossibilities, but how can they know for sure about something we wouldn’t be able to grasp if it existed?


  • I doubt the few who are calling for a slowdown or an all-out ban on further AI work are trying to profit from any success they have. The funny thing is, we won’t know we’ve hit the point of even just AGI until we’re past it, and in theory AGI will quickly become ASI simply because it’s the next step once that point is reached. So anyone saying AGI is here or almost here is just speculating, as is anyone who says it’s not near or won’t ever happen.

    The only thing possibly worse than reaching the AGI/ASI point unprepared might be not getting there, but instead creating tools that simulate a lot of its features and all of its dangers, and ignorantly using them without any caution. Oh look, we’re there already, and doing a terrible job of being cautious, as we usually are with new tech.