• Derpenheim@lemmy.zip · 10 hours ago

    My knee-jerk reaction is no, because fuck AI, but LLMs are literally made to parse vast amounts of text quickly. The analysis and corrections need to be done manually, but finding these errors is exactly what they were originally built to do.

    • CptBread@lemmy.world · 4 hours ago

      Well, it could end up overloading volunteers with reports, which would be especially bad if the false-positive rate is high.

  • FriendBesto@lemmy.ml · 8 hours ago

    I watch a YT channel that researches the history of Wales, and on that somewhat narrow topic alone, the host has found some ridiculous mistakes on Wikipedia. There are tons, but few people notice because they lack the knowledge or background to see how wrong they are. AI will surely make that problem worse. I have caught ChatGPT being wrong numerous times on topics within my wheelhouse. When I tell it it’s wrong, it “apologizes,” corrects itself, and just adds what I told it. Well, if it could find the right data all along, why did it have to wait until I corrected it? If kids use this for school, they are so fucked.

    Who wants to put glue on their pizza?

    • TheLeadenSea@sh.itjust.works · 11 hours ago

      I know everyone on Lemmy hates LLMs, but analysing large amounts of text to find inconsistencies is actually something they’re good at. Not correcting them, of course; that can be left to humans. Just finding them.
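
      A minimal sketch of that “flag, don’t fix” split, assuming the OpenAI Python client; the model name and prompt are illustrative, not any existing Wikipedia tooling:

      ```python
      # "Flag, don't fix": ask an LLM to list suspected inconsistencies in a
      # chunk of article text and hand the list to a human reviewer.
      # Nothing here edits anything automatically.
      from openai import OpenAI

      client = OpenAI()  # expects OPENAI_API_KEY in the environment

      PROMPT = (
          "You are helping human editors triage an encyclopedia article. "
          "List statements that look internally inconsistent, contradictory, "
          "or factually dubious, each with a one-line reason. Do NOT rewrite "
          "or correct anything; flagging only."
      )

      def flag_inconsistencies(article_text: str) -> str:
          """Return a plain-text list of suspected problems for human review."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model choice
              messages=[
                  {"role": "system", "content": PROMPT},
                  {"role": "user", "content": article_text},
              ],
          )
          return response.choices[0].message.content

      if __name__ == "__main__":
          with open("article.txt") as f:
              print(flag_inconsistencies(f.read()))  # a human reads this, not a bot
      ```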

        • artyom@piefed.social · 4 hours ago

          That’s why you have to review the output manually. The biggest problem with LLMs is misuse: people just publish their outputs without ever checking their validity.