• rah@hilariouschaos.com
    5 hours ago

    LLM limitations like “they only predict the next token,” and other claims that have already been falsified

    What do LLMs do beyond predicting the next token?

    • kromem@lemmy.worldOP
      1 hour ago

      A few months back it was found that when writing rhyming couplets, the model had already selected the second line’s rhyming word while it was still predicting the first word of that line. In other words, the model was planning the final rhyme tokens at least a full line ahead, rather than only choosing the rhyme once it arrived at that token.

      It’s probably wise to consider this finding in concert with the streetlight effect.