Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

  • Womble@lemmy.world
    11 months ago

    Yes, of course it works on similarities; I haven’t disputed that. My point was that the transformations of the token vectors are a transfer of information, and that this information is not lost as things move out of the context window. It may slowly decohere over time if it is not reinforced, but the model does not immediately forget things as they move out of context, as you originally claimed.
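
    To make that concrete, here is a toy single-head attention step in NumPy. The vectors, sizes, and example tokens are all made up, and a real model uses learned query/key/value projections and many heads, but the mechanism is the same: every output vector is a weighted blend of every input vector, so information genuinely moves between positions.

    ```python
    import numpy as np

    # Toy self-attention over 4 token vectors, e.g.
    # ["we", "discussed", "chianti", "wine"]. Made-up numbers.
    d = 8
    rng = np.random.default_rng(0)
    tokens = rng.normal(size=(4, d))

    scores = tokens @ tokens.T / np.sqrt(d)        # pairwise similarity
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax

    mixed = weights @ tokens  # row i is now a blend of ALL rows

    # The vector at the "wine" position now carries a fraction of the
    # "chianti" vector, so later layers can use that information even
    # after the "chianti" token itself stops being attended to.
    print(weights.round(2))
    ```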

    • Zeth0s@lemmy.world
      11 months ago

      It does, as the model only works with a well-defined chunk of tokens of a given length; everything before that is lost. Clearly some of the information from the earlier context survives within that chunk.

      But say I am talking about wine and at some point I mention Chianti. The chatbot and I then go on discussing for over 4k tokens (I am using ChatGPT as an example) without mentioning Chianti again. After that, the chatbot will still know we are discussing wine, but it won’t know we ever covered Chianti.

      This is what I meant.
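
      The simplest version of that hard cutoff looks something like this (a toy sketch; real frontends count tokens rather than words and may handle history differently):

      ```python
      # Once the conversation exceeds the window, the oldest
      # tokens are dropped before the model ever sees them.
      max_tokens = 16  # stand-in for e.g. 4k

      conversation = ("let's talk about wine , I like chianti "
                      + "blah " * 20  # long digression
                      + "recommend me a bottle").split()

      visible = conversation[-max_tokens:]  # everything earlier is gone
      print("chianti" in visible)           # False: the word itself fell out
      ```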

      • Womble@lemmy.world
        11 months ago

        I’m only going to reply this one more time, then I’m done here, as we are going round in circles. I’m saying that is not what happens: the attention layers would link ‘Chianti’ and ‘wine’ together in that case and move information between them. So even after ‘Chianti’ has dropped out of the context window, the model is more likely to pick Chianti than Merlot when it needs a type of wine.
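
        A toy version of that (orthogonal stand-in vectors and a made-up attention weight, nothing from the real model): after ‘wine’ absorbs a fraction of ‘chianti’ in one attention step, you can delete the ‘chianti’ token entirely and the surviving ‘wine’ vector still sits closer to chianti than to merlot.

        ```python
        import numpy as np

        # Stand-in embeddings: three orthogonal directions.
        chianti = np.array([1.0, 0.0, 0.0])
        merlot  = np.array([0.0, 1.0, 0.0])
        wine    = np.array([0.0, 0.0, 1.0])

        # One attention-style update: "wine" absorbs some "chianti".
        alpha = 0.4  # made-up attention weight
        wine_mixed = (1 - alpha) * wine + alpha * chianti

        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        # "chianti" is now out of the window, but its trace in
        # wine_mixed still biases the comparison in its favour.
        print(cos(wine_mixed, chianti) > cos(wine_mixed, merlot))  # True
        ```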