By June, “for reasons that are not clear,” ChatGPT stopped showing its step-by-step reasoning.

  • Captain Janeway@lemmy.world · 1 year ago

    I don’t get it. I thought these models were “locked”. Shouldn’t the same input produce near-identical output? I know the algorithm has some fuzzing to help produce variation. But ultimately it shouldn’t degrade, right?
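
    The “fuzzing” here is sampling temperature. Below is a minimal sketch in Python (toy logits, not OpenAI’s actual decoder) of why temperature 0 gives repeatable output while anything higher adds variation:

        import numpy as np

        rng = np.random.default_rng()

        def sample_next_token(logits, temperature=1.0):
            """Pick the next token ID from raw model scores (logits)."""
            logits = np.asarray(logits, dtype=float)
            if temperature == 0:
                # Deterministic: same input always gives the same token.
                return int(np.argmax(logits))
            scaled = logits / temperature
            scaled = scaled - scaled.max()   # for numerical stability
            probs = np.exp(scaled)
            probs = probs / probs.sum()
            # Random draw: repeated calls can give different tokens.
            return int(rng.choice(len(probs), p=probs))

        logits = [2.0, 1.5, 0.3]  # toy scores for three candidate tokens
        print([sample_next_token(logits, temperature=0.0) for _ in range(5)])  # always [0, 0, 0, 0, 0]
        print([sample_next_token(logits, temperature=1.0) for _ in range(5)])  # e.g. [0, 1, 0, 0, 2]

    Either way, sampling only adds variation around fixed weights; it doesn’t explain a sustained drop in quality, which is the point of the question.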

    • dave@feddit.uk · 1 year ago

      The big pre-training is pretty much fixed. The fine-tuning is continuously being tweaked and, as shown, can have dramatic effects on the results.

      The model itself just does what it does. It is, in effect, an ‘internet completer’. But if you don’t want it to just happily complete what it found on the internet (homophobia, racism, and all), you have to put extra layers in to avoid that. And those layers are somewhat hand-crafted, sometimes conflicting, and therefore unlikely to give everyone what they consider to be excellent results.
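
      A toy sketch of that division of labour, assuming PyTorch and entirely hypothetical names (real systems use RLHF and far more elaborate filtering): a frozen pre-trained base, a small fine-tuned part that keeps changing, and a hand-crafted filter on top:

          import torch
          import torch.nn as nn

          # The big pre-trained base: weights frozen, never updated again.
          base = nn.TransformerEncoder(
              nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
              num_layers=2,
          )
          for p in base.parameters():
              p.requires_grad = False

          # The fine-tuned part: small next to the base, continuously tweaked.
          head = nn.Linear(64, 50_000)  # hidden state -> vocabulary scores
          optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)  # only the head trains

          # One of the hand-crafted layers on top: a crude blocklist filter.
          BLOCKLIST = {"some slur", "another slur"}  # hypothetical, inevitably incomplete

          def filter_completion(text: str) -> str:
              if any(bad in text.lower() for bad in BLOCKLIST):
                  return "[response withheld]"  # conflicts with "just complete the internet"
              return text

      Tweaking the head or the filter changes behaviour visibly even though the base never moves, which is how a “locked” model can still drift.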

      • Captain Janeway@lemmy.world · 1 year ago

        Ok, but regardless, they can just turn back the clock to when it performed better, right? Use the parameters that were set two months ago? Or is it impossible to roll those back?
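
        Mechanically it is possible: fine-tuned weights are just checkpoint files, and rolling back means reloading an older one, as in this sketch (stand-in model, hypothetical file names). OpenAI has even exposed dated snapshots such as gpt-3.5-turbo-0301; whether an old snapshot stays available is a product decision, not a technical limit.

            import torch
            import torch.nn as nn

            model = nn.Linear(8, 8)  # stand-in for the fine-tuned layers

            # Snapshot the weights as they were two months ago (path is hypothetical):
            torch.save(model.state_dict(), "finetune-2023-03.pt")

            # ...two months of further tweaking...
            with torch.no_grad():
                model.weight.add_(0.1 * torch.randn_like(model.weight))

            # "Turning back the clock" is just reloading the old checkpoint:
            model.load_state_dict(torch.load("finetune-2023-03.pt"))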

        • dave@feddit.uk · 1 year ago

          Better for one obscure use case, or just ‘better’? That’s the real issue here. OpenAI have an agenda (publicly, a helpful assistant; privately, who knows…). They’re not really interested in a system that can identify prime numbers.