• Prunebutt@slrpnk.net · 3 days ago

    Whenever any advance is made in AI, AI critics redefine AI so it’s not been achieved yet by their definition.

    That stems from the fact that AI is an ill-defined term with no actual meaning. Before Google Maps became popular, any route-finding algorithm utilizing A* was considered “AI”.
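    (For context on that claim: A* is a plain graph-search algorithm. A minimal sketch on a hypothetical grid, not anything from this thread, shows there is nothing especially “intelligent” about it:)

    ```python
    import heapq

    def a_star(grid, start, goal):
        """Minimal A* on a 4-connected grid; cells equal to 1 are walls.

        Returns the path from start to goal as a list of (row, col), or None.
        """
        def h(p):
            # Manhattan-distance heuristic: admissible on a 4-connected grid
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        rows, cols = len(grid), len(grid[0])
        open_heap = [(h(start), start)]   # priority queue ordered by f = g + h
        came_from = {}                    # best-known predecessor of each cell
        g = {start: 0}                    # cheapest known cost from start
        while open_heap:
            _, cur = heapq.heappop(open_heap)
            if cur == goal:
                # Walk predecessors back to start and reverse
                path = [cur]
                while cur in came_from:
                    cur = came_from[cur]
                    path.append(cur)
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    tentative = g[cur] + 1
                    if tentative < g.get(nxt, float("inf")):
                        came_from[nxt] = cur
                        g[nxt] = tentative
                        heapq.heappush(open_heap, (tentative + h(nxt), nxt))
        return None
    ```

    It’s heuristics and bookkeeping all the way down, which is exactly why the “AI” label used to stick to it.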

    And the second comment about LLMs being parrots arises from a misunderstanding of how LLMs work.

    Bullshit. These people know exactly how LLMs work.

    LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.

    • lunarul@lemmy.world · 3 days ago

      LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.

      Even if (and that’s a big if) an AGI is going to be achieved at some point, there will be people calling it parroting by that definition. That’s the Chinese room argument.

        • lunarul@lemmy.world · 2 days ago

          Me? How can I move goalposts in a single sentence? We’ve had no previous conversation… And I’m not agreeing with the previous poster either…

          • Prunebutt@slrpnk.net · 2 days ago

            By entering the discussion, you also engaged with the previous context. The discussion was about LLMs being parrots.

            • lunarul@lemmy.world · 2 days ago

              And the argument was whether there’s meaning behind what they generate. That argument applies to AGIs too. It’s a deeply debated philosophical question. What is meaning? Is our own thought pattern deterministic, and if it is, how do we know there’s any meaning behind our own actions?

              • Prunebutt@slrpnk.net · 2 days ago

                The burden of proof lies on the people making claims about intelligence. “AI” pundits have supplied nothing but marketing hype.

    • fartsparkles@sh.itjust.works · 3 days ago

      It’s not that it has no meaning, it’s that the meaning has become overloaded.

      That’s why the term “Artificial General Intelligence” came into use to denote an artificial intelligence that surpasses human capabilities across a wide range of tasks. A* is ultimately narrow AI. So are LLMs.

      • Prunebutt@slrpnk.net · 2 days ago

        AI is a marketing buzzword. When someone claims that so-called “AGI” is close, they’re either doing marketing or falling for marketing.

        Since you didn’t address the “parroting” part, I’m assuming that you retract your point.