Welcome to today’s daily kōrero!

Anyone can make the thread, first in first served. If you are here on a day and there’s no daily thread, feel free to create it!

Anyway, it’s just a chance to talk about your day, what you have planned, what you have done, etc.

So, how’s it going?

  • @DaveOPMA
    7 months ago

    Well now you mention it, how come AI is so crap in Star Trek and most other TV set in the future? Other than Data and the occasional other AI cameo, everything is surprisingly manual.

    If it’s possible to create an AI better than a person, and that AI creates an AI even better, and this repeats, are there still going to be jobs that only humans can do?

    • @absGeekNZ
      7 months ago

      What you are referring to is an AI superintelligence; the exponential growth is part of it.

      As for science fiction, AI superintelligence makes humans irrelevant.

      • @DaveOPMA
        7 months ago

        Haha yeah, I guess in a world with AI superintelligence you don’t need any humans, and then the show isn’t very interesting.

        Which raises the next question: in a galaxy with 100 billion stars, why hasn’t life on some planet, one that evolved half a billion or a billion years before us, managed to make a self-replicating AI explorer that explores the galaxy? Maybe a superintelligent AI has no need to explore the galaxy?

        • @absGeekNZ
          7 months ago

          That is an interesting question in its own right.

          There are lots of theories on this:

          • From the mundane: maybe we are the first.
          • To the exotic: the “zoo” hypothesis holds that there is at least one group keeping us “blind” to the real universe by manipulating our measurements / ability to measure.

          Even if no “far future tech” is available, just fusion-based propulsion (near-future tech) could colonize the galaxy in a few million years, which, considering the age of the universe/galaxy, is an extremely short time.
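          To sanity-check the “few million years” figure, here is a rough back-of-envelope sketch. The numbers here (galaxy diameter, probe speed, hop distance, settling pause) are my own illustrative assumptions, not anything from the thread:

```python
# Back-of-envelope check of the "few million years" colonization claim.
# All constants below are illustrative assumptions: a ~100,000 light-year
# galaxy, fusion probes at 5% of light speed, ~10 light-year hops between
# stars, and a 500-year pause per hop to build the next wave of probes.

GALAXY_DIAMETER_LY = 100_000
PROBE_SPEED_C = 0.05          # probe speed as a fraction of light speed
HOP_LY = 10                   # typical distance between neighbouring stars
PAUSE_YEARS = 500             # time to industrialise and launch again

hops = GALAXY_DIAMETER_LY / HOP_LY
travel_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C   # pure flight time
pause_years = hops * PAUSE_YEARS                    # cumulative settling time
total_years = travel_years + pause_years

print(f"{total_years:,.0f} years")  # 7,000,000 years
```

          Even with a generous pause at every single hop, the total stays in the single-digit millions of years, a blink next to the galaxy’s roughly thirteen-billion-year age.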

          • @DaveOPMA
            7 months ago

            Yeah, I love the Fermi paradox. What if AI is the great filter? Civilizations eventually build AI that can build better versions of itself, and the result is always that the AI kills the civilization (or some equivalent - say, people stop knowing how things work, the AI eventually breaks in some way, then people can’t survive in the world built for AI).

            I like the Dark Forest idea too. It’s from the book The Dark Forest, the sequel to The Three Body Problem, but knowing about it might spoil the book so I don’t want to explain it here.

            • @absGeekNZ
              7 months ago

              I know about the dark forest idea, but it is a little flawed (along with most of the solutions): it is predicated on the assumption that ALL civilizations follow the script.

              • @DaveOPMA
                7 months ago

                No, it doesn’t require that, because as soon as one doesn’t follow the script, they poke their metaphorical head up and BAM, get them before they get you. No one will have their head up for long, because they get wiped out.

                • @absGeekNZ
                  7 months ago

                  This is the case, but waging interstellar war will necessarily reveal your position to the greater galaxy, in which case you have “poked your head up”. If you assume that there are more than two such civilizations, then the war continues until there is only one.

                  At some point there are only two left; it is unlikely that both will be eliminated to the point that neither can ever rise again.

                  Run the thought experiment: assume you are civilization C, and you detect that somewhere “near by” a civilization (Civ A) sends out a signal. A short time later a great war breaks out and Civ A is utterly eliminated; you detect slagged planets and exploded moons. Whilst you could not detect Civ B directly, you know that Civ B is out there somewhere; you know the time delay, and thus can estimate the probable radius within which Civ B could exist. You step up your passive detection efforts, and eventually (hundreds of years later) you find Civ B. Now you know that they are doomed, but you need to ensure you don’t meet the same fate. You also know that another civilization may have done exactly the same thing, and is watching Civ B for any sudden change. But you can’t let a known civilization exist when you know that they may find you at any time, and that they will eliminate you as soon as they find you.
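                  The radius estimate in that thought experiment can be sketched with simple light-travel geometry. My assumptions (not the commenter’s): Civ A’s signal and the light of its destruction both travel at c, while Civ B’s strike travels at some fraction v of c. Then B received the signal at distance d, its strike took d/v years to arrive back, and the delay you observe between signal and destruction is d·(1 + 1/v):

```python
# Upper bound on the attacker's distance from the destroyed civilization,
# given the observed delay between its signal and its destruction.
# observed_delay = d + d/v  =>  d = observed_delay / (1 + 1/v)

def max_attacker_distance_ly(observed_delay_years: float,
                             strike_speed_c: float) -> float:
    """Bound on Civ B's distance from Civ A, in light-years."""
    return observed_delay_years / (1 + 1 / strike_speed_c)

# If the war appeared to break out 100 years after the signal and the
# strike travelled at half light speed:
print(max_attacker_distance_ly(100, 0.5))  # ~33.3 light-years
```

                  Since the strike can move at no more than light speed, Civ B must sit within half the observed delay (in light-years) of Civ A’s position, which is exactly the search radius the thought experiment relies on.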

                  • @DaveOPMA
                    7 months ago

                    Well, one interesting thing to add to the mix is that signals (like, say, wifi) don’t actually last that long. They dissipate to below detectable levels pretty quickly, coming to look like just part of the cosmic microwave background.

                    Therefore, if that’s your method of detection, you can’t really detect very far across the galaxy; maybe a few hundred or few thousand light years, even with the best technology we could assume would exist in the near-mid future. And the Milky Way is about 90k light years across. And we are on the end of an arm, in a sparse area of the galaxy. Probably the bulk of life is with the bulk of stars towards the centre, which we have pretty much no way of observing at a planet level.

                    So probably the inability to know what others are doing is a big reason why the dark forest doesn’t really work.
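                    The fall-off described above is just the inverse-square law. A minimal sketch, assuming a hypothetical isotropic 1 MW transmitter (already far stronger than wifi) and ignoring real-world factors like antenna gain, receiver bandwidth, and integration time:

```python
import math

# How received flux from an unfocused transmitter drops with distance.
LY_IN_M = 9.461e15          # metres per light-year
POWER_W = 1e6               # assumed 1 MW isotropic transmitter

def flux_w_per_m2(distance_ly: float) -> float:
    """Received flux from an isotropic source, spread over a sphere."""
    r = distance_ly * LY_IN_M
    return POWER_W / (4 * math.pi * r ** 2)

for d in (100, 1_000, 26_000):   # 26,000 ly is roughly the galactic centre
    print(f"{d:>6} ly: {flux_w_per_m2(d):.3e} W/m^2")
```

                    Every factor of ten in distance costs a factor of a hundred in flux, which is why unfocused leakage fades into the background long before it could cross the galaxy.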

        • @deadbeef79000
          7 months ago

          This is part of The Fermi Paradox.

          TL;DR: If the universe is ancient and infinite, where is everybody?

          Super AI is one possible answer: everyone creates it and is subsumed by it.

          If you’ve got effectively unlimited computational power, you just simulate everything rather than having to explore it.

          • @DaveOPMA
            7 months ago

            Whoops I sorta responded to the wrong post with my thoughts but yes, maybe life loses all meaning if you have super AI. Or maybe they eventually put you in a version of the matrix after reasoning happiness is the only meaning of life and so they can optimise our happiness this way.

            Maybe it has already happened.