• thisfro@slrpnk.net · 11 months ago

    Who is going to trust an AI so much that they won’t risk it making coding errors?

    Sadly, too many

      • Jeena@jemmy.jeena.net · 11 months ago

        I don’t believe it. If it’s good enough, they will ship and make money, and those who keep people on it will be so slow that they’ll simply be outperformed by those who don’t.

        • Flying Squid@lemmy.world · 11 months ago

          If your code doesn’t work because you rely entirely on an AI to do it, you don’t have a business you can run unless you want to go back to paper and pencil.

          • Jeena@jemmy.jeena.net · 11 months ago

            If your code doesn’t work because you rely on humans understanding it, you don’t have a business you can run either. We’re already at the point where humans have no idea why the computer makes this or that decision, because it’s so complex, especially with all the machine learning and complex training data. Let’s not pretend it will get less complex with time.

            • Flying Squid@lemmy.world · 11 months ago

              So your argument is that people will rely on AI entirely without building in any redundancy, unlike now, where they have more than one human to check for these issues because humans make coding errors?

              • enkers@sh.itjust.works · 11 months ago (edited)

                I kinda agree with them. Coding is already an abstraction: the average developer has very little idea what machine code their compiler actually produces, and for the most part they don’t need to care. Feeding an AI a specification is just a higher level of abstraction.

                For now, we’ll need people to check that AI produces code that does what we expect, but I believe at some point we’ll mostly take it for granted that they just do.
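The “check that AI-produced code does what we expect” step can be sketched as testing against a specification rather than reading the implementation. This is a minimal illustration, not anyone’s actual workflow; `generated_sort` is a stand-in for AI-produced code and `satisfies_spec` is a hypothetical checker, both named here for illustration only:

```python
def generated_sort(xs):
    # Stand-in for AI-generated code; the reviewer never inspects this body,
    # only its observable behavior.
    return sorted(xs)

def satisfies_spec(fn, cases):
    """Verify fn against the spec: output is in order and is a permutation of the input."""
    for xs in cases:
        out = fn(xs)
        assert out == sorted(xs), f"not sorted correctly for {xs}"
        assert sorted(out) == sorted(xs), f"output is not a permutation of {xs}"
    return True

print(satisfies_spec(generated_sort, [[], [3, 1, 2], [5, 5, 1]]))  # True
```

The point of the sketch: trust shifts from reviewing the code to trusting the spec and the test cases, which is the same move compilers made for machine code.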

              • Jeena@jemmy.jeena.net · 11 months ago

                My argument is that already today no human is able to check it, and nobody does, when it comes to decision-making models, for example whether the car should go left or right around an obstacle. And over time we will have fewer decisions made by straightforward classical programming and more and more made by models with hundreds or thousands of sensor inputs.