Self-driving tech is widely distrusted by the public, and Tesla’s huge Autopilot recall and Cruise’s scandals don’t seem to have helped.

  • @Fizz
    0 • 5 months ago

    Self-driving tech is pretty good and getting better at an insane rate. I think people only distrust it because of bad media reporting.

    • @[email protected]
      5 • 5 months ago

      I don’t trust it because Musk lies all the time. It may work fine, but you can’t tell lies like he does and expect people to believe you this time.

      • BruceTwarzen
        2 • 5 months ago

        I don’t trust it because they can’t even get the car part of self driving car right.

      • @Fizz
        1 • 5 months ago

        Self-driving tech isn’t only Tesla. There are many implementations, and they’re pretty amazing in my opinion.

        • @[email protected]
          2 • 5 months ago

          Sure, but it’s impressive in the same way that a dancing bear is impressive - and it’s not because the bear dances well.

          Even the best self-driving implementations are limited to warm, sunny days in well-mapped areas.

          • @[email protected]
            0 • 5 months ago

            Actually, it can work pretty well. My Comma 3X could see and navigate the road better than I could in heavy rain on the highway. There are many different levels of maturity here, but even lane-keep assist makes driving easier and is useful for that.

            You’re still right to distrust these systems, but that doesn’t mean that they are bad.

            • @[email protected]
              2 • 5 months ago

              Oh yeah, it can work great. And it can work terribly. We haven’t hit the point where it’s reliably “great” though. And that makes it rather more dangerous to me, since it builds an unwarranted sense of security (not that I’m saying you disagree; I’m just expanding on my distrust).

              One of the major problems is that the failure modes can be very different from how a person fails. Like when you see a car just sitting in the middle of a road because it can’t figure out what to do for some reason. A person you could wave on. An AI you can’t. We understand human behavior but can’t really understand the AI decision-making process.

              This is why I can’t quite get behind the “all AI needs to do is be slightly better than people” argument. On one hand, from a purely statistical POV, I get it. But if self-driving cars were “basically perfect” except that every now and then one of them randomly exploded (still killing fewer people than auto accidents do), would people be okay with that? Automobile accidents aren’t truly “random” like that.
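The statistical point in that last comment can be made concrete with a toy model (all numbers below are hypothetical, chosen purely for illustration): under a simple Bernoulli incident model, two systems can have identical *expected* fatalities per block of driving while one concentrates its deaths into rare, catastrophic failures.

```python
# Toy sketch (hypothetical numbers): same expected deaths, very different
# failure modes. For an incident that occurs with probability p and kills
# d people, expected deaths are p*d and the variance is p*(1-p)*d**2.

def risk_profile(p_incident: float, deaths_per_incident: float):
    """Return (expected deaths, variance) per block of driving."""
    mean = p_incident * deaths_per_incident
    variance = p_incident * (1 - p_incident) * deaths_per_incident ** 2
    return mean, variance

# "Human-like" failures: frequent small incidents (hypothetical rates).
human = risk_profile(0.15, 10)
# "Random explosion" failures: rare but catastrophic, same expected toll.
av = risk_profile(0.0015, 1000)

print(human)  # roughly (1.5, 12.75)
print(av)     # roughly (1.5, 1497.75) — over 100x the variance
```

The means match, but the variance (and thus the shape of the risk people actually experience) differs by two orders of magnitude, which is one way to formalize why aggregate "slightly better than humans" statistics don't settle the acceptability question.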