• Mr_Will@feddit.uk

    The big question is how? The algorithms aren’t the root cause of the problem; they’re just amplifying natural human behaviour.

    People have always fallen down these rabbit holes and any algorithm based on predicting what a person will be interested in will suffer a similar problem. How can you regulate what topics a person is interested in?
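
    To illustrate the point, here’s a deliberately naive sketch of an interest-predicting recommender (hypothetical code, not any real platform’s). Even this trivial version drifts towards whatever the user already clicks on:

    ```python
    from collections import Counter

    def recommend(clicked_topics: list[str], catalogue: dict[str, str], k: int = 3) -> list[str]:
        """Naive interest prediction: rank items by how often the user has
        clicked on their topic before. Every click narrows the next feed
        further towards the same topics - the rabbit-hole feedback loop."""
        counts = Counter(clicked_topics)
        return sorted(catalogue, key=lambda item: counts[catalogue[item]], reverse=True)[:k]

    # Two "conspiracy" clicks already push both conspiracy videos to the top.
    catalogue = {"vid1": "cooking", "vid2": "conspiracy", "vid3": "conspiracy", "vid4": "gardening"}
    print(recommend(["conspiracy", "conspiracy", "cooking"], catalogue))  # ['vid2', 'vid3', 'vid1']
    ```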

    • Dojan@lemmy.world

      Do we need algorithms that predict what we’re interested in though? At what point do we go “ah this is actually causing more trouble than it’s worth?”

      I’d be perfectly fine browsing content by category rather than having it fed to me based on some sort of black-box weighting system with no clear way for me to correct it. I mean, it works great here on Lemmy.

      • kenbw2@lemmy.world

        Lemmy literally has an algorithm to rank posts

        Or do you sort your posts by new?

        What would you propose for YouTube?

      • Mr_Will@feddit.uk

        Do you ever sort posts by “hot”, “active” or even “top 6 hours”? They’re all algorithms that predict what you’re interested in. Less complex than something like YouTube or Instagram, but the same core principle.
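
        At their core they’re all time-decayed popularity scores; a minimal sketch in that spirit (an approximation of the idea, not Lemmy’s exact formula):

        ```python
        import math
        from datetime import datetime, timezone

        def hot_rank(score: int, published: datetime) -> float:
            """Time-decayed popularity in the spirit of a "hot" sort:
            votes push a post up, age steadily pulls it back down."""
            hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
            return math.log(max(1, score + 3)) / (hours + 2) ** 1.8
        ```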

        The amount of content published on the internet each day makes some kind of sorting necessary. Browsing YouTube by “new” would be a cluttered mess, even with fairly narrow categories. Over 11,000 hours of new video are posted every hour - we need some way to automatically sort the wheat from the chaff, and that means some sort of algorithm.

        So how do we build an algorithm that delivers what we want, without giving people too much of what they want if they want something potentially harmful? As far as I know, nobody has found a good answer to that.

        • Dojan@lemmy.world

          Well I mean obviously I’m not against algorithms in general. They’re just mathematical functions to achieve a goal. Practically every HTTPS request uses both encryption and compression algorithms, and that’s highly useful.

          I’m questioning the usefulness of profiling and targeting users with specific content. The Lemmy algorithm isn’t that complex: it doesn’t build a profile on you, it just goes by general user engagement. That’s fine. Further, by virtue of being open source, Lemmy doesn’t have a “black box”; its ranking is open for anyone to view and analyse.
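
          To sketch that distinction (hypothetical code, neither Lemmy’s nor YouTube’s actual implementation):

          ```python
          from dataclasses import dataclass

          @dataclass
          class Post:
              score: int        # aggregate votes - the same number for every viewer
              age_hours: float
              topic: str

          def community_rank(post: Post) -> float:
              """Engagement-only ranking: one shared ordering, no user profile."""
              return post.score / (post.age_hours + 2) ** 1.8

          def personalised_rank(post: Post, profile: dict[str, float]) -> float:
              """Profile-driven ranking: the same post scores differently per
              user, weighted by a profile built from their viewing history."""
              return community_rank(post) * profile.get(post.topic, 0.1)
          ```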

          Comparing Lemmy to YouTube/Instagram/Facebook/Twitter and the like makes for a rather poor comparison.

          • Mr_Will@feddit.uk

            Lemmy’s simpler algorithm still has the same problem though. That’s been seen time and time again on Reddit: humans will actively curate a feed of content they find engaging and avoid content they disagree with. This leads down exactly the same rabbit holes as letting an algorithm curate a personalised feed for them.

    • tony@lemmy.hoyle.me.uk

      My theory is that society has a suppressing effect on these things… It’s not nice to be a Nazi, or to mistreat people you don’t like, so these things get hidden.

      Algorithms do the opposite. Now someone with Nazi tendencies is surrounded by them and encouraged. Posts hating trans people get pushed by algorithms because they drive engagement (even if all the initial responses are negative, it’s still engagement to the algorithm, which will then boost the ‘popular’ post).
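
      That failure mode is easy to show in miniature. A toy engagement metric (hypothetical weights, but the incentive it captures is real) can’t tell outrage from approval:

      ```python
      def engagement(upvotes: int, downvotes: int, replies: int) -> int:
          """Toy engagement metric: every interaction counts as interest,
          so an angry reply is worth as much as an approving one."""
          return upvotes + downvotes + 2 * replies  # replies weighted higher

      # A hated post with 200 angry replies out-ranks a well-liked one.
      print(engagement(5, 80, 200))   # 485
      print(engagement(150, 2, 10))   # 172
      ```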

      Things like lemmy and mastodon don’t do that and end up nicer places as a result.

      • Mr_Will@feddit.uk

        You’re mostly right about society, but the problem is not algorithms, it’s echo chambers. The KKK wasn’t driven by an algorithm but still radicalised people in the same way - once you’re able to find a bubble within society that accepts your views, it’s very easy for your views to grow more extreme. Doesn’t matter whether that’s fascism, racism, communism, no-fap or hydrohomies - the mechanisms work the same way.

        Reddit was arguably no more algorithm-led than Lemmy or Mastodon, but that hasn’t prevented the rise of a whole list of hate-fuelled subs over there. The root problem is that people with Nazi tendencies find pro-Nazi content engaging. The algorithm isn’t pushing it upon them, it’s just delivering what they want.

      • Dojan@lemmy.world

        Thank you! I looked it up, and it sounds really interesting. Will have a deeper dive into it!

      • Mr_Will@feddit.uk

        Thanks for the recommendation; it looks interesting, but it sounds like it pretty much agrees with what I’m saying.

        Algorithms do what they are designed to do, but nobody knows exactly how society will be impacted by that. On the surface, providing people with a feed of information that matches their interests seems like a good idea. The problem is that people are often interested in divisive topics and in reinforcing their existing views, so anything that makes it easier for people to find these topics has a divisive and radicalising effect.