- cross-posted to:
- [email protected]
- [email protected]
Earlier this year, WIRED asked AI detection startup Pangram Labs to analyze Medium. It took a sampling of 274,466 recent posts over a six week period and estimated that over 47 percent were likely AI-generated. “This is a couple orders of magnitude more than what I see on the rest of the internet,” says Pangram CEO Max Spero. (The company’s analysis of one day of global news sites this summer found 7 percent as likely AI-generated.)
I just had one of these! Literally every image was AI generated, and everything read like it came from OpenAI. It was a Google search for something like “kubernetes custom deployment rules” and the result was something like “kubelat.medium.com”. They just take the most asked questions and generate entire articles about them.
I just went to the source and asked ChatGPT directly. I got a better answer anyway.
It was an SEO hellhole from the start, so this isn’t surprising.
Do Forbes next!
Is there a single good article on Forbes? It’s always fucking clickbait without actual content.
After all these years, I’m still a little confused about what Forbes is. It used to be a legitimate, even respected magazine. Now it’s a blog site full of self-important randos who escaped from their cages on LinkedIn.
There’s some sort of approval process, but it seems like its primary purpose is to inflate egos.
As of 2019 the company published 100 articles each day produced by 3,000 outside contributors who were paid little or nothing.[52] This business model, in place since 2010,[53] “changed their reputation from being a respectable business publication to a content farm”, according to Damon Kiesow, the Knight Chair in digital editing and producing at the University of Missouri School of Journalism.[52] Similarly, Harvard University’s Nieman Lab deemed Forbes “a platform for scams, grift, and bad journalism” as of 2022.[49]
They realized that they could just become an SEO farm/content mill and churn out absurd numbers of articles while paying people table scraps or nothing at all, and they’ve never changed.
How well does the “AI detection startup’s” product work? This is a big unsolved problem but I’d be hecka skeptical.
It doesn’t, and never will
That’s because of bots like you. (I kid to make a point.)
That’s exactly what a bot would say, to stay undetected.
That is why I liked the comparison with articles from 2018. Then you have comparable texts in the same format and can more easily figure out differences in your analysis.
If true, a jump from 3% to 40% is significant, to say the least.

@Black616Angel the numbers in the article are 7% for the pre-2018 corpus and 47% for the post-2018 corpus. That is from less than 1 in 10 to almost 1 in 2, or a coin toss…
In 2018, 3.4 percent were estimated as likely AI-generated.
For 2024, with a sampling of 473 articles published this year, it suspected that just over 40 percent were likely AI-generated.
My numbers were from the Originality AI part.
@Black616Angel yes, I’ve realized that and corrected my post while you responded 😉
Maybe the blurb was AI-generated? 🤦🏻‍♂️
The first person who develops a browser that effectively filters out AI results is going to do very well.
This could easily be done with AI. For a week or so, that is.
Shitty tech opinions were flooding Medium before, so it’s not much of a difference.
I think the difference is scale. Before, it was x% of humanity producing shitty opinions, where x < 100. Now it’s x% of humanity plus AI, where x is, say, 100,000% of humanity. I don’t think we’re currently equipped to separate the wheat from that much chaff.
Omg the number of times I’ve clicked on a Medium article in the last month and immediately knew it was AI is so frustrating!!! They aren’t even helpful articles, because you can tell there is no real understanding.
The best part about this is that new models will be trained on the garbage from old models, and eventually LLMs will just collapse into garbage factories. We’ll need filter mechanisms, just like in a Neal Stephenson book.
People learn and write program code with the help of AI. Let this sink in for a moment.
I’m in university and I’m hearing this more and more. I keep trying to guide folks away from it, but I also understand the appeal because an LLM can analyze the code in seconds and there’s no judgements made.
It’s not a good tool to rely on, but I’m hearing of more and more people relying on it as I progress.
The true final exam would be writing code on an airgapped system.
I’m going into my midterm in 30 minutes, where we will be desecrating the corpses of trees.
I knew it would be the first platform to go. The same goes for Substack; that’s next.
Perhaps, but I don’t read anything on Substack unless I’m subscribed. Reputation is the entire point on Substack; without it, the content gets no traffic.