So is this what Mozilla meant when they announced a privacy push back in February
How’s it compare to Greenshot?
California has pushed out badly worded laws in the past. Here’s a definition from the bill.
“Artificial intelligence model” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.
Tell me that wouldn’t also apply to a microwave oven.
After several years of using Linux for work and school, I made the leap to daily driving Linux on my personal computer. I stuck with it for two years. Hundreds of hours I sunk into an endless stream of inane troubleshooting. Linux preys on my desire to fix stuff and my insane belief that just one more change, suggested by just one more obscure forum post, will fix the issue.
… the lack of an increment operation, no “continue” instruction, and array indices starting from 1 instead of 0. These differences can be jarring
Understatement
It depends. It will not affect many of them until 2025, when enterprise support for v2 ends, and by then other arrangements and fixes might be in place. Brave in particular I would not worry about yet.
Something I often see missing from discussion on privacy is that it’s not always about you, the listener. Sometimes it’s about protecting the most vulnerable people around you. For example, someone escaping from domestic violence might have a different view on how their information is protected. People struggle to see the value in privacy because it’s not been a big problem for them personally or because they think it’s hopeless. An introduction to privacy in my view is all about teaching empathy, hope, and advocating for others.
Once they have that goal in mind, you can tie in how open source helps empower people to take back their privacy
I wonder how good this model would be at an obfuscated code challenge.
This is all they really said IMO:
My tendency these days is to try to use the term “machine learning” rather than AI
The initial results showed something that should have been obvious to anyone: *More data beats more parameters.*
That makes a lot of sense!
Might be factoring in more than just state income tax. There’s also sales tax, property tax, etc.
Purely speculation, but I wonder if this is a case of having some old, very low-quality photos and trying to enhance and upscale them for the show.
*Ten things that will pad out my list of generic rpg book topics. I definitely didn’t start with a clickable title and then fumble coming up with the ten things.
You can generate your own tracks using Bing Chat.
All the Suno tracks I’ve heard have a similar style. Very procedural and formulaic. Calling it AI seems like a stretch.
Relevant article: https://lemmy.ml/post/12857742
Prompt engineering is a thing, but I wouldn’t say it’s much of a job title. There are people doing it: optimizing system prompts, preprocessing and postprocessing. LLMs are just one piece of a complex pipeline, and someone has to build all that. Prompt engineering is part of the bootstrapping for making better LLMs, but this work is largely being done by data scientists who are at the forefront of understanding how AI works.
So is prompt engineering just typing questions? IDK. Who knows what those people mean when they say that, but whatever it’s called, there is a specialized field around improving AI tech, and prompt engineering is certainly a part of it.
Containers are a really useful security tool. The security provided depends on how the container is configured. For example, if you give the container bridged networking permissions (or whatever equivalent term your solution uses), you’re giving the container access to communicate with other devices on your local network. That’s the opposite of what you want if you’re trying to prevent an attacker from pivoting through your LAN.
Other threats just aren’t within the set of protections a container can provide. For example, if you wish to protect your Minecraft world from being griefed, the container won’t have any effect on that. Another example is hiding your IP.
Basically what I’m saying is that whenever you are looking at a security technology think about what guarantees it provides and realize that no single security measure provides protection against all threats.
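To make the networking point concrete, here’s a rough sketch using Docker as an example solution (the image name `my-minecraft-server` is hypothetical; the `--network` flags and the default Minecraft port 25565 are real):

```shell
# No network at all: most isolated, but players can't connect either
docker run --network none my-minecraft-server

# Bridge (Docker's default): container sits on a private NATed network,
# and you publish only the ports you choose to expose
docker run --network bridge -p 25565:25565 my-minecraft-server

# Host networking: no network isolation at all; the container shares
# the host's network stack and can reach everything on your LAN
docker run --network host my-minecraft-server
```

The middle option is usually the sensible trade-off: the game port is reachable, but a compromised container still can’t freely scan your LAN the way it could with host networking.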
You’re basically relying on the security of minecraft, and your ability to quickly patch. The Log4j exploit is one good example of the kind of threats you might face.
Another is just that revealing your IP can open an opportunity for various forms of harassment. Lots of us skate by on obscurity and luck without too many issues, but that’s not a very robust solution.
Nothing in the article corroborated the claim in the title that human intervention made things worse, just that the problem goes deeper.
Which comment?