It’s a safe bet that if you’ve put something on the internet, it’s been scraped by a bot for training by now. I don’t like that, for the record; I’m just saying I’m not surprised at this point. Companies are morally bankrupt.
I don’t know why everyone is shocked all of a sudden; there have been scraper bots collecting text for many years now, LONG before LLMs came onto the scene.
I agree, but it’s one thing if I post to public places like Lemmy or Reddit and it gets scraped.
It’s another thing if my private DMs or private channels are being scraped and put into a database that will most likely get outsourced for prepping the data for training.
Not only that, but the trained model will have internal knowledge of things that are sure to give any cybersecurity expert anxiety. Users who know how to manipulate the model could get it to divulge some of that information.
Stay away from proprietary crap like Discord, Slack, WhatsApp and Facebook Messenger. There are enough FOSS alternatives out there:
- You just want to message a friend/family member?
  - Signal is the way to go
- You need strong privacy/security/anonymity?
  - SimpleX
  - Session
  - Briar
  - I can’t really tell you which one is the best, since I’ve never used any of these (except for Session) for an extended period of time. Briar seems to be the best for anonymity, because it routes everything through the Tor network. SimpleX lets you host your own node, which is pretty cool.
- You want to host an online chatroom/community?
- You need to message your team at work?
- You want a Zoom alternative?
You forgot the most important word from the title:
Yuck
I also scan Slack messages and never really read them unless they’re about food in the office kitchen.
The AI is paying more attention to your Slack messages than you are.
So Slack is stealing trade secrets?
At first, all the companies were afraid of giving these models access, over trade secrets and security. But then they basically all met at the White House and agreed they would make way more fucking money stealing it than they would ever pay in restitution or damages to people and small businesses.
Suddenly everybody had a chatbot and generated art ready for commercial sale. They also had to make the shift quickly enough before official laws and protections (mostly from the EU) came in.
Now that AI is plateauing a bit, they must hurry to get valuated at 10 trillion dollars, get their energy needs subsidized, and have taxpayers invest in the nation’s energy requirements on their behalf.
We talk fairly openly about everything but passwords on slack…
it’s funny how the conventional wisdom at the end of the last decade was that Slack was preferred over simpler/free alternatives because of its UX. People hailed it for how simple and intuitive it was to use, etc.
5 or 6 years later, it has become a bloated piece of crap riddled with bugs. And the UI changes that come unannounced… it should be a criminal offense to change a UI through automated updates.
Anyway, here we are: companies have handed their data to this monster, and we’ll see how they react when the data gets misused. Hopefully that will be the beginning of the end for it.
I fucking hate Slack. I very rarely get any notification of new messages, and if I do, I have to restart the app to get them to actually show up.
I love Slack. But the only thing I can compare it with for corp use is Teams. So of course it’s amazing.
Sounds like a lot of this is for non-generative AI. It’s for dumb things like that frequently used emoji feature.
Knowing how the legal teams have worked at my tech companies, I’d bet that a lawyer updated the terms language to comply with privacy legislation, but they did a shit job and didn’t clarify what specifically was covered in the TOS. They were lazy and crafted something broad so they wouldn’t have to actually talk to the product or marketing people in their org.
What is it like to live in a place with privacy legislation? Here we must sell our healthcare data for food, and sell our food for healthcare.
Where do you live?
Sounds like 'murrica.
The more they push to train AI on our shitpostings on social networks, the more I’m certain we’re fucking doomed if their AI ever reaches consciousness.
We may very well be doomed if AI reaches consciousness, but I’m not quite convinced LLMs are the way to get there. Even if they were, and a model was trained solely on social media content, I still wouldn’t expect it to adopt the behaviour of your typical social media commenter. The toxic behaviour on social media is, in my view, almost solely driven by human ego and pettiness. It’s not obvious to me that an AI would care about things like winning arguments or coming up with snide remarks.

What I see as the most likely outcome is an endlessly patient, quite autistic-like being that’s balanced in its views and would most likely be pretty difficult to argue against. I doubt humans are anywhere near the far end of the intelligence spectrum, and something with information processing capability orders of magnitude greater than ours would more than likely not get caught up in things like confirmation bias, partisan thinking, motivated reasoning, being tossed around by emotions, cognitive dissonance, etc. Those are by definition human features.
It will have the potency of a god, and the knowledge of 4Chan.
May god have mercy on us all
So you’re saying we can leak company data through Slack soon?
Always have been, apparently