Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology, politics, and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.
Anything that pushes back against copyright is fine by me.
A local model is just a giant matrix of numbers, so as long as you’re running it locally you can be sure it’s not secretly recording anything or sending information to any outside source. Just make sure you trust the software that’s running it (there are plenty of open-source options for that which have nothing to do with China).
And since it’s an open weight model, any remaining reluctance to talk about whatever subject can be abliterated or fine-tuned away if it’s really a problem.
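For instance, here’s a minimal sketch of what “running it locally” looks like in practice, assuming you’ve already downloaded an open-weight model in GGUF format and have the open-source llama-cpp-python library installed (the model path is just a placeholder):

```python
# Minimal sketch: run an open-weight model entirely offline with llama-cpp-python.
# The model path is a placeholder - point it at whatever GGUF file you've downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-open-weight-model.Q4_K_M.gguf",  # local file, nothing fetched
    n_ctx=4096,     # context window
    verbose=False,
)

# Inference happens entirely on your own hardware; nothing leaves the machine
# unless the host software itself sends it - which is why trusting the runner
# matters more than trusting the weights.
out = llm("Q: What is the capital of France?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```

You can verify the "no phoning home" part yourself by yanking the network cable or watching the process with a firewall; the weights are inert data either way.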
I read a proposal a while back for using the Yellowstone magma chamber for geothermal power generation. It’s not currently in danger of erupting as a supervolcano, but the paper worked the numbers and showed that it would be feasible with realistic engineering to tap enough heat from the magma chamber to literally “defuse” it if it ever came to that. And turn a profit while doing so.
Americans use those so it’s already accounted for.
Except when they say something we don’t want to believe, of course.
It’s a blow to the big closed-source AI companies, sure, but hardly a knockout one. If a small company can use a million dollars to produce a neat model, perhaps a big company can use those same techniques and a billion dollars to produce a really neat model. Or at least build a lot more of the infrastructure that surrounds those models and makes use of them. Code Copilot isn’t just a raw LLM API; Microsoft is selling its integration into their whole coding ecosystem. They may have wasted some money on their current-generation AIs, but that’s just sunk cost. They’ve got more money to spend on future AIs.
The main problem will be if Western AI companies are prevented from adapting the techniques being used by these Chinese AI companies - if, for example, there are lots of onerous regulations on what training data can be used, or requirements for extreme “safety guardrails.” The United States seems likely to be getting rid of a lot of those sorts of obstructions over the next few years, though, so I wouldn’t count the West out yet.
I think it was the 1B model.
Well there you go, you took a jet ski and then complained that it was having difficulty climbing steep inclines in mountains.
Small models like that are not going to “know” much. Their purpose is generally to process whatever information you give them. For example, you could use one to quickly and cheaply categorize documents based on their contents, or as a natural-language interface for executing commands on other tools.
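As a rough illustration of the document-categorization case (the model name and categories here are placeholders, and this assumes the Hugging Face transformers library with some small instruction-tuned model downloaded locally):

```python
# Rough sketch: use a small local instruction-tuned model to sort documents
# into fixed categories. Model name and category list are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="some-small-1b-instruct-model")

CATEGORIES = ["invoice", "meeting notes", "support ticket", "other"]

def categorize(document: str) -> str:
    prompt = (
        "Classify the following document into exactly one of these categories: "
        + ", ".join(CATEGORIES) + ".\n\n"
        + document + "\n\nCategory:"
    )
    result = generator(prompt, max_new_tokens=5, do_sample=False)
    answer = result[0]["generated_text"][len(prompt):].strip().lower()
    # The model only has to echo back one of the labels it was handed -
    # no world knowledge required, which is why a tiny model suffices.
    return next((c for c in CATEGORIES if c in answer), "other")

print(categorize("Invoice #1234: 3x widgets @ $10, total due $30 by March 1."))
```

The point is that all the information needed is in the prompt; the small model is just doing cheap text-shuffling, not recalling facts.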
The specific subject that Triton is telling Ariel about is where babies come from.
The problem isn’t stuff going in, it’s the baby coming out.
Wait until she finds out how she’ll be doing it once she’s human. I suspect she’ll prefer this approach.
Oh, neat. The first one blew up the door, and then the second one literally flew inside and went down the hallway to reach the cache.
And sometimes that’s exactly what I want, too. I use LLMs like ChatGPT when brainstorming and fleshing out fictional scenarios for tabletop roleplaying games, for example, and in those situations coming up with plausible nonsense is specifically the job at hand. I wouldn’t want to go “ChatGPT, I need a description of what the interior of a wizard’s tower is like” and get the response “I don’t know what the interior of a wizard’s tower is like.”
Yup. Fortunately unsubscribing from politics subreddits is generally advisable whether one has been banned from them or not.
Being slightly wrong means more of an endorphin rush when people realize they can pounce on the flaw they’ve spotted, I guess.
Don’t sweat downvotes, they’re especially meaningless on the Fediverse. I happen to like a number of applications for AI technology and cryptocurrency, so I’ve certainly collected quite a few of those and I’m still doing okay. :)
There was a politics subreddit I was on that had a “downvoting is not allowed” rule. There’s literally no way to tell who’s downvoting on Reddit, or even if downvoting is happening if it’s not enough to go below 0 or trigger the “controversial” indicator.
I got permabanned from that subreddit when someone who’d said something offensive asked “why am I being downvoted???” and I tried to explain to them why that was the case. No trial, one million years dungeon, all modmail ignored. I guess they don’t get to enforce that rule often and so leapt at the opportunity to find an excuse.
Downvotes for not getting it right, I presume.
Which makes me concerned that the “Hole for Pepnis” answer has so many upvotes.
Those holes look open to me.
I recall reading once upon a time that the original idea for this exemption was that it was for literal scholars - a few hundred priestly intellectual sorts that were professional serious full-time Torah-studiers. But the exemption didn’t have any specific criteria listed for what that meant, so the ultra-orthodox all wound up saying “yeah, I study the Torah all day too, so I qualify.”
Ah, so it’s useless then.