Sure, but then the generator AI is no longer optimised to generate whatever you wanted initially, but to generate text that fools the detector network, thus making the original generator worse at its intended job.
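
To put some (entirely made-up) numbers on that, here’s a toy sketch of the split objective - no real model involved, the point is just that once the generator is also scored on fooling the detector, the output that used to be optimal stops winning:

```python
# Toy sketch of the split incentive (hypothetical scores, no real model):
# the generator used to pick whichever output scored best on its task,
# but once it is also rewarded for fooling the detector, a worse output wins.

candidates = {
    # text:                  (task_quality, detector_confidence_it_is_ai)
    "fluent on-topic reply":  (0.95, 0.90),  # best at the job, easily flagged
    "awkward evasive reply":  (0.60, 0.20),  # worse at the job, rarely flagged
}

FOOL_WEIGHT = 1.0  # how much evading the detector matters to the generator

def task_only(quality, _flagged):
    return quality

def combined(quality, flagged):
    # Reward task quality, penalise being flagged by the detector.
    return quality - FOOL_WEIGHT * flagged

print(max(candidates, key=lambda c: task_only(*candidates[c])))  # fluent on-topic reply
print(max(candidates, key=lambda c: combined(*candidates[c])))   # awkward evasive reply
```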
That’s a very silly take
Worst. Coup d’etat. Ever.
I assume you’re referring to couping, as opposed to storing chickens
Your statement may be true, but also doesn’t prove the original claim
The definition of fascism is not when someone is racist, or when someone does a coup
The whole “fascism is when thing I don’t like” is exactly the thing the commenter above me was complaining about
That still doesn’t prove the claim “America was always fascist”
Partially because being copied by the Nazis doesn’t intrinsically mean you’re fascist (they copied a hell of a lot of things, including but not limited to fascism)
And partially because that doesn’t cover the “always” part at all
I think it’s reasonable to call it centrist, despite also being right-wing (ie centre-right)
To me, centrism isn’t just about being somewhere in the middle between the left and right of the political environment, but also about having policies that make small adjustments to the current system, as opposed to fundamental, large scale change
I put my steel in the fridge and it’s completely dried out now
Granted, it wasn’t very wet when it went in
Presumably what the other commenter was referring to is the US having the oldest codified constitution
Why? This is a scientific article with a shitpost as the title
I thought it looked like a stealthy power ranger
Yeah, just create an entirely new, incompatible extension engine from scratch for this one feature specifically!
Pleasure doing business, good sir
Ah, I missed that alt text specifically is local, but the point stands: allowing (opt-in) access to a 3rd-party service is reasonable, even if that service doesn’t have the same privacy standards as Mozilla itself
To pretty much every non-technical user, an AI sidebar that won’t work with ChatGPT (the equivalent of Google search in my earlier example) may as well not be there at all
They don’t want to self-host an LLM, they want the box where ChatGPT goes
Well except for the fact that the salary option is:
If they could get even a slightly worse salaried job instead of being an MP, then the financial motive is - in contrast to your claim - actually in favour of him losing
There are plenty of situations where even contextless generated alt-text is a huge improvement over no alt-text at all
Mozilla isn’t in charge of the extension API; it uses Chromium’s WebExtensions API
The alternative is only supporting self hosted LLMs, though, right?
Imagine the scenario: you’re a visually impaired, non-technical user. You want to use the alt-text generation. You’re not going to go and host your own LLM, you’re just going to give up and leave it.
In the same way, Firefox supports search engines that sell your data, because a normal, non-technical user just wants to Google stuff, not read a series of blog posts about why they should actually be using something else.
I’m not necessarily saying they’re conflicting goals, merely that they’re not the same goal.
The incentive for the generator becomes “generate propaganda that doesn’t have the language characteristics of typical LLMs”, so the incentive is split between those goals. As a simplified example, if the additional incentive were “include the word bamboo in every response”, I think we would both agree that it would do a worse job at its original goal, since the constraint means that outputs that would previously have been optimal are now considered poor responses.
Meanwhile, the detector network has a far simpler task: given some input string, give back a value representing the confidence that it was output by a system rather than a person.
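
As a rough sketch of that interface (a toy classifier trained on made-up examples, nothing like a production detector):

```python
# Minimal sketch of a detector: string in, confidence-it-was-generated out.
# Training data and features here are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["lol no way that happened",
               "my cat knocked the router off the shelf again"]
ai_texts = ["As an AI language model, I cannot",
            "It is important to note that there are several factors"]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(human_texts + ai_texts, [0, 0, 1, 1])  # 0 = human, 1 = generated

def generated_confidence(text: str) -> float:
    """Confidence the input was output by a system rather than a person."""
    return detector.predict_proba([text])[0][1]

print(generated_confidence("It is important to note that cats exist"))
```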
I think it’s also worth considering that LLMs don’t “think” in the same way people do - where people construct an abstract thought, then find the best combination of words to express that thought, an LLM generates words that are likely to follow the preceding ones (including the prompt). This does leave some space for detecting these different approaches better than at random, even though it’s impossible to do so reliably.
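
A toy version of that generation loop, with an invented probability table standing in for the actual model:

```python
# Toy illustration of next-token generation: pick whatever is likely to
# follow the recent context, with no abstract "thought" behind the choice.
# The probability table is made up for the example.
import random

next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "bamboo": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def generate(context, steps):
    tokens = list(context)
    for _ in range(steps):
        probs = next_token_probs.get(tuple(tokens[-2:]))
        if probs is None:  # no continuation known for this context
            break
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "cat"], 2))  # e.g. "the cat sat on"
```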
But I guess the really important thing is that the people running these bots don’t care whether it’s possible to tell that the content is likely generated, so long as it’s not so obvious that it gets removed. That means they’re not really incentivised to spend money training models to avoid detection.