From Wikipedia: this is only a 1-sigma result compared to theory using lattice calculations. It would have been 5.1-sigma if the calculation method had not been improved.
Many calculations in the standard model are mathematically intractable with current methods, so approximate solutions are needed; improving those approximations is non-trivial, and it’s not surprising that improvements have been found.
Oh certainly, that series took quite a risk on writing style and it’s quite divisive.
If you enjoy fantasy, you could try her other series as an alternative; The Inheritance Trilogy is written in a more conventional style.
I almost put The Fifth Season down after the first chapter; I remember thinking, “This author has a chip on their shoulder”. I’m glad I persevered though, and I definitely recommend the series to people as it is quite different. I’d suggest giving it another shot.
I might try jumping in again on season 2, thanks.
Claude 2 would have a much better chance at this because of the longer context window.
Though there are plenty of alternate/theorised/critiqued endings for Game of Thrones online, so current chatbots should have a better shot at doing a good job vs other writers who haven’t finished their series in over a decade.
As a counterpoint to other comments here, I didn’t like Babylon 5. I gave up in the first season on the episode about religions, where each alien race shows a single religion but then humanity shows an enormous number of them.
Showing planets in sci-fi as homogeneous is a common trope, but it’s such a simplistic take. The episode resonated poorly with me, as I felt the aliens all behaved exactly like humans too, to the point where you have stand-ins for Jehovah’s Witnesses. That episode cemented the feeling I’d had while watching: Babylon 5 is racist against aliens.
Why do you say they have no representation? There are a lot of specific bodies operating in the government, advisory and otherwise, with the sole focus of indigenous affairs. And of course, currently, indigenous Australians are over-represented in parliament relative to population (more than 4% of parliamentarians are of indigenous descent).
While in general I’d agree, look at the damage a single fraudulent paper on vaccination caused. There were a lot of follow-up studies showing the paper was wrong, and yet we still have an antivax movement going on.
Clearly, scientists need to be able to publish without fear of reprisal. But to have no recourse when damage is done by a person acting in bad faith is also a problem.
Though I’d argue we have the same issue with the media, where they need to be able to operate freely, but are able to cause a lot of harm.
Perhaps there could be some set of rules which absolves scientists of legal liability. Hopefully those rules would match what’s ordinarily followed anyway, and so be no burden to your average researcher.
Taking 89.3% men from your source at face value, and selecting 12 people at random, that gives a 0.893^12 ≈ 25.7% chance (about 1 in 4) that a company of that size would be all male.
Add in network effects, risk tolerance for startups, and the hiring practices of larger companies, and that number likely gets even larger.
What’s the p-value for a news story? Unless this is some trend from other companies run by Musk, there doesn’t seem to be anything newsworthy here.
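That probability is quick to check; a minimal sketch, assuming 12 independent draws from an 89.3%-male candidate pool:

```python
# Chance that 12 people drawn independently from an 89.3%-male pool
# are all male (before accounting for network effects etc. above).
p_male = 0.893
n_employees = 12
p_all_male = p_male ** n_employees
print(f"P(all male) = {p_all_male:.1%}")  # → P(all male) = 25.7%
```

The independence assumption is the weakest part: hiring through referrals correlates the draws, which pushes the probability higher still.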
So, taking the average bicep volume as 1000 cm³, this muscle could exert 1 tonne of force, contract 8% (1.6 cm for a 20 cm long bicep), and would require 400 kV and a temperature above 29 degrees Celsius.
Maybe someone with access to the paper can double check the math and get the conversion efficiency from electrical to mechanical.
I expect there’s a good trade-off to be made to lower the force but increase the contraction and lower the voltage. Possibly some kind of ratcheting mechanism with tiny cells could be used to overcome the crazy high voltage requirement.
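For anyone wanting to double-check, here’s a back-of-envelope sketch using only the figures quoted above (the bicep dimensions are this comment’s assumptions, not values from the paper):

```python
# Sanity-check the quoted figures: 1000 cm^3 bicep, 20 cm long,
# 1 tonne of force, 8% contraction.
volume_cm3 = 1000.0
length_cm = 20.0
cross_section_m2 = (volume_cm3 / length_cm) * 1e-4  # 50 cm^2 -> 0.005 m^2

force_n = 1000.0 * 9.81                   # 1 tonne-force in newtons
stress_mpa = force_n / cross_section_m2 / 1e6
contraction_cm = 0.08 * length_cm

print(f"stress ≈ {stress_mpa:.2f} MPa")          # ≈ 1.96 MPa
print(f"contraction = {contraction_cm:.1f} cm")  # 1.6 cm
```

That ~2 MPa stress is an order of magnitude above typical skeletal muscle (~0.1–0.3 MPa), which is why a force/contraction/voltage trade-off seems plausible.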
DALL-E was the first development which shocked me. AlphaGo was very impressive on a technical level, and much earlier than anticipated, but it didn’t feel different.
GANs existed, but they never seemed to have the creativity, nor understanding of prompts, which was demonstrated by DALL-E. Of all things, the image of an avocado-themed chair is still baked into my mind. I remember being gobsmacked by the imagery, and when I’d recovered from that, just how “simple” the step from what we had before to DALL-E was.
The other thing which surprised me was the step from image diffusion models to 3D and video. We certainly haven’t gotten anywhere near the same quality in those domains yet, but they felt so far from the image domain that I assumed we’d need some major revolution in approach. What surprised me most was just how fast the transition from images to video happened.
I asked the same question of GPT3.5 and got the response “The former chancellor of Germany has the book.” And also: “The nurse has the book. In the scenario you described, the nurse is the one who grabs the book and gives it to the former chancellor of Germany.” and a bunch of other variations.
Anyone doing these experiments who does not understand the concept of a “temperature” parameter for the model, and who is not controlling for that, is giving bad information.
Either you can say that at temperature 0 the model outputs XYZ, or you can say that at a given temperature the model’s outputs follow some distribution (much harder to characterize).
Yes, there’s a statistical bias in the training data that “nurses” are female, and at high temperatures this prior is over-represented. I guess that’s useful to know for people just blindly using the free chat tool from OpenAI, but it doesn’t necessarily represent a problem with the model itself. And to say it “fails entirely” is just completely wrong.
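To make the temperature point concrete, here’s a minimal sketch of temperature sampling over toy next-token scores (the logit values are made up for illustration, not taken from any real model):

```python
import math
import random

def sample_token(logits, temperature, rng=random):
    # temperature -> 0 collapses to greedy argmax; higher temperatures
    # flatten the distribution, over-representing rarer continuations.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    m = max(logits)
    # subtract the max before exponentiating for numerical stability
    weights = [math.exp((l - m) / temperature) for l in logits]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [3.0, 1.0]  # e.g. scores for "she" vs "he" after "The nurse..."
print(sample_token(logits, 0))    # deterministic at T=0: always index 0
print(sample_token(logits, 5.0))  # stochastic: the rarer token shows up
```

At temperature 0 the same prompt always yields the same answer, so any claim about a model “failing” needs to state which regime it was tested in.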
Looks like the same guys were doing publicity around 2019 https://www.abc.net.au/news/rural/2019-07-30/australia-joins-lab-grown-meat-industry/11360506
At the time, they claimed the cost to make a single hamburger was $30-$40, and now 4 years later, they claim to have gotten it down to $5-$6 per patty.
The article claims the first demonstration of a lab-grown hamburger was in 2013.
So 6 years from proof of concept to (probably) first capital raise, then 4 years to start regulatory approval, 1 year for approval to take place (target is March next year).
!literature@kbin.social should go in your list, it has more of a poetry slant.
!books@kbin.social has almost 3000 members
I’m sure you can find plenty more on kbin.social as well.
Haha, thanks for the correction. If you have to use your degree in ethics, perhaps you could add your perspective to the thread?
If you can get past the weird framing device, the Plinkett reviews of the Star Wars prequels are an excellent deep dive into the issues with those films: https://www.youtube.com/watch?v=FxKtZmQgxrI&list=PL5919C8DE6F720A2D
Jenny Nicholson’s videos are great, but her documentary on “The Last Bronycon” is special, as the realization dawns on you while watching that she has more connection to Brony culture than you might have guessed: https://www.youtube.com/watch?v=4fVOF2PiHnc
According to consequentialism:
From this perspective, the only issue one could have with deep fakes is the distribution of pornography which should only be used privately. The author dismisses this take as “few people see his failure to close the tab as the main problem”. I guess I am one of the few.
Another perspective is to consider the pornography itself to be impermissible. Which, as the author notes, implies that (1) is also impermissible. Most would agree (1) is morally fine (some may consider it disgusting, but that doesn’t make it immoral).
In the author’s example of Ross teasing Rachel, the author concludes that the imagining is the moral quandary, as opposed to the teasing itself. Drinking water isn’t immoral. Sending a video of yourself drinking water isn’t immoral. But sending that video to someone dying of thirst is.
The author’s conclusion is also odd:
Today, it is clear that deepfakes, unlike sexual fantasies, are part of a systemic technological degrading of women that is highly gendered (almost all pornographic deepfakes involve women) […] Fantasies, on the other hand, are not gendered […]
For microcontrollers, quite often. Mainly because visibility is quite poor, you’re often trying to do stupid things, problems tend to be localized, and JTAG is easier than a firmware upload.
For other applications, rarely. Debuggers help when you don’t understand what’s going on at a micro level, which is more common with less experience or when the code is more complex due to other constraints.
Applications running in full fledged operating systems often have plenty of log output, and it’s trivial to add more, formatted as you need. You can view a broad slice of the application with printouts, and iteratively tune those prints to what you need, vs a debugger which is better suited for observing a small slice of the application.
Mirroring the comments on Ars: Why should AI child porn be illegal? Clearly the demand is there, and if you cut off the safe supply, don’t you just drive consumers to sources which involve the actual abuse of minors?
Another comment I saw was fretting that AI was being fed CSAM, and that’s why it can generate those images. That’s not true. Current image generating algorithms can easily generate out of distribution images.
Finally, how does the law deal with sharing seed + prompt (the input to the AI) instead of the images themselves? Especially as such a combination may produce child porn in only 1 model out of thousands.
That reminds me of a joke.
A museum guide is talking to a group about the dinosaur fossils on exhibit.
“This one,” he says, “is 6 million and 2 years old.”
“Wow,” says a patron, “how do you know the age so accurately?”
“Well,” says the guide, “it was 6 million years old when I started here 2 years ago.”