It means they admit they were wrong and you were correct. As in, “I have been corrected.”
I have read programs a lot shorter than 500 lines which I don’t have the expertise to write.
It has access to a Python interpreter and can use that to do math, but it shows you when this is happening, and it did not when I asked it.
That’s not what I meant.
You have access to a dictionary; that doesn’t prove you’re incapable of spelling simple words on your own. Like, goddamn, people, what’s with the hate boners for AI around here?
??? You just don’t understand the difference between an LLM and a chat application using many different tools.
ChatGPT uses auxiliary models to perform certain tasks like basic math and programming. Your explanation about plausibility is simply wrong.
If you fine-tune an LLM on math equations, odds are it won’t actually learn how to reliably solve novel problems. Just the same as it won’t become a subject matter expert on any topic, but it’s a lot harder to write simple math that “looks, but is not, correct” than it is to waffle vaguely about a topic. The idea of an LLM creating a robust model of the semantics of the text it’s trained on is, at face value, plausible; it just doesn’t seem to actually happen in practice.
Well, we knew he was a shitbag beforehand, so that’s not really what’s in question
I’m not a physicist, I don’t know one way or another. But it’s possible that there’s a leading explanation for the formation of the universe based on a mathematical model that predicts exactly one big bang.
Based on the comment you’re replying to, I assume they would say “no, nothing materialized from nothing because there wasn’t a ‘before’ in which nothing could have existed”
It wouldn’t have been published, and he’s only relatively famous if you’re a topologist, but it was Charlie Frohman. Not that it must carry the same weight for you, but I value his insight highly, even if it’s just a quip.
Yes, but it proves that termwise comparison with the harmonic series isn’t sufficient to tell if a series diverges.
The assumption is that the size decreases geometrically, which is reasonable for this kind of self-similarity. You can’t just say “less than harmonic,” though; I mean, 1/(2n) is “slower” and its sum still diverges.
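For concreteness, here is the distinction in symbols: terms smaller than the harmonic series’ terms can still sum to infinity, whereas geometric decay guarantees convergence.

```latex
% Smaller-than-harmonic terms do not imply convergence:
\sum_{n=1}^{\infty} \frac{1}{2n} \;=\; \frac{1}{2}\sum_{n=1}^{\infty}\frac{1}{n} \;=\; \infty
% Geometric decrease, by contrast, always converges:
\sum_{n=1}^{\infty} a r^{\,n} \;=\; \frac{ar}{1-r} \;<\; \infty
\qquad (a > 0,\; 0 < r < 1)
```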
Quoting a relatively famous mathematician, linear algebra is one of the few branches of math we’ve really, truly understood. It’s very, very well behaved.
Yes, with Iosevka font
Google it? The axiomatic definition, Dedekind cuts, and Cauchy sequences are the three typical ones, and they are provably equivalent.
I’m fully aware of the definitions. I didn’t say the definition of irrationals was wrong. I said the definition of the reals is wrong. The statement about quantum mechanics is so vague as to be meaningless.
That is not a definition of the real numbers, quantum physics says no such thing, and even if it did the conclusion is wrong
U good?
Being suitable for human consumption doesn’t mean it’s not also suitable for playing a role in a more efficient food chain
Stokes’ theorem. Almost the same thing as the high school one. It generalizes the fundamental theorem of calculus to arbitrary smooth manifolds. In the case that M is the interval [a, x] and ω is the 0-form f on M, one has dω = f’(t)dt and ∂M is the oriented pair {+x, -a}. Integrating a 0-form over a finite set of oriented points is the same as evaluating it at each point and summing, with negatively-oriented points getting a negative sign. Then Stokes’ theorem as written says that f(x) - f(a) = integral from a to x of f’(t) dt.
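One way to write the specialization out in symbols (taking ω to be the 0-form f, so that dω = f′(t) dt):

```latex
% General statement of Stokes' theorem:
\int_M \mathrm{d}\omega \;=\; \int_{\partial M} \omega
% Specialize: M = [a, x], \omega = f (a 0-form),
% \mathrm{d}\omega = f'(t)\,\mathrm{d}t, and
% \partial M = \{+x, -a\} with the induced orientation, so
% \int_{\partial M} f = f(x) - f(a). Hence:
\int_a^x f'(t)\,\mathrm{d}t \;=\; f(x) - f(a)
```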
Yes, speed and the benefits of all the tooling and static analysis they’re bringing to Python. Python is great for many things but “analyzing Python” isn’t necessarily one of them.