I just had Copilot hallucinate 4 separate functions, despite me giving it 2 helper files for context that contain all possible functions available to use.
AI iS tHe FuTuRE.
Even if it IS the future, it is not the present. Stop using it now.
Not the person you replied to, but my job wants us to start using it.
The idea that this will replace programmers is dumb, now or ever. But I’m okay with a tool to assist. I see it as just another iteration of the IDE.
Art and creative writing are different beasts altogether, of course.
My wife uses AI tools a lot (while I occasionally talk to Siri). But she uses it for things like: she’s working on a book and so she used it to develop book cover concepts that she then passed along to me to actually design. I feel like this is the sort of thing most of us want AI for—an assistant to help us make things, not something to make the thing for us. I still wrestle with the environmental ethics of this, though.
The environmental impacts can be solved easily by pushing for green tech. But that’s more a political problem than a technical problem IMO. Like stop subsidizing oil and gas and start subsidizing nuclear (in the short term) and green energy in the long term.
It’s cutting my programming work in half right now with quality .NET code. As long as I stay in the lead and have good examples + context in my codebase, it saves me a lot of time.
This was not the case for Copilot, though, but Cursor combined with Claude 3.7 is quite advanced.
If people are not seeing any benefit, I think they have the wrong use cases, workflow, or tools. Which can be fair limitations depending on your workplace, of course.
You could get into a nasty rabbit hole if you vibe-code too much, though. Stay the architect and check generated code and files after creation.
I use it extremely sparingly. I’m critical of anything it gives me. I find I waste more time fixing its work and removing superfluous code more than I do gaining value from it.
Our tiny company of software engineers has embraced it in the IDE for what it is: a tool.
Used as a tool, it has saved us a crazy amount of man-hours, and since I don’t work for ghouls, we recently got pay increases and a reduction in hours.
There are only 7 of us including the two owner / engineers and it’s been a game changer.
The Copilot code completion in VSCode works surprisingly well. Asking Copilot in the web chat about anything usually makes me want to rip my hair out. I have no idea how these two could possibly be based on the same model.
It quite depends on your use case, doesn’t it? This decades-old phrase about an algorithm in Fractint always stuck with me: “[It] can guess wrong, but it sure guesses quickly!”
Part of my job is getting an overview - just some generic leads and hints - about topics completely unknown to me, really fast. So I just ask an LLM, verify the links it gives and create a response within like 10-15 minutes. So far, no complaints.
Yeah I find the code completion is pretty consistent and learns from other work I’m doing. Chat and asking it to do anything though is a hallucinogenic nightmare.