My company has now made Copilot training mandatory. Nobody wants to use it, but a guy in a suit had them spend hundreds of thousands on it, and now it's our problem.
Isn't the entire purpose of copilot that it shouldn't need much in the way of training? I think the extent of it at my employer is "this is the one you use."
I've tried it a few times; the only thing it seems remotely good for is when your recollection of a source is too fuzzy to form a traditional search query around. "What's that book series I read in the early 2000s about kids who traveled to another world, where the things they brought back from it just looked like junk?" kinds of questions.
This was our company too. They struck some sort of deal with ChatGPT where we use their base code but aren't connected to their machine learning. Feels like a pretty reasonable approach, in my opinion.
So our training was: "Use ours. Don't use anyone else's, because we don't want our proprietary information out there, never able to be scrubbed from the internet."
It's pretty decent at unimportant optimisation tasks with limited options. Like "I'm driving from X to Y, my friend travels by train from Z, what are good places to pick them up?"
I'm a self-taught C# dev, and I've found tremendous success specifically by describing what I want to do in dumb language that I'd feel stupid asking people about IRL, and that isn't googleable unless you already know what both "null-coalescing" and "non-merchandise supergroup" are describing.
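(Case in point, since "null-coalescing" is nearly impossible to google cold: it's the `??` operator, which falls back to a default only when a value is actually null. The same operator exists in TypeScript, so here's a quick sketch in that; the `displayName` function is just a made-up example.)

```typescript
// Null-coalescing: use the fallback only when the value is null or
// undefined (unlike ||, which would also replace 0, "", and false).
function displayName(input: string | null | undefined): string {
  return input ?? "anonymous";
}

console.log(displayName(null));    // "anonymous"
console.log(displayName("alice")); // "alice"
console.log(displayName(""));      // "" — empty string is kept, not replaced
```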
There are a lot of patterns that don't have obvious names and that aren't easily described without describing a specific scenario in a way that might only make sense institutionally, or with additional context that your average person might not have. ChatGPT is fairly good at being the "buddy that you have a bunch of in-jokes with that can remember things better than you". I can skip a lot of explaining why I need to do a thing a certain way like I can with my coworkers (who all aren't programmers), and I can get helpful answers for programming questions that my coworkers don't know the answers to.
It's frustrating to see this incredibly advanced context-aware autocorrect on steroids get used in ways that don't acknowledge the inherent strengths of what LLMs are actually great at doing. It's infuriating to have that potential be actively misused and packaged as a service and have that mediocre service sold to you once a month as a necessity by idiots in suits watching a line on a chart.
Yeah, this is much the same kind of use. If you work on the assumption that it's just something that has read everything, and everything that has been written about everything, you can find its utility. Folk want it to be some kind of fact genie, but the only facts it knows are which words go together, and it literally doesn't know the difference between real and made up.
Dude, they flubbed this so damn hard by overreaching. A few years ago, when they mentioned there would be a button in Word that you could use to make a slide deck from your Word doc, I was so excited. The Teams feature that summarizes meetings is honestly fantastic for Robert's Rules of Order type stuff. My response was, "I hate what this means in terms of privacy, but goddamn that sounds useful."
In turning it into an all-or-nothing everything product, they massively screwed up. I have a self-hosted instance of llama-gpt that I use to solve the "blank page" problem, which is something AI was actually great at.
I have a lot of issues with AI on principle, like a lot of folks. But it blows my mind how hard they screwed up the delivery (and I don't just mean the startups; that's to be expected). There's plenty to be said about Uber on principle too, but it's still bloody convenient. The entire rollout of the AI ecosystem reeks of that meme: "but we made plans!"
Are you talking about GitHub Copilot or Microsoft Copilot? Because I really think the first one is pretty useful, although I don't think it needs any training. The second one, on the other hand, is complete bullshit.
My company is all in on GitHub Copilot. They have very unrealistic expectations for how much it will increase productivity. I suspect they were sold on data from junior developers, who I think it helps the most. Anyways, now they are measuring how much engineers use it, so there is some amount of pressure to use it more often.
The training was a little worrisome and disingenuous. The internal team advocating for it aren't strong coders, and they kept showing examples of it automating antipatterns: writing useless tests that just duplicate an if statement in the function under test, writing very verbose but vague (meaningless) comments, or taking an example function and producing a new one in a boilerplate way (copy/pasting common code rather than extracting it into a shared function).
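To make the useless-test antipattern concrete, here's a hypothetical sketch (in TypeScript; `discount` and `badTest` are made-up names, not anything from our codebase):

```typescript
// The function under test: members get half price.
function discount(price: number, isMember: boolean): number {
  return isMember ? price / 2 : price;
}

// Antipattern: the "test" re-implements the same if/ternary branch,
// so it can never disagree with the code it's supposed to check.
function badTest(price: number, isMember: boolean): boolean {
  const expected = isMember ? price / 2 : price; // duplicated logic
  return discount(price, isMember) === expected; // trivially true
}

// Better: assert against independently known values.
console.assert(discount(100, true) === 50);
console.assert(discount(100, false) === 100);
```

The duplicated version passes no matter what bug you introduce, because the bug gets copied into the expectation too.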
Really, I think it's helpful -- sometimes. Especially for new engineers, or when dealing with an unfamiliar library. But I do worry it will lower the bar, and I feel that overusing it can be a waste of time.