The CEO of a software company has stated that using ChatGPT has drastically reduced the time it takes to finish coding tasks, from approximately nine weeks to just a few days.
Believe the CEO; they never lie or hype their wares.
From my experience, it speeds things up, but you need deep knowledge of the programming language you’re using to be able to use it effectively. ChatGPT is only aware of syntax it’s been trained on (about 20 languages), and that training data is a bit outdated. For at least half the queries I give it, it uses older libraries or deprecated functions, and I need to either replace them myself or get even more specific in my queries. It’s also only capable of outputting code snippets or smaller functions.
It can be great for beginners or common code generation, but experienced programmers won’t use it for that, they already have their own code and knowledge to get them that far. They’re using it for unusual edge cases or new processes and ChatGPT has limited value there.
As a software developer, and chatgpt user, I don’t understand this.
I keep hearing about all these devs using chatgpt to 10x their productivity. Really?
It helps me out a lot of small things like: “I started using a new mocking library, write a unit test that does X”, or “Write a regex that does X”.
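To illustrate the "write a regex that does X" kind of request (my own hypothetical example; the commenter doesn't say what their X was), this is roughly the size of snippet it handles well:

```python
import re

# Hypothetical ChatGPT-style snippet: match ISO 8601 calendar dates (YYYY-MM-DD),
# rejecting impossible months (>12) and days (>31).
ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

print(bool(ISO_DATE.match("2023-04-17")))  # True
print(bool(ISO_DATE.match("2023-13-01")))  # False (month 13)
```

Small, self-contained, and easy to verify by eye, which is exactly the niche where it saves time.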
It saves me 1-3 hours a week, max.
It’s not like I can import my company’s codebase (don’t do that) of 1 million lines and tell it “write feature X”.
These CEOs are delusional. I’ve used it extensively for work and it can help but 9 weeks to a few days is just ridiculous. Not even close
Software company CEO tells us he doesn’t really know much about coding without telling us he doesn’t know anything about coding
Lots of students turned in chatGPT generated code last semester. The trouble, aside from the fact that they were told not to use chatGPT, was they turned in code that they could not reasonably explain or defend. On top of that, chatGPT would hallucinate and sometimes generate a mixture of languages.
They were also asked to turn in code that used specific functions along with a required pseudocode flow. This is where chatGPT would go off the rails with convoluted code that was not efficient or understandable by the student.
I see junior devs taking the same shortcuts – they are supposed to be using a certain set of classes in their work, but they chatGPT around the requirements and say “but it works”. They see their job as complete even if they could not explain why the code uses the functions or classes that it does. Their mantra is “anything is fine as long as I got the right output”.
Completing and completing successfully are two different things though
This basically says “Please don’t use any new software at my company.”
maybe don’t broadcast that AI can build a clone of your software in a matter of months
I’ve been trying for weeks to get meaningful, non-boilerplate code out of the thing. Not happening. Love to hear specific use cases.
I’ll agree with others that it’s an awesome research assistant, though. I personally shaved off weeks of googling, so as a search engine replacement…slam dunk.
Absolute bullshit; trust nothing from this CEO’s company.
The best coding tool for AI is Github Copilot and it helps for tasks but not architecture, integration and shipping. It *can* speed up the idea/inception/start phase but past that you aren’t putting your whole codebase into ChatGPT… ffs. This is tabloid level FOMO conman marketing, near MLM or timeshare level bullshit.
I think GitHub Copilot is really more of a glimpse into how LLMs can be a game-changing tool than the generic ChatGPT app.
But that’s just it: it’s a tool. It doesn’t get you the whole way to a finished product and it definitely doesn’t get you the expertise needed to adapt and support it for customers and integration into an ecosystem. It can get you 80% of the way there a lot of the time, but the remaining 20% requires a refreshed skill set to identify and effectively execute on.
“80% done” is often enough for good PR and demos, so it doesn’t surprise me that non-technical roles are exaggerating like this. Get the shocked Pikachu faces ready for the wave of “unforeseeable technical challenges delaying product rollout” announcements in the next couple of years.
Limitations aside, though, it *is* a game-changer. Just like using a power drill and nail gun can give you faster, more consistent results when you’re building something with screws and nails, LLMs are going to dramatically accelerate velocity when used correctly — and that multiplier is only going to get bigger, fast. It’s all contingent on us adapting and learning to use the tools, though.
As for implications, massively increased productivity implies either faster innovation and development or fewer people needed. Probably a decent chunk of each. We should be very worried about *that* across the entire archetype of knowledge work, but that’s its own ball of wax.
I find it’s good for little snippets like what I used to search on stackoverflow, eg “write a function that paginates through the repositories in ECR”, but it’s not very good at figuring out how to organize a project, or what abstractions should exist, and it’s no good at all when you move to multi/distributed system design. I’m sure it will get better though
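For context, the ECR pagination request above is the kind of thing it can do well; a hedged sketch using boto3’s built-in paginator (the function name is my own, and it assumes boto3 is installed and AWS credentials are configured):

```python
def list_ecr_repositories(region_name="us-east-1"):
    """Return all ECR repository names, following pagination automatically."""
    import boto3  # lazy import so the sketch reads standalone

    client = boto3.client("ecr", region_name=region_name)
    paginator = client.get_paginator("describe_repositories")
    names = []
    for page in paginator.paginate():
        names.extend(repo["repositoryName"] for repo in page["repositories"])
    return names
```

A well-scoped, single-function ask like this plays to its strengths; deciding where such a function belongs in a larger system does not.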
That company must have some shitty software
And their code is not secure, lacks stability and is poor quality.
People throwing code into production quickly, especially code they don’t understand at all or in the context it operates in, is what keeps me employed.
I welcome everything about this, and I’m available when the house is on fire and no one knows what to do. Haha
And debugging from 9 weeks to 9 months.
QA test cycle gone from 9 days to 8 weeks.
Are you all just using ChatGPT proper to generate code or are we using “ChatGPT” as a generalized term for some GPT-based tool (a la AutoGPT)?
(I understand why OpenAI has doubled down on using the name given the popularity, but it annoys me to no end when I’m trying to figure out whether an article is talking about the API, some third party tool using the API, or literal ChatGPT).
I’d played around and found the code it generates will technically work, but if you know the language well, the results usually come back like a first-year intern wrote it. I basically had to treat it as such, “Well, let’s talk about why this isn’t the best way to approach this problem and will cause performance issues down the road.”
Cuts the time it takes to do the needful
What this says to executives:
“If we give our engineers ChatGPT, we can fire 90%+ of them!”
Haha, this dude is full of shit. I am a software developer by trade and also use chatGPT. It can speed things up and also slow things down. It doesn’t produce scalable code, and it certainly doesn’t create 9 weeks’ worth of work in a few days; that is an absurd statement.
That is, of course, if you know what you are doing. We have to be real: it saves time, but it is not a complete tool. Some code works; some has to be tweaked to make it work.
It is a tool that will reduce dev time, but it won’t replace the dev behind the screen.
I can get a good look at a T-bone by sticking my head up a bull’s ass, but I’d rather take a butcher’s word for it.
It only shows how redundant and non-innovative the code his company produces is.
Here come the snake oil salesmen
“Any fool can code. It’s a programmer who knows how to debug.”
Yeah, exactly. Especially when it comes to researching new stuff: if I want to do something but have no idea what options I have (what packages and implementations exist), chatGPT literally cuts *days* of research. Every dev has been in a situation where we spend 3 days reading documentation on a package, coding with it, and then we realize it doesn’t actually do what we need… Well, chatGPT cuts that to an hour or so.