Let's take a first look at the new ChatGPT o1 model - a state-of-the-art reasoning AI model from OpenAI that shows unmatched abilities in math, science, and coding.
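If you want to poke at it yourself, here is a minimal sketch of what a first experiment could look like, assuming the model is exposed through OpenAI's standard Chat Completions endpoint in the official Python SDK under a name like `o1-preview` (the exact model name and availability depend on your account):

```python
# Minimal sketch: ask the o1 model for a small piece of code.
# Assumes the official `openai` Python SDK and an OPENAI_API_KEY in the
# environment; the model name "o1-preview" is an assumption and may differ.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="o1-preview",  # assumed reasoning-model name
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that returns the n-th prime number.",
        }
    ],
)

print(response.choices[0].message.content)
```

From the caller's side it is queried like any other chat model; what is supposed to be new is how much reasoning the model does before the answer comes back.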
Why could it not replace an engineer?
The previous limits of technology exploded less than half a decade ago; it seems wild to assume that's the end of that kind of growth.
Eventually, we might get there, sure. But I don’t see any reason to believe this is it, and I use AI to assist in my programming every day.
If you instead said "some engineers will be replaced by AI", I'd definitely agree, and without a doubt they'll try, repeatedly.
In its current state?
Writing code is only a small part of being a software engineer. Compared to those coding tasks that come with very detailed instructions about the input, constraints, and output (even with examples), actual tasks are usually missing lots of information you need to find out from different people, and there is a huge codebase that can't be transferred to the model.
If it can fully replace a software developer, it can replace almost anyone’s job.
Technology is always progressing, but nobody can say what the next big thing will be; if you really think you're that prescient, you can make loads of cash predicting things. Companies are hungry for the next big thing, though, and will do everything to convince us that they have it. AI is an enticing grift because it's so misunderstood. The next big thing wasn't AR or VR or the metaverse, and I don't think it's going to be generative AI either; it's already plateauing and not profitable, even with billions of dollars behind it.
Sure, make it drive a car first, a thing 99% of the population can do, before attempting coding ^^
Coding is actually quite complicated, especially in old existing codebases. Add to that the fact that they train them on any crap code they can find…
No way are you going to convince me 99% of the population can drive. Go get a more accurate statistic before trying to use it to dismiss something.
It’s an example, and most adults (if I have to spell it out) can drive, or can learn how to.
Coding not so much.
So AI that can’t even drive, can code suddenly? I don’t think so.
Better like that?
Most adults could also learn to code, if they actually tried. If you’re gonna add the argument that most people can’t code proficiently, then most people can’t drive proficiently either.
Also, driving and coding are such completely different sets of skills that it’s kinda worthless to compare them. Some people can code just fine but might never learn how to drive because they didn’t need to, so treating driving as a prerequisite skill for coding doesn’t make sense.
Well, I think you’re wrong here: just about any adult can learn how to drive, but only a small subset can learn how to code. Not learning how to throw a simple script together, real coding.
Coding is engineer-level; engineers build cars, they don’t only drive them. For me the difference is the same as between the developer of a piece of software and the user of said software.
One is way, way, way more complicated, and AI is supposed to do that “soon” when it can’t even drive a car.
Nah, not happening any time soon.
I think you’re completely wrong by still comparing skills that have no relation to each other. What’s the similarity between driving and coding that would require an LLM to do one before you can believe it can do the other? Explain that leap in logic properly before you continue with your argument.
An LLM is designed to output text. Expecting them to drive to prove their ability to output code is like expecting them to dance to prove their ability to produce poems. Its inability to do an unrelated skill has no bearing on its ability to do a different one. You’re basically judging a fish on its ability to walk on land, and using that as the basis to judge its ability to swim.
Neural networks are quite similar in complexity, whatever they output.
Driving is way less complex than programming.
What does that even mean? Neural networks have varying levels of complexity, even within the same technology. Even the same LLM model can run with different numbers of tokens, which changes the complexity of its operation.
So instead of using a neural network that is designed to take text in and put text out, and teaching it to output code, which is also text, you think it’s supposed to be easier to make it analyse video and audio input from multiple cameras and then output the various actions required to drive a car? Does that make sense to you?
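To make the text-in, text-out point concrete, here is a toy sketch, not anyone’s actual setup, just an illustration with a small open model (gpt2, which writes terrible code) showing that, to a language model, producing code is the same operation as producing prose:

```python
# Toy illustration: the same text-generation interface continues prose and
# code alike, because to the model both are just token sequences.
# gpt2 is used only because it is small and public; it will not write good code.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prose = generator("Roses are red, violets are", max_new_tokens=20)
code = generator("def fibonacci(n):", max_new_tokens=40)

print(prose[0]["generated_text"])
print(code[0]["generated_text"])
```

The model never knows whether it is continuing a poem or a function; driving, by contrast, means turning live camera and sensor streams into control actions in real time, which is a different problem entirely.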
…officer! I told the car to not hurt anyone! I can’t help it if he’s sarcastic!