As a software engineer who works in AI, the “breakthrough” we’ve made is proving that LLMs can perform well at scale, and that hallucinations aren’t as big a problem as initially thought. Most tech companies didn’t do what OpenAI did because hallucinations are brand-damaging, whereas OpenAI didn’t give a fuck. In the next few years, most AI systems will be built on LLMs, and will probably be about as good as ChatGPT.
We might make more progress now that researchers and academics see the value in LLMs, but my weakly held opinion is that most of it is hype.
We’re nowhere near what most would call AGI, although to be blunt, I don’t think the average person on here could tell you what that looks like without contradicting AI researchers.
Exactly. As a SW engineer, I don’t know exactly how far we are from AGI, but I’m confident enough that Altman and OpenAI have no idea where to even start.
I read that and immediately thought of you working in a Star Wars hangar, fixing rebel ships
That sounds like the general consensus from most SW engineers.
We are so fucked.
Removed by mod