Do you think AI and/or AGI is a possibility at all, given enough time?
Because if the answer is yes, then don’t we need people working on it all the time to keep inching towards that? I’m not saying that the current implementations are anywhere close, but they do have their use cases. I’m a software developer, and my boss, the lead engineer (the smartest person I’ve ever met), has made some awesome tools that save our company of 7 people maybe 100 hours of work a month.
People used to complain about the LHC, and it’s made countless discoveries that help in other fields.
LLMs and GANs in general are to AGI what a hand-pumped well is to the ISS. Sure, they’re both technological marvels of their time, but if you want to float in microgravity, there is no possible adjustment you can make to the former to get it to actually behave like the latter.
Powered flight was an important goal, but that wouldn’t have justified throwing all the world’s resources at making Da Vinci’s flying machine work. Some ideas are just dead ends.
Transformer-based generative models have no demonstrable path to becoming AGI, and we’re already hitting a hard ceiling of diminishing returns on the very limited set of things they actually can do. Developing better versions of these models requires exponentially larger amounts of data, at exponentially scaling compute costs (yes, exponentially — to the point where current estimates suggest there literally isn’t enough training data in the world to get past another generation or two of development on these things).
Whether or not AGI is possible, it has become extremely apparent that this approach is not going to be the one that gets us there. So what is the benefit of continuing to pile more and more resources into it?