
What if artificial intelligence turns out to be a bit of a failure?
“Is this all hype and no substance?” More and more people are asking that question about generative AI lately, pointing to delayed model releases, the slow development of commercial applications, the success of open-source models that makes it harder to profit from proprietary technology, and the staggering cost of the whole endeavor.
I think many of those declaring an “AI bust” don’t have a good grasp of the big picture. Some of them have long insisted that generative AI is useless as a technology, a view that is woefully out of touch with the many people who actually use it.
I also think some people’s expectations for how quickly commercialization should happen are silly. Even for a technology that is extremely valuable and promising and will ultimately be transformative, it takes time to get from invention to the first highly desirable consumer product built on it. (Electricity, for example, took decades to go from invention to truly widespread adoption.) It seems true that “the killer app for generative AI has not yet been invented,” but that is no reason to assure everyone it won’t be invented soon.
But I think there is a sober “failure case” that doesn’t rely on misunderstanding or underestimating the technology: the next wave of hugely expensive models may still fail to solve the hard problems that would make them worth billions of dollars to train. If that happens, we may be entering a less exciting period: more iteration and improvement of existing products, fewer blockbuster new product launches, and less breathless coverage.
If this happens, it could have a huge impact on attitudes toward AI safety, although in principle the case for AI safety does not depend on the AI hype of the past few years.
I have been thinking and writing about the basic issues of AI safety since long before ChatGPT and the recent AI craze. In short: there was no reason to think that AI models that could reason as well as humans, and faster, were impossible, and we knew such models would have enormous commercial value if developed. And we knew that developing and releasing powerful systems that could act independently in the world, without supervision and oversight we didn’t actually know how to provide, would be extremely dangerous.
Many people working on large language models believe that systems powerful enough to turn these safety issues from theory into reality are just around the corner. They may be right, but they may also be wrong. The view I agree with most comes from the engineer Alex Irpan: “The chances of achieving this with the current paradigm (just building bigger language models) are slim. But the chances are still higher than I’m comfortable with.”
The next generation of large language models may not yet be dangerous enough. But many people working in the field believe it will be, and given the enormous consequences of uncontrolled AI, the possibility is not so small that it can be easily ignored, so some oversight is warranted.
How AI safety and AI hype are intertwined
In fact, if the next generation of large language models is not significantly better than the ones we have now, I expect AI will still change our world — just more slowly. Many poorly conceived AI startups will go bankrupt, and many investors will lose money — but people will continue to improve our models at a fairly rapid pace, making them cheaper and eliminating their most irritating flaws.
Even the most ardent skeptics of generative AI, like Gary Marcus, tend to tell me that superintelligence is possible; they just think it requires a new technological paradigm that somehow combines the power of large language models with some other approach that can compensate for their shortcomings.
Although Marcus calls himself an AI skeptic, it is often difficult to find significant differences between his views and those of Ajeya Cotra and others: a powerful intelligent system might be driven by a language model the way a car is driven by an engine, but there will be many additional processes and systems around it to turn its output into something reliable and usable.
People I know who worry about AI safety often hope that things will move in this direction. It would mean more time to better understand the systems we are creating, and time to see the consequences of using them before they become unimaginably powerful. AI safety is a set of hard problems, but not unsolvable ones. Given time, maybe we’ll solve them all.
But my sense of the public discussion about AI is that many people treat “AI safety” as a particular worldview that is inseparable from the AI craze of the past few years. What they understand by “AI safety” is the claim that superintelligent systems will emerge within the next few years, a position laid out in Leopold Aschenbrenner’s “Situational Awareness” and fairly common among AI researchers at top companies.
If we don’t achieve superintelligence in the next few years, then I expect to hear a lot of people saying “it turns out we don’t need AI safety”.
Focus on the big picture
If you’re an investor in an AI startup today, it matters a great deal whether GPT-5 is delayed by six months or whether OpenAI raises its next round at a reduced valuation.
But if you are a policymaker or a concerned citizen, I think you should step back and take a wider view, separating the question of whether current investors’ bets will pay off from the question of where we are headed as a society.
Whether or not GPT-5 turns out to be a powerful intelligent system, such a system would have enormous commercial value, and thousands of people are working from many different angles to build one. We should be thinking about how we will approach such systems and how to ensure their development goes safely.
If a company makes a high-profile announcement that they’re going to build a powerful but dangerous system and it fails, the answer shouldn’t be “I guess we have nothing to worry about” but “I’m glad we have more time to develop the best policy response.”
As long as people are trying to build extremely powerful systems, safety remains critical; the world can neither be blinded by the hype nor ignore the issue in a backlash against it.