OpenAI’s new goal of artificial superintelligence

Artificial intelligence at today’s level does not satisfy its creators, who want to achieve great things with it. For years there has been talk of aspirations toward AGI, or strong artificial intelligence (artificial general intelligence, “Strong AI”), which remains the Holy Grail for OpenAI. Its CEO Sam Altman, meanwhile, now writes about artificial superintelligence. How does this relate to AGI?

Sam Altman, CEO of OpenAI, published a blog post sharing his thoughts shortly after the new year, a little over a month after ChatGPT’s second birthday. As Altman describes it, OpenAI has since moved on to the next paradigm: models capable of complex reasoning. He also believes that as we approach AGI, it is worth taking stock of the progress made so far. Below are what we consider the most important and interesting excerpts, which speak to both the past and the future of OpenAI, artificial intelligence, and its impact on the world.

We started OpenAI almost nine years ago because we believed that AGI was possible and that it could be the most influential technology in human history. We wanted to figure out how to build it and make it beneficial at scale. We were excited about trying to make history. Our ambitions were extremely high, as was our belief that this work could benefit society in an equally remarkable way. At the time, few people cared, and if they did, it was mainly because they thought we had no chance of success. (…)

Our vision has not changed; our tactics will continue to evolve. For example, when we started, we had no idea that we would have to build a product company. Rather, we thought we would simply do great research. We also had no idea that we would need such an insane amount of capital. There are new things we need to build now that we didn’t understand a few years ago, and there will be new things in the future that we can barely imagine now.

We are proud of our track record in research and deployment and are committed to further developing our thinking on safety and the sharing of benefits. We still believe that the best way to make an AI system safe is to release it to the world iteratively and gradually, giving society time to adapt and co-evolve with the technology, learn from the experience, and continue to make the technology safer. (…)

We are now confident that we know how to build AGI as we traditionally understand it. We believe that in 2025 we may see the first AI agents “joining the workforce” and materially changing the performance of companies. We continue to believe that iteratively putting great tools into the hands of people leads to great, widespread results.

We are beginning to turn our aim beyond that – to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do everything else. Superintelligent tools could massively accelerate scientific discovery and innovation far beyond what we can do on our own, and thus greatly increase abundance and prosperity.

This sounds like science fiction today, and somewhat crazy even to talk about. That’s OK – we’ve been here before, and we’re OK with being here again. We’re pretty sure that in the next few years everyone will see what we see, and that acting with great care, while still maximizing broad benefit and empowerment, is enormously important. Given the possibilities of our work, OpenAI cannot be a normal company.

– Sam Altman, CEO of OpenAI

OpenAI seems to be communicating that the real AI breakthrough is yet to come, and that today’s achievements are only a prelude to the real revolution. What might this actually mean? We wrote about it some time ago on CentrumXP in the article Microsoft 365 and artificial intelligence – what will it really change?

Source: https://blog.samaltman.com/reflections

Author: Krzysztof Sulikowski
