Elon Musk wanted to take over OpenAI for $97.4 billion. The offer was rejected.

Elon Musk still can’t get over the fact that he no longer controls OpenAI. His court action backfired, so he tried another door: he assembled a group of investors and offered $97.4 billion to take over the company. The offer was firmly rejected, and OpenAI CEO Sam Altman did not spare the sarcasm in his reply.

Musk’s consortium of investors includes his own company xAI, as well as Valor Equity Partners, Baron Capital, Atreides Management, Vy Capital, 8VC and Ari Emanuel, CEO of Endeavor. If the bid succeeded, Musk would likely pursue a merger between xAI and OpenAI. “It’s time for OpenAI to return to the open-source, safety-focused force it once was. We will make sure that happens,” the controversial billionaire said in a statement. OpenAI chief Sam Altman responded without hesitation and without sparing any malice: “No thank you, but we will buy Twitter for $9.74 billion if you want.”

Elon Musk’s lawyer, Marc Toberoff, said on Monday that he had submitted an offer to OpenAI’s board of directors to acquire all of the nonprofit’s assets. Although rejected, the offer complicates carefully crafted plans for OpenAI’s future, including its conversion to a for-profit model and the use of up to $500 billion to build AI infrastructure as part of the Stargate project. OpenAI and Musk are, after all, already fighting in court over the company’s direction.

“Our structure ensures that no individual can take control of OpenAI. […] These are tactics that try to undermine us because we are making great progress,” Altman wrote to employees.

Another Microsoft investment in Poland

By 2026, Microsoft plans to invest a trifling sum of about 2.8 billion PLN in Poland!

The funds will be used to expand computing power, cloud services and artificial intelligence. In practice, this means further expansion of the Microsoft Technology Center that the company opened in Warsaw in 2019, which gives Polish customers access to a competence lab and to the technological expertise of Microsoft and its partners.

Speaking alongside Prime Minister Donald Tusk, Microsoft Vice Chair and President Brad Smith also mentioned a third phase of investment after 2026, although no financial details were given at this point. The tech giant’s investment is further evidence of Poland’s strategic importance to a company that has already been present in our country for 30 years.

Meta trained AI on pirated ebooks

Training artificial intelligence on copyrighted data stirs up a lot of controversy. On the one hand, we are fascinated by the vision of AI possessing all of humanity’s written knowledge; on the other hand, the unfettered use of protected intellectual property can be considered a violation of ethical and legal principles. Meta doesn’t seem to care about them.

Meta (formerly Facebook) is one of the leading developers of artificial intelligence today. As with most models, its own were also trained on large data sets. It turns out, however, that those data sets weren’t exactly legally acquired. In January, a lawsuit was filed accusing Meta of training its AI models on a data set consisting of pirated ebooks and articles. Newly disclosed emails have provided further evidence against Mark Zuckerberg’s company in the copyright case brought by book authors.

The authors accused Meta of illegally training AI models on pirated books, and these allegations have now been further corroborated by the disclosed correspondence. The emails reveal that Meta admitted to downloading via torrents the huge and controversial LibGen dataset, which contains tens of millions of pirated books. According to the plaintiffs, Meta downloaded at least 81.7 terabytes of data from a number of shadow libraries through the Anna’s Archive site, including at least 35.7 TB from LibGen and Z-Library. In addition, the company had previously downloaded 80.6 TB of data from LibGen.

The plaintiffs described the scale of this illegal activity as staggering, noting that much smaller acts of data piracy (a mere 0.008% of the amount of copyrighted works that Meta seized) have led judges to refer cases to the US Attorney’s Office for criminal investigation. The emails show that Meta employees were aware of the legal risks of what they were doing. In April 2023, Nikolay Bashlykov, a research engineer at Meta, wrote that downloading torrents from a company laptop did not seem appropriate.

A few months later, Bashlykov contacted the legal team. “Using torrents means ‘seeding’ the files, i.e., making the content available to the outside world. This may be legally not OK,” he wrote. Despite these warnings, the authors allege, Meta decided to conceal its seeding activity by throttling it to the lowest possible bandwidth. The company also likely tried to avoid being tracked as a seeder/downloader by downloading the data to other servers.
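To see why throttling matters technically: a BitTorrent client normally uploads (“seeds”) pieces to other peers while it downloads, so capping the upload rate near zero means the client redistributes almost nothing. Below is a minimal, purely illustrative Python sketch of such a cap as a token bucket; it is not Meta’s code and it deliberately uses no real BitTorrent library.

```python
import time

class UploadThrottle:
    """Token-bucket limiter: at most rate_bytes_per_s bytes may be uploaded per second."""

    def __init__(self, rate_bytes_per_s: int):
        self.rate = rate_bytes_per_s
        self.tokens = float(rate_bytes_per_s)
        self.last_refill = time.monotonic()

    def allow(self, chunk_size: int) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at one second's budget.
        self.tokens = min(self.rate, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if chunk_size <= self.tokens:
            self.tokens -= chunk_size
            return True
        return False  # the peer's request for a piece is refused for now

# With a near-zero rate, virtually every outgoing piece request is refused,
# so the client keeps downloading while giving almost nothing back.
throttle = UploadThrottle(rate_bytes_per_s=1024)  # 1 KB/s, an arbitrary illustrative floor
print(throttle.allow(16 * 1024))                  # False: a standard 16 KiB piece exceeds the budget
```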

How does artificial intelligence help the planet?

Schools in sunny Georgia have the same problem as furniture stores in snowy Minnesota. When it gets hot or cold outside, both have one way to keep it comfortable inside: shut out the outside air and turn on the heating or air conditioning. Both solutions are energy hogs.

At least, that used to be the case. Now AI tools are helping them automate energy-saving strategies, such as bringing in fresh air and reducing airflow to empty rooms. “We’ve taken the complexity of building management – real-time tracking of weather, occupancy, air quality and equipment performance – and created a ‘smart energy’ autopilot that works in buildings of any size and is easy to use,” says 75F’s Dave Koerner.
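To make the idea concrete, here is a hedged sketch of the kind of rule such an autopilot automates: skip mechanical cooling when the outside air is cool enough, and cut airflow to unoccupied rooms. The thresholds, field names and outputs below are invented for illustration and are not 75F’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class ZoneReading:
    occupied: bool          # from an occupancy sensor
    indoor_temp_c: float
    outdoor_temp_c: float
    target_temp_c: float

def airflow_setpoint(z: ZoneReading) -> dict:
    """Decide damper position and fan level for one zone (illustrative only)."""
    if not z.occupied:
        # Empty room: ventilate at a minimum level, no active cooling or heating.
        return {"fresh_air_damper": 0.1, "fan": "low", "mechanical_cooling": False}
    needs_cooling = z.indoor_temp_c > z.target_temp_c
    free_cooling = needs_cooling and z.outdoor_temp_c < z.indoor_temp_c - 2.0
    if free_cooling:
        # Outside air is cool enough: open the damper instead of running the compressor.
        return {"fresh_air_damper": 1.0, "fan": "high", "mechanical_cooling": False}
    return {"fresh_air_damper": 0.3, "fan": "medium", "mechanical_cooling": needs_cooling}

print(airflow_setpoint(ZoneReading(occupied=False, indoor_temp_c=26, outdoor_temp_c=18, target_temp_c=23)))
print(airflow_setpoint(ZoneReading(occupied=True, indoor_temp_c=26, outdoor_temp_c=18, target_temp_c=23)))
```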

It’s impossible to predict exactly how AI will affect sustainability progress in the long run, but we do know that it has three skills that could prove to be game-changers. What are these skills, and how is AI already helping the planet? Find out in this article: “How does artificial intelligence help the planet? Learn 3 ways.”

OpenAI’s 2026 plan: mass production of AI chips

OpenAI plans to reduce its dependence on Nvidia for chip supply and is developing the first generation of its own AI chips. The company is finalizing the design of its first chip and intends to send it to Taiwan Semiconductor Manufacturing Co. (TSMC) for fabrication within the next few months. Sending a finished design to the chip factory is known as “taping out.”

According to sources, OpenAI is on track to mass-produce its own AI chips in 2026. A typical tape-out costs tens of millions of dollars and takes about six months. However, OpenAI may pay more for accelerated production. There is no guarantee that the first tape-out will be successful. If not, it will be necessary to diagnose the problem and repeat the step.

Within OpenAI, this training-focused chip is seen as a strategic tool to strengthen the company’s negotiating position with other chip suppliers. After this first chip, OpenAI’s engineers plan to develop increasingly advanced processors with each new iteration. If the first tape-out is successful, OpenAI will be able to mass-produce its first proprietary AI chip and begin testing it as an alternative to Nvidia’s chips later this year.

Large technology companies such as Microsoft and Meta have also tried their hand at producing AI chips of their own, but despite years of effort they have not achieved satisfactory results. The chip being built by OpenAI looks more promising. It is being designed, in cooperation with Broadcom, by an in-house team led by Richard Ho that has doubled in size to 40 people in recent months. Ho joined OpenAI more than a year ago from Google, where he helped lead that company’s own AI chip program.

The team is still smaller than comparable groups at Google or Amazon. A new chip design for an ambitious, large-scale program can cost as much as $500 million for a single version, and that figure can double once the necessary surrounding software and peripherals are built. But why are so many chips needed in the first place?

Today’s generative artificial intelligence runs primarily on Nvidia chips, which hold roughly an 80% market share. With increasingly ambitious AI projects and rising costs, relying on a single chip supplier may be unwise, which is why Nvidia’s biggest customers, Microsoft, Meta and OpenAI, are working on alternatives. In OpenAI’s case, the chip, capable of both training and running AI models, would initially be deployed on a small scale and used mainly to run models; it is reported to play only a limited role in the company’s infrastructure at first.

Generative artificial intelligence (GenAI) models are getting smarter and smarter, and as a result they have an insatiable appetite for data center chips. All the leading players are already making large-scale investments: Meta plans to spend about $60 billion on AI infrastructure in 2025, and Microsoft about $80 billion over the same period. OpenAI, in turn, is participating in the Stargate project, a four-year, $500 billion investment meant to lead to the creation of AGI.

When will GPT-5 come out? We already know the schedule

In recent months the topic of GPT has quieted down a bit after the release of OpenAI o1 and other cheaper, more efficient variants. This does not mean the end of the GPT line, however. On the contrary, its next iterations are already being planned, and Altman himself has outlined the schedule for GPT-4.5 and GPT-5.

Just two years ago it was assumed that GPT-5 might already be a strong artificial intelligence and that it would be ready by the end of 2023. As we know, nothing of the sort happened, and work on AGI, including within the heavily funded Stargate project, will continue for the next four years. In parallel, OpenAI is working on further versions of the GPT model. We learn the details from Sam Altman’s latest tweet:

“OPENAI ROADMAP UPDATE FOR GPT-4.5 and GPT-5:

We want to better share our roadmap and greatly simplify our product offerings.

We want AI to “just work” for you; we realize how complex our model and product offerings have become.

We hate the model selector as much as you do, and we want to return to magic unified intelligence.

Next, we will release GPT-4.5, the model internally called “Orion,” as our last model not based on chain-of-thought reasoning [a non-chain-of-thought model].

After that, our main goal will be to unify the o-series and GPT-series models by creating systems that can use all of our models, know when to think for a long time and when not to, and are generally useful in a wide range of tasks.

For both ChatGPT and our API, we will release GPT-5 as a system that integrates many of our technologies, including o3. We will no longer make o3 available as a standalone model. The free version of ChatGPT will have unlimited access to GPT-5 at the standard intelligence setting (!!!), subject to abuse thresholds.

Plus plan subscribers will be able to run GPT-5 at a higher level of intelligence, while Pro subscribers will be able to run GPT-5 at an even higher level of intelligence. These models will include voice, canvas, search, deep research and more.” – Sam Altman, CEO of OpenAI
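The “unified system” Altman describes is essentially a router that decides, per request, whether to spend extra reasoning time and which backend model to handle it. The sketch below is purely speculative: the model names, subscription tiers and heuristic are invented here for illustration and are not OpenAI’s implementation.

```python
# Illustrative router: decide whether a prompt needs a long-thinking reasoning model.
REASONING_HINTS = ("prove", "step by step", "debug", "optimize", "why does")

def route(prompt: str, subscription: str = "free") -> dict:
    """Pick a hypothetical backend model and a 'thinking' budget for a prompt."""
    needs_reasoning = any(hint in prompt.lower() for hint in REASONING_HINTS)
    # Higher tiers get a larger reasoning budget, mirroring the tweet's
    # "higher level of intelligence" for Plus and Pro subscribers.
    budget = {"free": 1, "plus": 4, "pro": 16}.get(subscription, 1)
    if needs_reasoning:
        return {"model": "reasoning-model", "thinking_steps": budget * 8}
    return {"model": "fast-model", "thinking_steps": 0}

print(route("Why does my recursive function overflow the stack?", "plus"))
print(route("Translate 'good morning' into Polish."))
```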

It is not yet known when exactly the new unified AI models will be released, but given the pace of work and the scale of investment, we already expect at least a preview version this year.
