OpenAI says Codex is coming to your phone

Codex is going mobile. The coding tool — which OpenAI launched approximately a year ago — has now been integrated into the ChatGPT app, allowing users to monitor and manage their development workflows remotely.

The new feature lets users see their live Codex environments on any device where the tool is running. The company announced the change Thursday; the update, currently in preview, is available on all plans on iOS and Android.

“This is more than the ability to remotely control a single task or dispatch new tasks to your computer,” OpenAI said in a statement. “From your phone, you can work across all of your threads, review outputs, approve commands, change models, or start something new.”

Last month, OpenAI also gave Codex the ability to run in the background in desktop environments — empowering the tool to take care of various tasks autonomously. Earlier this month, the company also introduced a Chrome extension that allows the agent to work in live browser sessions.

In February, Anthropic released a similar feature — Remote Control — which lets users monitor Claude Code’s work from afar.

The flurry of feature releases from both OpenAI and Anthropic speaks to the tense competition between the two over whose agentic coding tool will become the most widely used. Over the past year, Anthropic’s Claude Code has gained in popularity among businesses and tech professionals alike, although both tools continue to be widely used.

What the jury will actually decide in the case of Elon Musk vs. Sam Altman

Nine California jurors are now deliberating over the future of OpenAI, the world-leading artificial intelligence lab.

While the trial exploring Elon Musk’s case against OpenAI’s other cofounders and Microsoft has covered territory ranging from the breakup of the founders in 2018 to Altman’s firing and rehiring in 2023, the jurors will be considering a set of fairly narrow questions.

  • Breach of charitable trust — essentially, did OpenAI and cofounders Sam Altman and Greg Brockman violate an agreement with Musk to use his donations for a specific, charitable purpose rather than for the non-profit’s general use?
  • Unjust enrichment — did the defendants use Musk’s donations to enrich themselves through OpenAI’s for-profit arm, instead of for charitable purposes?
  • Aiding and abetting breach of charitable trust — did Microsoft, through its interactions with OpenAI, know that Musk had placed specific conditions on his donations, and play a significant role in causing harm to him?

OpenAI has also made three arguments in its defense that the jury will weigh:

  • Statute of limitations — a legal deadline by which a lawsuit must be filed. Here, if OpenAI can prove that any harms to Musk happened before August 5, 2021 for the first count, August 5, 2022 for the second, and November 14, 2021 for the third, then his claims are time-barred.
  • Unreasonable delay — by waiting until 2024 to file his lawsuit, Musk delayed his claim so long that his request for damages is unreasonable.
  • Unclean hands — a legal doctrine under which a plaintiff’s own misconduct can invalidate his claims; here, that Musk’s conduct related to his claims against OpenAI was unconscionable.

If Musk prevails, it could mean the end of OpenAI as a for-profit company, though it’s not entirely clear what would result. Next week, the judge will begin a new set of hearings in which lawyers from both sides will debate the consequences of a verdict in favor of the plaintiffs. That process could be rendered moot by a verdict for the defense, however.

Breach of charitable trust

Musk’s attorneys say the defendants clearly understood that Musk wanted to support a non-profit that would ensure the benefits of AI reached the world, and prevent the technology from being controlled by any one organization. In particular, they say Microsoft’s $10 billion investment into OpenAI’s for-profit affiliate in 2023 — the first such deal to fall within the limitations period — was the event that turned Musk’s concern into conviction.

That deal, Musk’s lawyers say, was different from previous investments and led to OpenAI’s investors being enriched by the company’s commercial products, at the expense of the charitable mission of AI safety that Musk promoted.

OpenAI’s attorneys have asked every witness to describe specific restrictions put on Musk’s donations, and none have, including his financial adviser Jared Birchall, his chief of staff Sam Teller, and his special adviser Shivon Zilis. They say everyone involved agreed that private fundraising would be required to achieve OpenAI’s goals, and note that Musk himself attempted to launch an OpenAI-affiliated for-profit he would personally control, and later to merge OpenAI into his company Tesla. They also note that the organization’s other donors haven’t said their charitable trust was violated.

Importantly, a forensic accountant hired by OpenAI testified that all of Musk’s donations had been spent by OpenAI well before the key date of August 5, 2021. That, they argue, is evidence that Musk’s donations were put to their intended purpose long before he brought his lawsuit, extinguishing any charitable trust that may have existed.

Mainly, they insist that the for-profit affiliate that conducts most of OpenAI’s actual activity continues to fulfill the organization’s mission, and has generated nearly $200 billion in equity value to support the non-profit foundation. Notably, Sam Altman argued that providing ChatGPT for free helps fulfill the mission of sharing the benefits of AI with the world.

Unjust enrichment

The plaintiffs point to the multibillion-dollar valuations of stakes held by OpenAI founders like Brockman and Ilya Sutskever, as well as Microsoft itself, as a sign that Musk’s donations were ultimately used for personal benefit, as opposed to supporting the mission of the charity. They argue that the work at OpenAI’s for-profit was commercially focused, while the foundation itself was left essentially dormant, without full-time employees, and, ultimately, not even in control of the for-profit.

OpenAI says all of Musk’s contributions were used by the foundation by 2020, and that equity distributions came well after he left the organization in 2018. Even beforehand, evidence shows the key players agreed that being able to compensate researchers with stock was key to developing AGI, the hypothetical form of AI capable of performing any intellectual task a human can. OpenAI executives maintain that the for-profit’s work meaningfully advanced the foundation’s mission, including safety activities. They say the non-profit board continues to control the for-profit, and instituted new governance controls following “the blip,” when Altman was fired by OpenAI’s non-profit board in 2023 for lack of candor and then rehired just days later.

Aiding and abetting

Musk’s case focuses on the events of the blip, when Microsoft CEO Satya Nadella, whose company depended on OpenAI’s tech, was personally involved in helping to bring Altman back and create a new board to govern OpenAI. Musk’s attorneys note that Microsoft executives wondered whether their commercial agreement might conflict with the non-profit’s goals, and suggest that Microsoft’s commercial priorities steered OpenAI away from its mission. They’ve focused attention on a clause in Microsoft’s agreement with OpenAI that gave Microsoft veto rights over major corporate decisions at OpenAI.

Microsoft’s witnesses have insisted that the company’s executives didn’t know of any specific conditions on Musk’s donations despite extensive due diligence, and never vetoed any decision by OpenAI. They note that the company’s investments and compute power allowed OpenAI to achieve its biggest triumphs.

Statute of Limitations

Musk has suggested that his skepticism of his cofounders grew over time, and that he concluded they had betrayed him only in the fall of 2022, when he learned of Microsoft’s plans for the new $10 billion investment that closed in 2023. He didn’t file his lawsuit until mid-2024.

OpenAI’s attorneys argue that the terms of that deal were spelled out in a term sheet for a previous fundraising round in 2018, which Musk received and his advisers reviewed, though Musk said he didn’t read it in detail. They also point to numerous blog posts and other communications over the years showing that Musk could have known what OpenAI was doing well before he brought the company to court, including tweets in which Musk criticized the company years before the suit. Zilis, Musk’s adviser, even voted to approve these transactions as a member of the OpenAI board.

Ultimately, the OpenAI attorneys emphasize that Musk’s formal role in the organization ended in 2018 and his last donations took place in 2020.

Unreasonable delay

OpenAI’s attorneys say the real reason that Musk filed his suit was he realized that he was wrong about OpenAI, after its launch of ChatGPT revolutionized the business of artificial intelligence. They argue that OpenAI has operated under its current structure since its first Microsoft investment in 2018, and that forcing the organization to restructure eight years later is unreasonable.

Unclean hands

There is evidence that Musk was planning his own competing AI efforts while he was still the chair of OpenAI, and hired OpenAI employees to work on AI at Tesla. OpenAI’s attorneys argue that these efforts undermined OpenAI at a time when it was using Musk’s donations to pursue its mission. They note that Zilis, the mother of three of Musk’s children, didn’t disclose her personal relationship to other OpenAI board members for years. And they argue that Musk withheld his donations in 2017 in an effort to win control of a planned for-profit affiliate of OpenAI. Finally, “Mr. Musk abandoned OpenAI for dead in 2018,” Bill Savitt, OpenAI’s lead attorney, told the jury.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.

Elon Musk’s SpaceXAI has been bleeding staff since its merger

Elon Musk’s newly rebranded SpaceXAI is reportedly losing top talent, with more than 50 researchers and engineers departing since February, according to The Information. The exits include key leaders across coding, world models, and Grok voice. 

Rivals like Meta and Thinking Machine Labs are reportedly scooping up former staff, with the company’s core pre-training team dwindling to just a handful of people. Since February, at least 11 xAI employees have defected to Meta, according to The Information’s report. At least seven have left to join Mira Murati’s Thinking Machine Labs. TechCrunch has previously reported on 11 of the xAI departures announced directly after the merger, including two co-founders.

SpaceX acquired xAI — both companies owned by Musk — in February and has since installed new leadership. Musk renamed the combined company SpaceXAI earlier this month.

The pre-training departures, which followed the exit of team lead Juntang Zhuang, have particularly concerned employees and people close to SpaceXAI, per The Information. Pre-training is the first step in building new AI models, and many observers have questioned whether the company is still committed to developing leading models.

The report also found that Musk’s culture of extreme work led some staff to leave — something employees across his companies, including Tesla, have complained about. A source who spoke to The Information said Musk set unrealistic deadlines for training models, which led to cutting corners on Grok.

Of course, several of the exits could have been driven by a desire to cash out.

SpaceX regularly runs tender offers so employees can sell vested shares privately. Others might simply feel confident that their equity is close to liquidity given the company’s blockbuster IPO expectations. Once employees can see the financial upside on the horizon, they’re less likely to stay at a company that puts undue pressure on them and may not be building the leading models they want to work on.

TechCrunch has reached out to SpaceX for comment.

Lovable just backed a company that’s looking to bring vibe coding to hardware

Lovable, the AI-powered app-building platform, has backed a Danish hardware startup, Atech, that wants to bring “vibe coding” to the process of creating hardware. Lovable was part of an $800,000 pre-seed round that also included a16z’s scout fund, Sequoia Scout Fund, and Nordic Makers.

In a chat with TechCrunch, Atech’s head of customer experience, Gustav Hugod, said the platform’s workings are quite simple. Users buy a starter hardware kit for whatever they are trying to build from Atech’s site. Then, they open a tab at the site, talk to an AI chatbot, describe the hardware concept they’re trying to build, and the AI tool generates code that helps them build a working prototype. Hugod said the company’s user base is pretty broad right now, “from four-year-olds building cars to a hydrogen synthesis plant that needs precise voltage sensing.”

Typically, building any type of hardware prototype requires decades of experience or hiring pricey but talented engineers. But Hugod said that just as the “accessibility gap” in software has collapsed, so will the difficulty of building in the hardware space. “Hardware, in a democratized world, has to be available to everyone,” he said. The new capital will be used for research and development, marketing, and hiring.
