Tech
Apple to pay $250M to settle lawsuit over Siri’s delayed AI features
Apple has agreed to pay $250 million to settle a class-action lawsuit over how it marketed its AI features ahead of the launch of the iPhone 16. The Financial Times was the first to report the news.
The lawsuit alleged that Apple exaggerated the breadth of features Apple Intelligence would bring, including a significantly upgraded version of its assistant, Siri. According to the complaint, the company created the impression that advanced AI capabilities would reach users sooner than they actually did, overstating both the readiness and the functionality of these features, particularly the promised improvements to Siri, which have yet to fully materialize.
As a result, the complaint claims, people who bought the iPhone 15 or iPhone 16 believed they were paying for cutting-edge AI tools that were not actually available at the time of purchase. The lawsuit framed this as false advertising, saying Apple’s marketing influenced buying decisions based on features that were incomplete or delayed.
Apple did not admit wrongdoing in court but chose to settle the case rather than continue litigating. Under the proposed agreement, eligible U.S. customers who purchased the iPhone 15 or iPhone 16 between June 10, 2024, and March 29, 2025, could receive up to $95 per device.
Apple has been touting a more advanced version of Siri ever since unveiling Apple Intelligence at WWDC in 2024. The anticipated updates are expected to help Siri function more like modern AI chatbots such as ChatGPT or Claude. The upgraded experience is rumored to be powered by Google Gemini, though more recent reports suggest the company’s next iPhone operating system may let users choose from a number of third-party large language models.
The settlement arrives ahead of Apple’s annual developer conference on June 8, when the company is expected to preview a version of its AI-enhanced Siri.
When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Five architects of the AI economy explain where the wheels are coming off
Earlier this week, five people who touch every layer of the AI supply chain sat down at the Milken Global Conference in Beverly Hills, where they talked with this editor about everything from chip shortages to orbital data centers to the possibility that the whole architecture that undergirds the tech is wrong.
On stage with TechCrunch: Christophe Fouquet, CEO of ASML, the Dutch company that holds a monopoly on the extreme ultraviolet lithography machines without which modern chips would not exist; Francis deSouza, COO of Google Cloud, who is overseeing one of the biggest infrastructure bets in corporate history; Qasar Younis, co-founder and CEO of Applied Intuition, a $15 billion physical AI company that started in simulation and has since moved into defense; Dimitry Shevelenko, the chief business officer of Perplexity, the AI-native search-to-agents company; and Eve Bodnia, a quantum physicist who left academia to challenge the foundational architecture most of the AI industry takes for granted at her startup, Logical Intelligence. (Meta’s former chief AI scientist, Yann LeCun, signed on as founding chair of its technical research board earlier this year.)
Here’s what the five had to say:
The bottlenecks are real
The AI boom is running into hard physical limits, and the constraints begin further down the stack than many may realize. Fouquet was the first to say it, describing a “huge acceleration of chips manufacturing,” while expressing his “strong belief” that despite all that effort, “for the next two, three, maybe five years, the market will be supply limited,” meaning the hyperscalers — Google, Microsoft, Amazon, Meta — aren’t going to get all the chips they’re paying for, full stop.
DeSouza highlighted how big — and how fast-growing — an issue this is, reminding the audience that Google Cloud’s revenue crossed $20 billion last quarter, growing 63%, while its backlog — the committed but not yet delivered revenue — nearly doubled in a single quarter, from $250 billion to $460 billion. “The demand is real,” he said with impressive calm.
For Younis, the constraint comes primarily from elsewhere. Applied Intuition builds autonomy systems for cars, trucks, drones, mining equipment and defense vehicles, and his bottleneck isn’t silicon — it’s the data that one can only gather by sending machines into the real world and watching what happens. “You have to find it from the real world,” he said, and no amount of synthetic simulation fully closes that gap. “There will be a long time before you can fully train models that run on the physical world synthetically.”
TechCrunch event | San Francisco, CA | October 13-15, 2026
The energy problem is also real
If chips are the first bottleneck, energy is the one looming behind it. DeSouza confirmed that Google is exploring data centers in space as a serious response to energy constraints. “You get access to more abundant energy,” he noted. Of course, even in orbit, it isn’t simple. DeSouza observed that space is a vacuum, which eliminates convection and leaves radiation as the only way to shed heat into the surrounding environment (a much slower and harder-to-engineer process than the air and liquid cooling systems that data centers rely on today). But the company is still treating it as a legitimate path.
The deeper argument deSouza made, somewhat unsurprisingly, was about efficiency through integration. Google’s strategy of co-engineering its full AI stack — from custom TPU chips through to models and agents — pays dividends in watts per flop that a company buying off-the-shelf components simply can’t replicate, he suggested. “Running Gemini on TPUs is much more energy efficient than any other configuration,” because chip designers know what’s coming in the model before it ships, he said. In a world where energy availability is becoming a massive constraint on how far this tech can go, that kind of vertical integration is a major competitive advantage.
Fouquet echoed the point later in the discussion. “Nothing can be priceless,” he said. The industry is in a strange moment right now, investing extraordinary amounts of capital driven by strategic necessity. But more compute means more energy, and more energy has a price.
A different kind of intelligence
While the rest of the industry debates scale, architecture, and inference efficiency within the large language model paradigm, Bodnia is building something very different.
Her company, Logical Intelligence, is built on so-called energy-based models (EBMs), a class of AI that doesn’t predict the next token in a sequence but instead attempts to understand the rules underlying data, in a way she argues is closer to how the human brain actually works. “Language is a user interface between my brain and yours,” she said. “The reasoning itself is not attached to any language.”
Her largest model runs to 200 million parameters — compared to the hundreds of billions in leading LLMs — and she claims it runs thousands of times faster. More importantly, it’s designed to update its knowledge as data changes, rather than requiring retraining from scratch.
For chip design, robotics and other domains where a system needs to grasp physical rules rather than linguistic patterns, she argues EBMs are the more natural fit. “When you drive a car, you’re not searching for patterns in any language. You look around you, understand the rules about the world around you, and make a decision.” It’s an interesting argument and one that’s likely to attract more attention in the coming months, given the AI field is beginning to ask whether scale alone is sufficient.
Agents, guardrails, and trust
Shevelenko spent much of the conversation explaining how Perplexity has evolved from a search product into something it now calls a “digital worker.” Perplexity Computer, its newest offering, is designed not as a tool a knowledge worker uses, but as a staff that a knowledge worker directs. “Every day you wake up and you have a hundred staff on your team,” he said of the opportunity. “What are you going to do to make the most of it?”
It’s a compelling pitch; it also raises obvious questions about control, so I asked them. His answer was granularity. Enterprise administrators can specify not just which connectors and tools an agent can access, but whether those permissions are read-only or read-write — a distinction that matters enormously when agents are acting inside corporate systems. When Comet, Perplexity’s computer-use agent, takes actions on a user’s behalf, it presents a plan and asks for approval first. Some users find the friction annoying, Shevelenko said, but he considers it essential, particularly after joining the board of Lazard, where he said he has found himself unexpectedly sympathetic to the conservative instincts of a CISO protecting a 180-year-old brand built entirely on client trust. “Granularity is the bedrock of good security hygiene,” he said.
Sovereignty, not just safety
Younis offered what may have been the panel’s most geopolitically charged observation, which is that physical AI and national sovereignty are entangled in ways that purely digital AI never was.
The internet initially spread as American technology and faced pushback only at the application layer — the Ubers and DoorDashes — when offline consequences became visible. Physical AI is different. Autonomous vehicles, defense drones, mining equipment, agricultural machines — these manifest in the real world in ways governments can’t ignore, raising questions about safety, data collection, and who ultimately controls systems that operate inside a nation’s borders. “Almost consistently, every country is saying: we don’t want this intelligence in a physical form in our borders, controlled by another country.” Fewer nations, he told the crowd, can currently field a robotaxi than possess nuclear weapons.
Fouquet framed it a little differently. China’s AI progress is real — DeepSeek’s release earlier this year sent something close to a panic through parts of the industry — but that progress is constrained below the model layer. Without access to EUV lithography, Chinese chipmakers cannot manufacture the most advanced semiconductors, and models built on older hardware operate at a compounding disadvantage no matter how good the software gets. “Today, in the United States, you have the data, you have the computing access, you have the chips, you have the talent. China does a very good job on the top of the stack, but is lacking some elements below,” Fouquet said.
The generation question
Near the end of our panel, someone in the audience asked the obvious, uncomfortable question: is all of this going to impact the next generation’s capacity for critical thinking?
The answers were, perhaps unsurprisingly, optimistic, though not naively so. De Souza pointed to the scale of problems that more powerful tools might finally let humanity address. Think neurological diseases whose biological mechanisms we don’t yet understand, greenhouse gas removal, and grid infrastructure that has been deferred for decades. “This should unleash us to the next level of creativity,” he said.
Shevelenko made a more pragmatic point: the entry-level job may be disappearing, but the ability to launch something independently has never been more accessible. “[For] anybody who has Perplexity Computer . . . the constraint is your own curiosity and agency.”
Younis drew the sharpest distinction between knowledge work and physical labor. He pointed to the fact that the average American farmer is 58 years old and that labor shortages in mining, long-haul trucking, and agriculture are chronic and growing — not because wages are too low, but because people don’t want those jobs. In those domains, physical AI isn’t displacing willing workers. It’s filling a void that already exists and is only likely to deepen from here.
A 20-minute pitch wins Indian startup Pronto backing from Lachy Groom
Lachy Groom, one of Silicon Valley’s most closely watched solo investors, decided to back Indian startup Pronto just 20 minutes into his first meeting with its 24-year-old founder.
The meeting, which took place in February through a mutual connection, led to Groom investing $20 million in Pronto as an extension of its Series B round, valuing the startup at $200 million after the investment — double its valuation just over two months earlier, as TechCrunch had previously reported. The deal came together within weeks, bringing the solo investor on board as the Bengaluru-based startup expands to meet growing demand for on-demand home services in India.
Groom said he was drawn to Pronto’s ambition to build what he called the world’s largest platform for organizing domestic labor, starting with India’s vast and largely unstructured workforce. “The work underneath that is genuinely hard, and most attempts in adjacent categories have struggled with the operational discipline,” he said, adding that Pronto founder Anjali Sardana (pictured above) and her team were operating “at a level I haven’t seen elsewhere in this space.”
Before founding Pronto in 2025, Sardana worked at Bain Capital and venture firm 8VC, where she gained early exposure to investing and high-growth startups. The startup connects households with workers for everyday tasks such as cleaning and basic home services.
The introduction was arranged through Paul Hudson, founder of Glade Brook Capital, who connected Groom and Sardana during her trip to San Francisco earlier this year. Glade Brook has backed startups founded by both: Pronto, which Sardana leads, and Physical Intelligence, where Groom is a co-founder. Hudson and Groom have also backed Indian quick-commerce startup Zepto.
Sardana said Groom’s investment approach is heavily founder-driven. “He indexes two things. One is the founder, and that’s 95% of it. If he loves the founder, then he will invest,” she told TechCrunch, adding that the rest comes down to the scale and potential of the business.
Groom’s bet comes as a clutch of startups in India race to build instant home services platforms, a category that is seeing rapid adoption among urban households as more consumers turn to on-demand help for everyday tasks.
The opportunity is significant. A recent Bank of America note, reviewed by TechCrunch, estimates the instant home services market in India could grow into a $15 billion to $18 billion industry by the end of the decade, as companies including Pronto, Snabbit, and Urban Company’s InstaHelp compete for share in the fast-growing category.
Competition is intensifying, with heavy capital inflows and aggressive pricing, particularly to attract first-time users. Bank of America estimates that Snabbit and Urban Company’s InstaHelp each account for about 40% of the market, while Pronto has around a 20% share, even as it scales rapidly. The category is expected to remain “burn-heavy” over the next two to three years.
Despite trailing larger rivals, Pronto has been scaling rapidly, growing from around 18,000 bookings a day to 26,000 in just over a month. The startup is focused on driving repeat usage, betting that turning occasional demand into frequent, habit-driven usage will be key to winning the category, with its top 10% of users accounting for about 40% of bookings.
This growth has also brought challenges, particularly in building out supply. Pronto has expanded its network of service workers to 6,500, up from 1,440 in January. But Sardana said demand continues to outpace supply, making forecasting and capacity management key challenges as the startup grows.
Barry Diller trusts Sam Altman. But ‘trust is irrelevant’ as AGI nears, he says.
Billionaire media mogul Barry Diller doesn’t think OpenAI CEO Sam Altman is untrustworthy, despite recent reporting to the contrary. On stage at The Wall Street Journal’s “Future of Everything” conference this week, Diller vouched for the AI exec, who has been accused by some former colleagues and board members of being manipulative and deceptive at times.
Diller, who is friendly with Altman, was responding to a question about whether people should put their faith in Altman to ensure that artificial intelligence benefits humanity.
In particular, he was asked about the theoretical form of AI known as Artificial General Intelligence, or AGI, which could one day outperform humans on any task.
The media exec, a co-founder of Fox Broadcasting and chairman of IAC and Expedia Group, said that while he believes Altman is sincere in his pursuits, that’s not really the area of concern people should be focused on. Rather, it’s the unknown consequences that will result from AI.
“One of the big issues with AI is it goes way beyond trust,” Diller said. “It may be that trust is irrelevant because the things that are happening are a surprise to the people who are making those things happen. And I’ve spent a lot of time with various people who’ve been in the creation mode of AI, and they have a sense of wonder themselves. So…it’s the great unknown. We don’t know. They don’t know,” he explained.
“We have embarked on something that is going to change almost everything. It is not under-reported. Now, whether these huge investments are going to come through — I couldn’t care less. I’m not invested in it, but progress is going to be made,” Diller added.
Still, the media mogul said he believes that most of the people leading the charge are good stewards, saying he believes that Altman is sincere and “a decent person with good values.” (Diller wouldn’t say which of the AI leaders he thinks is insincere, we should note.)
“But the issue is not their stewardship. The issue is … it’s dealing truly with the unknown. They don’t know what can happen once you get AGI, and we’re close to it. We’re not there yet, but we’re getting closer and closer, quicker and quicker. And we must think about guardrails,” Diller noted.
Plus, he warned, if humans don’t think about guardrails, then the alternative is that “another force, an AGI force, will do it themselves. And once that happens, once you unleash that, there’s no going back,” Diller said.