Tech
This is what some of the world’s largest banks of malware look like stacked as hard drives
Malware research group vx-underground, which says it has the largest collection of malware source code, said in a post on X that its archive of data amounts to about 30 terabytes.
A reply by Bernardo Quintero, founder of VirusTotal, an online service that scans files for malware across multiple antivirus engines at once, said his service has about 31 petabytes of malware samples that users have contributed to date. (A petabyte is ~1,000x larger than a terabyte.)
In both cases, that’s a lot of data. For context, cybersecurity companies, AI researchers, and threat intelligence firms treat repositories like these as critical for training detection models and understanding how attacks evolve. But this had us wondering: What would these enormous datasets actually look like stacked as hard drives one on top of the other and side by side? And how would they compare to, say, the Eiffel Tower?
Someone in our newsroom asked an AI chatbot this question, and it got it incredibly wrong.
Instead, we did some rough back-of-a-napkin math to figure out how tall these data banks would be. Since vx-underground and VirusTotal both have “about” that much data each, “about” is good enough for us in this case.
Let’s say we’re using 1 terabyte capacity internal hard drives, since these are generally designed to be the same physical size to fit inside any computer. These standardized 3.5-inch internal hard drives are 1 inch in height, which for the sake of stacking one on top of the other is really what we want to know here.
We’re also assuming that the hard drives we’re using in this example are exactly 1 terabyte, because in reality the total usable file capacity of a hard drive is generally somewhat less.
Using this online conversion tool, it looks like vx-underground’s 30 terabytes of malware data could fill 30 hard drives stacked on top of one another, reaching 30 inches, or about 2.5 feet tall.
For reference, this reporter is 6 feet tall. (See visual below, and yes, terrible opsec, I know.)
With that same logic, VirusTotal’s 31 petabytes of submitted data would fill 31,744 one-terabyte hard drives (using 1 petabyte = 1,024 terabytes), which stacked on top of one another would reach about 2,645 feet.
The world’s tallest building, the Burj Khalifa in Dubai, is slightly taller at 2,722 feet.
The Eiffel Tower is 1,083 feet tall. By that logic, VirusTotal has about two and a half Eiffel Towers’ worth of data.
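For the curious, the napkin math above can be reproduced in a few lines of Python. The 1-inch drive height and the binary 1 petabyte = 1,024 terabytes conversion (which yields the 31,744-drive figure) are the assumptions baked in:

```python
import math

# Back-of-the-napkin stacking math, assuming 1 TB drives that are
# exactly 1 inch tall (the figures used in the article above).
DRIVE_TB = 1           # capacity per drive, terabytes
DRIVE_HEIGHT_IN = 1.0  # height per 3.5-inch drive, inches
TB_PER_PB = 1024       # binary conversion, matching the 31,744-drive count

def stack_height_feet(terabytes: float) -> float:
    """Height of a stack of drives holding `terabytes` of data, in feet."""
    drives = math.ceil(terabytes / DRIVE_TB)
    return drives * DRIVE_HEIGHT_IN / 12  # 12 inches per foot

vx_underground_ft = stack_height_feet(30)          # 30 drives  -> 2.5 feet
virustotal_ft = stack_height_feet(31 * TB_PER_PB)  # 31,744 drives -> ~2,645 feet

print(f"vx-underground: {vx_underground_ft:.1f} ft")
print(f"VirusTotal:     {virustotal_ft:.0f} ft")
print(f"Eiffel Towers:  {virustotal_ft / 1083:.2f}")
```

Using the decimal convention (1 petabyte = 1,000 terabytes) instead would give 31,000 drives and about 2,583 feet, which doesn’t change the Eiffel Tower comparison much.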

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Tech
What the jury will actually decide in the case of Elon Musk vs. Sam Altman
Nine California jurors are now deliberating over the future of OpenAI, the world-leading artificial intelligence lab.
While the trial exploring Elon Musk’s case against OpenAI’s other cofounders and Microsoft has covered territory ranging from the breakup of the founders in 2018 to Altman’s firing and rehiring in 2023, the jurors will be considering a set of fairly narrow questions.
- Breach of charitable trust — essentially, did OpenAI and cofounders Sam Altman and Greg Brockman violate an agreement with Musk to use his donations to OpenAI for a specific charitable purpose, rather than for the non-profit’s general use?
- Unjust enrichment — did the defendants use Musk’s donations to enrich themselves through OpenAI’s for-profit arm, instead of for charitable purposes?
- Aiding and abetting breach of charitable trust — did Microsoft, through its interactions with OpenAI, know that Musk had placed specific conditions on his donations, and play a significant role in causing harm to Musk?
OpenAI has also made three arguments in its defense that the jury will weigh:
- Statute of limitations — a legal deadline by which a lawsuit must be filed. Here, if OpenAI can prove that any harms to Musk happened before August 5, 2021 for the first count; August 5, 2022 for the second count; and November 14, 2021 for the third count, then his claims will be time-barred.
- Unreasonable delay — Musk, by filing his lawsuit in 2024, delayed his claim in a way that made his request for damages unreasonable.
- Unclean hands — a legal doctrine holding that Musk’s conduct related to his claims against OpenAI was unconscionable and renders them invalid.
If Musk wins out, it could mean the end of OpenAI as a for-profit company, but it’s not entirely clear what would result. Next week, the judge will begin a set of new hearings where lawyers from both sides will debate what the consequences of a verdict in favor of the plaintiffs might be. That process could be rendered moot by a verdict for the defense, however.
Breach of charitable trust
Musk’s attorneys say the defendants clearly understood that Musk wanted to support a non-profit that would ensure the benefits of AI to the world, and prevent it from being controlled by any one organization. In particular, they say a $10 billion investment from Microsoft in 2023 into OpenAI’s for-profit affiliate—the first to happen after the statute of limitations—was the event that turned Musk’s concern into conviction.
That deal, Musk’s lawyers say, was different from previous investments and led to OpenAI’s investors being enriched by the company’s commercial products, at the expense of the charitable mission of AI safety that Musk promoted.
OpenAI’s attorneys have asked every witness to describe specific restrictions put on Musk’s donations, and none have, including his financial adviser Jared Birchall, his chief of staff Sam Teller, or his special adviser Shivon Zilis. They say everyone involved agreed that private fundraising would be required for OpenAI to achieve its goals, and note that Musk himself attempted to launch an OpenAI-affiliated for-profit he would personally control, and later to merge OpenAI into his company Tesla. They also note that the organization’s other donors haven’t said their charitable trust was violated.
Importantly, a forensic accountant hired by OpenAI testified that all of Musk’s donations had been spent by OpenAI well before the key date of August 5, 2021. That is evidence, they argue, that Musk’s donations were already used for their purpose before he brought his lawsuit, extinguishing any charitable trust that may have existed.
Mainly, they insist that the for-profit affiliate that conducts most of OpenAI’s actual activity continues to fulfill the organization’s mission, and has generated nearly $200 billion in equity value to support the non-profit foundation. Notably, Sam Altman argued that providing ChatGPT for free helps fulfill the mission of sharing the benefits of AI with the world.
Unjust enrichment
The plaintiffs point to the multibillion-dollar valuations of stakes held by OpenAI founders like Brockman and Ilya Sutskever, as well as Microsoft itself, as a sign that Musk’s donations were ultimately used for personal benefit, as opposed to supporting the mission of the charity. They argue that the work at OpenAI’s for-profit was commercially focused, while the foundation itself was left essentially dormant, without full-time employees, and, ultimately, not even in control of the for-profit.
OpenAI says all of Musk’s contributions were used by the foundation by 2020, and that equity distributions came well after he left the organization in 2018. Even beforehand, evidence shows the key players agreed that being able to compensate researchers with stock was key to developing AGI, the hypothetical form of AI capable of performing any intellectual task a human can. OpenAI executives maintain that the for-profit’s work meaningfully advanced the foundation’s mission, including safety activities. They say the non-profit board continues to control the for-profit, and instituted new governance controls following “the blip,” when Altman was fired by OpenAI’s non-profit board in 2023 for lack of candor and then rehired just days later.
Aiding and abetting
Musk’s case focused on the events of the blip, when Microsoft CEO Satya Nadella, whose company depended on OpenAI’s tech, was personally involved with helping to bring Altman back and creating a new board to govern OpenAI. They note that Microsoft executives wondered if their commercial agreement might conflict with the non-profit’s goals, and suggest that Microsoft’s commercial priorities led OpenAI away from its mission. They’ve focused attention on a clause in Microsoft’s agreement with OpenAI that gave Microsoft veto rights over major corporate decisions at OpenAI.
Microsoft’s witnesses have insisted that the company’s executives didn’t know of any specific conditions on Musk’s donations despite extensive due diligence, and never vetoed any decision by OpenAI. They note that the company’s investments and compute power allowed OpenAI to achieve its biggest triumphs.
Statute of Limitations
Musk has suggested that his skepticism of his cofounders grew over time, and that he finally decided they had betrayed him in the fall of 2022, when he found out about Microsoft’s plans for the new $10 billion investment that took place in 2023. He wouldn’t file his lawsuit until mid-2024.
OpenAI’s attorneys argue that the terms of that deal were spelled out in a term sheet for a previous fundraising round in 2018, which Musk received and his advisers reviewed, but which Musk said he didn’t read in detail. They also note numerous blog posts and other communications from over the years showing that Musk could have known what OpenAI was doing well before he brought the company to court, including tweets where Musk criticized it years before the suit. Zilis, Musk’s adviser, even voted to approve these transactions as a member of the OpenAI board.
Ultimately, the OpenAI attorneys emphasize that Musk’s formal role in the organization ended in 2018 and his last donations took place in 2020.
Unreasonable delay
OpenAI’s attorneys say the real reason that Musk filed his suit was he realized that he was wrong about OpenAI, after its launch of ChatGPT revolutionized the business of artificial intelligence. They argue that OpenAI has operated under its current structure since its first Microsoft investment in 2018, and that forcing the organization to restructure eight years later is unreasonable.
Unclean hands
There is evidence that Musk was planning his own competing AI efforts while he was still the chair of OpenAI, and hired OpenAI employees to work on AI at Tesla. OpenAI’s attorneys argue that these efforts undermined OpenAI at a time when it was using Musk’s donations to pursue its mission. They noted that Zilis, the mother of three of Musk’s children, didn’t disclose her personal relationship to other OpenAI board members for years. And they argue that Musk withheld his donations in 2017 in an effort to win control of a planned for-profit affiliate of OpenAI. Finally, “Mr. Musk abandoned OpenAI for dead in 2018,” Bill Savitt, OpenAI’s lead attorney, told the jury.
Tech
Elon Musk’s SpaceXAI has been bleeding staff since its merger
Elon Musk’s newly rebranded SpaceXAI is reportedly losing top talent, with more than 50 researchers and engineers departing since February, according to The Information. The exits include key leaders across coding, world models, and Grok voice.
Rivals like Meta and Thinking Machine Labs are reportedly scooping up former staff, with the company’s core pre-training team dwindling to just a handful of people. Since February, at least 11 xAI employees have defected to Meta, according to The Information’s report. At least seven have left to join Mira Murati’s Thinking Machine Labs. TechCrunch has previously reported on 11 of the xAI departures announced directly after the merger, including two co-founders.
SpaceX acquired xAI, a fellow Musk-owned company, in February and has since installed new leadership at the company. Musk renamed the combined company SpaceXAI earlier this month.
The pre-training departures, which followed the exit of team lead Juntang Zhuang, have particularly concerned employees and people close to SpaceXAI, per The Information. Pre-training is the first step to building new AI models, and many have questioned whether the company is still committed to developing leading models.
The report also found that Musk’s culture of extreme work led some staff to leave — something employees across Musk’s companies, including Tesla, have complained about. A source who spoke to The Information said Musk set unrealistic deadlines for training models, which led to cutting corners on Grok.
Of course, several of the exits could have been driven by a desire to cash out.
SpaceX regularly offers tenders so employees can sell vested shares privately. Others might simply feel confident that their equity is close to liquidity, given the company’s blockbuster IPO expectations. Once employees see a financial light at the end of the tunnel, they’re less likely to stay at a company that puts undue pressure on them and may not be building the leading models they want to work on.
TechCrunch has reached out to SpaceX for comment.
Tech
OpenAI says Codex is coming to your phone
Codex is going mobile. The coding tool — which OpenAI launched approximately a year ago — has now been integrated into the ChatGPT app, allowing users to monitor and manage their development workflows remotely.
The new feature lets users see their live Codex environments on any device where it is running. The company announced the changes Thursday; the update, which is currently in preview, is now available to all plans on iOS and Android.
“This is more than the ability to remotely control a single task or dispatch new tasks to your computer,” OpenAI said in a statement. “From your phone, you can work across all of your threads, review outputs, approve commands, change models, or start something new.”
Last month, OpenAI also gave Codex the ability to run in the background in desktop environments — empowering the tool to take care of various tasks autonomously. Earlier this month, the company also introduced a Chrome extension that allows the agent to work in live browser sessions.
In February, Anthropic released a similar feature, Remote Control, which lets users monitor Claude Code’s work from afar.
The flurry of feature releases from both OpenAI and Anthropic speaks to the tense competition between the two over whose agentic coding tool will become the most widely used. Over the past year, Anthropic’s Claude Code has gained in popularity among businesses and tech professionals alike, although both tools continue to be widely used.