AI training data comes at a price only Big Tech can afford

By techtost.com · 1 June 2024 · 8 Mins Read

Data is at the heart of today’s advanced AI systems, but it’s increasingly expensive — putting it out of reach for all but the wealthiest tech companies.

Last year, James Betker, a researcher at OpenAI, wrote a post on his personal blog about the nature of generative AI models and the datasets they are trained on. In it, Betker claimed that the training data — not the design, architecture, or any other feature of a model — was the key to increasingly sophisticated, capable AI systems.

“Trained on the same data set for a long time, almost every model converges to the same point,” Betker wrote.

Is Betker right? Is training data the biggest determinant of what a model can do, whether it’s answering a question, drawing human hands, or creating a realistic cityscape?

It’s certainly plausible.

Statistical machines

Production AI systems are basically probabilistic models — a huge pile of statistics. They guess, based on huge numbers of examples, which data makes the most “sense” to place where (e.g., the word “go” before “to the market” in the sentence “I go to the market”). It seems intuitive, then, that the more examples a model has to draw on, the better its performance.
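To make the “pile of statistics” idea concrete, here is a minimal sketch of the principle at its crudest: count which word tends to follow which, then predict the most likely continuation. The corpus and function names are illustrative only — real models learn vastly richer statistics — but the mechanism is the same.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most 'sensible' continuation, if any."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = [
    "I go to the market",
    "I go to the store",
    "we go to the market",
]
model = train_bigrams(corpus)
print(predict_next(model, "go"))   # "to" — always follows "go" in the corpus
print(predict_next(model, "the"))  # "market" — seen twice, vs. "store" once
```

More training examples sharpen these counts, which is the intuition behind “more data, better model” — at least up to the quality caveats discussed below.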

“It seems that the performance gains come from data,” Kyle Lo, senior applied research scientist at the Allen Institute for AI (AI2), an artificial intelligence research nonprofit, told TechCrunch, “at least once you have a stable training setup.”

Lo gave the example of Meta’s Llama 3, a text generation model released earlier this year that outperforms AI2’s own OLMo model, despite being architecturally very similar. Llama 3 was trained on significantly more data than OLMo, which Lo believes explains its superiority in many popular AI benchmarks.

(I’ll point out here that the benchmarks widely used in the AI industry today aren’t necessarily the best gauge of a model’s performance, but, outside of qualitative tests, they’re among the few measures we have to go on.)

This is not to say that training on exponentially larger data sets is a sure path to exponentially better models. The models operate on a “garbage in, garbage out” paradigm, Lo notes, and so curation and data quality matter a lot, perhaps more than sheer quantity.

“It is possible that a small model with carefully designed data will perform better than a large model,” he added. “For example, Falcon 180B, a large model, is ranked 63rd in the LMSYS benchmark, while Llama 2 13B, a much smaller model, is ranked 56th.”

In an interview with TechCrunch last October, OpenAI researcher Gabriel Goh said that higher-quality annotations contributed significantly to improved image quality in DALL-E 3, OpenAI’s text-to-image model, over its predecessor DALL-E 2. “This is the main source of the improvements,” he said. “The text annotations are much better than they were [with DALL-E 2] — it’s not even comparable.”

Many AI models, including DALL-E 3 and DALL-E 2, are trained by having human annotators label data so that a model can learn to correlate those labels with other, observed features of that data. For example, a model fed many cat images with annotations for each breed will eventually “learn” to associate terms such as short tail and short hair with their distinctive visual characteristics.
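The label-to-feature correlation described above can be sketched at its simplest as co-occurrence counting. The “images,” features, and breed labels below are entirely made up for illustration — real vision models learn these associations from pixels, not symbolic tags — but the statistical idea is the same.

```python
from collections import Counter

# Each toy "image" is a set of observed features plus an
# annotator-supplied breed label (all data here is invented).
annotated = [
    ({"short_tail", "pointed_ears"}, "bobtail"),
    ({"short_tail", "round_face"}, "bobtail"),
    ({"long_fur", "flat_face"}, "persian"),
    ({"long_fur", "round_face"}, "persian"),
]

# Count how often each feature co-occurs with each label — the crude
# version of the association a trained model internalizes.
associations = Counter()
for features, label in annotated:
    for feature in features:
        associations[(feature, label)] += 1

print(associations[("short_tail", "bobtail")])  # 2 — strongly associated
print(associations[("short_tail", "persian")])  # 0 — never co-occurs
```

This is also where annotation quality bites: a mislabeled example corrupts the counts directly, which is one reason curation can matter more than raw volume.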

Bad behaviour

Experts like Lo worry that the growing emphasis on large, high-quality training data sets will concentrate AI development among the few players with billion-dollar budgets who can afford to acquire those sets. Significant innovation in synthetic data or fundamental architecture could disrupt the status quo, but neither seems to be on the near horizon.

“Overall, entities that govern content that is potentially useful for AI development have incentives to lock down their material,” Lo said. “And as access to data closes, we’re essentially blessing some early movers to get data and move up the ladder so that no one else has access to data to catch up.”

Indeed, where the race to collect more training data hasn’t led to unethical (and perhaps even illegal) behavior, such as surreptitiously hoarding copyrighted content, it has rewarded tech giants with deep pockets to spend on licensing data.

Generative AI models like OpenAI’s are primarily trained on images, text, audio, video, and other data — some of it copyrighted — scraped from public web pages (including, problematically, pages generated by AI). The OpenAIs of the world claim that fair use protects them from legal retaliation. Many rights holders disagree — but, at least for now, there’s not much they can do to prevent the practice.

There are many, many examples of AI builders acquiring massive data sets through questionable means in order to train their models. OpenAI reportedly transcribed more than a million hours of YouTube videos — without YouTube’s blessing, or the creators’ — to power its flagship GPT-4 model. Google recently broadened its terms of service in part so that it can tap public Google Docs, restaurant reviews on Google Maps, and other online material for its AI products. And Meta is said to have considered risking lawsuits to train its models on IP-protected content.

Meanwhile, companies large and small rely on workers in developing countries paid only a few dollars an hour to create annotations for training sets. Some of these annotators — employed by mammoth startups like Scale AI — work days on end to complete tasks that expose them to graphic depictions of violence and gore, with no benefits or guarantees of future gigs.

Rising costs

In other words, even the aboveboard data deals aren’t exactly fostering an open and fair AI ecosystem.

OpenAI has spent hundreds of millions of dollars licensing content from news publishers, media libraries, and more to train its AI models — a budget far larger than that of most academic research groups, nonprofits, and startups. Meta went so far as to weigh a takeover of publisher Simon & Schuster for the rights to e-book excerpts (eventually, Simon & Schuster sold to private equity firm KKR for $1.62 billion in 2023).

With the market for AI training data expected to grow from about $2.5 billion now to nearly $30 billion within a decade, data brokers and platforms are rushing to charge top dollar — in some cases over the objections of their user bases.
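For scale, a back-of-envelope check on that projection — using only the figures cited above — shows what annual growth rate it implies:

```python
# Implied compound annual growth rate (CAGR) for the cited projection:
# ~$2.5 billion today growing to ~$30 billion within a decade.
start, end, years = 2.5e9, 30e9, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~28.2% per year
```

A market compounding at roughly 28% a year for ten years helps explain why platforms sitting on large data troves are rushing to monetize them now.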

Stock media library Shutterstock has inked deals with AI vendors worth between $25 million and $50 million, while Reddit claims to have made hundreds of millions from licensing data to organizations like Google and OpenAI. Few platforms with abundant data accumulated organically over the years haven’t signed deals with prolific AI developers, it seems — from Photobucket to Tumblr to Q&A site Stack Overflow.

The data is the platforms’ to sell — at least depending on which legal arguments you believe. But in most cases, users don’t see a penny of the proceeds. And it hurts the wider AI research community.

“Smaller players will not be able to afford these data licenses and therefore will not be able to develop or study AI models,” Lo said. “I am concerned that this could lead to a lack of independent scrutiny of AI development practices.”

Independent efforts

If there is a ray of sunshine through the gloom, it is the few independent, nonprofit efforts to create massive data sets that anyone can use to train a generative AI model.

EleutherAI, a nonprofit grassroots research group that started as a loose Discord collective in 2020, is working with the University of Toronto, AI2, and independent researchers to create The Pile v2, a set of billions of text snippets sourced mostly from the public domain.

In April, the AI startup Hugging Face released FineWeb, a filtered version of Common Crawl — the eponymous dataset maintained by the nonprofit organization Common Crawl, consisting of billions upon billions of web pages — that Hugging Face claims improves model performance on many benchmarks.

Some efforts to release open training datasets, such as the LAION team’s image sets, have run up against copyright, data privacy, and other equally serious ethical and legal challenges. But some of the most dedicated data curators are committed to doing better. The Pile v2, for example, removes problematic copyrighted material found in its predecessor dataset, The Pile.

The question is whether any of these open-source efforts can hope to keep pace with Big Tech. Since data collection and curation remains a matter of resources, the answer is probably no — at least not until some research breakthrough levels the playing field.
