TechTost
AI

The EU says incoming rules for general purpose AI can evolve over time

By techtost.com | 11 December 2023 | 8 Mins Read

The political agreement reached late Friday by European Union lawmakers on what the bloc bills as the world’s first comprehensive law to regulate artificial intelligence includes powers for the Commission to adapt the EU-wide AI rulebook to keep pace with developments in the cutting-edge sector, the Commission has confirmed.

Lawmakers’ choice of term for regulating the most powerful models behind the current boom in generative AI tools — which the EU law refers to as “general purpose” AI models and systems, rather than industry-preferred terms such as “foundational” or “frontier” models — was also made with a view to future-proofing the incoming law, according to the Commission, with the co-legislators preferring a generic term to avoid a classification that could be tied to a particular technology (e.g. transformer-based machine learning).

“In the future, we may have different technical approaches. And so we were looking for a more general term,” a Commission official suggested today. “Foundation models, of course, are part of general purpose AI models. These are models that can be used for a very wide variety of tasks, they can also be integrated into systems. To give you a concrete example, the general-purpose AI model will be GPT-4, and the general-purpose AI system will be ChatGPT — where GPT-4 is embedded in ChatGPT.”

As we reported earlier, the deal agreed by the bloc’s co-legislators includes a low-risk tier and a high-risk tier for regulating so-called general-purpose AI (GPAI) — such as the models behind the viral boom in generative AI tools like OpenAI’s ChatGPT. The trigger for applying the high-risk rules to generative AI technologies is an initial threshold set out in the legislation.

Also, as we reported on Thursday, the agreed draft of the EU AI law refers to the amount of compute used to train models, measured in floating point operations (FLOPs) — setting the bar for a GPAI to be considered to have “high impact capabilities” at 10^25 FLOPs.

However, during a technical briefing with reporters today to review the political agreement, the Commission confirmed that this is merely an “initial threshold”, and that it will have powers to update the threshold over time through implementing/delegated acts (i.e. secondary legislation). It also said the idea is for the FLOPs threshold to be combined, over time, with “other benchmarks” to be developed by a new expert oversight body set up within the Commission, called the AI Office.

Why was 10^25 FLOPs chosen as the high-risk threshold for GPAIs? The Commission suggests the number was picked with the intention of capturing current-generation frontier models. But it said lawmakers did not discuss, or even consider, whether the threshold would apply to any models currently in play, such as OpenAI’s GPT-4 or Google’s Gemini, during the marathon trilogue talks to agree the rulebook’s final shape.

A Commission official added that, in any case, it will be up to GPAI manufacturers to assess for themselves whether their models meet the FLOP threshold and thus whether they fall under the rules for GPAIs “with systemic risk” or not.

“There are no official sources that say ChatGPT or Gemini or Chinese models are at this level of FLOPs,” the official said during the press briefing. “Based on the information we have, with the 10^25 we have chosen a number that could really capture, a little bit, the frontier models that we have. Whether it is capturing GPT-4 or Gemini or others, we are not here now to assert — because also, in our framework, it is the companies that have to come and self-assess the amount of FLOPs or the computing power they have used. But, of course, if you read the scientific literature, many will point to these numbers as being the most advanced models at the moment. We will see what the companies will assess, because they are the best placed to make that assessment.”

“The rules have not been written with certain companies in mind,” they added. “They’re really written with the idea of defining the threshold — which, by the way, may change because we have the possibility to be empowered to change this threshold on the basis of technological evolution. It could go up, it could go down, and we could also develop other benchmarks that in the future will be the most appropriate ones to benchmark the different moments.”
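The self-assessment the Commission describes can be roughly approximated with the widely used heuristic that training compute is about 6 × parameters × training tokens. The constant 6, the function names, and the model figures below are illustrative assumptions, not anything from the Act itself:

```python
# Rough estimate of training compute versus the AI Act's 10^25 FLOP
# threshold, using the common heuristic C ~= 6 * N * D, where N is the
# parameter count and D the number of training tokens. All model
# figures below are hypothetical, for illustration only.

THRESHOLD_FLOPS = 1e25  # "high impact capabilities" bar in the agreed text

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

def systemic_risk_tier(params: float, tokens: float) -> str:
    """Classify a model against the initial FLOP threshold."""
    flops = training_flops(params, tokens)
    return "high-risk (systemic)" if flops >= THRESHOLD_FLOPS else "lower tier"

# A hypothetical 70B-parameter model trained on 2T tokens:
# 6 * 70e9 * 2e12 = 8.4e23 FLOPs, well below the 1e25 bar.
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> {systemic_risk_tier(70e9, 2e12)}")
```

Under this heuristic, only training runs well beyond today's typical open-weight scales cross the 10^25 line — consistent with the Commission's stated aim of capturing only frontier models.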

GPAIs that fall into the AI Act’s high-risk tier will face ex ante regulatory requirements to assess and mitigate systemic risks — meaning they must proactively test model outputs to shrink risks of real-world (or “reasonably foreseeable”) negative effects on public health, safety, public security, fundamental rights, or on society as a whole.

Lower-tier GPAIs, meanwhile, will face only lighter transparency requirements, including obligations to apply watermarking to generative AI outputs.

The watermarking requirement for GPAIs falls under an article that was in the Commission’s original version of the risk-based framework, presented back in April 2021, which focused on transparency requirements for technologies such as AI chatbots and deepfakes — but it will now also apply generally to general purpose AI systems.

“There is an obligation to try to watermark [generative AI-produced] text based on the latest state-of-the-art technology available,” the Commission official said, detailing the agreed watermarking obligations. “Currently, technologies are much better at watermarking video and audio than text. But what we ask is that this watermarking is done on the basis of state-of-the-art technology — and then we expect, of course, that over time the technology will mature and be as [good] as it can be.”

GPAI model makers must also commit to respecting EU copyright rules, including complying with an existing machine-readable opt-out from text and data mining contained in the EU Copyright Directive — and the AI Act’s carve-out from transparency requirements for open source GPAIs does not extend to exempting them from the copyright obligations, with the Commission confirming the Copyright Directive will continue to apply to open source GPAIs.

Regarding the AI Office, which will play a key role in setting risk classification thresholds for GPAIs, the Commission confirmed that no budget or staffing numbers for the expert body have yet been set. (Though, in the small hours of Saturday, the bloc’s internal market commissioner, Thierry Breton, suggested the EU was ready to welcome “many” new colleagues as it tools up this general-purpose AI watchdog.)

Asked about resources for the AI Office, a Commission official said it will be decided in the future by the EU executive taking “an appropriate and formal decision”. “The idea is that we can create a dedicated budget line for the Office, and that we will also be able to recruit national experts from the Member States if we wish, on top of contract staff and on top of permanent staff. And some of these staff will also be deployed within the European Commission,” they added.

The AI Office will work with a new scientific advisory panel the law also establishes, to help the body better understand the capabilities of advanced AI models for the purpose of regulating systemic risk. “We have identified an important role for a scientific panel to be set up, where the panel can effectively help the AI Office understand whether there are new risks that have not yet been identified,” the official noted. “And, for example, also to flag models that are not captured by the FLOP threshold but that, for certain reasons, could actually give rise to significant risks that governments should look at.”

While the EU executive appears keen to ensure key details of the incoming law get out now, despite there being no final text yet — because work to consolidate what was agreed by the co-legislators during the marathon 38-hour talks that concluded Friday night is the next task facing the bloc over the coming weeks — there could still be some devils lurking in that detail. So it will be worth scrutinizing the consolidated text when it emerges, likely in January or February.

Furthermore, while the full regulation won’t apply for a few years, the EU will push for GPAIs to comply with codes of practice in the meantime — so AI giants will be under pressure to stick as close as possible to the hard regulations coming down the pipe, via the bloc’s AI Pact.

The EU AI Act itself likely won’t be fully in force until some time in 2026 — given that the final text, once compiled (and translated into member states’ languages), must be confirmed by final votes in the Parliament and the Council, after which there is a short period before the text of the law is published in the EU’s Official Journal, and another before it enters into force.

EU lawmakers also agreed a phased approach to the law’s compliance requirements, with 24 months allowed before the high-risk rules for GPAIs apply.

The list of strictly prohibited uses of AI will apply sooner, just six months after the law enters into force — which could mean bans on some “unacceptable risk” uses of AI, such as social scoring or Clearview AI-style selfie scraping for facial recognition databases, going live in the second half of 2024, provided no last-minute opposition to the regulation emerges within the Council or Parliament. (For the full list of prohibited AI uses, read our previous post.)
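The phased application described above reduces to simple date arithmetic. The entry-into-force date in this sketch is a placeholder assumption (the final text had not been published at the time of writing); only the 6- and 24-month offsets come from the agreed phase-in:

```python
# Sketch of the AI Act's phased application dates relative to entry
# into force. The entry-into-force date is a hypothetical placeholder;
# the 6-month (prohibited practices) and 24-month (high-risk GPAI
# rules) offsets follow the phase-in described in the article.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for safety)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

entry_into_force = date(2024, 6, 1)  # assumed, for illustration only

milestones = {
    "prohibited-practices ban applies": add_months(entry_into_force, 6),
    "high-risk GPAI rules apply": add_months(entry_into_force, 24),
}
for label, when in milestones.items():
    print(f"{label}: {when.isoformat()}")
```

With a mid-2024 entry into force, the prohibitions would bite in late 2024 and the GPAI high-risk regime in mid-2026 — matching the article's "second half of 2024" and "sometime in 2026" framing.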
