Which risks will be covered by the new AI rules?

The deployment of AI systems has great potential to deliver societal benefits and economic growth, and to boost EU innovation and global competitiveness. In some cases, however, the specific characteristics of certain AI systems may create new risks to consumer safety and fundamental rights. Some powerful, widely used AI models could even pose systemic risks.
This leads to legal uncertainty for companies and a potentially slower uptake of AI technologies among businesses and citizens due to a lack of trust. An uncoordinated regulatory response by national authorities also risks fragmenting the internal market.

Why should we regulate the use of artificial intelligence?

The potential benefits of artificial intelligence (AI) for our societies are numerous, from better medical care to better education. Given the rapid pace of technological development in AI, the EU is determined to act as one to make good use of these opportunities.

The EU AI Act is the first comprehensive AI legislation in the world. It aims to address risks to health, safety and fundamental rights. The regulation also protects democracy, the rule of law and the environment.

Although most AI systems involve little or no risk, some AI systems create risks that must be accounted for in order to avoid undesirable outcomes.

For example, the opacity of many algorithms can create uncertainty and hinder the effective enforcement of existing safety and fundamental rights legislation. In response to these challenges, legislative action was needed to ensure a well-functioning internal market for AI systems in which both benefits and risks are adequately addressed.

This includes applications such as biometric identification systems or decisions made by AI that affect important personal interests, for example in employment, education, healthcare or law enforcement.

Could the new EU AI Act stifle genAI innovation in Europe? A new study says it could

“The EU’s AI Act needs to be closely monitored to avoid overburdening innovative AI developers… with unnecessary red tape,” one industry body warns.


Europe’s growing generative artificial intelligence (GenAI) landscape is highly competitive, but additional regulation could stifle innovation, a new study has found.

The report by Copenhagen Economics found no “immediate competition concerns” in Europe’s generative AI scene that would warrant regulatory intervention. The study comes as EU regulators tighten competition rules in the AI market through the Digital Markets Act, the EU AI Act, and the new AI Office.

The Computer & Communications Industry Association (CCIA Europe), which commissioned the independent report, warned that regulatory intervention would be premature, slow down innovation and growth, and reduce consumer choice in generative AI.

“Allowing competition to flourish in the AI market will be more beneficial to European consumers than additional regulation prematurely being imposed, which would only stifle innovation and hinder new entrants,” said Aleksandra Zuchowska, CCIA Europe’s Competition Policy Manager.

“Instead, the impact of new AI-specific rules, such as the EU’s recently adopted AI Act, needs to be closely monitored to avoid overburdening innovative AI developers with disproportionate compliance costs and unnecessary red tape”.

The authors of the study noted that a growing number of foundation model developers are active in the EU, such as Mistral AI and Aleph Alpha, and recognised that Europe’s GenAI sector is vibrant.

Competition concerns

But while the authors found no competition concerns in the short term, some may emerge in the near future, including uncertainty for GenAI start-ups as they face the challenges of scaling and regulatory costs, such as those stemming from the EU AI Act.

The study also warned of potential competition concerns, including limited access to data, partnerships between large companies and smaller ones, and leveraging behaviour by big companies.

France’s Mistral AI is a prime example of a start-up that has partnered with Big Tech: in February it made its large language model (LLM) available to Microsoft Azure customers and gave Microsoft a minority stake in the company.

The study noted that if a larger partner uses its market power to exercise decisive control over a start-up or gain privileged or exclusive access to its technology, it could harm competition.

But it said partnerships are less likely to create competition concerns if they involve no or limited exclusivity conditions and only limited privileged access to the start-up’s valuable technological assets.

Source: https://www.euronews.com/next/2024/03/22/could-the-new-eu-ai-act-stifle-genai-innovation-in-europe-a-new-study-says-it-could