Posts

Europe regulates AI in order to boost investment in innovation and deep technologies

Europe regulates AI in order to boost investment in innovation and deep technologies: European Innovation Council to invest €1.4 billion in deep technologies in 2025

Next year, the European Innovation Council (EIC) will back deep technologies and high-potential EU start-ups with €1.4 billion, as set out in the EIC Work Programme for 2025. The budget is €200 million higher than in 2024 and aims to foster a more sustainable innovation ecosystem in Europe.

One of the main improvements to the programme is the EIC’s new scheme under the Strategic Technologies for Europe Platform (STEP): with a budget of €300 million, it will finance larger investments in companies aiming to bring strategic technologies to the EU market.

The remaining budget is distributed across the following funding schemes:

EIC Pathfinder – for technology solutions at a technology readiness level (TRL) of up to 4, with the potential to lead to technological breakthroughs.

EIC Transition – an opportunity for consortia that have already achieved research results within the EIC Pathfinder or other Horizon 2020 and Horizon Europe programmes to turn them into innovations ready for market implementation.

EIC Accelerator – support for innovation projects in the final development phase.

Research organisations, universities, SMEs, start-ups, manufacturing companies, sole traders, large companies, small mid-caps and others are eligible for funding under the individual programmes.

Our CEO Dr. Galya Mancheva provided an update on the EU AI Act on Bloomberg TV

Our CEO Dr. Galya Mancheva provided an update on the EU AI Act on Bloomberg TV today.

Key insights include the following:

The ban on AI systems posing unacceptable risks will apply six months after entry into force.

Codes of practice will apply nine months after entry into force.

Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after entry into force.

High-risk systems will have more time to comply with the requirements, as the obligations concerning them will become applicable 36 months after entry into force.

EU AI Act: first regulation on artificial intelligence

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. The framework proposes that AI systems that can be used in different applications be analysed and classified according to the risk they pose to users; the different risk levels will mean more or less regulation.

What Parliament wants in AI legislation

Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.

Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.

AI Act: different rules for different risk levels

The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.
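
To make the tiered structure concrete, here is a minimal illustrative sketch in Python. It is not part of the regulation’s text: the tier names and the example use-case mappings are hypothetical paraphrases of the categories described in the sections below.

```python
from enum import Enum

class RiskTier(Enum):
    """One illustrative label per tier described in the sections below."""
    UNACCEPTABLE = "banned outright"
    HIGH = "assessed before market entry and throughout the lifecycle"
    TRANSPARENCY = "must disclose AI-generated content, comply with copyright"
    MINIMAL = "minimal or no additional obligations"

# Hypothetical mapping of example use-cases to tiers, paraphrasing the article.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "resume screening tool": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```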

Unacceptable risk

AI systems in the unacceptable-risk category are considered a threat to people and will be banned. They include:

Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
Biometric identification and categorisation of people
Real-time and remote biometric identification systems, such as facial recognition

Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

2) AI systems falling into specific areas that will have to be registered in an EU database:

Management and operation of critical infrastructure
Education and vocational training
Employment, worker management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum and border control management
Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities.

Transparency requirements

Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

Disclosing that the content was generated by AI
Designing the model to prevent it from generating illegal content
Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.

Content that is either generated or modified with the help of AI – images, audio or video files (for example deepfakes) – needs to be clearly labelled as AI-generated so that users are aware when they come across such content.
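
In practice, such a label is a disclosure attached to content metadata. The following is a purely hypothetical illustration: the helper and field names below are invented for this sketch, not taken from the Act or any standard.

```python
def label_as_ai_generated(metadata: dict, generator: str) -> dict:
    """Return a copy of media metadata with a machine-readable AI disclosure."""
    labelled = dict(metadata)
    labelled["ai_generated"] = True    # disclosure flag (illustrative)
    labelled["generator"] = generator  # which model or tool produced the content
    return labelled

# Example: tagging an AI-generated image before publication.
print(label_as_ai_generated({"title": "Sunset over Lisbon"}, "example-image-model"))
```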

Supporting innovation

The law aims to offer start-ups and small and medium-sized enterprises opportunities to develop and train AI models before their release to the general public.

That is why it requires that national authorities provide companies with a testing environment that simulates conditions close to the real world.

Next steps

The Parliament adopted the Artificial Intelligence Act in March 2024 and the Council followed with its approval in May 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

The ban on AI systems posing unacceptable risks will apply six months after entry into force
Codes of practice will apply nine months after entry into force
Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after entry into force

High-risk systems will have more time to comply with the requirements, as the obligations concerning them will become applicable 36 months after entry into force.
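
Since all of these deadlines are fixed offsets from a single date, they are easy to compute. Here is a minimal Python sketch, assuming an entry-into-force date supplied as input (1 August 2024 is used only as an example, not taken from the article):

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date, clamping the day to month end."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

# Offsets (in months from entry into force) listed in the article above.
MILESTONES = [
    (6, "Ban on unacceptable-risk AI systems applies"),
    (9, "Codes of practice apply"),
    (12, "General-purpose AI transparency rules apply"),
    (24, "Act becomes fully applicable"),
    (36, "High-risk system obligations apply"),
]

entry_into_force = date(2024, 8, 1)  # example input
for offset, milestone in MILESTONES:
    print(f"{add_months(entry_into_force, offset)}: {milestone}")
```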

Source: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Who does the AI Act apply to?

The legislative framework will apply to both public and private entities inside and outside the EU if the AI system is placed on the Union market or its use affects persons located in the EU.

It can apply to both providers (e.g. the developer of a resume screening tool) and those implementing high-risk AI systems (e.g. a bank that purchased a resume screening tool). Importers of AI systems must also ensure that the foreign supplier has already carried out the relevant conformity assessment procedure, that the AI system bears the European Conformity (CE) marking, and that it is accompanied by the necessary documentation and instructions for use.

In addition, certain obligations are foreseen for providers of general-purpose AI models, including large generative AI models.

Providers of free and open-source models are exempt from most of these obligations. This exemption does not cover providers of general-purpose AI models with systemic risks.

The obligations also do not apply to pre-market research, development and prototyping activities, and the regulation does not apply to AI systems used exclusively for military and defence purposes or for purposes of national security, regardless of the type of entity performing these activities.
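
As a purely illustrative reading of the scope rules just described, a simplified check might look like the following Python sketch. The function and its parameters are hypothetical; the Act’s real scope analysis involves many more definitions and conditions.

```python
def ai_act_applies(placed_on_eu_market: bool,
                   affects_persons_in_eu: bool,
                   exclusively_military_or_national_security: bool,
                   pre_market_research_only: bool) -> bool:
    """Rough, simplified scope check paraphrasing the paragraphs above."""
    # Carve-outs: military/defence/national-security uses and pre-market R&D.
    if exclusively_military_or_national_security or pre_market_research_only:
        return False
    # Otherwise the Act applies if the system reaches the Union market
    # or its use affects persons located in the EU.
    return placed_on_eu_market or affects_persons_in_eu

# Example: a non-EU provider whose system affects persons in the EU.
print(ai_act_applies(False, True, False, False))  # -> True
```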

Which risks will be covered by the new AI rules?

The deployment of AI systems has great potential to deliver societal benefits and economic growth, and to boost EU innovation and global competitiveness. In some cases, however, the specific characteristics of certain AI systems may create new risks related to consumer safety and fundamental rights. Some powerful, widely used AI models could even pose systemic risks.

This leads to legal uncertainty for companies and a potentially slower uptake of AI technologies among businesses and citizens due to a lack of trust. An uncoordinated regulatory response by national authorities would risk fragmenting the internal market.

Why should we regulate the use of artificial intelligence?

The potential benefits of artificial intelligence (AI) for our societies are numerous – from better medical care to better education. Given the rapid technological development of AI, the EU is determined to act as one to make good use of these opportunities.

The EU AI Act is the first comprehensive AI legislation in the world. It aims to address risks to health, safety and fundamental rights. The regulation also protects democracy, the rule of law and the environment.

Although most AI systems involve little or no risk, some AI systems create risks that must be accounted for in order to avoid undesirable outcomes.

For example, the opacity of many algorithms can create uncertainty and hinder the effective enforcement of existing safety and fundamental rights legislation. In response to these challenges, legislative action was needed to ensure a well-functioning internal market for AI systems in which both benefits and risks are adequately addressed.

This includes applications such as biometric identification systems or decisions made by AI that affect important personal interests, for example in employment, education, healthcare or law enforcement.

In front of 500 guests, Dr. Galia Mancheva shared the latest news on the EU’s regulation of artificial intelligence

On April 19th, 2024, Dr. Galia Mancheva, CEO of Ai advy, took part in the only workshop of its kind – by entrepreneurs, for entrepreneurs.

Dr. Mancheva joined the discussion panel “Products and services of the future. Innovations and automation in business development”, sharing the latest news on the EU AI Act.

Watch the panel discussion: https://www.youtube.com/watch?v=IM7vsZXh4Fc

Sales Club Conference (SCC) is a conference focused on the needs, problems and challenges of modern entrepreneurs. At this stage, SCC is not just a popular corporate forum but a growing community of active business leaders and professionals who want to develop their careers.

Could the new EU AI Act stifle genAI innovation in Europe? A new study says it could

“The EU’s AI Act needs to be closely monitored to avoid overburdening innovative AI developers… with unnecessary red tape,” one industry body warns.

 

Europe’s growing generative artificial intelligence (GenAI) landscape is highly competitive, but additional regulation could stifle innovation, a new study has found.

The report by Copenhagen Economics said there are no “immediate competition concerns” in Europe’s generative AI scene that would warrant regulatory intervention. The study comes as EU regulators seek to tighten competition rules in the AI market through the Digital Markets Act, the EU AI Act, and the AI Office.

The Computer & Communications Industry Association (CCIA Europe), which commissioned the independent report, warned that regulatory intervention would be premature, slow down innovation and growth, and reduce consumer choice in generative AI.

“Allowing competition to flourish in the AI market will be more beneficial to European consumers than additional regulation prematurely being imposed, which would only stifle innovation and hinder new entrants,” said Aleksandra Zuchowska, CCIA Europe’s Competition Policy Manager.

“Instead, the impact of new AI-specific rules, such as the EU’s recently adopted AI Act, needs to be closely monitored to avoid overburdening innovative AI developers with disproportionate compliance costs and unnecessary red tape”.

The authors of the study noted that there is a growing number of foundation model developers active in the EU, such as Mistral AI and Aleph Alpha, and recognised that Europe’s GenAI sector is vibrant.

Competition concerns

But while they said there were no competition concerns in the short term, some may emerge in the near future, including uncertainty for GenAI start-ups as they face challenges in scaling up and regulatory costs, such as those arising from the EU AI Act.

The study also warned of potential competition concerns, including limited access to data, partnerships between large companies and smaller ones, and leveraging behaviour by big companies.

France’s Mistral AI is a prime example of a start-up that signed a partnership with Big Tech: in February it made its large language model (LLM) available to Microsoft Azure customers and gave Microsoft a minority stake in the AI company.

The study noted that if a larger partner uses its market power to exercise decisive control over a start-up or gain privileged or exclusive access to its technology, it could harm competition.

But it said partnerships are less likely to create competition concerns if there are no or limited exclusivity conditions and limited privileged access to the start-up’s valuable technological assets.

Source: https://www.euronews.com/next/2024/03/22/could-the-new-eu-ai-act-stifle-genai-innovation-in-europe-a-new-study-says-it-could

Draft UN resolution on AI aims to make it ‘safe and trustworthy’

A new draft resolution aims to close the digital divide on artificial intelligence.

The United States is spearheading the first United Nations resolution on artificial intelligence (AI), aimed at ensuring the new technology is “safe, secure and trustworthy” and that all countries, especially those in the developing world, have equal access.

The draft General Assembly resolution aims to close the digital divide between countries and make sure they are all at the table in discussions on AI — and that they have the technology and capabilities to take advantage of its benefits, including detecting diseases, predicting floods and training the next generation of workers.

The draft recognises the rapid acceleration of AI development and use and stresses “the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems”.

It also recognises that “the governance of artificial intelligence systems is an evolving area” that needs further discussions on possible governance approaches.

Fostering ‘safe and trustworthy’ AI

US National Security Advisor Jake Sullivan said the United States turned to the General Assembly “to have a truly global conversation on how to manage the implications of the fast-advancing technology of AI”.

The resolution “would represent global support for a baseline set of principles for the development and use of AI and would lay out a path to leverage AI systems for good while managing the risks,” he said in a statement to The Associated Press.

If approved, Sullivan said, “this resolution will be an historic step forward in fostering safe, secure and trustworthy AI worldwide.”

The United States began negotiating with the 193 UN member nations about three months ago, spent hundreds of hours in direct talks with individual countries, 42 hours in negotiations and accepted input from 120 nations, a senior US official said.

The resolution went through several drafts and achieved consensus support from all member states this week and will be formally considered later this month, the official said, speaking on condition of anonymity because he was not authorised to speak publicly.

Unlike Security Council resolutions, General Assembly resolutions are not legally binding, but they are an important barometer of world opinion.

A key goal, according to the draft resolution, is to use AI to help spur progress toward achieving the UN’s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children and achieving gender equality.

The draft resolution encourages all countries, regional and international organisations, technical communities, civil society, the media, academia, research institutions and individuals “to develop and support regulatory and governance approaches and frameworks” for safe AI systems.

It warns against “improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law”.

New AI regulations

Lawmakers in the European Union are set to give final approval to the world’s first comprehensive AI rules on Wednesday. Countries around the world, including the US and China, and global groupings like the Group of 20 industrialised nations, are also moving to draw up AI regulations.

The US draft calls on the 193 UN member states and others to assist developing countries in accessing the benefits of digital transformation and safe AI systems. It “emphasises that human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of artificial intelligence systems.”

US Ambassador Linda Thomas-Greenfield recalled President Joe Biden’s address to the General Assembly last year where he said emerging technologies, including AI, hold enormous potential.

She said the resolution, which is co-sponsored by dozens of countries, “aims to build international consensus on a shared approach to the design, development, deployment and use of AI systems,” particularly to support the 2030 UN goals.

The resolution responds to “the profound implications of this technology,” Thomas-Greenfield said, and if adopted it will be “an historic step forward in fostering safe, secure and trustworthy AI worldwide.”

Source: https://www.euronews.com/next/2024/03/13/draft-un-resolution-on-ai-aims-to-make-it-safe-and-trustworthy

EU Policy. Lawmakers approve AI Act with overwhelming majority

Lawmakers in the European Parliament today (13 March) approved the AI Act, rules aimed at regulating AI systems according to a risk-based approach.

Lawmakers in the European Parliament today (13 March) approved the AI Act, rules aimed at regulating AI according to a risk-based approach, with an overwhelming majority. The law passed with 523 votes in favour, 46 against and 49 abstentions.

The act, which needed final endorsement after approval at political and technical level, will now most likely enter into force this May.

Parliament AI Act co-lead, Italian lawmaker Brando Benifei (S&D), described it as “a historic day” in a subsequent press conference.

“We have the first regulation in the world which puts a clear path for a safe and human centric development of AI. We have now got a text that reflects the parliament’s priorities,” he said.

“The main point now will be implementation and compliance of businesses and institutions. We are also working on further legislation for the next mandate such as a directive on conditions in the workplace and AI,” Benifei said.

His counterpart Dragoş Tudorache (Romania/Renew) told the same conference that the EU is looking to partner countries to ensure the rules have a global impact. “We have to be open to work with others on how to promote these rules, and build a governance with like-minded parties,” he said.

Entry into force

Under the AI Act, machine learning systems will be divided into four main categories according to the potential risk they pose to society. The systems that are considered high risk will be subject to stringent rules that will apply before they enter the EU market.

The general-purpose AI rules will apply one year after entry into force, in May 2025, and the obligations for high-risk systems in three years. They will be under the oversight of national authorities, supported by the AI Office inside the European Commission. It will now be up to the member states to set up national oversight agencies. The Commission told Euronews that countries have 12 months to nominate these watchdogs.

In response to today’s vote, Cecilia Bonefeld-Dahl, head of EU trade organisation Digital Europe, said that more needs to be done to keep companies based in Europe.

“Today, only 3% of the world’s AI unicorns come from the EU, with about 14 times more private investment in AI in the US and five times more in China. By 2030, the global AI market is expected to reach $1.5 trillion, and we need to ensure that European companies tap into that without getting tangled up in red tape,” Bonefeld-Dahl said.

Ursula Pachl, Deputy Director General of the European Consumer Organisation (BEUC), welcomed the approval of the law and said it will help consumers to join collective redress claims if they have been harmed by the same AI system.

“Although the legislation should have gone further to protect consumers, the top priority for the European Commission and national governments should now be to show they are serious about the AI Act by implementing it without delay and providing the relevant regulators that will enforce it with the necessary resources,” Pachl said.

Source: https://www.euronews.com/my-europe/2024/03/13/lawmakers-approve-ai-act-with-overwhelming-majority