Industrial applications of artificial intelligence and big data

The deployment of artificial intelligence (AI) is critical for the success of small and medium-sized enterprises (SMEs) in the EU. In industrial sectors in particular, AI solutions are becoming ever more important as they help to optimise production processes, predict machinery failures and develop more efficient smart services. European industry can also harness big data and the smart use of ICT to enhance productivity and performance, and pave the way for innovation.

Critical industrial applications of AI for SMEs

We launched a study to explore the most critical AI applications in order to accelerate their uptake by SMEs within strategic European value chains. SMEs struggle more than large companies to keep up with the pace of digital transformation and industrial transition in general. They face specific challenges that could hamper wide AI adoption and reduce the overall benefits for the European economy.

The study finds that there is a sound base of existing EU and national policies and initiatives that promote the uptake of advanced technologies. The key to success, however, is to maintain policy focus on strategic priorities and to increase coordination among these initiatives. See the reports on artificial intelligence for more insights.

Reports on Artificial Intelligence – critical industrial applications

Background

Artificial intelligence (AI) is now a priority not only for businesses but also for policymakers, academic research institutions and the broader public. AI techniques are expected to bring benefits to governments, citizens and businesses, including in the fight against COVID-19, by enabling resilience and supporting green, sustainable growth. At the same time, AI has the potential to disrupt and possibly displace existing business models, and to change the way people live and work.

The study finds that while AI’s incremental GDP impact is initially moderate (up to 1.8% of additional cumulative GDP growth by 2025), there is significant potential in the longer term (up to 13.5% of cumulative GDP growth by 2030), with disparities between regions and industries. However, the potential of AI will only fully materialise if European businesses, and in particular SMEs, are properly supported in their AI transition and grasp the competitive advantages it can provide.

AI is likely to have the largest economic impact on:

  • manufacturing and the Industrial Internet of Things (IIoT), with an overall AI impact potential in Europe of up to €200 billion by 2030
  • mobility, with an AI impact potential of €300 billion
  • smart health, with an AI impact potential of €105 billion

Foresight analysis of the effects of AI and automation technologies on the European labour market demonstrates significant effects in at least four ways:

  1. labour substitution (with capital) is likely to displace parts of the workforce
  2. investment in AI and AI-enabled product and service innovation may create new direct jobs
  3. wealth creation may create positive spillover effects for the economy
  4. AI could enable higher participation in global flows (data and trade), creating additional jobs

These topics were first debated at the conference ‘A European perspective on Artificial Intelligence: Paving the way for SMEs’ AI adoption in key industrial value chains’, held in Brussels in February 2020 and attended by over 200 stakeholders.

Business-to-business big data sharing and access

In spite of huge economic potential (see below), data sharing between companies has not taken off at sufficient scale. The Commission seeks to identify and address any undue hurdles hindering data sharing and the use of privately-held data by other companies, as announced in the February 2020 Communication, ‘A European strategy for data’.

On business-to-business (B2B) data sharing, we are deploying two big data pilot projects to explore the innovation potential and innovative business models created by sharing data between data-producing/controlling entities and third-party businesses, notably SMEs. These pilot projects are being carried out in two strategic value chains: smart health (where the aim is to use data on diabetes from healthcare providers) and automotive (where sharing in-vehicle data produced by connected vehicles will be examined). Both projects are part of the ‘Big data and B2B platforms: the next frontier for Europe’s industry and enterprises’ study being carried out from 2019 to 2021.

Background

By harnessing the intelligence of big data and digital platforms, European industries can enhance productivity and performance, increase profitability, strengthen their competitive advantage, reduce risk, and pave the way for innovation. According to the ‘Big data and B2B platforms’ report by the Strategic Forum on Digital Entrepreneurship, industrial companies expect to achieve cost reductions of 3.6% per year over the next five years by basing business decisions on big data analytics. The European big data economy is expected to almost triple by 2025, reaching an estimated €829 billion, or 5.8% of EU GDP.
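
To put the 3.6% figure in perspective, here is a quick back-of-the-envelope calculation. It assumes the annual reductions compound over the five years, which is our reading rather than something the report states:

    # Illustrative only: cumulative effect of a 3.6% annual cost reduction,
    # assuming the reductions compound over five years.
    annual_reduction = 0.036
    years = 5
    remaining_cost = (1 - annual_reduction) ** years
    print(f"Cost level after {years} years: {remaining_cost:.1%} of today's")
    # -> about 83.3% of today's costs, i.e. a cumulative reduction of roughly 17%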

Source: https://ec.europa.eu/growth/industry/policy/advanced-technologies/industrial-applications-artificial-intelligence-and-big-data_en

Regulatory framework proposal on Artificial Intelligence

The Commission is proposing the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, the proposal seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs). 

The proposal is part of a wider AI package, which also includes the updated Coordinated Plan on AI. Together they guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.

Why do we need rules on AI?

The proposed AI regulation ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can be used to solve many societal challenges, certain AI systems create risks that need to be addressed to avoid undesirable outcomes. 

For example, it is often not possible to find out why an AI system has made a decision or prediction and reached a certain outcome. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.

Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.

The proposed rules will:

  • address risks specifically created by AI applications
  • propose a list of high-risk applications 
  • set clear requirements for AI systems used in high-risk applications
  • define specific obligations for AI users and providers of high-risk applications
  • propose a conformity assessment before the AI system is put into service or placed on the market
  • propose enforcement after such an AI system is placed on the market
  • propose a governance structure at European and national level

A risk-based approach

Unacceptable risk: All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), which could put the life and health of citizens at risk;
  • Educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI applications in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of the authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market (a simple illustrative sketch of this checklist follows the list):

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes; 
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance; 
  • Clear and adequate information to the user; 
  • Appropriate human oversight measures to minimise risk; 
  • High level of robustness, security and accuracy.
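
These obligations amount to a gate that must be fully cleared before market placement. As a purely illustrative sketch (our own construction, not anything defined in the proposal; all names are hypothetical), the checklist could be modelled like this:

    # Illustrative sketch of a pre-market compliance checklist for a
    # high-risk AI system. The obligation names mirror the list above;
    # the structure itself is our assumption, not part of the proposal.
    OBLIGATIONS = {
        "risk assessment and mitigation",
        "high-quality datasets",
        "activity logging for traceability",
        "detailed technical documentation",
        "clear information to the user",
        "human oversight measures",
        "robustness, security and accuracy",
    }

    def ready_for_market(completed: set[str]) -> bool:
        # Every obligation must be met before the system is placed on
        # the market or put into service.
        return OBLIGATIONS <= completed

    # Example: one obligation still open, so the system may not be marketed.
    done = OBLIGATIONS - {"human oversight measures"}
    print(ready_for_market(done))         # False
    print(ready_for_market(OBLIGATIONS))  # True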

In particular, all remote biometric identification systems are considered high-risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back. 

Minimal risk: The proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category, where they represent minimal or no risk. 
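
Taken together, the four tiers form a simple classification ladder: the higher the risk, the stricter the regime. The sketch below (our own illustration; the example mappings are simplified and hypothetical, not legal classifications) shows one way to model it:

    # Illustrative model of the proposal's four risk tiers; the example
    # mappings are simplified and hypothetical, not legal classifications.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "banned outright"
        HIGH = "strict obligations and conformity assessment"
        LIMITED = "transparency obligations"
        MINIMAL = "free use"

    EXAMPLES = {
        "social scoring by governments": RiskTier.UNACCEPTABLE,
        "CV-sorting software for recruitment": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    for use_case, tier in EXAMPLES.items():
        print(f"{use_case}: {tier.name} ({tier.value})")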

How does it all work in practice for providers of high-risk AI systems?

Once the AI system is on the market, authorities are in charge of market surveillance, users ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning.

Future-proof legislation

As AI is a fast evolving technology, the proposal is based on a future-proof approach, allowing rules to adapt to technological change. AI applications should remain trustworthy even after they have been placed on the market. This requires ongoing quality and risk management by providers. 

Next steps

Following the Commission’s proposal in April 2021, the regulation could enter into force in the second half of 2022, followed by a transitional period. During this period, standards would be mandated and developed, and the governance structures set up would become operational. The second half of 2024 is the earliest the regulation could become applicable to operators, with the standards ready and the first conformity assessments carried out.

Source: https://digital-strategy.ec.europa.eu/