Industrial applications of artificial intelligence and big data

The deployment of artificial intelligence (AI) is critical for the success of small and medium-sized enterprises (SMEs) in the EU. In industrial sectors in particular, AI solutions are becoming ever more important as they help to optimise production processes, predict machinery failures and develop more efficient smart services. European industry can also harness big data and the smart use of ICT to enhance productivity and performance, and pave the way for innovation.

Critical industrial applications of AI for SMEs

We launched a study to explore the most critical AI applications in order to accelerate their uptake by SMEs within strategic European value chains. SMEs struggle more than large companies to keep up with the pace of digital transformation and industrial transition in general. They face specific challenges that could hamper wide AI adoption, reducing the overall economic benefits for the European economy.

The study finds that there is a sound base of existing EU and national policy and initiatives that promote the uptake of advanced technologies. Yet, the key to success is to maintain policy focus on strategic priorities and increase coordination among them. See the reports on artificial intelligence for more insights.

Reports on Artificial Intelligence – critical industrial applications

Background

Artificial intelligence (AI) is now a priority for businesses, but also for policymakers, academic research institutions and the broader public. AI techniques are expected to bring benefits for governments, citizens and businesses, including in the fight against Covid-19, enabling resilience and improving green, sustainable growth. At the same time, AI has the potential to disrupt and possibly displace business models as well as impact the way people live and work.

The study finds that while AI’s incremental GDP impact is initially moderate (up to 1.8% of additional cumulative GDP growth by 2025), there is significant potential in the longer term (up to 13.5% of cumulative GDP growth by 2030), with disparities between regions and industries. However, the potential of AI will only fully materialise if European businesses, and in particular SMEs, are properly supported in their AI transition and grasp the competitive advantages it can provide.

AI is likely to have the largest economic impact on:

  • manufacturing and the Industrial Internet of Things – IIoT, with overall AI impact potential in Europe of up to €200 billion by 2030
  • mobility, with AI impact potential of €300 billion
  • smart health, with AI impact potential of €105 billion

Foresight analysis of the effects of AI and automation technologies on the European labour market demonstrates significant effects in at least four ways:

  1. labour substitution (with capital) is likely to displace parts of the workforce
  2. investment in AI and AI-enabled product and service innovation may create new direct jobs
  3. wealth creation may create positive spillover effects for the economy
  4. AI could enable higher participation in global flows (data and trade), creating additional jobs

These topics were initially debated at the conference ‘A European perspective on Artificial Intelligence: Paving the way for SMEs’ AI adoption in key industrial value chains’, held in Brussels in February 2020 with over 200 stakeholders.

Business-to-business big data sharing and access

In spite of huge economic potential (see below), data sharing between companies has not taken off at sufficient scale. The Commission seeks to identify and address any undue hurdles hindering data sharing and the use of privately-held data by other companies, as announced in the February 2020 Communication, ‘A European strategy for data’.

On business-to-business (B2B) data sharing, we are deploying two big data pilot projects to explore the innovation potential and innovative business models created by sharing data between data-producing/controlling entities and third-party businesses, notably SMEs. These pilot projects are being carried out in two strategic value chains: smart health (where the aim is to use data on diabetes from healthcare providers) and automotive (where sharing in-vehicle data produced by connected vehicles will be examined). Both projects are part of the ‘Big data and B2B platforms: the next frontier for Europe’s industry and enterprises’ study being carried out from 2019 to 2021.

Background

By harnessing the intelligence of big data and digital platforms, European industries can enhance productivity and performance, increase profitability, strengthen their competitive advantage, reduce risk, and pave the way for innovation. According to the Big data and B2B platforms report by the Strategic Forum on Digital Entrepreneurship, industrial companies are expected to achieve cost reductions of 3.6% per year over the next five years by basing business decisions on big data analytics. The European big data economy is expected to almost triple in size by 2025, reaching an estimated €829 billion, or 5.8% of EU GDP.
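As a rough illustration (our arithmetic, not a figure from the report), a 3.6% annual cost reduction compounds over the five-year horizon rather than simply adding up:

```python
# Illustrative arithmetic only: how a 3.6% annual cost reduction
# compounds over five years (the figures quoted above).
annual_reduction = 0.036
years = 5

# Fraction of today's cost base that remains after compounding.
remaining = (1 - annual_reduction) ** years
cumulative_saving = 1 - remaining

print(f"Costs remaining after {years} years: {remaining:.1%}")
print(f"Cumulative cost reduction: {cumulative_saving:.1%}")
```

Because the reduction compounds on an ever-smaller base, the five-year saving (roughly 16.7%) is slightly less than a naive 5 × 3.6% = 18%.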

Source: https://ec.europa.eu/growth/industry/policy/advanced-technologies/industrial-applications-artificial-intelligence-and-big-data_en

Regulatory framework proposal on Artificial Intelligence

The Commission is proposing the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, the proposal seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs). 

The proposal is part of a wider AI package, which also includes the updated Coordinated Plan on AI. Together they guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.

Why do we need rules on AI?

The proposed AI regulation ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can be used to solve many societal challenges, certain AI systems create risks that need to be addressed to avoid undesirable outcomes. 

For example, it is often not possible to find out why an AI system has made a decision or prediction and reached a certain outcome. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.

Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.

The proposed rules will:

  • address risks specifically created by AI applications
  • propose a list of high-risk applications
  • set clear requirements for AI systems used in high-risk applications
  • define specific obligations for users and providers of high-risk AI applications
  • propose a conformity assessment before the AI system is put into service or placed on the market
  • propose enforcement after such an AI system is placed on the market
  • propose a governance structure at European and national level

A risk-based approach

(Illustration: the risk pyramid; source: digital-strategy.ec.europa.eu)

Unacceptable risk: All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk; 
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams); 
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan); 
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market: 

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes; 
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance; 
  • Clear and adequate information to the user; 
  • Appropriate human oversight measures to minimise risk; 
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high-risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back. 

Minimal risk: The proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category, where they represent minimal or no risk. 
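For readers who think in code, the four tiers above can be summarised as a simple lookup. This is purely an illustrative sketch of the taxonomy, not a structure defined anywhere in the proposal; the tier names and example use cases are taken from the text above.

```python
# Illustrative sketch of the proposal's four-tier, risk-based approach.
# Tiers and example use cases come from the text above; the data
# structure itself is invented for illustration.
RISK_TIERS = {
    "unacceptable": ["social scoring by governments",
                     "toys encouraging dangerous behaviour"],
    "high": ["critical infrastructure", "exam scoring",
             "CV-sorting for recruitment", "credit scoring"],
    "limited": ["chatbots"],  # specific transparency obligations apply
    "minimal": ["AI-enabled video games", "spam filters"],
}

def obligations(tier: str) -> str:
    """Return the headline obligation attached to each risk tier."""
    return {
        "unacceptable": "banned",
        "high": "strict obligations before market entry",
        "limited": "transparency obligations",
        "minimal": "free use",
    }[tier]

print(obligations("high"))
```

The key design point of the proposal mirrored here is that obligations attach to the *use case*, not to the underlying technology.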

How does it all work in practice for providers of high risk AI systems?

Once the AI system is on the market, authorities are in charge of the market surveillance, users ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning.

Future-proof legislation

As AI is a fast evolving technology, the proposal is based on a future-proof approach, allowing rules to adapt to technological change. AI applications should remain trustworthy even after they have been placed on the market. This requires ongoing quality and risk management by providers. 

Next steps

Following the Commission’s proposal in April 2021, the regulation could enter into force in the second half of 2022, followed by a transitional period. During this period, standards would be mandated and developed, and the governance structures set up would become operational. The second half of 2024 is the earliest the regulation could become applicable to operators, with the standards ready and the first conformity assessments carried out.

Source: https://digital-strategy.ec.europa.eu/

Artificial intelligence: stock-taking and way forward during the second AI Alliance Assembly

Tomorrow, the second AI Alliance Assembly will take place. This full day of debate, on topics from the use of AI against coronavirus to biometric identification, will contribute to the future policy and legislation in the field of AI to create an ecosystem of excellence and trust.

Executive Vice President Margrethe Vestager said:

We want to develop European AI with clear rules and innovative solutions to boost our economic growth and societal welfare. This event is a great opportunity to deepen the debate with a view to our upcoming proposal next year.

Commissioner for Internal Market, Thierry Breton, who will deliver an introductory keynote speech, said:

Europe has a strong position in AI research, but we need to increase our efforts to remain at the cutting edge of industrial developments by increasing support for research, deployment and investment in AI. We need to leverage the wealth of industrial data that Europe is generating to stimulate AI made in Europe that respects our rules and values. That will be our competitive advantage.

The event will build on the results of a public consultation on the Commission White Paper, to which over 1,250 public and private stakeholders provided feedback. It will bring together the members of the European AI Alliance, a multi-stakeholder forum launched in the frame of the European AI Strategy and currently counting over 4,000 members. The High-Level Expert Group on AI will also discuss its finalised work on ethics, policy and investment recommendations.

Artificial Intelligence: first quantitative study of its kind finds uptake by businesses across Europe is on the rise

The European Commission has published the first quantitative overview on the uptake of Artificial Intelligence (AI) technologies among European enterprises. This study will help monitor the adoption of AI in Member States and further assess the challenges faced by enterprises, for their internal organisation and externally.

AI uptake across European enterprises

The robust survey found that four in ten (42%) enterprises have adopted at least one AI technology, with a quarter of them having already adopted at least two. Almost twice the proportion of large enterprises (39%) use two or more AI technologies compared to micro (21%) and small enterprises (22%). A total of 18% have plans to adopt AI in the next two years, while 40% of the enterprises participating do not use AI, nor do they plan to in the future. Overall awareness of AI amongst companies is however high across the EU, standing at 78%.

Challenges to AI technology adoption across Europe

The study also identified three key internal barriers that enterprises face when adopting AI technologies: 57% experienced difficulties in hiring new staff with the right skills, just over half (52%) said the cost of adopting AI technology was a barrier for their enterprise, and the cost of adapting operational processes was the third key issue (49%). Externally, enterprises cited liability for potential damages (33%), data standardisation (33%) and regulatory obstacles (29%) as the major challenges to AI adoption, suggesting that reducing uncertainty in these areas would help.

Next steps

The “European enterprise survey on the use of technologies based on artificial intelligence” will be used to monitor the adoption of AI across Member States and to assess the obstacles and barriers to the use of AI. In addition, it will present an overview of AI-related skills in the workforce. It will also help the Commission shape future policy initiatives in the field of AI.

Background

The study was carried out for the European Commission by the market research company Ipsos together with iCite. A robust survey instrument was designed and fielded in EU Member States, as well as Norway, Iceland and the UK. A total of 9,640 enterprises took part between January and March 2020. The five key performance indicators measured by the survey were AI awareness, adoption, sourcing, and external and internal obstacles to adoption. The study used Computer Assisted Telephone Interviewing to obtain representative country estimates.

Artificial intelligence has become an area of strategic importance and a key driver of economic development. It can bring solutions to many societal challenges from treating diseases to minimising the environmental impact of farming. However, socio-economic, legal and ethical impacts have to be carefully addressed.

Source: https://digital-strategy.ec.europa.eu/

Towards a vibrant European network of AI excellence

The first European Network of Artificial Intelligence (AI) Excellence Centres held a kick-off meeting last week to set the tone for future collaboration, under the motto of “the whole is more than the sum of its parts.”

New network of AI excellence centres to drive up collaboration in research across Europe

Five projects have been selected to form the network, following a call launched in July 2019 to bring together world-class researchers and establish a common approach, vision and identity for the European AI ecosystem.

What will the European Network of AI Excellence Centres do?

  • Support and make the most of the AI talent and excellence already available in Europe;
  • Foster exchange of knowledge and expertise, and attract and maintain these talents;
  • Further develop collaboration between the network and industry;
  • Foster diversity and inclusion;
  • Develop a unifying visual identity.

The 5 projects making up the Network

4 Research and Innovation Actions to mobilise the best researchers on key AI topics.

  • AI4Media: focuses on advancing AI to serve media, to make sure that the European values of ethical and trustworthy AI are embedded in future AI deployments, and to reimagine AI as a beneficial technology in the service of society and media.
  • ELISE: invites all ways of reasoning, considering all types of data applicable for almost all sectors of science and industry, while being aware of data safety and security, and striving for explainable and trustworthy outcomes.
  • HumanE-AI-Net: supports technologies for human-level interaction, by providing new abilities to perceive and understand complex phenomena, to individually and collectively solve problems, and to empower individuals with new abilities for creativity and experience.
  • TAILOR: builds an academic-public-industrial research network to provide the scientific basis for Trustworthy AI, combining learning, optimization and reasoning to produce AI systems that incorporate safeguards for safety, transparency and respect for human agency and expectations.

1 Coordination and Support Action

  • VISION: to foster exchange between the selected projects and other relevant initiatives, ensuring synergy and overcoming fragmentation in the European AI community.

Funding

The Commission invested €50m under the current Horizon 2020 programme, after an initial investment of €20m for the creation of AI4EU, the AI-on-Demand-Platform that allows the exchange of AI tools and resources across Europe.

Next steps

The Network is the foundation of a larger future initiative through Horizon Europe, which is one cornerstone of the “ecosystem of excellence“ set out in the Commission’s White Paper on AI, and is a result of the Commission’s long-term vision to unify the European AI community and make Europe a powerhouse of AI. Priority shall be given to the development of PhD programmes, integration of AI in education programmes and not just in ICT related courses, and setting up internships.

Guidelines for military and non-military use of Artificial Intelligence

Artificial Intelligence must be subject to human control, allowing humans to correct or disable it in case of unforeseen behaviour, say MEPs.

The report, adopted on Wednesday with 364 votes in favour, 274 against and 52 abstentions, calls for an EU legal framework on AI with definitions and ethical principles, including for its military use. It also calls on the EU and its member states to ensure that AI and related technologies are human-centred (i.e. intended for the service of humanity and the common good).

Military use and human oversight

MEPs stress that human dignity and human rights must be respected in all EU defence-related activities. AI-enabled systems must allow humans to exert meaningful control, so they can assume responsibility and accountability for their use.

The use of lethal autonomous weapon systems (LAWS) raises fundamental ethical and legal questions on human control, say MEPs, reiterating their call for an EU strategy to prohibit them as well as a ban on so-called “killer robots”. The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgement, in line with the principles of proportionality and necessity.

The text calls on the EU to take a leading role in creating and promoting a global framework governing the military use of AI, alongside the UN and the international community.

AI in the public sector

The increased use of AI systems in public services, especially healthcare and justice, should not replace human contact or lead to discrimination, MEPs assert. People should always be informed if they are subject to a decision based on AI and be given the option to appeal it.

When AI is used in matters of public health, (e.g. robot-assisted surgery, smart prostheses, predictive medicine), patients’ personal data must be protected and the principle of equal treatment upheld. While the use of AI technologies in the justice sector can help speed up proceedings and take more rational decisions, final court decisions must be taken by humans, be strictly verified by a person and be subject to due process.

Mass surveillance and deepfakes

MEPs also warn of threats to fundamental human rights and state sovereignty arising from the use of AI technologies in mass civil and military surveillance. They call for public authorities to be banned from using “highly intrusive social scoring applications” (for monitoring and rating citizens). The report also raises concerns over “deepfake technologies” that have the potential to “destabilise countries, spread disinformation and influence elections”. Creators should be obliged to label such material as “not original” and more research should be done into technology to counter this phenomenon.

Quote

Rapporteur Gilles Lebreton (ID, FR) said: “Faced with the multiple challenges posed by the development of AI, we need legal responses. To prepare the Commission’s legislative proposal on this subject, this report aims to put in place a framework which essentially recalls that, in any area, especially in the military field and in those managed by the state such as justice and health, AI must always remain a tool used only to assist decision-making or help when taking action. It must never replace or relieve humans of their responsibility”.

Investing in AI for manufacturing

The European Commission welcomes proposals to exploit the potential of AI and boost digital technologies in the manufacturing sector.

Manufacturing processes and products can benefit from advanced digital technologies and state-of-the-art Artificial Intelligence (AI) solutions. Integrating AI into the various stages of a production process can not only supply markets with better, more cost-effective goods but also improve the conditions and quality of human labour.

A PwC paper estimates the potential value from the exploitation of AI at $15.7 trillion in 2030, through impacts on productivity, personalisation of products, better use of time, and quality improvements; an expected 55% of the GDP gains from AI would come from labour productivity improvements. A study by McKinsey gives examples of how AI can be used in manufacturing, such as predictive maintenance, cost reduction, automated testing, improved product quality and supply chain management. Microsoft produced a report in May 2019 quoting figures that AI will add $3.7 trillion to the manufacturing sector by 2035.
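To make the predictive-maintenance use case mentioned above concrete, here is a deliberately minimal sketch: flag a machine for inspection when a rolling average of its sensor readings drifts above a threshold. The rule, the threshold and the sample readings are all invented for illustration; production systems use far richer statistical or learned models.

```python
# Minimal predictive-maintenance sketch: flag a machine when a rolling
# average of its vibration readings exceeds a threshold.
# Threshold and sample data are invented for illustration only.
from statistics import mean

THRESHOLD = 0.8   # hypothetical vibration limit (arbitrary units)
WINDOW = 3        # rolling-average window size

def needs_inspection(readings, window=WINDOW, threshold=THRESHOLD):
    """True if any rolling average of `window` readings exceeds the threshold."""
    return any(
        mean(readings[i:i + window]) > threshold
        for i in range(len(readings) - window + 1)
    )

healthy = [0.2, 0.3, 0.25, 0.3, 0.28]
degrading = [0.3, 0.5, 0.7, 0.9, 1.1]
print(needs_inspection(healthy))
print(needs_inspection(degrading))
```

The economic point is that intervening when the trend crosses the threshold, rather than after failure, is what converts sensor data into the cost reductions the reports describe.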

As digital transformation is already affecting the greatest share of European industry, the European Commission aims to support capitalising on its potential to the fullest, including the potential of AI. Two recently launched calls (ICT-38-2020 and DT-ICT-03-2020) specifically address AI in manufacturing through actions of research, innovation and experimentation:

Artificial Intelligence for manufacturing

Seizing AI opportunities is essential for Europe’s mid and long-term competitiveness. The manufacturing sector provides one of the most relevant examples. The challenge is to integrate AI technologies with advanced manufacturing technologies and systems in order to boost their potential in the manufacturing and process industries to improve the quality of products and processes. At the same time, it is important to consider how humans and AI will work together in optimal complementarity.

The EC is now investing in research and innovation actions that will build on the current state-of-the-art to:

  • Integrate AI technologies in the manufacturing domain
  • Develop innovative concepts and tools of AI application in manufacturing
  • Build on the AI4EU platform, where relevant
  • Promote the effective collaboration between humans and AI
  • Ensure the application of Trustworthy AI
  • Demonstrate technologies and solutions in different manufacturing cases

More on the call ICT-38-2020

Innovation for Manufacturing SMEs (I4MS)

The EU supports the widespread uptake of digital technologies in manufacturing business operations. Since 2013, the I4MS initiative has helped SMEs and mid-caps improve their products, business processes and business models via digital technologies. The call launching the fourth phase of this initiative invites Digital Innovation Hubs to strengthen European SMEs and mid-caps by experimenting with and testing Artificial Intelligence techniques in manufacturing. Experiments should aggregate and analyse data from multiple sources.

In addition to AI, the call invites testing and experimentation actions in other areas, such as:

  • Smart modelling, simulation, and optimisation for digital twins
  • Laser based equipment in advanced and additive manufacturing
  • Cognitive autonomous systems and human-robot interaction

The participation of Digital Innovation Hubs in so far underrepresented regions is particularly encouraged.

More on the call DT-ICT-03-2020

Source: https://digital-strategy.ec.europa.eu/en/news/investing-ai-manufacturing

Have your say: European expert group seeks feedback on draft ethics guidelines for trustworthy artificial intelligence

The High-Level Expert Group on Artificial Intelligence released the first draft of its ethics guidelines for the development and use of artificial intelligence.

Today, the High-Level Expert Group on Artificial Intelligence, which was appointed by the Commission in June, released the first draft of its Ethics Guidelines for the development and use of artificial intelligence (AI). In this document, the independent group of 52 experts coming from academia, business and civil society, sets out how developers and users can make sure AI respects fundamental rights, applicable regulation and core principles, and how the technology can be made technically robust and reliable.

Commission Vice-President for the Digital Single Market Andrus Ansip and Commissioner for Digital Economy and Society Mariya Gabriel thanked the group for their work.

Commission Vice-President for the Digital Single Market Andrus Ansip said:

AI can bring major benefits to our societies, from helping diagnose and cure cancers to reducing energy consumption. But for people to accept and use AI-based systems, they need to trust them, know that their privacy is respected, that decisions are not biased. The work of the expert group is very important in this regard and I encourage everyone to share their comments to help finalise the guidelines.

Commissioner for Digital Economy and Society Mariya Gabriel added:

The use of artificial intelligence, like the use of all technology, must always be aligned with our core values and uphold fundamental rights. The purpose of the ethics guidelines is to ensure this in practice. Since this challenge concerns all sectors of our society, it is important that everybody can comment on and contribute to this work in progress. Please join the European AI Alliance and let us have your feedback!

Update: The draft Ethics Guidelines are now open for comments until 1 February and discussions are taking place through the European AI Alliance, the EU’s multi-stakeholder platform on AI.

In March 2019, the expert group will present their final guidelines to the Commission which will analyse them and propose how to take this work forward. The ambition is then to bring Europe’s ethical approach to the global stage. The Commission is opening up cooperation to all non-EU countries that are willing to share the same values.

Background

Following its European approach to AI published in April 2018, the Commission set up a High-Level Expert Group on AI, consisting of 52 independent experts representing academia, industry and civil society. This first draft of the Ethics Guidelines was prepared through a number of meetings held since June 2018 and takes into account feedback from many discussions through the European AI Alliance. It also follows the announcements of the EU coordinated plan with the Member States, the Declaration of Cooperation on AI and the proposed investment of at least €7 billion in AI from the Horizon Europe and Digital Europe programmes.

Source: https://digital-strategy.ec.europa.eu/en/news/have-your-say-european-expert-group-seeks-feedback-draft-ethics-guidelines-trustworthy-artificial