Beyond federated learning: On confidentiality-critical machine learning applications in industry
Related papers
A Federated Learning Framework for Enforcing Traceability in Manufacturing Processes
IEEE Access
The plethora of available data in various manufacturing facilities has boosted the adoption of various data analytics methods, which are tailored to a wide range of operations and tasks. However, fragmentation of data, in the sense that chunks of data may be distributed across geographically sparse areas, hampers the generation of better and more accurate intelligent models that would otherwise benefit from the larger quantities of available data derived from operations taking place at different locations of a manufacturing process. Moreover, in regulated industrial sectors, such as the medical and pharmaceutical fields, sector-specific legislation imposes strict criteria and rules for the privacy, maintenance and long-term storage of data. Process reproducibility is often an essential requirement in these regulated sectors, and it can be supported by AI models applied to enforce traceability, auditability and integrity of every initial, intermediate and final piece of data used during AI model training. In this respect, blockchain technologies could also potentially be useful for enabling and enforcing such requirements. In this paper, we present a multi-blockchain-based platform integrated with federated learning functionalities to train global AI (deep learning) models. The proposed platform maintains an audit trail of all information pertaining to the training process using a set of blockchains, in order to ensure the training process's immutability. The applicability of the proposed framework has been validated on three tasks by applying three state-of-the-art federated learning algorithms to an industrial pharmaceutical dataset based on two manufacturing lines, achieving promising results in terms of both generalizability and convergence time.
INDEX TERMS Blockchain, data integrity, federated learning, industry 4.0, pharmaceutical industry.
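The audit-trail idea in this abstract, each training round appended to an immutable, hash-linked log, can be illustrated with a minimal sketch. The function names (`record_round`, `verify`) and the record fields are hypothetical, not taken from the paper's platform, and a single hash chain stands in for the paper's set of blockchains:

```python
import hashlib
import json

def record_round(chain, round_id, payload):
    """Append an audit record whose hash links to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"round": round_id, "payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every link; any tampered record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: entry[k] for k in ("round", "payload", "prev")}
        if entry["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Because each record's hash covers the previous record's hash, altering any stored round invalidates every later link, which is the immutability property the platform relies on.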
Data protection by design in AI? The case of federated learning
Computerrecht, 2021
This article investigates some of the data protection implications of an emerging privacy-preserving machine learning technique, federated machine learning. First, it briefly describes how this technique works and focuses on some of the main security threats it faces. Second, it presents some of the ways in which this technique can facilitate compliance with certain principles of the General Data Protection Regulation, as well as some of the challenges it may pose under the latter.
SURVEY ON FEDERATED LEARNING TOWARDS PRIVACY PRESERVING AI
One of the significant challenges for Artificial Intelligence (AI) and machine learning models is to preserve data privacy and ensure data security. Addressing this problem has led to the application of the Federated Learning (FL) mechanism for preserving data privacy. Preserving user privacy in the European Union (EU) must abide by the General Data Protection Regulation (GDPR); therefore, exploring machine learning models for preserving data privacy must take the GDPR into consideration. In this paper, we present a detailed understanding of federated machine learning and various federated architectures, along with different privacy-preserving mechanisms. The main goal of this survey is to highlight existing privacy techniques and to propose applications of Federated Learning in industry. Finally, we also depict how Federated Learning is an emerging area of future research that would bring a new era in AI and machine learning.
Rabindra Bharati University Journal of Economics, 2024
The importance of data security and privacy is rising in tandem with the need for machine learning models. One potential answer that has recently arisen is federated learning, a decentralised approach to machine learning that enables several entities to work together and construct models without disclosing private information. With an emphasis on its privacy-preserving features, this all-encompassing examination delves into the concepts, methods, and uses of federated learning.
Privacy and Security in Federated Learning: A Survey
Applied Sciences
In recent years, privacy concerns have become a serious issue for companies wishing to protect economic models and comply with end-user expectations. In the same vein, some countries now impose, by law, constraints on data use and protection. Such a context thus encourages machine learning to evolve from a centralized data and computation approach to decentralized approaches. Specifically, Federated Learning (FL) has recently been developed as a solution to improve privacy, relying on local data to train local models, which collaborate to update a global model that improves generalization behavior. However, by definition, no computer system is entirely safe. Security issues, such as data poisoning and adversarial attacks, can introduce bias into model predictions. In addition, it has recently been shown that the reconstruction of private raw data is still possible. This paper presents a comprehensive study concerning various privacy and security issues related to federated learning....
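The collaboration pattern this abstract describes, local models trained on local data and aggregated into a global model, corresponds to federated averaging (FedAvg). The sketch below assumes a linear least-squares model and a size-weighted average; the function names and hyperparameters are illustrative, not from any surveyed system:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear model,
    using only that client's data (X, y)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """One communication round: each client trains locally, then the server
    averages the models weighted by local dataset size (FedAvg)."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    total = sum(sizes)
    return sum(u * (n / total) for u, n in zip(updates, sizes))
```

Note that only model weights cross the network, never raw data, which is the privacy improvement the survey examines; the attacks it covers (poisoning, gradient-based reconstruction) target exactly these exchanged updates.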
Federated Learning with Privacy-preserving and Model IP-right-protection
Machine Intelligence Research
In the past decades, artificial intelligence (AI) has achieved unprecedented success, with statistical models becoming the central entity in AI. However, the centralized training and inference paradigm for building and using these models faces growing privacy and legal challenges. To bridge the gap between data privacy and the need for data fusion, federated learning (FL) has emerged as an AI paradigm for solving data silo and data privacy problems. Based on secure distributed AI, federated learning emphasizes data security throughout the lifecycle, which includes the following steps: data preprocessing, training, evaluation, and deployment. FL preserves data security by using methods such as secure multi-party computation (MPC), differential privacy, and hardware solutions to build and use distributed multi-party machine-learning systems and statistical models over different data sources. Besides data privacy concerns, we argue that the concept of “...
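Of the protection methods this abstract lists, differential privacy is the simplest to sketch. A common recipe in federated aggregation is to clip each client update (bounding sensitivity) and add calibrated Gaussian noise to the average; the function name and default parameters below are illustrative assumptions, and a full deployment would also track the privacy budget:

```python
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Gaussian-mechanism aggregation: clip each client update to L2 norm
    clip_norm, average, then add noise scaled to the per-client sensitivity."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = [
        u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
        for u in updates
    ]
    mean = np.mean(clipped, axis=0)
    # Sensitivity of the mean to one client is clip_norm / n.
    sigma = noise_mult * clip_norm / len(updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

The clipping bounds how much any single participant can move the aggregate, and the noise masks the remaining per-client contribution; larger `noise_mult` means stronger privacy but a noisier global model.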
Privacy and Trust Redefined in Federated Machine Learning
Machine Learning and Knowledge Extraction
A common privacy issue in traditional machine learning is that data needs to be disclosed for training. In situations with highly sensitive data, such as healthcare records, accessing this information is challenging and often prohibited. Fortunately, privacy-preserving technologies have been developed to overcome this hurdle by distributing the computation of training and ensuring data privacy for its owners. Distributing the computation to multiple participating entities, however, introduces new privacy complications and risks. In this paper, we present a privacy-preserving decentralised workflow that facilitates trusted federated learning among participants. Our proof-of-concept defines a trust framework instantiated using decentralised identity technologies being developed under the Hyperledger projects Aries/Indy/Ursa. Only entities in possession of Verifiable Credentials issued from the appropriate authorities are able to establish secure, authenticated communicatio...
Increasing Trust for Data Spaces with Federated Learning
Springer International Publishing eBooks, 2022
Despite the need for data in a time of general digitization of organizations, many challenges still hamper its shared use. Technical, organizational, legal, and commercial issues remain before data can be leveraged satisfactorily, especially when the data is distributed among different locations and confidentiality must be preserved. Data platforms can offer "ad hoc" solutions to tackle specific matters within a data space. MUSKETEER develops an Industrial Data Platform (IDP) including algorithms for federated and privacy-preserving machine learning techniques in a distributed setup, detection and mitigation of adversarial attacks, and a rewarding model capable of monetizing datasets according to their real data value. The platform can offer an adequate response for organizations demanding high security standards, such as industrial companies with sensitive data or hospitals with personal data. From the architectural point of view, trust is enforced in such a way that data never has to leave its provider's premises, thanks to federated learning. This approach can help to better comply with European regulation. All authors have contributed equally and are listed in alphabetical order.
Privacy Protection for Data-Driven Smart Manufacturing Systems
International Journal of Web Services Research, 2017
The Industrial Internet of Things (IIoT) is a new industrial ecosystem that combines intelligent and autonomous machines, advanced predictive analytics, and machine-human collaboration to improve productivity, efficiency and reliability. The integration of industry and IoT creates various attack surfaces and new opportunities for data breaches. In the IIoT context, it will often be the case that data is considered sensitive. This is because data will encapsulate various aspects of industrial operation, including highly sensitive information about products, business strategies, and companies. The transition to more open network architectures and IoT data sharing poses challenges in manufacturing and industrial markets. The loss of sensitive information can lead to significant business losses and cause reputational damage. In this paper, the authors discuss emerging issues related to IIoT data sharing and investigate possible technological solutions to hide sensitive informatio...