A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability
Related papers
A Survey of Safety and Trustworthiness of Deep Neural Networks
arXiv (Cornell University), 2018
Safety and Trustworthiness of Deep Neural Networks: A Survey
arXiv (Cornell University), 2018
Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
Deep Neural Networks and Data for Automated Driving
Krzysztof Wnuk, Jonas Törnqvist
Increasing the Confidence of Deep Neural Networks by Coverage Analysis
arXiv (Cornell University), 2021
Two to Trust: AutoML for Safe Modelling and Interpretable Deep Learning for Robustness
Trustworthy AI - Integrating Learning, Optimization and Reasoning, 2021
Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective
IEEE Transactions on Dependable and Secure Computing
Input verification for deep neural networks
2018
SpecAttack: Specification-Based Adversarial Training for Deep Neural Networks
arXiv (Cornell University), 2021
A Comprehensive Evaluation Framework for Deep Model Robustness
2021
Analyzing Influence of Robustness of Neural Networks on the Safety of Autonomous Vehicles
Proceedings of the 31st European Safety and Reliability Conference (ESREL 2021), 2021
Checking Robustness of Representations Learned by Deep Neural Networks
Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track, 2021
Self-Checking Deep Neural Networks in Deployment
2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), 2021
SpecRepair: Counter-Example Guided Safety Repair of Deep Neural Networks
Lecture Notes in Computer Science, 2022
Verification of Neural Networks: Challenges and Perspectives in the AIDOaRt Project
Zenodo (CERN European Organization for Nuclear Research), 2023
2019 IEEE Security and Privacy Workshops (SPW), 2019
Safety of Machine Learning Systems in Autonomous Driving
2017
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
Sensors
TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks
2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS)
Assurance Case Patterns for Cyber-Physical Systems with Deep Neural Networks
Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops, 2020
Verifiably Robust Neural Networks
2019
Establishing Safety Criteria for Artificial Neural Networks
Lecture Notes in Computer Science, 2003
Risk assessment of the result of AI algorithms at runtime
Thesis, 2021
Training Adversarial Agents to Exploit Weaknesses in Deep Control Policies
2020 IEEE International Conference on Robotics and Automation (ICRA)
DeepOpt: Scalable Specification-based Falsification of Neural Networks using Black-Box Optimization
arXiv (Cornell University), 2021
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles
IEEE Internet of Things Journal, 2021
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment
arXiv (Cornell University), 2020
Cert-RNN: Towards Certifying the Robustness of Recurrent Neural Networks
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021
Security Matters: A Survey on Adversarial Machine Learning
2018
Adversarial Control of Neural Network Policies
2020