A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability

A Survey of Safety and Trustworthiness of Deep Neural Networks

Emese Thamó

arXiv, 2018

Safety and Trustworthiness of Deep Neural Networks: A Survey

Emese Thamó

arXiv, 2018

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety

Eric Sax

Deep Neural Networks and Data for Automated Driving

Safely Entering the Deep: A Review of Verification and Validation for Machine Learning and a Challenge Elicitation in the Automotive Industry

Krzysztof Wnuk, Jonas Törnqvist

Increasing the Confidence of Deep Neural Networks by Coverage Analysis

Giulio Rossolini

arXiv, 2021

Two to Trust: AutoML for Safe Modelling and Interpretable Deep Learning for Robustness

Mohammadreza Amirian

Trustworthy AI - Integrating Learning, Optimization and Reasoning, 2021

Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective

Sin Teo

IEEE Transactions on Dependable and Secure Computing

Toward Improved Reliability of Deep Learning Based Systems Through Online Relabeling of Potential Adversarial Attacks

Faissal El Bouanani

Input verification for deep neural networks

Ludwig Tranheden

2018

SpecAttack: Specification-Based Adversarial Training for Deep Neural Networks

Fabian Bauer-Marquart

arXiv, 2021

A Comprehensive Evaluation Framework for Deep Model Robustness

Xinghai Gao

2021

Analyzing Influence of Robustness of Neural Networks on the Safety of Autonomous Vehicles

Igor Kozin

Proceedings of the 31st European Safety and Reliability Conference (ESREL 2021), 2021

Checking Robustness of Representations Learned by Deep Neural Networks

Henryk Maciejewski

Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track, 2021

Self-Checking Deep Neural Networks in Deployment

David Rosenblum

2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), 2021

SpecRepair: Counter-Example Guided Safety Repair of Deep Neural Networks

Fabian Bauer-Marquart

Lecture Notes in Computer Science, 2022

Verification of Neural Networks: Challenges and Perspectives in the AIDOaRt Project

Katiuscia Zedda

Zenodo, 2023

Are Self-Driving Cars Secure? Evasion Attacks Against Deep Neural Networks for Steering Angle Prediction

Alesia Chernikova

2019 IEEE Security and Privacy Workshops (SPW), 2019

Safety of Machine Learning Systems in Autonomous Driving

Fadi Al Khoury

2017

Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

Ambra Demontis

Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems

Sophia Karagiorgou

Sensors

TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks

Abdullah Hanif

2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS)

Assurance Case Patterns for Cyber-Physical Systems with Deep Neural Networks

Ramneet Kaur

Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops, 2020

Verifiably Robust Neural Networks

Duane S. Boning

2019

Establishing Safety Criteria for Artificial Neural Networks

Tim Kelly

Lecture Notes in Computer Science, 2003

Risk assessment of the result of AI algorithms at runtime

Praveen Kumar

Thesis, 2021

Training Adversarial Agents to Exploit Weaknesses in Deep Control Policies

Richard Bowden

2020 IEEE International Conference on Robotics and Automation (ICRA)

DeepOpt: Scalable Specification-based Falsification of Neural Networks using Black-Box Optimization

Fabian Bauer-Marquart

arXiv, 2021

Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles

Kejie Lu

IEEE Internet of Things Journal, 2021

Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment

Raul Orduna Urrutia

arXiv, 2020

Cert-RNN: Towards Certifying the Robustness of Recurrent Neural Networks

Tianyu Du

Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021

Security Matters: A Survey on Adversarial Machine Learning

Guofu Li

2018

Adversarial Control of Neural Network Policies

Colin Shea-Blymyer

2020
