Xiaowei Huang at Liverpool University
Office: Ashton Building, Room 222
Phone: (+44) 1517954282
Mobile: (+44) 7831378101
Email: xiaowei.huang at liverpool.ac.uk, xiaowei.huang at live.com
I am Professor of Computer Science at the University of Liverpool. Prior to Liverpool, I worked at the University of Oxford, the University of New South Wales, and the Chinese Academy of Sciences. I am the Head of Artificial Intelligence for the School of Computer Science and Informatics; prior to this, I served as the research lead of the School of Electrical Engineering, Electronics, and Computer Science.
Research Interests:
The research my group is currently conducting spans machine learning, formal methods, and robotics. If you are interested in these areas and want to collaborate with us, please feel free to get in touch. Most of my research publications can be found through my Google Scholar profile.
Specifically, we are interested in analysing autonomous systems -- systems that can learn, adapt, and make decisions by themselves -- in terms of their properties (e.g., safety, robustness, trustworthiness, and security), to understand whether they are applicable to safety-critical applications, and in constructing autonomous systems with these properties satisfied. This may include (but is not limited to)
- verification of neural network-based deep learning on safety and security properties,
- practical analysis techniques (software testing, safety argument, certification, etc) for machine learning techniques,
- interpretation and explanation of deep learning, and
- logic-based approaches for the specification, verification and synthesis of autonomous multi-agent systems.
Currently, the application areas we are addressing include self-driving cars, underwater vehicles, and other robotics applications. We are also interested in various healthcare applications where safety and interpretability are important. The research my group is doing is summarised (incompletely) in the slides as well as in the "Research" tab of this webpage.
I have been co-chairing the AISafety workshops at IJCAI and the SafeAI workshops at AAAI since 2019, and am co-organising the Turing interest group on Neuro-Symbolic AI. I am a senior member of IEEE.
I founded the Autonomous Cyber Physical Systems Laboratory, which is now located at the new Digital Innovation Facility (DIF) Building.
The research has been funded by Dstl, EPSRC, the European Commission, Innovate UK, etc. I have been the PI (or Liverpool PI) for projects valued at more than £10M, and co-I for projects valued at more than £20M. Some brief information can be found here.
☆ I led a team that won the UK-US privacy-enhancing technologies prize challenges in the first stage and a special recognition prize for "Novel Modelling/Design" in the second stage.
Major Ongoing Projects
RobustifAI focuses on foundation models used in the context of human cyber-physical systems (HCPS) -- complex systems that combine computation, networking, humans, and physical processes to monitor and control real-world environments, with applications in many sectors. It aims to develop a rigorous design and deployment methodology tailored for reliable, robust, and trustworthy GenAI.
A project funded by EPSRC through the DSIT Alignment Project, focusing on the development of rare event estimation algorithms, with targeted applications to AI agents with respect to jailbreaks and socially deceptive behaviours.
The following are a few video demos of our research:
For Prospective Students:
I am always looking for PhD students with strong motivation to actively participate in research. There are a few possible ways of receiving a scholarship, for example
- A few Centres for Doctoral Training (CDTs) at Liverpool, including, e.g., the CDT in Distributed Algorithms
- CSC-Liverpool scholarship, which usually has a deadline at the beginning of a year
- Sir Joseph Rotblat Alumni Scholarship
- Duncan Norman Scholarship
- Other scholarship opportunities available at Liverpool
If you have other means of supporting your study, you are also welcome to get in touch.
New Open Positions:
- Postdoctoral Research Associate, with a deadline of 5th February 2026.
Workshop Organisation
- (11/2022) The SafeAI workshop will be held again with AAAI2023. Please submit your papers through the SafeAI Workshop Website
- (09/2021) Organised a workshop "Safety Assurance for Deep Learning in Underwater Robotics" (website), with other relevant information available at SOLITUDE Project Resources website
- (05/2021) AISafety workshop will be held again with IJCAI2021. Please submit your papers through AISafety Website
- (08/2020) SafeAI workshop will be held again with AAAI2021.
- (03/2020) AISafety workshop will be held again with IJCAI2020.
- (08/2019) SafeAI will be held again as a workshop of AAAI2020.
- (08/2019) Organising workshop AI&FM2019 at ICFEM2019, to discuss how to make AI and formal methods (and software engineering) mutually beneficial. It will be on 5th Nov, 2019.
- (02/2019) AISafety will be held as a workshop of IJCAI2019, website: https://www.ai-safety.org/.
- (08/2018) Co-organising an AAAI workshop on AI safety (http://www.safeai2019.org).
Recent News
- (01/2026) One paper accepted to ICLR2026. Congratulations to Xinmiao and other co-authors.
- (11/2025) Two papers accepted to AAAI2026.
- (09/2025) Two papers accepted to NeurIPS2025, and one paper to EMNLP2025.
- (09/2025) Gave a keynote talk on "my road to trustworthy AI" at WAISIE2025.
- (07/2025) We held a workshop on GenAI, "Workshop on General-Purpose AI: Prospects and Risks", at Liverpool on 9th June, as part of the EPSRC project EnnCORE.
- (09/2025) I will deliver a mini-course on "Techniques for Certifying Robustness in Modern Neural Networks" in the "Summer School on Artificial Intelligence and Cybersecurity" in Vienna, organised by TU Wien, Austria.
- (01/2025) Will start a new EU Horizon project, RobustifAI, this year as the coordinator, see the EU announcement.
- (12/2024) Five papers accepted to AAAI2025.
- (09/2024) One paper accepted to NeurIPS2024, congratulations to Zhen.
- (08/2024) Contributions to media discussions: Privacy-Preserving Federated Learning – Future Challenges and Opportunities, Implementation Challenges in Privacy-Preserving Federated Learning, and Beware of Botshit: How Researchers Hope to Fix AI’s BS Issue
- (07/2024) Two papers accepted to ECCV2024 and one paper to IROS2024.
- (06/2024) Our survey paper "A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation" has been accepted to the journal of Artificial Intelligence Review. Thanks and congratulations to all co-authors. This is a sister paper of our survey for the usual deep neural networks.
- (05/2024) One paper accepted by ICML2024 on "Building Guardrails for Large Language Models", which reviews the current guardrails for foundation models and provides our perspectives (multi-disciplinary approach, whole-system thinking, neural-symbolic implementation, and verification and validation) on how to rigorously and responsibly develop a guardrail. Congrats to Yi, Ronghui, and other co-authors.
- (03/2024) One paper accepted by CVPR2024 on "Towards Fairness-Aware Adversarial Learning", which considers fairness during adversarial training. Congrats to Yanghao, and other co-authors.
- (02/2024) An AKT project funded: Utilising generative AI, specifically large language models (LLMs) for the searching of technical documentation in a cyber-secure environment, with Dr Ronghui Mu, to work with Leonardo UK.
- (02/2024) Two projects funded: A literature review on “Safeguarding LLMs” (PI: Dr Yi Dong), and An Ethical and Robust AI Development Framework: Assessing Correctness and Detecting Fakes (PI: Dr Guangliang Cheng).
- (01/2024) Two papers accepted by journals. "Privacy-Preserving Distributed Learning for Residential Short-Term Load Forecasting" will be published by IEEE Internet of Things, and "Reachability Verification-Based Reliability Assessment for Deep Reinforcement Learning Controlled Robotics and Autonomous Systems" will be published by RA-L. Congratulations to Yi, and all the co-authors.
- (12/2023) Three papers were accepted to AAAI-24, concerning the robustness of large language models in terms of their math reasoning ability, the certification of reinforcement learning through randomised smoothing, and the robustness of goal-conditioned reinforcement learning, respectively. Congratulations to Zihao, Ronghui, Sihao, and all other co-authors.
- (10/2023) We won an Alan Turing project "CRoCS: Certified Robust and Scalable Autonomous Operation in Cyber Space", funded from The AI for Cyber Defence (AICD) Research Centre. It will start from 1st December, 2023.
- (08/2023) Our paper "Hierarchical Distribution-Aware Testing of Deep Learning" is accepted by ACM Transactions on Software Engineering and Methodology, a top journal in software engineering.
- (07/2023) One paper accepted to ACM MM 2023.
- (07/2023) Paper "SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability" accepted to ICCV2023, congrats to Wei and co-authors
- (02/2023) Paper "Randomized Adversarial Training via Taylor Expansion" accepted to CVPR2023, congrats to Gaojie and co-authors
- (01/2023) Paper "Decentralised and Cooperative Control of Multi-Robot Systems through Distributed Optimisation" accepted to AAMAS2023, congrats to Yi and co-authors
- (11/2022) Paper "Towards Verifying the Geometric Robustness of Large-scale Neural Networks" accepted to AAAI2023, congrats to all co-authors
- (10/2022) Started co-organising the Turing interest group on Neuro-symbolic AI. Stay tuned for information about activities we are organising.
- (10/2022) With Xingyu Zhao and Yi Dong, we were awarded a project in the challenge on privacy-enhancing technologies (PETs) launched by the UK and US governments, where we are developing a federated/distributed learning algorithm that accounts for scalability (i.e., number of users), privacy, accuracy, communication complexity, and efficiency, and will apply the algorithm to two applications on financial crime and COVID healthcare, respectively.
- (10/2022) To give an invited talk at ICFEM2022 (slides).
- (12/2022) The textbook "Machine Learning Safety" will be published in December 2022.
- (07/2022) Our paper "Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation" was accepted to ECCV this year. Congratulations to Ganlin, and all co-authors.
- (06/2022) Our papers "Dependability Analysis of Deep Reinforcement Learning based Robotics and Autonomous Systems" and "STUN: Self-Teaching Uncertainty Estimation for Place Recognition" were accepted to IROS this year. Congratulations to Yi and Kaiwen, and all co-authors.
- (06/2022) Gave invited talk on "Is Deep Learning Certifiable at all?" to TAI-RM2022 workshop and to the SAE G-34/EUROCAE WG-114 Technical Talk.
- (03/2022) Congratulations to Gaojie, whose paper on "enhancing adversarial training with second order statistics of weights" was accepted to CVPR this year.
- (03/2022) Gave a talk at Université Grenoble Alpes on "Machine Learning Safety (and Security)"
- (10/2021) Congratulations to Yanda, who has three papers published at ICCV2021, IEEE Transactions on Medical Imaging, and MICCAI2021, respectively, on deep learning in healthcare.
- (10/2021) Warmest welcome to Mr Yi Qi and Mr Sihao Wu on joining the group to start their PhDs.
- (08/2021) Delivered a tutorial to IJCAI'2021 on "Towards Robust Deep Learning Models: Verification, Falsification, and Rectification" with Wenjie, Elena, and Xinping. Tutorial information is available at the website: https://tutorial-ijcai.trustai.uk.
- (07/2021) Congratulations to Wei, who is one of the winners of the Siemens AI-DA challenge (https://ecosystem.siemens.com/topic/detail/default/33), which concerns how to assess the dependability of machine learning models. Specifically, he won the “most original approach” award. There were 32 teams from 15 countries participating in this challenge. This work also won the best paper award at AISafety2021; the paper is available.
- (07/2021) One paper accepted by ICCV2021. Congratulations to Yanda.
- (07/2021) Our paper "Embedding and Synthesis of Knowledge in Tree Ensemble Classifiers" has been accepted by Machine Learning journal. Congratulations to Wei and Xingyu.
- (05/2021) Gave a talk to the Center For Perspicuous Computing (CEPC) colloquium.
- (06/2021) Congratulations to Xingyu, who was offered a lectureship position in the department.
- (05/2021) Gave a talk on "safety and reliability of deep learning" to VARS'20 (https://hycodev.com/VARS2021/).
- (05/2021) Congratulations to Xingyu and Wei, whose paper on "BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations" has been accepted to UAI2021. This paper develops a Bayesian method for the well-known LIME explainable AI method, to address the issue of robustness and consistency in explanations. Now, the explanations are not only more accurate but also more robust.
- (05/2021) Congratulations to Wei, whose paper on "Coverage Guided Testing for Recurrent Neural Networks" has been accepted to IEEE Transactions on Reliability. This paper develops temporal coverage metrics for the testing of LSTMs.
- (11/2020) Going to give a tutorial on "Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications" at ICDM2020 with Wenjie Ruan and Xinping Yi. Website: https://tutorial.trustdeeplearning.com
- (10/2020) Started a new project, "SOLITUDE: Safety Argument for Learning-enabled Autonomous Underwater Vehicles", with Xingyu Zhao, Simon Maskell, Sven Schewe, and Sen Wang (Heriot-Watt), on developing a safety assurance argument for autonomous underwater vehicles.
- (09/2020) Congratulations to Gaojie Jin! Paper "How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks?" has been accepted to NeurIPS2020. We study a "correct by construction" question -- how to train a neural network with good generalisation ability (i.e., reliability)? -- and find that this is possible by tracking and controlling the weight correlation over the trainable parameters during training. Experiments show that the improvement is persistent across small networks and large-scale networks such as VGG16. The weight correlation can also be used to predict whether a model generalises well, without using test data, which might not be available in practical scenarios. Please check the paper on arXiv.
- (08/2020) Our paper "Generalizing Universal Adversarial Attacks Beyond Additive Perturbations" has been accepted to ICDM2020.
- (08/2020) Our paper "PRODEEP: a platform for robustness verification of deep neural networks" has been accepted to ESEC/FSE2020.
- (07/2020) Our paper "Lightweight Statistical Explanations for Deep Neural Networks" has been accepted to ECCV2020.
- (07/2020) Our paper "Regression of Instance Boundary by Aggregated CNN and GCN" has been accepted to ECCV2020.
- (06/2020) Congratulations to Wei Huang! Our paper "Practical Verification of Neural Network Enabled State Estimation System for Robotics" has been accepted to IROS2020.
- (05/2020) Our survey paper "A Survey of Safety and Trustworthiness of Deep Neural Networks" has been accepted to the journal Computer Science Review. Its current arXiv version is here
Teaching for this semester
- Semester 1, Undergraduate, Second Year. Advanced Artificial Intelligence.
- teaching materials available at AI Safety Lecture Notes
We detail in the following several research directions that we have fostered or contributed to over the past years. We use [Journal Name, Year] to denote a journal publication and [ConferenceAbbreviation+Year] to denote a conference paper.
(a) Safety and Trustworthiness of AI Systems
- We published a textbook [Machine Learning Safety, Springer 2023], which has a comprehensive discussion on broad topics related to the safety of various machine learning algorithms, covering both deep learning algorithms and traditional machine learning algorithms.
- We conduct a survey [Computer Science Review, 2020] about four groups of techniques that can be utilised to support the safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability. In an invited paper [ICFEM2022], we formalise the specifications of a set of machine learning vulnerabilities, including generalisation, robustness, security, privacy, and explainable AI properties. The full version of the paper is published in [Journal of Logical and Algebraic Methods in Programming, 2024]. We also have several other reviews on adversarial robustness [CIKM2021] and on verification and validation techniques for, e.g., robotics systems [Robotics, 2021] and multiagent systems [AI Communications, 2022].
- We consider how to build guardrails (which detect the failures in real time) for foundation models, and offer our perspectives in [ICML2024].
- We also consider large language models and conduct a survey [Artificial Intelligence Review, 2024] about their safety and trustworthiness from the perspectives of verification and validation.
References: [ICML2024], [Artificial Intelligence Review, 2024], [Journal of Logical and Algebraic Methods in Programming, 2024], [Machine Learning Safety, Springer 2023], [ICFEM2022], [AI Communications, 2022], [CIKM2021], [Robotics, 2021], [Computer Science Review, 2020]
(b) Verification of Neural Networks and Learning-Enabled Systems
- We are among the first few to suggest that modern deep neural networks can be verified through the combination of SMT solvers and search algorithms [CAV2017], where a layer-by-layer verification algorithm is proposed. This research is further explored in [TACAS2018] and [Theoretical Computer Science, 2020], where we introduce a game-theoretical approach to define several safety problems and adapt the Monte-Carlo Tree Search (MCTS) algorithm to solve their safety verification problems. MCTS is a global optimisation algorithm that converges to the optimal solution. We further explore the application of other global optimisation algorithms in this field, and propose other verification algorithms that are more efficient and come with better provable guarantees, see e.g., [IJCAI2018a], [IJCAI2019], [IJCAI2023], [AAAI-2025b]. Other than verification through SMT solvers and global optimisation, we also consider symbolic propagation [SAS2019], [FSE2020], [Formal Aspects of Computing, 2021] (an illustrative bound-propagation sketch is given after the reference list below), and statistical verification [ICANN2021]. Beyond convolutional neural networks, we also consider the verification of recurrent neural networks (by extending global optimisation methods) [PAKDD2023] and deep reinforcement learning (by utilising neural network verifiers) [IEEE Robotics and Automation Letters, 2023b].
- Considering systems where machine learning models are components (e.g., perception, navigation, guidance, control), we develop verification algorithms for generic autonomous systems with temporal behaviour (by reduction to probabilistic model checking) [IROS2022a], and for state estimation systems [IROS2020]. We also consider verification of both robustness and resilience [Neurocomputing, 2024], as well as extending robustness verification to deep reinforcement learning [RA-L, 2024].
- We extend the randomised smoothing technique to reinforcement learning for the lower-bound certification of the cumulative reward, to obtain smoothed policies under various Lp-norm bounded perturbations [AAAI2024b].
- We looked into the training of verification-friendly neural networks, by requiring that neuron activation states remain consistent across different inputs within a local neighbourhood, in order to reduce the number of unstable neurons and tighten the bounds of neurons, thereby enhancing the network's verifiability [AAAI-2025a].
- We have started looking into the verification of large foundation models, including stable diffusion [ECCV2024-a].
References: [AAAI-2025b], [AAAI-2025a], [AAAI2024b], [ECCV2024-a], [Neurocomputing, 2024], [RA-L, 2024], [IJCAI2023], [IEEE Robotics and Automation Letters, 2023b], [PAKDD2023], [IROS2022a], [Formal Aspects of Computing, 2021], [ICANN2021], [Theoretical Computer Science, 2020], [FSE2020], [IROS2020], [IJCAI2019], [SAS2019], [TACAS2018], [IJCAI2018a], [CAV2017]
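The following is a minimal sketch of interval bound propagation through a ReLU network, illustrating the general bound-propagation style of analysis mentioned above; it is a generic textbook technique, not the exact algorithm of any cited paper, and the layer shapes and perturbation radius are illustrative assumptions.

```python
import numpy as np

def interval_bounds(weights, biases, x, eps):
    """Propagate the L-infinity ball [x - eps, x + eps] through ReLU layers."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Usage: a randomly initialised 2-layer network on a 4-dimensional input.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
bs = [np.zeros(8), np.zeros(3)]
lo, hi = interval_bounds(Ws, bs, rng.standard_normal(4), eps=0.01)
# If the lower bound of the predicted class's logit exceeds the upper bounds
# of all other logits, robustness is verified on this input region.
```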
(c) Falsification (Testing, Attacks) and Evaluation of Neural Networks
- In parallel with DeepXplore, we study the adaptation of software testing methods to find "bugs" in neural networks [ArXiv, 2018]. We propose a concolic (i.e., a combination of concrete and symbolic execution) testing method [ASE2018] and structural testing criteria that resemble the MC/DC criteria in software testing [ACM Transactions on Embedded Computing Systems, 2019], [ICSE2019b], and develop them into a tool DeepConcolic [ICSE2019a] (a minimal coverage-style sketch is given after the reference list below). In addition to convolutional neural networks, we also consider testing methods for recurrent neural networks [IEEE Transactions on Reliability, 2022].
- In addition to pixel-wise adversarial examples for instance-wise robustness, we also propose methods to find realistic, distribution-aware adversarial examples [ACM Transactions on Software Engineering and Methodology, 2023], universal adversarial attacks [ICDM2020] [Machine Learning, 2023], and poisoning attacks [Machine Learning, 2021]. We also consider adversarial attacks on graph neural networks [ECCV2022], and attack and test case generation for large language models for, e.g., math word problems [ACL2023] [AAAI-24].
- We consider rigorous evaluation methods, including the consideration of the balance between robustness and privacy [NeurIPS2024].
- Beyond machine learning models, we also consider testing methods for complex systems where machine learning models are components, e.g., a vehicle tracking system [ICRA2020].
References: [NeurIPS2024], [ACL2023], [ACM Transactions on Software Engineering and Methodology, 2023], [Machine Learning, 2023], [IEEE Transactions on Reliability, 2022], [ECCV2022], [Machine Learning, 2021], [ICDM2020], [ICRA2020], [ICSE2019a], [ICSE2019b], [ACM Transactions on Embedded Computing Systems, 2019], [ArXiv, 2018], [ASE2018], [AAAI-24]
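As a small illustration of structural coverage for neural network testing, the following sketch computes a simple neuron-coverage metric; it is not the MC/DC-style criteria of the cited papers, and the activation threshold and stand-in data are assumptions.

```python
import numpy as np

def neuron_coverage(activations_per_input, threshold=0.5):
    """activations_per_input: one 1-D array of hidden activations per test input.
    A neuron counts as 'covered' if it exceeds the threshold on at least one input."""
    covered = None
    for act in activations_per_input:
        hit = act > threshold
        covered = hit if covered is None else (covered | hit)
    return covered.mean() if covered is not None else 0.0

# Usage with random stand-in activations: 10 test inputs, 100 hidden neurons.
rng = np.random.default_rng(1)
acts = [rng.random(100) for _ in range(10)]
print(f"neuron coverage: {neuron_coverage(acts):.2%}")
```

Test case generation then amounts to searching for new inputs that raise this (or a finer-grained) coverage measure.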
(d) Enhancements to Neural Networks (Adversarial Training, Uncertainty Quantification)
- We consider rigorous methods to improve the properties of neural networks. In [NeurIPS2020], we study weight correlation and suggest that generalisation can be improved if weight correlation can be reduced. In addition to empirical experiments, we extend PAC-Bayesian theory to support our conclusion. The weight correlation is then utilised for the interpretation of dropout [Transactions of Machine Learning Research, 2022]. Other than generalisation, we consider adversarial training for robustness improvement (a minimal sketch of the standard baseline is given after the reference list below), through second-order statistics [CVPR2022], Taylor expansion [CVPR2023], out-of-distribution robustness (domain generalization) [IEEE Trans. on IFS, 2025], etc.
- We also consider the improvement of neural network training through the estimation of uncertainty, by considering a teacher-student framework [IROS2022b], a spatial uncertainty-aware teacher-student framework [ICCV2021], and a probabilistic embedding [IEEE Robotics and Automation Letters,2023a].
- For Goal-Conditioned Reinforcement Learning (GCRL), we propose a novel semi-contrastive representation attack, and use it together with a sensitivity-aware regularizer for the improvement of the adversarial robustness [AAAI2024c].
- We consider fairness-aware adversarial training in [CVPR2024].
- As an improvement to uncertainty quantification techniques, conformal prediction has been developed, and our efforts include the extension of conformal prediction to image retrieval tasks [AAAI-2025c].
References: [IEEE Trans. on IFS, 2025], [AAAI-2025c], [CVPR2024], [AAAI2024c], [CVPR2023], [IEEE Robotics and Automation Letters,2023a], [Transactions of Machine Learning Research, 2022], [CVPR2022], [IROS2022b], [ICCV2021], [NeurIPS2020].
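For readers unfamiliar with adversarial training, the following is a minimal sketch of one step of standard PGD-based adversarial training, the baseline that the refinements above build upon (the second-order and Taylor-expansion variants are not shown); the model, data range, epsilon and step sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Find an L-infinity bounded adversarial example by projected gradient ascent."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Inner maximisation (attack) followed by outer minimisation (training)."""
    model.train()
    x_adv = pgd_attack(model, x, y)
    loss = F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```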
(e) Explanation (XAI) of Neural Networks
- Due to the black-box nature of deep neural networks, explainable AI has become a research topic. In [ECCV2020a], we propose a novel explainable AI method by utilising fault localisation methods. In [UAI2021], we consider a Bayesian enhancement to existing explainable AI methods, and suggest that it is able to improve the consistency, robustness, and fidelity of the explanations (an illustrative sketch is given after the reference list below). In [ICCV2023], we study the interaction of XAI with robustness, and propose novel algorithms to discover their inconsistency.
References: [ICCV2023], [UAI2021], [ECCV2020a]
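The following is a minimal sketch of a LIME-style local surrogate fitted with a Bayesian linear model, illustrating the general idea behind Bayesian enhancements of LIME; it is not the BayLIME algorithm itself, and the black-box function, feature count, and perturbation scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

def bayesian_local_explanation(predict_fn, x, n_samples=500, sigma=0.1):
    """Fit a Bayesian linear surrogate around input x; return coefficient
    means (feature importance) and their standard deviations (uncertainty)."""
    rng = np.random.default_rng(0)
    X = x + sigma * rng.standard_normal((n_samples, x.shape[0]))  # local perturbations
    y = np.array([predict_fn(xi) for xi in X])                    # black-box outputs
    surrogate = BayesianRidge().fit(X, y)
    return surrogate.coef_, np.sqrt(np.diag(surrogate.sigma_))

# Usage with a toy black-box scoring function over 5 features.
black_box = lambda v: float(np.tanh(v @ np.arange(1, 6)))
means, stds = bayesian_local_explanation(black_box, np.zeros(5))
```

The posterior variances give a principled way to judge how stable each feature attribution is across repeated explanations.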
(f) Assurance of Neural Networks and Learning-Enabled Systems
- We are one of the first few to consider enhancing existing safety assurance approaches to deal with machine learning components. In addition to the general framework [SafeCOMP2020], we consider several key aspects that need to be adapted, including operational profile [DSN2021], robustness evaluation [ACM Transactions on Software Engineering and Methodology, 2023], and hazard analysis [ITSC2023].
- This thread of work has been applied to real-world systems such as an underwater vehicle [ACM Transactions on Embedded Computing Systems, 2023].
References: [ACM Transactions on Software Engineering and Methodology, 2023], [ACM Transactions on Embedded Computing Systems, 2023], [ITSC2023], [DSN2021], [SafeCOMP2020]
(g) Runtime Monitoring and Protection to AI Systems
- We use uncertainty estimation to conduct runtime detection of failures [IROS2022b], [ICCV2021], [IEEE Robotics and Automation Letters,2023a].
- We construct symbolic runtime monitors by extracting features from hidden layers and clustering similar features with geometric shapes such as boxes [IROS2024] (a minimal box-monitor sketch is given after the reference list below).
- A comprehensive evaluation of state-of-the-art out-of-distribution monitors is conducted in [ICASSP2025].
References: [ICASSP2025], [IROS2024], [IEEE Robotics and Automation Letters,2023a], [IROS2022b], [ICCV2021]
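The following is a minimal sketch of a box-abstraction runtime monitor over hidden-layer features: per class, record the axis-aligned bounding box of features seen during training, and at runtime flag inputs whose features fall outside the box of the predicted class. This only illustrates the general idea; the feature extractor and any clustering step are assumptions.

```python
import numpy as np

class BoxMonitor:
    def fit(self, features, labels):
        """features: (n_samples, n_dims) hidden-layer features; labels: class labels."""
        self.boxes = {}
        for c in np.unique(labels):
            f = features[labels == c]
            self.boxes[c] = (f.min(axis=0), f.max(axis=0))
        return self

    def is_anomalous(self, feature, predicted_class, tol=0.0):
        """Flag the input if its feature vector leaves the predicted class's box."""
        lo, hi = self.boxes[predicted_class]
        return bool(np.any(feature < lo - tol) or np.any(feature > hi + tol))

# Usage with random stand-in features for two classes.
rng = np.random.default_rng(2)
feats, labels = rng.standard_normal((200, 16)), rng.integers(0, 2, 200)
monitor = BoxMonitor().fit(feats, labels)
print(monitor.is_anomalous(rng.standard_normal(16) * 5.0, predicted_class=0))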
(h) Large Language Models and Other Generative AI Models
- We investigate large language models in terms of their various abilities, for example, the math solving ability, by proposing a new robustness attack that preserves the mathematical logic of the original math word problem [AAAI2024a] and a test case generation method [ACL2023], and the collaboration ability when they work with human experts in safety analysis [ArXiv, 2023b].
- We consider how to build guardrails (which detect the failures in real time) for foundation models, and offer our perspectives in [ICML2024].
- We also consider a survey on the safety and trustworthiness of the large language models [Artificial Intelligence Review, 2024], from the perspectives of verification and validation.
References: [ICML2024], [Artificial Intelligence Review, 2024], [AAAI2024a], [ACL2023], [ArXiv, 2023b]
(i) Energy Efficiency of Neural Networks
- We study spiking neural networks [Frontiers in Neuroscience, 2022], which are more energy-efficient at inference than the usual convolutional neural networks. Other than the optimal translation from CNNs, we also consider the optimisation of energy consumption through training and a novel cutoff mechanism that is useful at the inference stage [Frontiers in Neuroscience, 2024] (an illustrative cutoff sketch is given after the reference list below).
- In the survey [Artificial Intelligence Review, 2024], we summarise various large language models in terms of their energy consumption.
References: [Artificial Intelligence Review, 2024], [Frontiers in Neuroscience, 2024], [Frontiers in Neuroscience, 2022]
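The following is a minimal sketch of rate-coded spiking inference with an early cutoff: integrate-and-fire neurons are simulated over time steps, and inference stops once the predicted class has been stable for a few steps. This is a generic illustration of the cutoff idea only, not the mechanism of the cited papers; the weights, threshold, and stability window are assumptions.

```python
import numpy as np

def snn_predict_with_cutoff(W, x, T=100, threshold=1.0, stable_steps=10):
    """W: (n_classes, n_inputs) weights; x: input firing rates in [0, 1].
    Returns the predicted class and the number of time steps actually used."""
    rng = np.random.default_rng(0)
    membrane = np.zeros(W.shape[0])
    spike_counts = np.zeros(W.shape[0])
    last_pred, stable = -1, 0
    for t in range(T):
        in_spikes = (rng.random(x.shape) < x).astype(float)  # Bernoulli rate coding
        membrane += W @ in_spikes
        fired = membrane >= threshold
        spike_counts += fired
        membrane[fired] = 0.0                                 # reset after spiking
        pred = int(np.argmax(spike_counts))
        stable = stable + 1 if pred == last_pred else 1
        last_pred = pred
        if stable >= stable_steps:                            # cutoff: stop early
            return pred, t + 1
    return last_pred, T
```

Stopping early saves the spikes (and hence energy) of the remaining time steps whenever the decision has already settled.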
(j) Other Safety and Trustworthiness Properties (such as privacy preservation) of Neural Networks
- We study privacy preservation in distributed learning, with applications to smart grids [IEEE Internet of Things, 2024] (a minimal federated-learning sketch is given after the reference list below).
References: [IEEE Internet of Things, 2024]
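The following is a minimal sketch of federated averaging (FedAvg), the common baseline behind privacy-preserving distributed learning: clients train locally and share only model parameters, never raw data. The linear model, local update, and the absence of any added privacy mechanism (e.g., noise or secure aggregation) are illustrative assumptions, not the algorithm of the cited paper.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local least-squares gradient steps on its private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, client_data):
    """One communication round: each client updates locally; the server averages."""
    local_models = [local_update(w_global.copy(), X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    return np.average(local_models, axis=0, weights=sizes)

# Usage: three clients with private synthetic data, two-dimensional model.
rng = np.random.default_rng(3)
clients = [(rng.standard_normal((50, 2)), rng.standard_normal(50)) for _ in range(3)]
w = np.zeros(2)
for _ in range(10):
    w = fedavg_round(w, clients)
```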
(k) Applications of AI
- We also conduct research on the applications of AI to other fields, with various methods to improve AI's performance on different tasks. This includes medical imaging [BMVC2021], [IEEE Transactions on Medical Imaging, 2021], [ECCV2020b], [MICCAI2020], driving manoeuvres in semi-autonomous vehicles [IROS2019], person re-identification [Pattern Recognition, 2022], multiagent decentralised control [AAMAS2023], transportation object counting [IEEE Transactions on Intelligent Transportation Systems, 2023], general game playing [KR2022], [AAMAS2022], and geometry problems [ACMMM2023].
References: [AAMAS2023], [ACMMM2023], [IEEE Transactions on Intelligent Transportation Systems, 2023], [Pattern Recognition, 2022], [KR2022], [AAMAS2022], [BMVC2021], [IEEE Transactions on Medical Imaging, 2021], [ECCV2020b], [MICCAI2020], [IROS2019]
(l) Logic Reasoning about Multiagent Systems (Strategy, Knowledge, Cognitive Trust, etc)
- We have made many contributions to logical reasoning in multiagent systems, concerning the strategies and knowledge of the agents. This includes proposals of strategic logics [KR2014], [ACM Transactions on Computational Logic, 2018], their model checking complexity [ECAI2010], [IJCAI2015], and their symbolic model checking algorithms [AAAI2014], [TACAS2014], [Artificial Intelligence, 2015], [AAMAS2013c], [AAMAS2010]. We also formalise and reason about several key concepts in multiagent systems, including diagnosability [AAMAS2013b], reconfigurability [IJCAI2016a], correlated equilibrium [IJCAI2017], normative multiagent systems [IJCAI2016b], and agent communications [AAAI2016b].
- In addition to Boolean systems, we also work with probabilistic systems, concerning probabilistic logics [AAAI2012a], [AAMAS2013a], their model checking complexity [AAAI2016a], and their model checking algorithms [TARK2011, IJCAI2018b]. Based on the above results, we propose a logic for reasoning about cognitive trust in a probabilistic multiagent setting [AAAI2017], [ACM Transactions on Computational Logic, 2019].
- We also consider verification of other systems and other properties, including battery prognostics and health management [SEFM2019], pursuit-evasion games [IJCAI2011], [AAAI2012b], [ECAI2010].
- To consider learning-enabled systems, we have started looking into specification languages. In an invited paper [ICFEM2022], we formalise the specifications of a set of machine learning vulnerabilities, including generalisation, robustness, security, privacy, and explainable AI properties. The full version of the paper is published in [Journal of Logical and Algebraic Methods in Programming, 2024].
References: [Journal of Logical and Algebraic Methods in Programming, 2024], [ICFEM2022], [ACM Transactions on Computational Logic, 2019], [ACM Transactions on Computational Logic, 2018], [IJCAI2018b], [AAAI2017], [IJCAI2017], [AAAI2016a], [AAAI2016b], [IJCAI2016a], [IJCAI2016b], [Artificial Intelligence, 2015], [IJCAI2015], [TACAS2014], [AAAI2014], [KR2014], [AAMAS2013a], [AAMAS2013b], [AAMAS2013c], [AAAI2012a], [AAAI2012b], [IJCAI2011], [AAMAS2010], [ECAI2010].
Software:
- TrustAI: a tool set for the safety and trustworthiness of systems with deep learning components.
- DLV: a tool to verify deep neural networks.
- MCK: a model checker for verifying autonomous multiagent systems.
- Rationality Verification: a model checker for strategic reasoning based on correlated equilibrium.
Publications:
Google Scholar and dblp
TrustAI: Tool Demos
DeepConcolic (Github repository)
Related Publications:
- Testing Deep Neural Networks. arXiv
- Concolic testing for deep neural networks. ASE2018
- DeepConcolic: testing and debugging deep neural networks. ICSE2019
- Structural Test Coverage Criteria for Deep Neural Networks. ACM Transactions on Embedded Computing Systems (TECS)
Reliability validation of a learning-enabled dynamic tracking system (Github repository)
Related Publications:
- Reliability Validation of Learning Enabled Vehicle Tracking. arXiv
PRODeep: a platform for robustness verification of deep neural networks (Github repository)
Related Publications:
- PRODeep: a platform for robustness verification of deep neural networks. arXiv
testRNN (Github repository)
Related Publications:
- Test Metrics for Recurrent Neural Networks. arXiv
Recent Invited Talks, Seminars, and Panel Discussions:
- (10/2022) To give an invited talk at ICFEM2022 (slides).
- (06/2022) Gave invited talk on "Is Deep Learning Certifiable at all?" to TAI-RM2022 workshop and to the SAE G-34/EUROCAE WG-114 Technical Talk.
- (03/2022) Gave a talk at Université Grenoble Alpes on "Machine Learning Safety (and Security)"
- (08/2021) Delivered a tutorial to IJCAI'2021 on "Towards Robust Deep Learning Models: Verification, Falsification, and Rectification" with Wenjie, Elena, and Xinping. Tutorial information is available at the website: https://tutorial-ijcai.trustai.uk.
- (05/2021) Gave a talk to the Center For Perspicuous Computing (CEPC) colloquium.
- (05/2021) Gave a talk on "safety and reliability of deep learning" to VARS'20 (https://hycodev.com/VARS2021/).
- (08/2020) Will give lectures on verification of neural networks at the Summer School Marktoberdorf 2020 ("Safety and Security of Software Systems: Logics, Proofs, Applications").
- (05/2020) Will give an invited talk at University of Exeter.
- (03/2020) Will give an invited talk at MMB2020 on "Safety Certification of Deep Learning".
- (04/2019) Gave a talk to Liverpool Early Career Researcher Conference on Data Science, Machine Learning and AI on safety and trustworthiness of deep learning. Great to see the enthusiasm of the crowd over AI and machine learning.
- (09/2018) Slides of my talk at Nanjing University can be found here.
- (07/2018) Slides of my talk at Imperial can be found here.
- (04/2018) Gave an invited talk on "Verification and Testing of Deep Learning" for the ETAPS workshop on "Formal Methods For ML-Enabled Autonomous System (FOMLAS2018)".
- January 2018, Toulouse, France. Invited panel discussion on how machine learning techniques could be used (or not) for safety-critical applications, organised by ONERA The French Aerospace Lab and Airbus. The 9th European Congress on Embedded Real Time Software and Systems (ERTS 2018). https://www.erts2018.org/
- April 2018, Thessaloniki, Greece. Verification of Deep Neural Networks. Invited Talk to the ETAPS 2018 workshop on formal methods for ML-enabled autonomous systems (FoMLAS2018). https://fomlas2018.fortiss.org
- January 2018, Florida, US. Invited talk and panelist of a session in SciTech2018 on the Interaction of Software Assurance and Risk Assessment Based Operation of Unmanned Aircraft. Organised by The American Institute of Aeronautics and Astronautics (AIAA).
- December 2017, Beijing, China. Verification of Robotics and Autonomous Systems. Invited talk to the workshop on the Verification of Large Scale Real-Time Embedded Systems. Slides are available from here.
- September 2017, Visegrad, Hungary. Verification of Robotics and Autonomous Systems. Invited Talk to the 11th Alpine Verification Meeting (AVM2017). http://avm2017.inf.mit.bme.hu. Slides are available from here
- November 2015, Oxford, UK. Reasoning About Trust in Autonomous Multiagent Systems. University of Oxford.
Open Positions:
Postdocs or graduate research associates on projects where I was/am the principal investigator
- Mr Tianle Zhang, 2023 - 2024
- Dr Yanghao Zhang, 2022 - 2024, now a postdoc at Imperial College London
- Dr Ronghui Mu, 2023 - 2024, now a permanent lecturer at the University of Exeter
- Dr Qiyi Tang, 2021 - 2022, now a permanent lecturer at the University of Liverpool
- Dr Yi Dong, 2021 - 2023, now in a permanent position at the University of Southampton
- Dr Xingyu Zhao, 2021, now in a permanent position at the University of Warwick
- Dr Nicolas Berthier, 2019 - 2021
- Dr Youcheng Sun, 2018 - 2019, now in a permanent position at the University of Manchester
Supervision of Postgraduate Research Students as primary supervisor
If you are interested in doing a PhD in relevant research areas with me, please feel free to contact me. The University of Liverpool has a set of established scholarship schemes, including the Liverpool China Scholarship Council award and the Sir Joseph Rotblat Alumni Scholarship. In addition to these, I may have some vacancies from time to time.
- Ms Yifan Su.
- topic:
- 01/2025 -
- Mr Xinmiao Huang.
- topic: Neural-symbolic generative AI
- 11/2024 -
- Mr Jinwei Hu.
- topic:
- 12/2023 -
- Ms Sahar Alzahrani, with Prof Sven Schewe and Dr Chao Huang as the co-supervisor.
- topic: Verification of Deep Learning
- 01/2022 -
- Mr Sihao Wu, with Dr Xingyu Zhao and Dr. Xinping Yi as the co-supervisors.
- topic: Deep Reinforcement Learning Safety
- 12/2021 -
- Mr Yi Qi, with Dr Xingyu Zhao as the co-supervisor.
- topic: Safety Assurance for Learning
- 10/2021 -
- Mr Kaiwen Cai, with Dr. Shan Luo as the co-supervisor.
- topic: autonomous cyber physical systems
- 10/2020 - 07/2024, now at Li Auto
- Mr Dengyu Wu, with Dr. Xinping Yi as the co-supervisor.
- topic: energy efficient deep learning
- 10/2019 - 07/2024, now a postdoc at King's College London
- Ms Peipei Xu, with Prof. Frank Wolter as the co-supervisor.
- topic: verification of deep learning
- 06/2019 - 07/2024, now a postdoc
- Ms Amany Alshareef, started from 03/2019, with Prof. Sven Schewe as the co-supervisor. Before coming to Liverpool, Amany obtained an MSc from Ball State University and a BSc from Umm Al-Qura University.
- topic: testing deep learning
- 03/2019 - 12/2023
- Dr Gaojie Jin, with Dr. Xinping Yi as the co-supervisor. Before coming to Liverpool, Gaojie obtained an MSc from the University of Liverpool and a BSc from Peking University.
- topic: Reliable Deep Neural Networks with Randomised Weights
- 03/2019 - 07/2023, now at the University of Exeter
- Dr Wei Huang, with Prof. Shang-Hong Lai at National Tsing Hua University, Taiwan, as the co-supervisor. Before coming to Liverpool, Wei obtained an MSc from Imperial College and a BSc from Xiamen University.
- topic: Verification and Validation of Machine Learning Safety in Learning-Enabled Autonomous Systems
- 02/2019 - 07/2023, now at Purple Mountain Laboratories
- Ms Emese Thamo, with Dr Yannis Goulermas as the co-supervisor. Before coming to Liverpool, Emese obtained a BSc from Cambridge.
- topic: Improving the Safety of Deep Reinforcement Learning Algorithms by Making Them More Interpretable
- 10/2018 -
Visitors:
- Dr Chen Zhang, China University of Mining and Technology. 12/2019 - 11/2020
- Mr Zhixuan Xu, Renmin University of China. 10/2019 - 10/2020
- Mr Francesco Crecchi, University of Pisa, Italy. 04/2019 - 06/2019
"Robotics and Artificial Intelligence" Reading group is to hold a weekly meeting where one of the members will have a 30-40 minutes talk, discussing either their own papers, papers from other research groups, or anything that they are interested in. This will be followed by a Q&A and discussion session among the group on the topic.
Membership:
Anyone can join by request. If you are interested, please feel free to drop me a message.
Venue:
Due to the lockdown, we are mainly holding this through virtual meetings (please click: Zoom meeting).
Meeting time:
Starting from the week of 24th August, the meeting time is moved to Tuesday 11:00-12:00, London time.
Talk Schedule:
Please refer to the webpage at ACPS lab for the detailed information about the reading group.