A Double Take at Conferences: The Hybrid Format
1.1 Application of AI in Digital Forensics
Guest Editors: Johannes Fähndrich, Roman Povalej, Heiko Rittelmeier, Silvio Berner
Scope: With increasing digitalization and the pervasiveness of information systems, a crime scene is no longer what it used to be: its mix of a location, people, evidence, and changes over time now has a virtual counterpart. With the mainstream use of smart homes, infrastructure, factories, and cities, investigations and forensic evidence are no longer bound by physical space. Given the growing amount of digital information, the application of Artificial Intelligence (AI) in forensics has become imperative. Methods from Machine Learning and Data Science need to be extended so that they are explainable and valid for legal purposes. This special issue aims to collect work on AI applied to forensic science, focusing on the amalgamation of computer science, data analytics, and machine learning, together with a discussion of the law and ethics governing its application to cyber forensics.
Topics might include, but are not restricted to:
- NLU/NLP in forensic evidence
- Explainable AI which can stand up in court
- AI and object detection
- AI and super resolution
- AI and darknets and hidden services investigation
- AI and emotion recognition
- AI and lie detection
- AI and cybercrime related investigations
- Fooling neural networks and other anti-forensic techniques and methods
- Automated analysis of forensic evidence in IoT
- AI in incident response, investigation and evidence handling
- Ethical, legal, and societal challenges of using AI in digital forensics
Contact: Johannes Fähndrich
(johannesfaehndrich@hfpol-bw.de)
1.2 Explainable AI
Guest Editors: Ute Schmid (Universität Bamberg), Britta Wrede (Universität Bielefeld)
Scope: In recent years, Explainable AI (XAI) has been established as a new area of research focusing on approaches that allow humans to comprehend, and possibly control, machine-learned (ML) models and other AI systems whose complexity makes the process leading to a specific decision opaque. Initially, most approaches were concerned with post-hoc explanations for classification decisions of deep learning architectures, especially for image classification. Furthermore, a growing number of empirical studies addressed the effects of explanations on trust in, and acceptability of, AI/ML systems. Recent work has broadened the perspective of XAI, covering topics such as verbal explanations, explanations by prototypes, contrastive explanations, combining explanations with interactive machine learning, multi-step explanations, explanations in the context of machine teaching, relations between interpretable approaches to machine learning and post-hoc explanations, and neuro-symbolic and other hybrid approaches combining reasoning and learning for XAI. Addressing criticism regarding missing adaptivity, more interactive accounts have been developed that take individual differences into account. The question of evaluation beyond mere batch testing has also come into focus.
In this special issue, the focus will be on research addressing such recent developments in XAI. Furthermore, interdisciplinary contributions as well as specific applications of XAI from domains such as education, healthcare, and industrial production are welcome.
The topics of interest for the special issue include, but are not limited to:
- interactive approaches to XAI
- adaptive XAI
- deployment of explainable decision support systems in real-life settings (e.g., the medical domain, work contexts)
- multi-modal approaches to XAI
- process explanations
- self-explaining robots
- empirical evaluation of XAI approaches
- measures for understanding of XAI
- evaluation measures for XAI beyond trust and acceptability
Contributions can be from the following categories (for more detailed information please refer to the author instructions for each of these categories): Technical Contribution; System Descriptions; Project Reports; Dissertation and Habilitation Abstracts; AI Transfer; Discussion
If you are interested in submitting a paper, please contact one of the guest editors:
Contact: Ute Schmid
1.3 GeoAI
Guest Editors: Simon Scheider, Zena Wood, Kai-Florian Richter
Scope: Researchers in Artificial Intelligence (AI) and Geography have developed various points of contact in the past, with many possibilities for mutual benefit in the future. Recently, subsymbolic AI methods such as Deep Learning have increased the quality and scalability of data-processing methods in remote sensing, geographic information retrieval, natural language processing (NLP), and geospatial modeling, among others. Furthermore, there is a tradition of using symbolic AI approaches to raise the quality and scalability of methods by linking, e.g., Geography with agent-based simulation (ABM), spatial cognitive reasoning with Robotics, and Geography with Knowledge Graphs (KGs) in the Semantic Web. At the same time, geographic information has become an indispensable resource in itself, needed not only for adding spatial intelligence to machines and for making opaque models transparent, but also for understanding what kind of intelligence is needed to refer to place and to handle space. Understood in this broader sense, geoAI has the potential to fundamentally improve the way geographic information is processed and interpreted by both humans and machines.
For this special issue, we invite researchers who investigate the kinds of knowledge needed to account for Geography and space with(in) intelligent machines. We are looking for original research articles, project reports, and discussion articles on (among others):
- Symbolic (Semantic Web and ontological) approaches to geoAI
- Sub-symbolic (deep learning/ML) based approaches to geoAI
- Explainable geoAI (XgeoAI): interpreting and opening black-box models with a priori knowledge
- Computational models of geospatial intelligence and spatial cognition
- Methods for geospatial knowledge graphs (geoKG)
- Reusability of geoAI models and reproducibility
- Knowledge models of Geography and geographic information for data scientists
- Pragmatic intelligence: Models of purpose and design of workflows with geoinformation
- The human in the loop and models of human interaction in geoAI
Application areas include, but are not restricted to:
- Agent-based models (ABM) and geoAI in Geography and Geosciences
- AI in geographic information retrieval (GIR) and NLP: distant reading of geolocated texts
- Geographic question-answering (geoQA) and automation of geographic data analysis
- AI-enhanced geovisualization and dialogue methods
- Object recognition in remote sensing and georeferenced image processing
- geoAI in robotics, ubiquitous sensors and navigation systems
Contacts: Simon Scheider (s.scheider@uu.nl),
Zena Wood (Z.M.Wood2@exeter.ac.uk),
Kai-Florian Richter (kai-florian.richter@umu.se)