Theodoros Semertzidis - Academia.edu
Papers by Theodoros Semertzidis
2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
ICT has proven to provide significant aid for the appropriate integration of migrants. These tools can support inclusion by providing guidance, education opportunities, job seeking, cultural immersion, and easier access to primary services. In this paper, a complete framework for the guidance and inclusion of migrants (with a special focus on refugees) is presented. The framework comprises a set of novel AI tools aimed at enabling the aforementioned services from diverse perspectives: a) users' profiling; b) skills matching; c) recommendations; d) user profiling; and e) a digital companion. Considerations about data collection, data flow, architecture, and interactions are provided.
Publication in the conference proceedings of EUSIPCO, Lausanne, Switzerland, 2008
This record contains raw data related to the article "Volume-of-Interest Aware Deep Neural Networks for Rapid Chest CT-Based COVID-19 Patient Risk Assessment". Since December 2019, the world has been devastated by the Coronavirus Disease 2019 (COVID-19) pandemic. Emergency Departments have been experiencing situations of urgency in which clinical experts, without long experience or mature means in the fight against COVID-19, have to rapidly decide on the most appropriate patient treatment. In this context, we introduce an artificially intelligent tool for effective and efficient Computed Tomography (CT)-based risk assessment to improve treatment and patient care. In this paper, we introduce a data-driven approach built on top of volume-of-interest aware deep neural networks for automatic COVID-19 patient risk assessment (discharged, hospitalized, intensive care unit) based on lung infection quantization through segmentation and, subsequently, CT classification. We tackle the high and varying dimensionality of the CT input by detecting and analyzing only a sub-volume of the CT, the Volume-of-Interest (VoI)...
A systematic approach for the design and implementation of an efficient and reliable signal processing unit able to provide vital flight parameters to the cockpit of an aircraft using properly built LIDAR equipment is presented in this paper. Taking into account the specific characteristics of the signal coming out of such an optical device, we define all the necessary steps for real-time data acquisition and processing in order to extract accurate information about the true air speed (TAS) of the aircraft and other useful flight parameters. Simulation results using properly modeled signals verified the effectiveness of the suggested methodology. The proposed scheme is being developed in the framework of the EU-funded NESLIE research project and is intended to be flight tested in order to demonstrate the availability of accurate measurements in all weather conditions and in any phase of flight.
2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI)
7th International Conference on Imaging for Crime Detection and Prevention (ICDP 2016)
Soft biometrics are biometric traits that do not offer exact human identification; however, they can provide adequate information to narrow down the search space and give valuable insights into the subject in question. In this work, we examine the issues that emerge when analysing CCTV videos for soft biometrics and propose a methodology for extracting soft biometrics from low-quality, low-resolution video footage taken from real street CCTV cameras. The proposed approach is based on the concept of Exemplars, that is, finding matches of the examined subject over a labelled dataset that is able to encode the quality, colour, and lighting variations of the surveillance images. Experiments have been conducted on a new, challenging dataset that we introduce in this paper. It has been created using real CCTV footage, enhanced with a wide range of annotations from multiple people and a manually created segmentation mask for each detection/person. This dataset is made available to the scientific community for the comparison and improvement of methodologies in real-world scenarios.
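To illustrate the exemplar idea described above, the following Python sketch matches a query detection against a small labelled gallery using a colour-histogram feature and a k-nearest-neighbour vote. The feature, the labels, and the function names are illustrative assumptions, not the descriptors or matching scheme used in the paper.

```python
# Illustrative sketch of exemplar-based soft-biometric transfer (not the authors' exact pipeline).
# A query detection is described by a simple colour-histogram feature, matched against a labelled
# exemplar gallery, and the labels of the k nearest exemplars are aggregated by majority vote.
from collections import Counter

import numpy as np


def colour_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenated per-channel histogram of an HxWx3 uint8 image, L1-normalised."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    feat = np.concatenate(hists).astype(float)
    return feat / (feat.sum() + 1e-9)


def predict_soft_biometric(query: np.ndarray,
                           exemplars: list[tuple[np.ndarray, str]],
                           k: int = 5) -> str:
    """Return the majority label (e.g. 'dark clothing') of the k nearest exemplars."""
    q = colour_histogram(query)
    dists = [(np.linalg.norm(q - colour_histogram(img)), label) for img, label in exemplars]
    dists.sort(key=lambda t: t[0])
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = [(rng.integers(0, 256, (64, 32, 3), dtype=np.uint8), lab)
               for lab in ["dark clothing", "light clothing"] * 10]
    query = rng.integers(0, 256, (64, 32, 3), dtype=np.uint8)
    print(predict_soft_biometric(query, gallery))
```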
MultiMedia Modeling
In the past few years, various methods have been developed that attempt to embed graph nodes (e.g. users that interact through a social platform) into low-dimensional vector spaces, exploiting the relationships (commonly represented as edges) among them. The extracted vector representations of the graph nodes are then used to effectively solve machine learning tasks such as node classification or link prediction. These methods, however, focus on the static properties of the underlying networks, neglecting the temporal unfolding of those relationships. This affects the quality of the representations, since the edges do not encode the response times (i.e. speed) of the users' (i.e. nodes') interactions. To overcome this limitation, we propose an unsupervised method that relies on temporal random walks unfolding at the same timescale as the evolution of the underlying dataset. We demonstrate its superiority against state-of-the-art techniques on the tasks of hidden link prediction and future link forecasting. Moreover, by interpolating between the fully static and fully temporal settings, we show that incorporating topological information from past interactions can further increase our method's efficiency.
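As a rough illustration of temporal random walks, the sketch below only allows a walk to traverse edges whose timestamps do not decrease, so each walk respects the order in which the interactions actually happened. The data format and function names are assumptions for illustration; the paper's sampling strategy may differ.

```python
# Minimal sketch of time-respecting random walks (assumed interface, not the paper's exact code).
# Each edge carries a timestamp; a walk may only traverse edges whose timestamps are
# non-decreasing, so it unfolds at the same timescale as the interactions themselves.
import random
from collections import defaultdict

Edges = list[tuple[str, str, int]]   # (source, target, timestamp)


def temporal_random_walk(edges: Edges, start: str, walk_len: int = 10) -> list[str]:
    adj = defaultdict(list)
    for u, v, t in edges:
        adj[u].append((v, t))

    walk, current, last_t = [start], start, float("-inf")
    for _ in range(walk_len - 1):
        # only edges that happen at or after the time of the previous step are allowed
        candidates = [(v, t) for v, t in adj[current] if t >= last_t]
        if not candidates:
            break
        current, last_t = random.choice(candidates)
        walk.append(current)
    return walk


if __name__ == "__main__":
    interactions = [("a", "b", 1), ("b", "c", 2), ("b", "d", 5), ("c", "a", 3)]
    print(temporal_random_walk(interactions, "a"))
    # The resulting corpus of walks can then be fed to a skip-gram style model
    # to obtain node embeddings.
```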
International Journal of Environmental Research and Public Health
Since December 2019, the world has been devastated by the Coronavirus Disease 2019 (COVID-19) pandemic. Emergency Departments have been experiencing situations of urgency in which clinical experts, without long experience or mature means in the fight against COVID-19, have to rapidly decide on the most appropriate patient treatment. In this context, we introduce an artificially intelligent tool for effective and efficient Computed Tomography (CT)-based risk assessment to improve treatment and patient care. In this paper, we introduce a data-driven approach built on top of volume-of-interest aware deep neural networks for automatic COVID-19 patient risk assessment (discharged, hospitalized, intensive care unit) based on lung infection quantization through segmentation and, subsequently, CT classification. We tackle the high and varying dimensionality of the CT input by detecting and analyzing only a sub-volume of the CT, the Volume-of-Interest (VoI). Differently from recent strategies that con...
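A minimal sketch of the VoI idea, assuming a lung segmentation mask is already available: the CT volume is cropped to the smallest box containing the segmented lungs (plus a margin) before being passed to a downstream classifier. Shapes, the margin, and the helper name are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of cropping a Volume-of-Interest (VoI) from a CT scan using a lung
# segmentation mask, before passing the sub-volume to a risk-assessment classifier.
import numpy as np


def extract_voi(ct_volume: np.ndarray, lung_mask: np.ndarray, margin: int = 5) -> np.ndarray:
    """Return the smallest axis-aligned sub-volume containing the segmented lungs."""
    coords = np.argwhere(lung_mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, ct_volume.shape)
    return ct_volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]


if __name__ == "__main__":
    volume = np.random.randn(120, 512, 512).astype(np.float32)   # slices x height x width
    mask = np.zeros_like(volume)
    mask[30:90, 100:400, 120:420] = 1                            # fake lung segmentation
    voi = extract_voi(volume, mask)
    print(voi.shape)   # a much smaller input for the downstream network
```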
2019 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC), 2019
Deep learning architectures and Convolutional Neural Networks (CNNs) have made a significant impact on learning embeddings of high-dimensional datasets. In some cases, and especially in the case of high-dimensional graph data, the interlinkage of data points may be hard to model. Previous approaches to applying the convolution function on graphs, namely Graph Convolutional Networks (GCNs), presented neural network architectures that encode information about individual nodes along with their connectivity. Nonetheless, these methods face the same issue as traditional graph-based machine learning techniques, i.e. the requirement of full matrix computations. This requirement bounds the applicability of GCNs to the available computational resources. In this paper, the following assumption is evaluated: training a GCN with multiple subsets of the full data matrix is possible and converges to the full-data-matrix training scores, thus lifting the aforementioned limitation. Fo...
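The following NumPy sketch illustrates the evaluated assumption: a single graph-convolution layer is applied to a randomly sampled node subset, i.e. to a sub-matrix of the full adjacency, rather than to the whole graph at once. The propagation follows the standard normalised GCN rule; the sampling scheme and sizes are illustrative assumptions, not the paper's exact training procedure.

```python
# Minimal NumPy sketch: one graph-convolution layer on a sampled sub-matrix of the adjacency.
import numpy as np


def gcn_layer(adj: np.ndarray, feats: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Symmetrically normalised propagation: relu(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weights, 0.0)


def sample_submatrix(adj: np.ndarray, feats: np.ndarray, batch: int, rng) -> tuple:
    """Pick a random node subset and restrict the adjacency and features to it."""
    idx = rng.choice(adj.shape[0], size=batch, replace=False)
    return adj[np.ix_(idx, idx)], feats[idx]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    full_adj = (rng.random((1000, 1000)) < 0.01).astype(float)
    full_adj = np.maximum(full_adj, full_adj.T)                  # undirected graph
    full_feats = rng.standard_normal((1000, 32))
    w = rng.standard_normal((32, 16)) * 0.1
    sub_adj, sub_feats = sample_submatrix(full_adj, full_feats, batch=128, rng=rng)
    print(gcn_layer(sub_adj, sub_feats, w).shape)                # (128, 16)
```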
2020 25th International Conference on Pattern Recognition (ICPR)
Forecasting
In this paper, a detailed study on crime classification and prediction using deep learning architectures is presented. We examine the effectiveness of deep learning algorithms in this domain and provide recommendations for designing and training deep learning systems for predicting crime areas, using open data from police reports. Using time series of crime types per location as training data, a comparative study of 10 state-of-the-art methods against 3 different deep learning configurations is conducted. In our experiments on 5 publicly available datasets, we demonstrate that the deep learning-based methods consistently outperform the existing best-performing methods. Moreover, we evaluate the effect of different parameters in the deep learning architectures and give insights on configuring them to achieve improved performance in crime classification and, finally, crime prediction.
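As one plausible example of the kind of configuration compared in such a study (not one of the paper's exact architectures), the PyTorch sketch below feeds per-location weekly crime-count sequences to an LSTM and outputs logits over crime types for the next period. All sizes are illustrative.

```python
# Illustrative PyTorch sketch of a sequence model over per-location crime-count time series.
import torch
import torch.nn as nn


class CrimeForecaster(nn.Module):
    def __init__(self, n_crime_types: int = 8, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_crime_types, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_crime_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(x)    # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])     # logits over crime types for the next period


if __name__ == "__main__":
    model = CrimeForecaster()
    # batch of 16 locations, 52 weekly steps, 8 crime-type counts per step
    weekly_counts = torch.rand(16, 52, 8)
    logits = model(weekly_counts)
    print(logits.shape)               # torch.Size([16, 8])
```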
Cyber-Physical Threat Intelligence for Critical Infrastructures Security: Securing Critical Infrastructures in Air Transport, Water, Gas, Healthcare, Finance and Industry
Social Informatics, 2017
Identifying important network nodes is crucial for a variety of applications, such as the spread of an idea or an innovation. The majority of publications so far assume that the interactions between nodes are static. However, this approach neglects the fact that real-world phenomena evolve in time. Thus, there is a need for tools and techniques that account for evolution over time. In this direction, we present a novel graph-based method, named DepthRank (DR), that incorporates the temporal characteristics of the underlying datasets. We compare our approach against two baseline methods and find that it efficiently recovers important nodes on three real-world datasets, as indicated by numerical simulations. Moreover, we perform our analysis on a modified version of the DBLP dataset and verify its correctness using ground-truth data.
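The abstract does not spell out the DepthRank algorithm itself, so the sketch below only illustrates the general notion of temporal-aware node importance: each node is scored by how many other nodes it can reach via time-respecting paths (an earliest-arrival search). This is a generic proxy for illustration, not the published method.

```python
# Hedged sketch: temporal-aware node importance via time-respecting reachability.
from collections import defaultdict


def temporal_reach_scores(edges: list[tuple[str, str, int]]) -> dict[str, int]:
    adj = defaultdict(list)
    nodes = set()
    for u, v, t in edges:
        adj[u].append((v, t))
        nodes.update((u, v))

    def reachable(src: str) -> int:
        best = {src: float("-inf")}          # earliest time at which each node is reached
        frontier = [(src, float("-inf"))]
        while frontier:
            node, t0 = frontier.pop()
            for nxt, t in adj[node]:
                if t >= t0 and t < best.get(nxt, float("inf")):
                    best[nxt] = t
                    frontier.append((nxt, t))
        return len(best) - 1                 # nodes reachable through time-respecting paths

    return {n: reachable(n) for n in nodes}


if __name__ == "__main__":
    interactions = [("a", "b", 1), ("b", "c", 2), ("c", "d", 3), ("d", "a", 0)]
    print(temporal_reach_scores(interactions))   # higher score = temporally more influential
```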
Proceedings of the 11th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, 2019
Aggregating and analyzing data from heterogeneous social media sources is a challenge not only for businesses and organizations but also for Law Enforcement Agencies, whose core objectives are to monitor criminal and terrorist-related activities and to identify the "key players" in various networks. In this paper, a framework for homogenizing and exploiting data from multiple sources is presented. Moreover, as part of the framework, an ontology that reflects today's social media perceptions is introduced. Data from multiple sources are transformed into a labeled property graph and stored in a graph database in a homogenized way, based on the proposed ontology. The result is a cross-source analysis system in which end-users can explore different scenarios and draw conclusions through a library of predefined query placeholders focused on forensic investigation. The framework is evaluated on the Stormfront dataset, a radical-right web community. Finally, the benefits of applying the proposed framework to discover and visualize the relationships between Stormfront profiles are presented.
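A hedged sketch of the homogenization step: records from two different sources are mapped onto common node and edge labels of a labelled property graph, and a parameterised Cypher-style query stands in for one of the predefined "query placeholders". The labels, field names, and the query are assumptions for illustration, not the ontology defined in the paper.

```python
# Illustrative sketch of homogenising heterogeneous social-media records into a
# labelled property graph (node/edge labels and fields are assumed for illustration).
from dataclasses import dataclass, field


@dataclass
class Node:
    label: str                       # e.g. "Person", "Post"
    properties: dict = field(default_factory=dict)


@dataclass
class Edge:
    source: int                      # index into the node list
    target: int
    label: str                       # e.g. "AUTHORED"
    properties: dict = field(default_factory=dict)


def homogenise(tweet: dict, forum_post: dict) -> tuple[list[Node], list[Edge]]:
    """Map records from two different sources onto the same node/edge labels."""
    nodes = [
        Node("Person", {"handle": tweet["user"], "source": "twitter"}),
        Node("Post", {"text": tweet["text"], "source": "twitter"}),
        Node("Person", {"handle": forum_post["author"], "source": "forum"}),
        Node("Post", {"text": forum_post["body"], "source": "forum"}),
    ]
    edges = [Edge(0, 1, "AUTHORED"), Edge(2, 3, "AUTHORED")]
    return nodes, edges


# Example of a predefined, parameterised query an investigator might run against the graph DB:
KEY_PLAYERS_QUERY = """
MATCH (p:Person)-[:AUTHORED]->(:Post)
RETURN p.handle, count(*) AS posts
ORDER BY posts DESC LIMIT $top_k
"""

if __name__ == "__main__":
    nodes, edges = homogenise({"user": "@alice", "text": "hello"},
                              {"author": "bob", "body": "first post"})
    print(len(nodes), len(edges))
```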
Computer Communications and Networks, 2012
ACM Transactions on Intelligent Systems and Technology, 2016
Proceedings of the 8th International Interactive Conference on Interactive TV & Video, Jun 9, 2010
Three-dimensional TV is now closer than ever to becoming a reality for consumers, providing a complete, life-like image experience. Recent advances in autostereoscopic displays have resulted in an improved 3D viewing experience, wider viewing angles, no need for special glasses, and support for multiple viewers. However, due to their content formatting requirements (2D + depth), live-action content is much more difficult to create. In this paper, a new content generation approach for autostereoscopic 3DTV is proposed, by integrating a ...
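To make the 2D + depth requirement concrete, the sketch below performs a very rough depth-image-based rendering step: each pixel is shifted horizontally by a disparity proportional to its depth to synthesise a second view (hole filling omitted). The disparity model and parameters are illustrative assumptions, not the content-generation pipeline proposed in the paper.

```python
# Rough sketch of depth-image-based rendering for a 2D + depth frame.
import numpy as np


def synthesise_view(colour: np.ndarray, depth: np.ndarray, max_disparity: int = 20) -> np.ndarray:
    """colour: HxWx3 uint8, depth: HxW in [0, 1] (1 = nearest). Returns a horizontally shifted view."""
    h, w, _ = colour.shape
    out = np.zeros_like(colour)
    disparity = (depth * max_disparity).astype(int)   # nearer pixels shift more
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = colour[y, x]
    return out                                        # holes would still need filling


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
    depth_map = rng.random((120, 160))
    print(synthesise_view(frame, depth_map).shape)
```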
2008 16th European Signal Processing Conference, 2008
A systematic approach for the design and implementation of an efficient and reliable signal processing unit able to provide vital flight parameters to the cockpit of an aircraft using properly built LIDAR equipment is presented in this paper. Taking into account the specific characteristics of the signal coming out of such an optical device, we define all the necessary steps for real-time data acquisition and processing in order to extract accurate information about the true air speed (TAS) of the aircraft and other useful flight parameters. Simulation results using properly modeled signals verified the effectiveness of the suggested methodology. The proposed scheme is being developed in the framework of the EU-funded NESLIE research project and is intended to be flight tested in order to demonstrate the availability of accurate measurements in all weather conditions and in any phase of flight.
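A simplified sketch of the core Doppler processing step assumed here (not the project's actual signal chain): the spectrum of the backscattered signal is searched for its dominant peak, and the Doppler frequency is converted to a velocity via v = f_d * lambda / 2. The sampling rate, wavelength, and noise level are illustrative.

```python
# Simplified sketch of Doppler LIDAR airspeed estimation from a backscattered signal.
import numpy as np


def estimate_tas(signal: np.ndarray, fs: float, wavelength: float) -> float:
    """Return an airspeed estimate (m/s) from the dominant Doppler frequency of the signal."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    f_doppler = freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC bin
    return f_doppler * wavelength / 2.0


if __name__ == "__main__":
    fs, wavelength, v_true = 1e9, 1.55e-6, 100.0          # 1 GS/s, 1.55 um laser, 100 m/s
    t = np.arange(2 ** 16) / fs
    f_d = 2 * v_true / wavelength                         # expected Doppler shift (~129 MHz)
    rng = np.random.default_rng(0)
    sig = np.sin(2 * np.pi * f_d * t) + 0.5 * rng.standard_normal(t.size)
    print(round(estimate_tas(sig, fs, wavelength), 1))    # close to 100.0
```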
Information Processing & Management, 2015