Rabih Younes | Duke University

Papers by Rabih Younes

Real-time Blind Deblurring Based on Lightweight Deep-Wiener-Network

2023 International Joint Conference on Neural Networks (IJCNN)

Predicting Stocks Changes Using News Sentiment Analysis

Self-supervised Multi-Modal Video Forgery Attack Detection

2023 IEEE Wireless Communications and Networking Conference (WCNC)

Recovering Surveillance Video Using RF Cues

arXiv (Cornell University), Dec 26, 2022

Video capture is the most extensively utilized human perception source due to its intuitively understandable nature. A desired video capture often requires multiple environmental conditions, such as ample ambient light, unobstructed space, and a proper camera angle. In contrast, wireless measurements are more ubiquitous and have fewer environmental constraints. In this paper, we propose CSI2Video, a novel cross-modal method that leverages only WiFi signals from commercial devices and a source of human identity information to recover fine-grained surveillance video in real time. Specifically, two tailored deep neural networks are designed to conduct the cross-modal mapping and video generation tasks, respectively. We use an auto-encoder-based structure to extract pose features from WiFi frames. Afterward, the extracted pose features and identity information are merged to generate synthetic surveillance video. Our solution generates realistic surveillance videos without any expensive wireless equipment and is ubiquitous, cheap, and real-time.
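
As a rough illustration of the two-network design described above, the sketch below (PyTorch, with all layer sizes, the pose dimension, and the CSI dimension assumed rather than taken from the paper) maps CSI frames to pose features and fuses them with an identity embedding to render frames.

```python
# Illustrative sketch only (not the authors' released code): a two-network
# pipeline in the spirit of CSI2Video, with all shapes and names assumed.
import torch
import torch.nn as nn

class CSIPoseEncoder(nn.Module):
    """Auto-encoder-style mapping from WiFi CSI frames to pose features."""
    def __init__(self, csi_dim=270, pose_dim=34):    # e.g., 17 keypoints x (x, y)
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(csi_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 128), nn.ReLU())
        self.decoder = nn.Linear(128, pose_dim)       # pose features

    def forward(self, csi):                           # csi: (batch, csi_dim)
        return self.decoder(self.encoder(csi))        # (batch, pose_dim)

class FrameGenerator(nn.Module):
    """Fuses pose features with an identity embedding to render a frame."""
    def __init__(self, pose_dim=34, id_dim=64, out_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(pose_dim + id_dim, 512), nn.ReLU(),
                                 nn.Linear(512, out_pixels), nn.Sigmoid())

    def forward(self, pose, identity):
        frame = self.net(torch.cat([pose, identity], dim=-1))
        return frame.view(-1, 3, 64, 64)

# Toy forward pass with random tensors standing in for CSI and identity data.
pose_net, gen_net = CSIPoseEncoder(), FrameGenerator()
csi = torch.randn(8, 270)
identity = torch.randn(8, 64)
video_frames = gen_net(pose_net(csi), identity)       # (8, 3, 64, 64)
```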

Self-supervised Multi-Modal Video Forgery Attack Detection

Cornell University - arXiv, Sep 13, 2022

Video forgery attacks threaten surveillance systems by replacing the video capture with unrealistic synthesis, which can be powered by the latest augmented reality and virtual reality technologies. From the machine perception perspective, visual objects often have RF signatures that are naturally synchronized with them during recording. In contrast to video captures, RF signatures are more difficult to attack given their concealed and ubiquitous nature. In this work, we investigate multimodal video forgery attack detection methods that use both visual and wireless modalities. Since wireless-signal-based human perception is environmentally sensitive, we propose a self-supervised training strategy that enables the system to work without external annotation and thus adapt to different environments. Our method achieves perfect human detection accuracy and a high forgery attack detection accuracy of 94.38%, which is comparable to supervised methods.
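
A minimal sketch of the core idea, assuming hypothetical encoders and feature sizes: if the visual and RF embeddings of the same time window disagree, the window is flagged as a possible forgery. The self-supervised training of the encoders is only indicated in comments.

```python
# Minimal sketch, not the paper's implementation: detect a forged feed by
# checking whether visual and RF embeddings of the same time window agree.
import torch
import torch.nn.functional as F

def forgery_score(visual_emb: torch.Tensor, rf_emb: torch.Tensor) -> torch.Tensor:
    """Low cosine similarity between synchronized modalities suggests forgery."""
    return 1.0 - F.cosine_similarity(visual_emb, rf_emb, dim=-1)

# The encoders below are placeholders; in a self-supervised setup they would be
# trained so that genuinely synchronized (video, RF) windows map close together.
visual_encoder = torch.nn.Linear(2048, 128)    # assumed visual feature size
rf_encoder = torch.nn.Linear(270, 128)         # assumed CSI feature size

video_feat = torch.randn(4, 2048)              # stand-in for per-window video features
rf_feat = torch.randn(4, 270)                  # stand-in for per-window WiFi features
scores = forgery_score(visual_encoder(video_feat), rf_encoder(rf_feat))
is_forged = scores > 0.5                       # threshold chosen for illustration
```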

Toward Data Augmentation and Interpretation in Sensor-Based Fine-Grained Hand Activity Recognition

Communications in Computer and Information Science, 2021

ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition

Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2022

Bokeh Effect Rendering with Vision Transformers

The bokeh effect is becoming an important feature in photography: an object of interest is kept in focus while the rest of the background is blurred. While naturally rendering this effect requires a DSLR with a large aperture, current advances in deep learning allow the effect to be produced on mobile cameras as well. Most existing methods use convolutional neural networks, while some rely on a depth map to render the effect. In this paper, we propose an end-to-end Vision Transformer model for bokeh rendering of images from a monocular camera. This architecture uses vision transformers as the backbone, thus learning from the entire image rather than just the regions covered by the filters of a CNN. This retention of global information, coupled with first training the model for image restoration before training it to render the background blur, allows our method to produce clearer images and outperform the current state-of-the-art…
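
The sketch below illustrates the two-stage recipe the abstract describes, restoration pretraining followed by bokeh fine-tuning, using a tiny patch-based transformer written for clarity; none of the layer sizes or training details are taken from the paper.

```python
# Illustrative two-stage recipe (assumed, not the authors' code): a ViT-style
# image-to-image model first trained for restoration, then fine-tuned for bokeh.
import torch
import torch.nn as nn

class TinyViTRenderer(nn.Module):
    def __init__(self, img=224, patch=16, dim=256, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        n = (img // patch) ** 2
        self.embed = nn.Linear(patch * patch * 3, dim)
        self.pos = nn.Parameter(torch.zeros(1, n, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.body = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, patch * patch * 3)

    def forward(self, x):                                    # x: (B, 3, H, W)
        B, C, H, W = x.shape
        p = self.patch
        patches = x.unfold(2, p, p).unfold(3, p, p)           # (B, C, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        tokens = self.body(self.embed(patches) + self.pos)
        out = self.head(tokens).reshape(B, H // p, W // p, C, p, p)
        return out.permute(0, 3, 1, 4, 2, 5).reshape(B, C, H, W)

model, l1 = TinyViTRenderer(), nn.L1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# Stage 1 (restoration pretraining): degraded -> clean pairs.
# Stage 2 (bokeh fine-tuning): all-in-focus -> bokeh pairs.
for src, tgt in [(torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224))]:
    opt.zero_grad(); l1(model(src), tgt).backward(); opt.step()
```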

Improving SIFT Matching by Interest Points Filtering

In this work, the goal is to minimize the number of outliers detected by interest point detectors. Reducing outliers brings several benefits: better homography matrices can be obtained and used for image stitching, RANSAC converges much faster, and the generalized Hough transform yields better results. Four methods for reducing outliers were proposed and tested using the SIFT descriptor on a dataset of 1000 images containing image pairs subjected to different types of transformations and illumination changes. Three of the four methods yielded good results and were combined to give the best result: a 26.69% improvement in the true positive rate.
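
For context, a generic version of this pipeline looks like the OpenCV sketch below, where Lowe's ratio test stands in for the paper's four (unspecified here) filtering methods and the RANSAC inlier ratio serves as a rough quality proxy; the image paths are hypothetical.

```python
# A hedged sketch of the general pipeline: filter tentative SIFT matches,
# then estimate a homography with RANSAC; fewer outliers -> a cleaner fit.
import cv2
import numpy as np

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)    # hypothetical image pair
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Filter tentative matches (ratio test) before estimating the homography.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"inlier ratio: {mask.sum() / len(good):.2%}")        # rough quality proxy
```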

ViTA: A flexible CAD-tool-independent automatic grading platform for two-dimensional CAD drawings

International Journal of Mechanical Engineering Education, 2020

Grading engineering drawings takes a significant amount of an instructor’s time, especially in large classrooms. In many cases, teaching assistants help with grading, adding levels of inconsistency and unfairness. To help automate the grading of CAD drawings, this paper introduces a novel tool that can completely automate the grading process after students submit their work. The introduced tool, called Virtual Teaching Assistant (ViTA), is a CAD-tool-independent platform that can work with exported drawings originating from different CAD software with different export settings. Using computer vision techniques applied to exported images of the drawings, ViTA can not only recognize whether or not a two-dimensional (2D) drawing is correct, but also detect many important orthographic- and sectional-view mistakes, such as mistakes in structural features, outline, hatching, orientation, scale, line thickness, colors, and views. We show ViTA’s accuracy and its relevance…
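
ViTA's actual checks are not reproduced here; the sketch below only illustrates the general flavor of image-based comparison, scoring a hypothetical student export against a reference drawing by the overlap of their edge maps.

```python
# Simplified illustration (not ViTA's actual checks): compare a student's
# exported 2D drawing against a reference by overlapping their edge maps.
import cv2
import numpy as np

def edge_overlap(student_path: str, reference_path: str, size=(800, 600)) -> float:
    student = cv2.resize(cv2.imread(student_path, cv2.IMREAD_GRAYSCALE), size)
    reference = cv2.resize(cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE), size)
    s_edges = cv2.Canny(student, 50, 150) > 0
    r_edges = cv2.Canny(reference, 50, 150) > 0
    # Intersection-over-union of the drawn line work as a crude correctness score.
    return np.logical_and(s_edges, r_edges).sum() / max(np.logical_or(s_edges, r_edges).sum(), 1)

score = edge_overlap("student_view.png", "reference_view.png")   # hypothetical files
print("likely correct" if score > 0.8 else "flag for view/outline mistakes")
```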

Simple Steps to Lower Student Stress in a Digital Systems Course While Maintaining High Standards and Expectations

2020 ASEE Virtual Annual Conference Content Access Proceedings

Rabih speaks nine languages (fluent in three) and holds a number of certificates in education, networking, IT, and skydiving. He is a member of ASEE, IEEE, and ACM, and a member of several honor societies, including Tau Beta Pi, Eta Kappa Nu, Phi Kappa Phi, and Golden Key. Rabih has a passion for both teaching and research; he has been teaching since he was a teenager, and his research interests include wearable computing, activity recognition, and engineering education. For more information, refer to his website: www.rabihyounes.com.

Predicting Spatial Visualization Problems’ Difficulty Level from Eye-Tracking Data

Sensors, 2020

The difficulty level of learning tasks is a concern that often needs to be considered in the teaching process. Teachers usually adjust the difficulty of exercises dynamically according to students’ prior knowledge and abilities to achieve better teaching results. In e-learning, because there is no teacher involvement, the difficulty of a task often exceeds the ability of the student. In attempts to solve this problem, several researchers have investigated the problem-solving process using eye-tracking data. However, although most e-learning exercises take the form of fill-in-the-blank and multiple-choice questions, previous research focused on building cognitive models from eye-tracking data collected from more flexible problem forms, which may lead to impractical results. In this paper, we build models to predict the difficulty level of spatial visualization problems from eye-tracking data collected from multiple-choice questions. We use eye-tracking and machine learning…
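
A minimal sketch of this kind of model, with assumed fixation-level features and random placeholder data rather than the study's dataset or exact learning algorithm:

```python
# Minimal sketch under assumed features (not the paper's exact model): predict a
# problem's difficulty level from summary statistics of eye-tracking fixations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-trial features: fixation count, mean fixation duration (ms),
# mean saccade amplitude (deg), total dwell time on answer options (ms).
X = rng.random((200, 4)) * [60, 400, 8, 5000]
y = rng.integers(0, 3, size=200)           # difficulty level: easy / medium / hard

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # expect ~chance on random data
```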

Parallel multi-voltage power minimization in VLSI circuits (c2013)

Creative Ways of Knowing and the Future of Engineering Education

Creative Ways of Knowing in Engineering, 2016

Within previous chapters of this book, members of the engineering education community describe emerging educational shifts in engineering teaching that draw on creativity and many ways of knowing. In this chapter, we project the future of these shifts using the voices of students within a graduate-level practicum class in engineering education who are embarking on their teaching careers. In this course, students were asked to review a draft chapter of the current text, provide a scholarly critique of the chapter, and write a reflection about the ways in which their teaching practices have been informed by this and other existing literature. This chapter presents the reflections of five students and serves to encourage continued work in this area as well as inspire creative ways of teaching and knowing throughout engineering education.

A User-Independent and Sensor-Tolerant Wearable Activity Classifier

Classifier for Activities with Variations

Sensors (Basel, Switzerland), Jan 18, 2018

Most activity classifiers focus on recognizing application-specific activities that are mostly performed in a scripted manner, where there is very little room for variation within the activity. These classifiers are mainly good at recognizing short scripted activities that are performed in a specific way. In reality, especially when considering daily activities, humans perform complex activities in a wide variety of ways. In this work, we aim to make activity recognition more practical by proposing a novel approach to recognize complex heterogeneous activities that can be performed in a wide variety of ways. We collect data from 15 subjects performing eight complex activities and test our approach while analyzing it from different aspects. The results show the validity of our approach and that it performs better than state-of-the-art approaches that attempted to recognize the same activities in a more controlled environment.
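
For orientation, the conventional wearable-sensing baseline such an approach builds on looks roughly like the sketch below (sliding-window statistics plus a standard classifier); the paper's actual handling of activity variations is more involved and is not reproduced here.

```python
# Generic sketch of the usual wearable-sensing pipeline (not the paper's own
# approach to activity variations): sliding-window features + an SVM classifier.
import numpy as np
from sklearn.svm import SVC

def window_features(signal: np.ndarray, win=128, step=64) -> np.ndarray:
    """Mean/std/min/max per axis over sliding windows of an (N, 3) IMU stream."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.asarray(feats)

accel = np.random.randn(10_000, 3)                  # stand-in for accelerometer data
X = window_features(accel)
y = np.random.randint(0, 8, len(X))                 # eight complex activities
clf = SVC().fit(X, y)
```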

Improving the accuracy of wearable activity classifiers

Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers, 2015

Providing context awareness in wearables helps users be more efficient in their tasks while being interrupted by the wearable device only when needed. In this work, we focus on one important aspect of context awareness in wearables: activity classification. First, a wearable activity classifier is improved and applied to a medical application. Afterwards, further techniques are proposed that can improve the accuracy of any activity classifier. Future research aims to incorporate more context-awareness domains that can interact with and help wearable activity classifiers.
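
One generic post-processing technique in this spirit, shown purely as an illustration and not as one of the paper's methods, is to smooth per-window predictions with a sliding majority vote.

```python
# Illustrative accuracy-boosting trick (not necessarily the paper's): smooth
# per-window predictions, since activities rarely change label every window.
import numpy as np
from collections import Counter

def majority_smooth(labels: np.ndarray, radius: int = 2) -> np.ndarray:
    smoothed = labels.copy()
    for i in range(len(labels)):
        window = labels[max(0, i - radius): i + radius + 1]
        smoothed[i] = Counter(window.tolist()).most_common(1)[0][0]
    return smoothed

raw = np.array([1, 1, 3, 1, 1, 2, 2, 2, 1, 2, 2])    # noisy classifier output
print(majority_smooth(raw))                           # isolated flips get corrected
```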

Lab in a box: Redesigning an electrical circuits course by utilizing pedagogies of engagement

International Journal of Engineering Education, 2019

A lecture-based theoretical approach is frequently utilized when teaching courses in electrical circuits, and the educational learning objectives are often limited solely to content learning. This paper describes how a lecture-based electrical circuits course was redesigned utilizing pedagogies of engagement to produce an environment that stimulates creativity and allows the following additional learning objectives to be pursued: (1) improvement of hands-on skills, (2) increase in design abilities, and (3) teaming/collaboration proficiency. Educators are often deterred from pursuing these additional learning objectives in a large classroom or when there is a lack of space and equipment. In this study, a "lab in a box" approach is outlined and shown to overcome these deterrents and foster an environment of student engagement. An inexpensive and easy-to-maintain portable kit was developed to enable approximately 300 undergraduate students each year to build and design electrical circuits…

The design of smart garments for motion capture and activity classification

Electronic textiles provide a means for embedding electronics and conductive wires into fabric to make smart garments that can serve as platforms for a wide variety of applications. This chapter presents prototypes developed by the Virginia Tech E-Textiles Lab over the past few years, with a focus on creating smart garments for motion capture and activity classification.
