Mohammed Boukabous - Academia.edu


Papers by Mohammed Boukabous

Toward a deep learning-based intrusion detection system for IoT against botnet attacks

The massive volume of network traffic exchanged between connected devices in the Internet of Things (IoT) poses a major challenge to many traditional intrusion detection systems (IDS) trying to detect probable security breaches. Moreover, security attacks tend to be unpredictable. Building an adaptable and powerful IDS for IoT that avoids false alerts and ensures high detection precision is difficult, especially with the rise of botnet attacks, which can turn harmless devices into zombies that send malicious traffic and disturb the network. In this paper, we propose a new IDS solution, named BotIDS, based on deep learning convolutional neural networks (CNN). The main contribution of this work is to design, implement, and test our IDS against some well-known botnet attacks using the Bot-IoT dataset. Compared to other deep learning techniques, such as simple RNN, LSTM, and GRU, the obtained results of BotIDS are promising, with 99.94% in ...
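The paper itself does not publish code; as a minimal illustrative sketch of the core idea a CNN-based IDS relies on, the toy NumPy snippet below slides one 1-D convolutional filter over a vector of flow features and pools the result into an attack score. All names, feature values, and filter weights here are hypothetical, not taken from BotIDS; a real model would learn many filters from the Bot-IoT dataset.

```python
import numpy as np

def conv1d(x, kernel, bias=0.0):
    # Valid 1-D convolution (cross-correlation, as in most DL libraries).
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) + bias
                     for i in range(len(x) - k + 1)])

def relu(v):
    return np.maximum(v, 0.0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Hypothetical, already-normalized flow features (packet counts, durations, ...).
flow = np.array([0.1, 0.9, 0.8, 0.2, 0.7, 0.3])
kernel = np.array([0.5, -0.25, 0.5])  # one filter; weights are illustrative only

feature_map = relu(conv1d(flow, kernel))   # local patterns in the flow
score = sigmoid(feature_map.mean())        # pooled logit -> attack probability
print(float(score))
```

The sketch shows only the forward pass of a single filter; training (backpropagation over labeled benign/botnet flows) is what would make the weights meaningful.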

A comparative study of deep learning based language representation learning models

Indonesian Journal of Electrical Engineering and Computer Science, 2021

Deep learning (DL) approaches use multiple processing layers to learn hierarchical representations of data. Recently, many methods and designs of natural language processing (NLP) models have shown significant progress, especially in text mining and analysis. For learning vector-space representations of text, well-known models include Word2vec, GloVe, and fastText. NLP took a big step forward when BERT and, more recently, GPT-3 came out. In this paper, we highlight the most important language representation learning models in NLP and provide insight into their evolution. We also summarize, compare, and contrast these models on sentiment analysis, and discuss their main strengths and limitations. Our results show that BERT is the best-performing language representation learning model.
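The property that models like Word2vec, GloVe, and fastText exploit is that semantically related words end up close together in vector space. The snippet below illustrates this with tiny hand-made 3-dimensional vectors and cosine similarity; real embeddings have hundreds of dimensions and are learned from large corpora, so these numbers are purely illustrative.

```python
import numpy as np

# Toy, hand-made "embeddings" -- NOT real Word2vec/GloVe/fastText vectors.
vectors = {
    "good":  np.array([0.9, 0.1, 0.3]),
    "great": np.array([0.8, 0.2, 0.4]),
    "bad":   np.array([-0.7, 0.9, 0.1]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 for parallel vectors, -1.0 for opposite ones.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_pos = cosine(vectors["good"], vectors["great"])
sim_neg = cosine(vectors["good"], vectors["bad"])
print(sim_pos > sim_neg)  # semantically close words score higher
```

Contextual models such as BERT go further: instead of one static vector per word, they produce a different vector for each occurrence depending on the surrounding sentence, which is one reason they dominate on sentiment analysis.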

