Navdeep Kaur | Sri Guru Granth Sahib World University

Papers by Navdeep Kaur

Keystroke Dynamics for Mobile Phones: A Survey

Biometrics is the science of authenticating a user based on his or her physical or behavioral attributes. Keystroke dynamics is a behavioral biometric that analyzes the typing rhythm of the user. We adopted a systematic procedure for studying the state of the art in keystroke dynamics on mobile phones. We analyzed the features extracted, the classification techniques, the input text and its length, the number of users, the hardware used, and the results that each study reported. We included only research articles that focused on keystroke dynamics for mobile devices. It was found that the majority of the research used latency as the prominent feature; hold time and pressure are also combined with latency to obtain improved results. The most popular classification techniques are either statistical or neural-network based, although it is difficult to say which is better, since the users, testing conditions and features differ from study to study. Moreover, the number of users providing input is generally fewer than 100, which is not a representative sample. The technique is very cost effective, as it does not require any extra hardware. Hence there is a need for researchers to share their datasets and to develop a standard benchmark against which every researcher can compare results. The tests should also be performed in uncontrolled environments, which would give results that are more realistic and closer to a real deployment environment.
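
The two timing features the survey highlights, hold time and latency, are straightforward to derive from raw key events. The sketch below is illustrative only and assumes a hypothetical event format of (key, press time, release time) in milliseconds; it is not taken from any of the surveyed papers.

```python
# Minimal sketch: computing hold time and down-down latency from key events.

def keystroke_features(events):
    """events: list of (key, press_ms, release_ms), ordered by press time."""
    hold_times = [release - press for _, press, release in events]
    # Down-down latency between consecutive key presses.
    latencies = [events[i + 1][1] - events[i][1] for i in range(len(events) - 1)]
    return hold_times, latencies

sample = [("p", 0, 95), ("a", 180, 260), ("s", 410, 500), ("s", 640, 720)]
holds, lats = keystroke_features(sample)
print(holds)  # [95, 80, 90, 80]
print(lats)   # [180, 230, 230]
```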

Efficient Edge Detection Method based on Soft Computing: A Review

Several edge detection methods have been proposed over the past decades. Many are derived from differential operators such as Sobel, Roberts and the Laplacian. However, these algorithms are quite sensitive to noise, so noise must be suppressed before edges are detected. This paper presents an extensive literature review of numerous edge detection methods and the limitations of each, limitations that the proposed hybrid approach aims to overcome.
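
As an illustration of the classical pipeline the review discusses, the sketch below smooths an image before applying the Sobel operator. The SciPy calls, the smoothing width and the threshold are choices made here for illustration, not values taken from the paper.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image, sigma=1.0, threshold=0.2):
    # Suppress noise first, then differentiate.
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() or 1.0   # normalise to [0, 1]
    return magnitude > threshold

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                   # bright square on a dark background
print(sobel_edges(img).sum())             # number of edge pixels found
```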

Simulation and Comparison of Various Queuing Algorithms based on their Performance using CPR Approach in Detection of LDDoS Attacks

International Journal of Computer Applications, 2014

In this paper, various queue management algorithms are compared with and without the use of CPR. The Congestion Participation Rate (CPR) is a novel metric proposed for the detection and prevention of LDDoS attacks: an LDDoS flow does not decrease its sending rate when congestion occurs, whereas a TCP flow does. We examine the effect of the different queue management algorithms on parameters of the packet flow such as the number of packets sent, received and lost.
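
The sketch below illustrates the intuition behind CPR as described above: a flow's CPR is taken here as the fraction of its packets sent while the link is congested, so a TCP flow that backs off scores low and an LDDoS flow scores high. The data layout is a hypothetical simplification, not the paper's exact definition.

```python
def congestion_participation_rate(packets, congested_intervals):
    """packets: send timestamps for one flow.
    congested_intervals: list of (start, end) times when the link was congested."""
    in_congestion = sum(
        1 for t in packets
        if any(start <= t <= end for start, end in congested_intervals)
    )
    return in_congestion / len(packets) if packets else 0.0

tcp_flow = [0.1, 0.4, 2.5, 3.0, 3.6]          # backs off around congestion
attack_flow = [0.1, 0.9, 1.1, 1.3, 1.5, 1.7]  # keeps sending during congestion
congested = [(0.8, 1.8)]
print(congestion_participation_rate(tcp_flow, congested))     # low CPR
print(congestion_participation_rate(attack_flow, congested))  # high CPR
```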

SecRIP: Secure and reliable intercluster routing protocol for efficient data transmission in flying ad hoc networks

Transactions on Emerging Telecommunications Technologies

A practical approach to energy consumption in wireless sensor networks

International Journal of Advanced Intelligence Paradigms

Software effort estimation using FAHP and weighted kernel LSSVM machine

Soft Computing

In the life cycle of software product development, software effort estimation (SEE) has always been a critical activity. Researchers have proposed numerous estimation methods since the inception of software engineering as a research area. The diversity of estimation approaches is high and increasing, yet no single technique performs consistently across every project and environment. A multi-criteria decision-making (MCDM) approach generates more credible estimates, but it is subject to the expert's experience. In this paper, a hybrid model has been developed that combines an MCDM approach (for handling uncertainty) with a machine learning algorithm (for handling imprecision) to predict effort more accurately. The fuzzy analytic hierarchy process (FAHP) has been used for feature ranking, and the ranks generated by FAHP have been integrated into a weighted kernel least squares support vector machine for effort estimation. The model has been empirically validated on data repositories available for SEE. The combination of weights generated by FAHP and the radial basis function (RBF) kernel resulted in more accurate effort estimates than bee colony optimisation and a basic RBF kernel-based model.
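
One plausible way to fold FAHP-derived feature importances into an RBF kernel, sketched below, is to scale each feature dimension by its weight before computing distances. The weights and data here are hypothetical, and this is not claimed to be the paper's exact formulation.

```python
import numpy as np

def weighted_rbf_kernel(X, Y, weights, gamma=0.1):
    """X: (n, d), Y: (m, d), weights: (d,) importance scores, e.g. from FAHP."""
    Xw = X * np.sqrt(weights)
    Yw = Y * np.sqrt(weights)
    sq_dists = (
        np.sum(Xw ** 2, axis=1)[:, None]
        + np.sum(Yw ** 2, axis=1)[None, :]
        - 2 * Xw @ Yw.T
    )
    return np.exp(-gamma * sq_dists)

X = np.random.rand(5, 4)                # 5 projects, 4 effort drivers
w = np.array([0.4, 0.3, 0.2, 0.1])      # hypothetical FAHP-derived weights
K = weighted_rbf_kernel(X, X, w)
print(K.shape)                          # (5, 5) kernel matrix for the LSSVM
```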

An Improved Technique to Compute Visual Attention Map based upon Wavelet Domain

International Journal of Computer Applications

Review on Energy Efficient Techniques for Mobile Ad Hoc Networks

International Journal of Advanced Research in Computer Science and Software Engineering

Research patterns and trends in software effort estimation

Information and Software Technology

Secret Communication in RGB Images Using Wavelet Domain Based Saliency Map as Model

The human visual system does not process the complete area of an image; rather, it focuses on a limited area of the visual field. Which area attracts visual attention is a topic of active research. Research on the underlying psychological phenomenon indicates that attention is drawn to features that differ from their surroundings or that are unusual or unfamiliar to the human visual system. Detecting visually salient image regions is useful for applications such as object segmentation, adaptive compression and object recognition. Object- or region-based image processing can be performed more efficiently with the aid of a saliency map, which records the locations that are visually salient to human perception. Recently, many authors have used the wavelet domain to detect salient regions. This domain has shown promising results, but almost all authors have ignored the detail components of the wavelet decomposition, which may carry useful information. In this paper we therefore use a wavelet-domain method that detects salient regions using the approximation component together with all detail components. This saliency map is then used for steganography.
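
A minimal sketch of the idea, assuming PyWavelets for the decomposition: the saliency map combines local contrast of the approximation band with the energy of all three detail bands, rather than discarding the detail components. The particular wavelet, window size and normalisation below are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np
import pywt
from scipy import ndimage

def wavelet_saliency(gray, wavelet="db2"):
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), wavelet)
    # Local contrast of the approximation band ...
    contrast = np.abs(cA - ndimage.uniform_filter(cA, size=9))
    # ... plus the energy of all three detail bands.
    detail_energy = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)
    saliency = contrast + detail_energy
    # Resize back to the input resolution and normalise to [0, 1].
    saliency = ndimage.zoom(saliency, np.array(gray.shape) / np.array(saliency.shape))
    span = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / (span or 1.0)

smap = wavelet_saliency(np.random.rand(128, 128))
print(smap.shape, float(smap.min()), float(smap.max()))
```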

Improved Max-Min Scheduling Algorithm

In this research paper, additional constraints have been considered to develop a holistic, analysis-based algorithm built on the Max-Min algorithm, which works on the principle of sorting jobs (cloudlets) by their completion time. The improved algorithm also reviews job characteristics such as size, pattern, payload ratio and the storage blocks available in the particular cluster of the contributing file systems. The observations show no significant overhead from adding these constraints, as the sorting operation remains the same and efficient, while storage-aware allocation helps achieve better performance.
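
For reference, the baseline Max-Min policy the improvement builds on can be sketched as follows: the cloudlet whose best-case completion time is largest is scheduled first, onto the VM that finishes it earliest. The extra checks on size, pattern, payload ratio and storage blocks described above are not reproduced here.

```python
def max_min_schedule(cloudlet_lengths, vm_speeds):
    ready = [0.0] * len(vm_speeds)            # when each VM becomes free
    remaining = dict(enumerate(cloudlet_lengths))
    assignment = {}
    while remaining:
        # Best-case (earliest) completion time of every remaining cloudlet.
        best = {
            c: min((ready[v] + length / vm_speeds[v], v) for v in range(len(vm_speeds)))
            for c, length in remaining.items()
        }
        # Max-Min: pick the cloudlet whose best completion time is the largest.
        cloudlet = max(best, key=lambda c: best[c][0])
        finish, vm = best[cloudlet]
        assignment[cloudlet] = vm
        ready[vm] = finish
        del remaining[cloudlet]
    return assignment

print(max_min_schedule([400, 100, 300, 50], [1.0, 2.0]))
```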

Searching over the encrypted cloud data

Let’s Play with Images and Private Data Using Stick of Randomness

Communications in Computer and Information Science, 2012

Steganography is the process of hiding one message or file inside another message or file. For instance, steganographers can hide an image inside another image, an audio file or a video file, or they can hide an audio or video file inside another media file or even inside a large graphic file. Steganography differs from cryptography in that cryptography works to mask the content of a message, whereas steganography works to mask the very existence of the message. With the war on terrorism and the hunt for those responsible for the September 11 attacks, steganography has been increasingly in the news; some experts theorise that the al Qaeda terrorists used the Internet to plan the attacks, possibly using steganography to keep their intentions secret [13]. The aim of this study was to investigate the various steganography methods and how they are implemented. LSB substitution is a very well known method in this field. In binary images the scope is very restricted, as there are only 4 or 8 bits to represent a pixel, so we are largely limited to the popular LSB methods. In coloured images, however, there are generally up to 24 bits per pixel with three different channels (when the RGB colour space is used), so many new methods can be explored that manipulate the channels of coloured images in regular or arbitrary patterns to hide information. Using this concept, we have explored the existing methods of data hiding in coloured images and taken an intersection of arbitrary pixel manipulation and the LSB method to propose our scheme, which uses an arbitrary channel of a pixel to indicate the presence of data in one or two of the other channels. We expect this work to give attractive results compared with other existing algorithms on parameters such as security, imperceptibility, capacity and robustness.
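
A hedged sketch of the indicator-channel idea mentioned above: the LSB of one arbitrarily chosen channel of each RGB pixel flags whether a payload bit is hidden in the LSB of another channel. The channel choices and embedding rule here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def embed(pixels, bits, indicator=0, carrier=1):
    """pixels: (n, 3) uint8 RGB array; bits: iterable of 0/1 payload bits."""
    out = pixels.copy()
    bit_iter = iter(bits)
    for px in out:
        try:
            b = next(bit_iter)
        except StopIteration:
            px[indicator] &= 0xFE             # indicator LSB 0: no data here
            continue
        px[indicator] |= 0x01                 # indicator LSB 1: data present
        px[carrier] = (px[carrier] & 0xFE) | b
    return out

def extract(pixels, indicator=0, carrier=1):
    return [int(px[carrier] & 1) for px in pixels if px[indicator] & 1]

cover = np.random.randint(0, 256, size=(8, 3), dtype=np.uint8)
stego = embed(cover, [1, 0, 1, 1])
print(extract(stego))  # [1, 0, 1, 1]
```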

Comparative Analysis of Existing Dynamic Load Balancing Techniques

International Journal of Computer Applications, 2013

The anticipated uptake of cloud computing, built on well-established research in Web services, networks, utility computing, distributed computing and virtualization, will bring many advantages in cost, flexibility and availability for service users. The cloud is based on data centers that are powerful enough to handle large numbers of users. As cloud computing is a new style of computing over the Internet, it has many advantages along with some crucial issues that must be resolved in order to improve the reliability of the cloud environment. Central to this is the implementation of an effective load balancing algorithm. This paper investigates two distributed algorithms that have been proposed for load balancing: round robin and throttled scheduling.
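
The two policies compared in the paper can be contrasted with a toy allocator sketch; the VM names and concurrency limit below are illustrative only.

```python
from itertools import cycle

class RoundRobin:
    """Hands each incoming request to the next VM in a fixed rotation."""
    def __init__(self, vms):
        self._ring = cycle(vms)
    def allocate(self, request):
        return next(self._ring)

class Throttled:
    """Hands a request to the first VM that is below its concurrency limit."""
    def __init__(self, vms, limit=2):
        self.load = {vm: 0 for vm in vms}
        self.limit = limit
    def allocate(self, request):
        for vm, load in self.load.items():
            if load < self.limit:
                self.load[vm] += 1
                return vm
        return None                      # all VMs saturated: request waits
    def release(self, vm):
        self.load[vm] -= 1

rr = RoundRobin(["vm0", "vm1", "vm2"])
th = Throttled(["vm0", "vm1", "vm2"])
print([rr.allocate(i) for i in range(4)])   # vm0 vm1 vm2 vm0
print([th.allocate(i) for i in range(4)])   # vm0 vm0 vm1 vm1
```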

Concurrency Control for Multilevel Secure Databases

International Journal of Network Security, 2009

A multilevel secure (MLS) database is intended to protect classified information from unauthorized users based on the classification of the data and the clearances of the users. The concurrency control requirements for transaction processing in multilevel secure database management systems (MLS/DBMSs) are different from those in conventional transaction processing systems. In MLS/DBMSs, coordination of transactions at different security levels...
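
For background, the access rule such an MLS database enforces is commonly stated in Bell-LaPadula terms ("no read up, no write down"); the sketch below illustrates only that rule and says nothing about the paper's concurrency control protocol.

```python
# Standard Bell-LaPadula style check, for illustration only.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

def can_read(clearance, classification):
    return LEVELS[clearance] >= LEVELS[classification]   # simple security property

def can_write(clearance, classification):
    return LEVELS[clearance] <= LEVELS[classification]   # *-property

print(can_read("SECRET", "CONFIDENTIAL"))   # True
print(can_write("SECRET", "CONFIDENTIAL"))  # False: would leak downward
```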

Improved Approach for Maximizing Reliability in Fault Tolerant Networks

Journal of Advanced Computational Intelligence and Intelligent Informatics

The objective of this paper is to present a novel method for achieving maximum reliability in fault-tolerant optimal network design when networks have variable size. Reliability calculation is a most important and critical component when fault-tolerant optimal network design is required. A network must be supplied with certain parameters that guarantee proper functionality and maintainability in worst-case situations. Many alternative methods for measuring reliability have been described in the literature for optimal network design; most of them are analytical or simulation-based. These methods provide significant ways of computing reliability when a network has limited size, but significant computational effort is required as variable-sized networks grow. A novel neural network method is therefore presented to achieve significantly high reliability in fault-tolerant optimal network design in highly growing, variable-sized networks...
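
As a point of reference for the simulation-based methods mentioned above, all-terminal reliability of a small network can be estimated by plain Monte Carlo sampling, as sketched below; this is background illustration, not the neural-network method the paper proposes.

```python
import random

def mc_all_terminal_reliability(nodes, links, trials=10000):
    """links: list of (u, v, p_up); each link is up independently with probability p_up."""
    nodes = list(nodes)
    connected = 0
    for _ in range(trials):
        parent = {n: n for n in nodes}          # union-find over surviving links
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v, p in links:
            if random.random() < p:
                parent[find(u)] = find(v)
        if len({find(n) for n in nodes}) == 1:  # all nodes still connected
            connected += 1
    return connected / trials

ring = [(0, 1, 0.9), (1, 2, 0.9), (2, 3, 0.9), (3, 0, 0.9)]
print(mc_all_terminal_reliability(range(4), ring))   # ~0.95 for this 4-node ring
```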

A Three-Step Authentication Model for Mobile Phone User Using Keystroke Dynamics

IEEE Access

The use of keystroke dynamics for user authentication has evolved over the years and has found application in mobile phones. The primary challenge with mobile phones, however, is that they can be used in any position, so it becomes critical to analyze keystroke dynamics using data collected in various typing positions. This research proposed a three-step authentication model that can authenticate a user who is using the mobile in sitting, walking, and relaxing positions. Furthermore, the mobile orientation (portrait and landscape) was considered while taking input from the user. Apart from traditional keystroke features, accelerometer data were also combined for classification using Random Forest (RF) and K-Nearest Neighbour (KNN) classifiers. The three-step authentication method was able to authenticate a user with an EER of 2.9% for the relaxing landscape position. Finally, the model was optimized using Particle Swarm Optimization (PSO) to reduce the feature set and make the model more practical for mobile phones. Optimization reduced the number of features from 55 to 17 and improved the EER to 2.2%. The research validated that relaxing and walking are the best positions in which to authenticate a user using keystroke dynamics.
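
The EER figures quoted above are conventionally obtained by sweeping a decision threshold over genuine and impostor scores until the false acceptance and false rejection rates meet; the sketch below shows that computation on made-up scores, not the paper's data or classifiers.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a threshold until false-accept and false-reject rates meet."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = 1.0, None
    for t in thresholds:
        frr = np.mean(genuine < t)     # genuine attempts rejected
        far = np.mean(impostor >= t)   # impostor attempts accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

rng = np.random.default_rng(0)
genuine_scores = rng.normal(0.8, 0.10, 200)    # hypothetical match scores
impostor_scores = rng.normal(0.4, 0.15, 200)
print(round(float(equal_error_rate(genuine_scores, impostor_scores)), 3))
```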

Review article: Achieving maximum reliability in fault tolerant network design for variable networks

Applied Soft Computing, Jul 1, 2013

Secret communication in colored images using saliency map as model

2014 IEEE Applied Imagery Pattern Recognition Workshop, Oct 1, 2014

Improved neural approach in maximising reliability for increasing networks

International Journal of Computational Science and Engineering, 2015

Keystroke Dynamics Based User Authentication using Numeric Keypad

Keystroke dynamics is the study of identifying/authenticating a person based on his or her typing rhythms, which are inferred from keystroke events such as key press and key release. A lot of research has been done in this field, with researchers using alphabetic, alphanumeric or purely numeric inputs. In this paper we address the question: what is the best possible numeric input for authentication using keystroke dynamics? We accomplished this by having users enter four different numbers, each consisting of 8 digits. Two of the four numbers were random, while the other two were formed from digits that followed some pattern. Random Forest and Naive Bayes were used as classifiers. The results showed that the Random Forest classifier yielded the best results when a random number was taken as input. The study also showed that a combination of hold time and latency as features yielded improved results. We achieved an average false acceptance rate of 2.7% and a false rejection rate of 35.9%.
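
A toy sketch of the verification setup described above, assuming scikit-learn and synthetic hold-time/latency vectors (15 features for an 8-digit entry: 8 hold times plus 7 latencies): a Random Forest separates genuine and impostor samples, and FAR/FRR are read off its decisions. Nothing here is the paper's dataset or tuning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# 15 hypothetical timing features per attempt (seconds).
genuine = rng.normal(loc=0.25, scale=0.03, size=(60, 15))
impostor = rng.normal(loc=0.32, scale=0.05, size=(60, 15))

X = np.vstack([genuine, impostor])
y = np.array([1] * 60 + [0] * 60)              # 1 = genuine, 0 = impostor
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])

pred, truth = clf.predict(X[1::2]), y[1::2]
far = np.mean(pred[truth == 0] == 1)           # impostors accepted
frr = np.mean(pred[truth == 1] == 0)           # genuine attempts rejected
print(f"FAR={far:.2f}  FRR={frr:.2f}")
```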