RISHABH RATHORE - Academia.edu
Papers by RISHABH RATHORE
Mental stress is becoming a threat to people's health nowadays. With the rapid pace of life, more and more people are feeling stressed. It is not easy to detect a user's stress early enough to protect them [1]. We observed that a student's stress state is closely tied to his/her activities in online life. We first load the "Sentiment_140" dataset from Kaggle, visualize its properties from different viewpoints, and then apply a Naïve Bayes algorithm: a classification technique based on Bayes' theorem with an assumption of independence among predictors, i.e. the presence of a particular feature in a class is unrelated to the presence of any other feature.
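For context, a minimal sketch of the Naïve Bayes step described above, using scikit-learn's MultinomialNB on Sentiment140 tweets; the file name, the mapping of negative tweets to a "stressed" label, and the preprocessing choices are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch: Naive Bayes text classification on the Sentiment140 CSV from
# Kaggle (6 columns, no header, latin-1 encoded). The file name and the
# "negative tweet = stressed" proxy are assumptions for illustration.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

cols = ["target", "id", "date", "flag", "user", "text"]
df = pd.read_csv("sentiment140.csv", encoding="latin-1", names=cols)

# Treat negative tweets (target == 0) as a proxy for a stressed state (assumption).
y = (df["target"] == 0).astype(int)
X = CountVectorizer(stop_words="english", max_features=20000).fit_transform(df["text"])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = MultinomialNB().fit(X_tr, y_tr)   # Bayes' theorem + feature-independence assumption
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```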
International Journal of Quality & Reliability Management, 2020
Purpose: This paper investigates the risks involved in the Indian foodgrain supply chain (FSC) and proposes a risk mitigation taxonomy to enable decision making. Design/methodology/approach: This paper uses failure mode and effect analysis (FMEA) for risk estimation. In traditional FMEA, the risk priority number (RPN) is evaluated by multiplying the probability of occurrence, severity and detection. Because of some drawbacks of traditional FMEA, instead of calculating the RPN, this paper prioritizes the FSC risk factors using fuzzy VIKOR. VIKOR is a multiple attribute decision-making technique which aims to rank the FSC risk factors with respect to the criteria. Findings: The findings indicate that "technological risk" has the highest impact on the FSC, followed by natural disaster, communication failure, non-availability of procurement centers, malfunctioning in the PDS and inadequate storage facility. Sensitivity analysis is performed to check the robustness of the results. Practical implications: The outcom...
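Since the paper replaces the RPN calculation with a VIKOR ranking, a minimal crisp VIKOR sketch is shown below; the paper itself uses fuzzy VIKOR, so the fuzzification step is omitted here, and the decision matrix, criteria and weights are purely illustrative assumptions.

```python
# Minimal crisp VIKOR sketch for ranking risk factors against criteria.
# The matrix, weights and factor names below are illustrative only.
import numpy as np

# rows = risk factors, columns = criteria (e.g. occurrence, severity, detection)
F = np.array([[7, 8, 6],    # technological risk
              [8, 7, 4],    # natural disaster
              [6, 6, 5]])   # communication failure
w = np.array([0.4, 0.4, 0.2])   # criterion weights (assumed)
v = 0.5                         # weight of the "group utility" strategy

# For risk ranking, treat the highest score per criterion as the "best" (most critical) value.
f_best, f_worst = F.max(axis=0), F.min(axis=0)
norm = (f_best - F) / (f_best - f_worst)

S = (w * norm).sum(axis=1)   # group utility
R = (w * norm).max(axis=1)   # individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) + \
    (1 - v) * (R - R.min()) / (R.max() - R.min())

print("ranking (lowest Q = highest priority):", np.argsort(Q))
```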
International Journal of Production Research, 2020
The purpose of this paper is to model dynamic feedback effects and complex interactions among risks affecting foodgrains transportation using a system dynamics approach. The risk scenario is simulated using a system dynamics model considering risk index values. It is observed that there is a significant increase in inventory level from 8.39% to 28.4% and in vehicle capacity from 8.99% to 28.4% with a 30% change in risk value. This can help managers to develop inventory and transportation policies. It will, therefore, help to generate risk reduction scenarios for better availability of foodgrains through integrated risk control and mitigation in the supply chain. The key findings drawn for transportation and inventory management of foodgrains will help policy-makers to improve the efficiency of the foodgrains supply chain. This research uniquely analyses the dynamic implications of inventory-transportation policies on the foodgrains transportation system using a system dynamics modelling approach.
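As a rough illustration of the stock-and-flow simulation idea behind such a system dynamics model, the toy sketch below integrates a single inventory stock whose dispatch rate is modulated by a risk index; the structure, parameter values and the risk multiplier are assumptions for illustration only and do not reproduce the model in the paper.

```python
# Toy stock-and-flow sketch: one inventory stock with a procurement inflow and
# a dispatch outflow, where higher risk disrupts dispatch capacity.
# All numbers and the risk multiplier are illustrative assumptions.
dt, horizon = 0.25, 52.0             # simulation step and horizon (weeks)
steps = int(horizon / dt)
risk_index = 0.3                     # e.g. a 30% rise in the aggregate risk value

inventory = 100.0                    # stock level (illustrative units)
base_procurement, base_dispatch = 20.0, 20.0
history = []

for _ in range(steps):
    dispatch = base_dispatch * (1.0 - 0.5 * risk_index)   # risk reduces dispatch capacity
    procurement = base_procurement
    inventory += (procurement - dispatch) * dt            # Euler integration of the stock
    history.append(inventory)

print(f"final inventory after {horizon:.0f} weeks: {history[-1]:.1f}")
```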
The International Journal of Logistics Management, 2017
Purpose: The food supply chain is exposed to severe environmental and social issues with serious economic consequences. The identification and assessment of the risks involved in the food supply chain can help to overcome these challenges. In response, the purpose of this paper is to develop a risk assessment framework for a typical food supply chain. Design/methodology/approach: An integrated methodology of the grey analytical hierarchy process and the grey technique for order preference by similarity to the ideal solution is proposed for developing a comprehensive risk index. The opinion of experts is used to illustrate an application of the proposed methodology for the risk assessment of the food supply chain in India. Findings: Valuable insights and recommendations are drawn from the results, which are helpful to practitioners working at strategic and tactical levels in the food supply chain for minimising supply chain disruptions. Research limitations/implications: The risk quantifi...
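To make the ranking step concrete, a minimal crisp TOPSIS sketch follows; the paper uses grey AHP for weighting and grey TOPSIS for ranking, so grey-number arithmetic is omitted here, and the matrix, weights and criterion directions are illustrative assumptions.

```python
# Minimal crisp TOPSIS sketch of the ranking step. Rows are risk factors,
# columns are assessment criteria; all values and weights are illustrative.
import numpy as np

X = np.array([[7.0, 6.0, 8.0],
              [5.0, 8.0, 6.0],
              [6.0, 5.0, 4.0]])
w = np.array([0.5, 0.3, 0.2])          # criterion weights (e.g. from an AHP step)

R = X / np.linalg.norm(X, axis=0)      # vector-normalise each criterion
V = R * w                              # weighted normalised matrix

# Treat all criteria as "larger = riskier" (assumption for this sketch).
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_neg / (d_pos + d_neg)    # higher = closer to the riskiest profile

print("risk ranking (most critical first):", np.argsort(-closeness))
```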
The Journal of the Egyptian Public Health Association, 2007
Wastewater workers are exposed to various job-related hazards. This work was carried out in the period from November 2004 to January 2005. All workers (one hundred and ninety-two) in the Jeddah Municipal Wastewater Treatment Plants (MWTP) were interviewed. They were asked to answer a precoded questionnaire that included personal data and a complete medical (present, past, and family) history. They were also asked in detail about their history of previous medical examinations. Psychological problems were the most common health problems, reported by 84.4% of workers, followed by mucous membrane irritation, which constituted 42.2%. The percentages of workers following the safety precautions of wearing anti-slide shoes; using personal protective tools for the protection of the skin and eyes; using safety precautions in mixing chemicals; the safe storage, transfer, and circulation of chemicals; and ensuring the safety of electrical appliances were 14.6%, 75%, 13.5%, 91.7%, and 95.8%...
Proceedings of the National Academy of Sciences, 2003
High-dimensional data sets generated by high-throughput technologies, such as DNA microarray, are often the outputs of complex networked systems driven by hidden regulatory signals. Traditional statistical methods for computing low-dimensional or hidden representations of these data sets, such as principal component analysis and independent component analysis, ignore the underlying network structures and provide decompositions based purely on a priori statistical constraints on the computed component signals. The resulting decomposition thus provides a phenomenological model for the observed data and does not necessarily contain physically or biologically meaningful signals. Here, we develop a method, called network component analysis, for uncovering hidden regulatory signals from outputs of networked systems, when only a partial knowledge of the underlying network topology is available. The a priori network structure information is first tested for compliance with a set of identifi...
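The decomposition can be sketched as E ≈ A·P, with the zero pattern of the connectivity matrix A fixed by the known network topology; the toy alternating-least-squares loop below illustrates that idea on random data and is not the identifiability-checked algorithm described in the paper.

```python
# Toy sketch of the network-component-analysis idea: factor the data matrix
# E (genes x samples) as E ≈ A @ P, keeping the zero pattern of A fixed by the
# known topology. Random data and the simple ALS loop are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_signals, n_samples = 50, 3, 20

# Known topology: mask[i, j] = 1 if regulator j is allowed to act on gene i.
mask = (rng.random((n_genes, n_signals)) < 0.3).astype(float)
E = rng.standard_normal((n_genes, n_samples))          # observed data (toy)

A = mask * rng.standard_normal(mask.shape)
P = rng.standard_normal((n_signals, n_samples))

for _ in range(200):
    # Update hidden signals P with A fixed (ordinary least squares).
    P = np.linalg.lstsq(A, E, rcond=None)[0]
    # Update each row of A with P fixed, using only its allowed (non-zero) entries.
    for i in range(n_genes):
        idx = mask[i] > 0
        if idx.any():
            A[i, idx] = np.linalg.lstsq(P[idx].T, E[i], rcond=None)[0]

print("relative reconstruction error:", np.linalg.norm(E - A @ P) / np.linalg.norm(E))
```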
Journal of Intellectual Property Law & Practice, 2008
SLAS Discovery, 2008
Commonly used methods for isolated enzyme inhibitor screening typically rely on fluorescent or chemiluminescent detection techniques that are often indirect and/or coupled assays. Mass spectrometry (MS) has been widely reported for measuring the conversion of substrates to products for enzyme assays and has more recently been demonstrated as an alternative readout system for inhibitor screening. In this report, a high-throughput mass spectrometry (HTMS) readout platform, based on the direct measurement of substrate conversion to product, is presented. The rapid ionization and desorption features of a new generation matrix-assisted laser desorption ionization-triple quadrupole (MALDI-QqQ) mass spectrometer are shown to improve the speed of analysis to greater than 1 sample per second while maintaining excellent Z′ values. Furthermore, the readout was validated by demonstrating the ability to measure IC50 values for several known kinase inhibitors against cyclic AMP-dependent protein ...
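For readers unfamiliar with the screening statistics mentioned here, the short sketch below computes the standard Z′ factor (the Zhang et al., 1999 definition) and fits a four-parameter logistic curve to estimate an IC50; the data are synthetic and the fitting choices are assumptions, not the analysis used in the study.

```python
# Sketch: assay-quality Z' factor and IC50 estimation on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def z_prime(pos, neg):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1.0 - 3.0 * (np.std(pos) + np.std(neg)) / abs(np.mean(pos) - np.mean(neg))

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

rng = np.random.default_rng(1)
pos_ctrl = rng.normal(100.0, 3.0, 32)     # uninhibited substrate conversion (synthetic)
neg_ctrl = rng.normal(5.0, 2.0, 32)       # fully inhibited conversion (synthetic)
print("Z':", round(z_prime(pos_ctrl, neg_ctrl), 2))

conc = np.logspace(-3, 2, 10)             # inhibitor concentration (arbitrary units)
response = four_pl(conc, 5, 100, 0.5, 1.2) + rng.normal(0, 2, 10)
params, _ = curve_fit(four_pl, conc, response, p0=[0, 100, 1.0, 1.0],
                      bounds=([-10, 0, 1e-6, 0.1], [50, 200, 100, 10]))
print("estimated IC50:", round(params[2], 2))
```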
International Journal for Research in Applied Science and Engineering Technology, 2019
A collaborative neural network (ASN) is a combination of an ensemble of feed-forward neural networks and the k-nearest-neighbour technique. The proposed network uses the correlation between the collected responses as the measure of distance between the analyzed cases for the nearest-neighbour technique, and provides an improved prediction through bias correction of the neural network ensemble. A collaborative neural network has a memory that coincides with the training set. If new data become available, the network further improves its forecasting ability and can often provide a reasonable estimate of the unknown function without the need to retrain the neural network ensemble.

I. INTRODUCTION
The traditional artificial feed-forward neural network (ANN) is a memoryless model: after training is completed, all the information about the input patterns is stored in the neural network weights and the input data are no longer required, i.e. there is no explicit storage of any presented example in the system. In contrast, k-nearest neighbours (KNN) (e.g., Dasarathy, 1991), Parzen-window regression (e.g., Härdle, 1990), etc., represent memory-based approaches. These approaches keep the whole database of examples in memory, and their predictions are based on some local projections of the stored examples. Neural networks can be considered a global model, while the other two approaches are generally considered local models (Lawrence et al., 1996). For example, consider the problem of multivariate function approximation, i.e. finding a mapping R^M => R^N from a given set of sample points. For simplicity, we assume that N = 1. A global model provides a good estimate over the whole input data space R^M. However, if the analyzed function, F, is very complex, there is no guarantee that all details of F, i.e. its fine structure, will be represented. Thus, the global model can be insufficient, mainly because it does not describe the entire data space equally well, owing to a high bias of the global model in certain regions of the space. The variance of the ANN can also contribute to the poor performance of this method (Geman et al., 1992). However, the variance can be reduced by analyzing a large number of networks, i.e. by using an artificial neural network ensemble (ANNE) and, for example, taking a simple average of all networks as the final model. The problem of ANN bias cannot be addressed so easily: even very large neural networks can fall into local minima, and thus a considerable bias may remain. Local models are based on neighbourhood relations, and such methods are better suited to discovering the fine structure of the analyzed task, i.e. they can achieve a lower bias than the global model. However, when applying these methods, a difficult question is how to properly determine the neighbourhood relations in the analyzed space. The analyzed input data, especially in practical applications, can have a large number of dimensions, and the actual importance and contribution of each input parameter to the final response is generally not known.

Example 1. Consider the approximation of the sine function y = sin(x) (1) with a one-dimensional input vector x. The training and test sets consisted of N = 100 and 1000 cases respectively, and the input values were uniformly distributed over the interval (0, π).
The KNN method was used: z(x) = (1/k) Σ_{xi ∈ Nk(x)} yi (2), where z(x) is the estimated value for case x and Nk(x) ⊂ {xi}_{i=1..N} is the set of the k nearest neighbours of x among the training-set input vectors according to the Euclidean metric ||x − xi||. Note that the memory of KNN is formed by the entire training set {xi}_{i=1..N}. The number k = 1 was selected to provide the minimum leave-one-out (LOO) error for the training set. KNN gave a root mean square error of RMS = 0.016 for the test set. A similar result, RMS = 0.022, was obtained by an ensemble of M = 100 ANNs with 2 hidden neurons (one hidden layer) trained according to the Levenberg-Marquardt algorithm (Press et al., 1994). Input and output values were normalised to the (0.1, 0.9) interval, and the sigmoid activation function was used for all neurons. In this and all other analyses, 50% of the cases were randomly selected and used as the training set for each neural network (Tetko et al., 1995). The remaining cases were used as a validation set for the early stopping method (Bishop, 1995). Thus, each neural network had its own training and validation set. After learning, all the networks were used to predict the test set using a simple...
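A small sketch in the same spirit as Example 1 is given below: k-nearest-neighbour regression of y = sin(x) on (0, π) with the training and test sizes quoted above. The choice k = 1 follows the text (selected there by leave-one-out error), while the exact sampling scheme, the noise-free targets and the library used are assumptions.

```python
# Sketch: 1-nearest-neighbour regression of the sine function, reporting the
# test RMS error, loosely mirroring Example 1 above.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0, np.pi, 100).reshape(-1, 1)    # N = 100 training cases
x_test = rng.uniform(0, np.pi, 1000).reshape(-1, 1)    # 1000 test cases
y_train, y_test = np.sin(x_train).ravel(), np.sin(x_test).ravel()

knn = KNeighborsRegressor(n_neighbors=1, metric="euclidean").fit(x_train, y_train)
rms = np.sqrt(np.mean((knn.predict(x_test) - y_test) ** 2))
print(f"test RMS error: {rms:.3f}")   # comparable in spirit to the RMS = 0.016 quoted above
```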