Radha Guha - Academia.edu
Papers by Radha Guha
International Journal of Simulation: Systems, Science and Technology, Aug 30, 2013
International Journal of Informatics and Communication Technology (IJ-ICT)
When it comes to purchasing a product or attending an event, most people want to know what others think about it first. To construct a recommendation system, a user's liking for a product can be measured numerically, such as a five-star rating or a binary like/dislike rating. In the absence of a numerical rating system, the product review text can still be used to make recommendations. Natural language understanding (NLU) is a branch of computer science that aims to make machines capable of comprehending human language. Sentiment analysis (SA), or opinion mining (OM), is an algorithmic method for automatically determining whether the polarity of comments and reviews is negative, neutral, or positive, based on their content. Emotional intelligence relies on text categorization to work. In the age of big data, there are countless ways to use sentiment analysis, yet SA remains a challenge. As a result of its enormous importance, sentiment analysis is a hotly debated topic in the commer...
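A minimal sketch of the kind of review-polarity classification described above, assuming scikit-learn is available; the tiny labelled corpus is hypothetical and this is not the paper's own method.

```python
# Review-polarity classification sketch (illustrative only, not the paper's method):
# TF-IDF bag-of-words features fed to a logistic regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical labelled corpus: 1 = positive, 0 = negative.
reviews = [
    "Loved this phone, the battery lasts all day",
    "Terrible service, the product broke in a week",
    "Great value for money, highly recommended",
    "Worst purchase ever, completely disappointed",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["The event was fantastic and well organised"]))  # expect [1]
```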
The traditional way of software engineering is no longer fully suitable in the changing scenario of modern hardware and software architectures of parallel and distributed computing on the Semantic Web and cloud computing platforms. A parallel hardware architecture can support high-performance computing but needs changes in programming style. The Semantic Web can also link everything on the internet, making an interoperable intelligent system, and with this capability many beneficial business models like Web services and the cloud computing platform have been conceptualized. Cloud computing is the most anticipated future trend of computing. These changes in hardware and software architecture mean we need to revisit the traditional software engineering process models meant for a single computer system. This paper first surveys the evolution of hardware architecture, newer business models and newer software applications, and then analyses the need for changes in software engineering process models to leverage all the benefits of the newer business models. This paper also emphasizes the vulnerability of web applications and the cloud computing platform in terms of risk management of web applications in general and the privacy and security of customer information in a shared cloud platform, which may threaten adoption of the cloud platform.
Tim Berners-Lee's vision of the Semantic Web, or Web 3.0, is to transform the World Wide Web into an intelligent web system of structured, linked data which can be queried and inferred as a whole by the computers themselves. This grand vision of the web is materializing many innovative uses of the web. New business models like interoperable applications hosted on the web as services are getting implemented. These web services are designed to be automatically discovered by software agents and to exchange data amongst themselves. Another business model is the cloud computing platform, where hardware, software, tools and applications will all be leased out to tenants across the globe over the internet. The advancement of the Semantic Web and the many advantages of the new business models are changing the way software needs to be developed. This makes one wonder how software engineering has to adapt to these anticipated futuristic trends and how much it will benefit from them. This paper anal...
2015 International Conference on Industrial Engineering and Operations Management (IEOM), 2015
In recent years the emergence of many intelligent autonomous systems has become possible due to the tremendous advancement of technologies such as computer vision, automation and control engineering, and sensor technology. One such intelligent system is the autonomous underwater vehicle (AUV) for ocean-floor mapping by SONAR technology. The success of this autonomous, smart and precise intelligent system depends on accurate navigation of the unmanned vehicle over long periods of time and precise object detection on the ocean floor by SONAR image processing. This paper describes various algorithms used for this purpose and investigates the computational payload of such a dynamic system. For the very high computational payload requirement of such a system, parallel processing by a reconfigurable field-programmable gate array is proposed for higher performance.
An efficient placement algorithm for run-time reconfigurable embedded system
Computer Communications and Networks, 2013
Tim Berners-Lee's vision of the Semantic Web, or Web 3.0, is to transform the World Wide Web into an intelligent Web system of structured, linked data which can be queried and inferred as a whole by the computers themselves. This grand vision of the Web is materializing many innovative uses of the Web. New business models like interoperable applications hosted on the Web as services are getting implemented. These Web services are designed to be automatically discovered by software agents and to exchange data among themselves. Another business model is the cloud computing platform, where hardware, software, tools, and applications will be leased out as services to tenants across the globe over the Internet. There are many advantages of this business model for the tenants, like no capital expenditure, speed of application deployment, shorter time to market, lower cost of operation, and easier maintenance of resources. Because of these advantages, cloud computing may be the prevalent computing platform of the future. To realize all the advantages of these new business models of the distributed, shared, and self-provisioning environment of Web services and cloud computing resources, the traditional way of software engineering has to change as well. This chapter analyzes how cloud computing, against the background of the Semantic Web, is going to impact the software engineering processes used to develop quality software. The need for changes in the software development and deployment framework activities is also analyzed to facilitate adoption of the cloud computing platform.
2021 6th International Conference for Convergence in Technology (I2CT), 2021
In the era of smart digital transformation, automatic face recognition is the way of identifying and verifying a person in many security and authentication applications. If the person's 2D still image or 3D video frame is taken under controlled lighting and frontal face poses, then automatic face recognition is today a solved problem with more than 98% accuracy. Under ideal image conditions, automatic face recognition outperforms the manual face recognition rate. But the problems of uncontrolled illumination, occlusion, tilting of faces, different facial expressions, use of accessories and hair color, growth of facial hair, the aging effect of a person and low-resolution images make automatic face recognition underperform; it is beaten by humans. Unless this problem is solved in real time, automatic face recognition systems cannot be trusted for crucial security applications like e-passports, fraud detection, counter-terrorism and mug-shot verification. Extensive research is underway to improve this technology. Traditionally, Haar cascade classifier methods, histograms of oriented gradients (HoG), principal component analysis (PCA), eigenfaces, and support vector machine (SVM) classifiers have been used in face recognition. Now modern deep learning innovation has superseded those traditional machine learning techniques, with more computing power in GPUs and TPUs to tackle all variations in input image quality and the ever-growing face database. The goal of this paper is to report the journey of the face recognition system from traditional to modern techniques to increase the precision and recall of the system.
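A compact sketch of the traditional eigenfaces-plus-SVM pipeline the abstract names (PCA features fed to an SVM classifier), using scikit-learn's bundled LFW face dataset; the parameter choices here are illustrative, not the paper's own implementation.

```python
# Eigenfaces (PCA) + SVM face recognition sketch on the LFW dataset.
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)  # downloads on first use
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=42)

pca = PCA(n_components=150, whiten=True, random_state=42).fit(X_train)   # "eigenfaces"
clf = SVC(kernel="rbf", class_weight="balanced").fit(pca.transform(X_train), y_train)

y_pred = clf.predict(pca.transform(X_test))
print(classification_report(y_test, y_pred, target_names=faces.target_names))
```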
International Journal of Computer Applications, 2017
Text mining is the technique of automatically deducing non-obvious but statistically supported novel information from various text data sources written in natural languages. In today's big data and cloud computing era, huge amounts of text data are being generated online. Thus text mining is becoming essential for business intelligence extraction, as the volume of internet data generation is growing exponentially. Next-generation computing is going to see text mining amongst other disruptive technologies like the semantic web, mobile computing, big data generation, and cloud computing phenomena. Text mining needs proven techniques to be developed for it to be most effective. Even though the structured data mining field is very active and mature, the unstructured text mining field has only just emerged. The challenges of the text mining field are different from those of the structured data analytics field. In this paper, I survey text mining techniques and various interesting and important applications of tex...
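One small illustrative building block of the kind of statistically supported information extraction described above: pulling the most salient keywords from raw documents with TF-IDF. The toy corpus is hypothetical and this is not taken from the paper.

```python
# Keyword extraction sketch: rank each document's terms by TF-IDF weight.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Cloud computing lets tenants lease hardware and software over the internet",
    "Text mining extracts novel business intelligence from unstructured documents",
    "Semantic web technologies link structured data across web sites",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)
terms = vec.get_feature_names_out()

for i in range(len(docs)):
    row = tfidf[i].toarray().ravel()
    top = row.argsort()[::-1][:3]                # three highest-weighted terms
    print(f"doc {i}:", [terms[j] for j in top])
```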
The tremendous advancement of technology in the computer science and engineering discipline is recently aiming at improved patient care in a cost-effective way by using more sophisticated techniques and patient health monitoring devices. Advancements in computer vision, virtual reality and robotics technology can be applied to rendering images of internal regions of the human body and to developing image-guided surgery by physicians or by medical robots. Detailed visual representation of patient data helps doctors and researchers in analyzing the data, in tracing diseases and in quick decision making. With this goal of improved patient care, the newly emerged biomedical imaging field is facing the challenge of real-time quantitative processing of large data sets of vital signals acquired from the patient's body by various compute-intensive algorithms to extract important information, its visualization, and efficient management of storing and retrieving patient records. This research paper first ...
2021 6th International Conference for Convergence in Technology (I2CT), 2021
The most valuable artificial intelligence application for e-commerce and social media websites these days is a smart recommendation engine that can filter the panoply of information on the internet and recommend personalized products and services to each user. An efficient and reliable recommender engine (RE) increases the sales and profit of e-commerce websites, thus its performance is very crucial. Traditional REs suffer from cold-start, low-accuracy, and Big Data scalability problems. Thus, RE research has started again with great enthusiasm to explore newer techniques with deep learning artificial neural nets, as more computing power in parallel processing frameworks has become available from the latter half of this decade. In recent years deep learning (DL) artificial neural nets (ANN) have given breakthrough performance in areas like image processing and natural language processing tasks. So, their usability needs to be researched for recommender engine design as well. This paper first explores the traditional ways of making a recommender engine and then evaluates the use of deep learning neural net techniques. The first contribution of this paper is expounding the theoretical foundation of the different ways a RE can be built, viz. content-based filtering (CBF) and collaborative filtering (CF), which comprise complex algorithms. The second contribution of this paper is practical experiments with both traditional linear algebra techniques and a deep learning auto-encoder architecture on the large MovieLens dataset. Comparison of the results shows that the deep learning methods outperform the traditional methods.
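A toy sketch of the linear-algebra style of collaborative filtering that the abstract contrasts with the deep auto-encoder approach: a truncated SVD of a small user-item rating matrix, used to score an item a user has not rated. The matrix and rank are made up for illustration, not the paper's MovieLens experiments.

```python
# Collaborative filtering via low-rank (truncated SVD) reconstruction of a rating matrix.
import numpy as np

# Rows = users, columns = items; 0 means "not yet rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                    # keep the two strongest latent factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # low-rank estimate of all ratings

print("predicted rating of user 0 for item 2:", round(R_hat[0, 2], 2))
```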
Machine Learning for Sustainable Development, 2021
Current Chinese Computer Science, 2020
Background: In the era of information overload it is very difficult for a human reader to quickly make sense of the vast information available on the internet. Even for a specific domain like a college or university website, it may be difficult for a user to browse through all the links to get the relevant answers quickly. Objective: In this scenario, the design of a chat-bot which can answer questions related to college information and compare colleges will be very useful and novel. Methods: In this paper a novel conversational-interface chat-bot application with information retrieval and text summarization skills is designed and implemented. Firstly, this chat-bot has a simple dialog skill: when it can understand the user's query intent, it responds from the stored collection of answers. Secondly, for unknown queries, this chat-bot can search the internet and then perform text summarization using advanced techniques of natural language processing (NLP) and text mining (TM). Results...
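A minimal sketch of the first skill described above (matching a user query against a stored collection of answers); the college FAQ entries, the similarity threshold, and the TF-IDF cosine-similarity retrieval are placeholders, not the paper's exact pipeline.

```python
# Intent matching sketch: return the stored answer whose question is most similar
# to the user query; fall back when nothing is similar enough.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {  # hypothetical stored question -> answer pairs
    "What courses does the college offer?": "We offer B.Tech programmes in CSE, ECE and ME.",
    "What is the admission procedure?": "Admissions are through the state entrance examination.",
    "Where is the campus located?": "The campus is located on the city ring road.",
}

questions = list(faq)
vec = TfidfVectorizer(stop_words="english").fit(questions)
q_matrix = vec.transform(questions)

def answer(query, threshold=0.2):
    sims = cosine_similarity(vec.transform([query]), q_matrix).ravel()
    best = sims.argmax()
    if sims[best] < threshold:   # unknown intent -> web search + summarization would go here
        return "Let me search the web for that."
    return faq[questions[best]]

print(answer("which courses are offered here?"))
```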
International Research Journal of Computer Science, 2020
The recent beta release of GPT-3 is triggering a lot of excitement in the programming community. GPT is a generative pre-trained, transformer-based natural language processing (NLP) deep neural network model, bigger than any such model built before. It can write fiction and even program code from a natural language description, and it can answer questions posed in natural language. Software development may therefore have to reposition and reinvent itself around the latest GPT-3 model and around software development methodologies in a changing landscape, where cloud computing, web services applications and mobile apps are increasingly an integration of a conglomeration of application programming interfaces (APIs), digital transformation, and machine learning. This paper discusses these trends in the context of future software engineering needs and demands.
International Research Journal of Computer Science, 2020
Today we are living in the modern Internet era. We can get all our information from the internet anytime and from anywhere using a desktop PC or a smart phone. However, the underlying technology for relevant information retrieval from the internet is not so trivial, as the internet is a huge repository of all different kinds of information. Moreover, data collection on the internet keeps growing. Retrieving the relevant information from the internet in the Big Data era is like finding a needle in a haystack. This paper explores information retrieval models and experiments with Latent Semantic Indexing (LSI) first, and then with the more efficient topic modeling algorithm of Latent Dirichlet Allocation (LDA). Comparisons between the two models are described clearly and concisely in terms of their effectiveness for topic modeling. Various applications of topic modeling from the literature are also reviewed in this paper.
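A compact sketch of the two models the paper compares, LSI (via truncated SVD) and LDA, run side by side on a toy corpus; the documents and the choice of two topics are placeholders, not the paper's data or settings.

```python
# LSI vs LDA topic modeling sketch on a tiny corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation

docs = [
    "the cloud platform leases hardware and software to tenants",
    "deep neural networks improve image and speech recognition",
    "tenants pay for cloud services over the internet",
    "neural network models learn features from training data",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
terms = vec.get_feature_names_out()

def top_words(components, n=4):
    # highest-weighted terms per topic/component
    return [[terms[i] for i in row.argsort()[::-1][:n]] for row in components]

lsi = TruncatedSVD(n_components=2, random_state=0).fit(X)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

print("LSI topics:", top_words(lsi.components_))
print("LDA topics:", top_words(lda.components_))
```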
2009 First International Conference on Computational Intelligence, Communication Systems and Networks, 2009
In the era of application convergence, embedded systems are required to support multiple applications with different traffic patterns and Quality of Service (QoS) requirements. But process technology's continual shrinking trend makes the communication network of the embedded ...
International Conference on Computational Intelligence, Communication Systems and Networks, 2009
The grand vision of Tim Berners-Lee, director of the World Wide Web Consortium (W3C) founded in 1994, of changing the non-semantic Web (Web 1.0, Web 2.0) into the semantic Web (Web 3.0) will connect all the Web sites and will make their systems interoperable. Though this system has not fully matured yet, the goal of utilizing the full potential of...
World Congress on Nature & Biologically Inspired Computing, 2009
High performance computing (HPC) through parallel computing faces several challenges. The first challenge is the efficient design and management of the parallel computing resources of the hardware platform. The second challenge is the transformation of sequential programs meant for the classic Von Neumann architecture to the explicitly parallel instruction computing (EPIC) architecture. The third challenge is the design of an...
Process technology's continual shrinking trend following Moore's Law will enable Field Programmable Gate Arrays (FPGAs) to pack billions of smaller and faster transistors on the same chip by the end of this decade. The enormous capacity of this FPGA chip increases parallel processing power ...