Piyush Makhija - Academia.edu
Papers by Piyush Makhija
2017 IEEE Wireless Communications and Networking Conference (WCNC)
The side-link relay system is targeted to provide critical public safety and broadcast services to extended coverage regions. It is also seen as an enabler for future communication systems supporting vehicular communication such as V2V and V2X, and as a complementary technique to extend 5G coverage. Enhanced system performance in terms of energy efficiency for the overall system (both relay and remote UEs) and scheduling strategies that enable end-to-end QoS are of paramount importance in a relay system. This paper targets this twin challenge and proposes novel architectural aspects as well as new approaches for discovery and communication scheduling, enabling optimum system performance with DRX alignment while meeting varied QoS targets. Congestion handling during system overload situations is also addressed. Simulation results show that the proposed QoS scheduling algorithms, which take the DRX function into account, achieve substantial gains over the standard QoS-aware algorithm. The results provide significant insights for side-link relay system design and implementation.
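As a toy illustration of the kind of DRX-aware scheduling the abstract describes (not the authors' algorithm: the class, the priority convention, and all DRX parameters below are hypothetical), a scheduler can restrict candidate remote UEs to those inside their DRX on-duration and then allocate by QoS priority:

```python
# Minimal sketch of DRX-aware QoS scheduling; all names and parameters
# are illustrative assumptions, not the paper's actual algorithm.
from dataclasses import dataclass

@dataclass
class RemoteUE:
    ue_id: int
    qos_priority: int   # lower value = higher priority (assumed convention)
    drx_cycle: int      # DRX cycle length in subframes
    on_duration: int    # subframes awake at the start of each cycle
    drx_offset: int     # cycle start offset in subframes

    def is_awake(self, subframe: int) -> bool:
        # A UE can only be scheduled inside its DRX on-duration.
        return (subframe - self.drx_offset) % self.drx_cycle < self.on_duration

def schedule(ues, subframe, slots):
    """Pick up to `slots` awake UEs, highest QoS priority first."""
    awake = [ue for ue in ues if ue.is_awake(subframe)]
    awake.sort(key=lambda ue: ue.qos_priority)
    return [ue.ue_id for ue in awake[:slots]]

ues = [RemoteUE(1, 2, 8, 2, 0), RemoteUE(2, 1, 8, 2, 4), RemoteUE(3, 3, 4, 1, 0)]
for sf in range(8):
    print(sf, schedule(ues, sf, slots=2))
```

Aligning scheduling decisions with the DRX on-duration is what saves energy: a UE is never asked to wake outside its cycle, while the priority sort preserves the QoS ordering among the UEs that are awake.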
ArXiv, 2020
Owing to BERT's phenomenal success on various NLP tasks and benchmark datasets, industry prac... more Owing to BERT's phenomenal success on various NLP tasks and benchmark datasets, industry practitioners have started to experiment with incorporating BERT to build applications to solve industry use cases. Industrial NLP applications are known to deal with much more noisy data as compared to benchmark datasets. In this work we systematically show that when the text data is noisy, there is a significant degradation in the performance of BERT. While this work is motivated from our business use case of building NLP applications for user generated text data which is known to be very noisy, our findings are applicable across various use cases in the industry. Specifically, we show that BERT's performance on fundamental tasks like sentiment analysis and textual similarity drops significantly as we introduce noise in data in the form of spelling mistakes and typos. For our experiments we use three well known datasets - IMDB movie reviews, SST-2 and STS-B to measure the performance. ...
We present hinglishNorm - a human-annotated corpus of Hindi-English code-mixed sentences for the text normalization task. Each sentence in the corpus is aligned to its corresponding human-annotated normalized form. To the best of our knowledge, there is no publicly available corpus of Hindi-English code-mixed sentences for the text normalization task; our work is the first attempt in this direction. The corpus contains 13494 segments annotated for text normalization. Further, we present baseline normalization results on this corpus. We obtain a Word Error Rate (WER) of 15.55, a BiLingual Evaluation Understudy (BLEU) score of 71.2, and a Metric for Evaluation of Translation with Explicit ORdering (METEOR) score of 0.50.
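For context, these metrics compare a system's normalized output against the human-annotated normalized form. A minimal sketch on one invented sentence pair, using the standard jiwer and NLTK implementations (assumed here; not necessarily the authors' tooling):

```python
# Scoring one normalization hypothesis against its human reference.
from jiwer import wer                                 # pip install jiwer
from nltk.translate.bleu_score import sentence_bleu   # pip install nltk

reference = "kya kar rahe ho aaj shaam ko"   # human-annotated normalized form
hypothesis = "kya kr rhe ho aaj sham ko"     # hypothetical system output

print("WER :", wer(reference, hypothesis))
# Bigram BLEU to keep the toy example meaningful on a short sentence.
print("BLEU:", sentence_bleu([reference.split()], hypothesis.split(),
                             weights=(0.5, 0.5)))
```

Corpus-level scores like those reported above are then aggregates of these per-sentence comparisons over the whole test set.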
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), 2020
Owing to the phenomenal success of BERT on various NLP tasks and benchmark datasets, industry practitioners are actively experimenting with fine-tuning BERT to build NLP applications for solving industry use cases. For most datasets that practitioners use to build industrial NLP applications, it is hard to guarantee the absence of noise in the data. While BERT has performed exceedingly well in transferring learnings from one use case to another, it remains unclear how BERT performs when fine-tuned on noisy text. In this work, we explore the sensitivity of BERT to noise in the data. We work with the most commonly occurring noise (spelling mistakes, typos) and show that it results in significant degradation of BERT's performance. We present experimental results showing that BERT's performance on fundamental NLP tasks like sentiment analysis and textual similarity drops significantly in the presence of (simulated) noise on benchmark datasets, viz. IMDB Movie Review, STS-B, and SST-2. Further, we identify shortcomings in the existing BERT pipeline that are responsible for this drop in performance. Our findings suggest that practitioners need to be wary of the presence of noise in their datasets while fine-tuning BERT to solve industry use cases.
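One pipeline shortcoming of this kind is visible at the tokenization step: a single typo can make WordPiece fragment a word into subwords whose embeddings carry little of the original word's meaning. A minimal sketch, assuming the standard public bert-base-uncased checkpoint (offered as an illustration of the effect, not as the paper's specific analysis):

```python
# How one typo changes BERT's WordPiece segmentation.
from transformers import BertTokenizer  # pip install transformers

tok = BertTokenizer.from_pretrained("bert-base-uncased")

print(tok.tokenize("the movie was awesome"))
# 'awesome' is in the vocabulary, so it stays a single token.

print(tok.tokenize("the movie was awsome"))
# The misspelling is out-of-vocabulary and gets fragmented into subword
# pieces (the exact split depends on the vocab), so the model never sees
# the embedding it learned for 'awesome'.
```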