Improving Software-Reduced Touchscreen Latency

D. Weir, S. Rogers, M. Löchtefeld and R. Murray-Smith, A user-specific Machine Learning approach for improving touch accuracy on mobile devices, UIST 2012.

We present a flexible Machine Learning approach for learning user-specific touch input models to increase touch accuracy on mobile devices. The model is based on flexible, non-parametric Gaussian Process regression and is learned using recorded touch inputs. We demonstrate that significant touch accuracy improvements can be obtained when either raw sensor data is used as an input or when the device's reported touch location is used as an input, with the latter marginally outperforming the former. We show that learned offset functions are highly nonlinear and user-specific and that user-specific models outperform models trained on data pooled from several users. Crucially, significant performance improvements can be obtained with a small (≈ 200) number of training examples, easily obtained for a particular user through a calibration game or from keyboard entry data.
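
A minimal sketch of the idea, not the authors' implementation: scikit-learn's GaussianProcessRegressor stands in for the paper's GP machinery, and the ~200 calibration touches are synthetic. The model learns a user-specific offset function from the device-reported touch location to the intended target.

```python
# Sketch of a user-specific touch-offset model in the spirit of the paper,
# using scikit-learn GP regression on synthetic calibration data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical calibration set: device-reported touch positions (normalized 0..1)
# and the targets the user intended to hit (~200 examples, as in the paper).
reported = rng.uniform(0.0, 1.0, size=(200, 2))
true_offset = 0.02 * np.sin(3 * np.pi * reported) + 0.005 * rng.standard_normal((200, 2))
targets = reported + true_offset

# One GP per output dimension (x and y offset); RBF kernel plus observation noise.
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-4)
gp_x = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    reported, targets[:, 0] - reported[:, 0])
gp_y = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    reported, targets[:, 1] - reported[:, 1])

def correct_touch(xy):
    """Apply the learned user-specific offset to a reported touch location."""
    xy = np.atleast_2d(xy)
    dx = gp_x.predict(xy)
    dy = gp_y.predict(xy)
    return xy + np.stack([dx, dy], axis=1)

print(correct_touch([0.4, 0.7]))
```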

Classifying Learner Behavior from High Frequency Touchscreen Data Using Recurrent Neural Networks

Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, 2018

Sensor stream data, particularly data collected at millisecond granularity, are notoriously difficult to extract classifiable signal from. Adding to the challenge is the limited domain knowledge that exists at these biological sensor levels of interaction, which prohibits a comprehensive manual feature-engineering approach to classifying those streams. In this paper, we attempt to enhance the assessment capability of a touchscreen-based ratio tutoring system by using Recurrent Neural Networks (RNNs) to predict the strategy being demonstrated by students from their 60 Hz data streams. We hypothesize that the ability of neural networks to learn representations automatically, instead of relying on human feature engineering, may benefit this classification task. Our RNN and baseline models were trained and cross-validated at several levels on historical data that had been human-coded with the task strategy believed to be exhibited by the learner. Our RNN approach to this historically difficult high-frequency classification task moderately advances performance above the baselines, and we discuss the implications of this level of assessment performance for enabling greater adaptive support in the tutoring system.
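
An illustrative sketch of such a classifier (not the authors' code): an LSTM over fixed-length 60 Hz touch traces predicting one of several strategy labels. The sequence length, per-sample features, and number of strategies are assumptions; real traces and human-coded labels would come from the tutoring system's logs.

```python
# Illustrative LSTM classifier for fixed-length 60 Hz touch traces.
import numpy as np
import tensorflow as tf

SEQ_LEN = 180      # e.g. 3 seconds of 60 Hz samples (assumed)
N_FEATURES = 3     # e.g. x, y, and pressure per sample (assumed)
N_STRATEGIES = 4   # number of human-coded strategy labels (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.Masking(mask_value=0.0),          # shorter traces padded with zeros
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_STRATEGIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data for a runnable example.
x = np.random.rand(512, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, N_STRATEGIES, size=(512,))
model.fit(x, y, epochs=2, batch_size=32, validation_split=0.2)
```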

Predictive Precompute with Recurrent Neural Networks

2020

In both mobile and web applications, speeding up user interface response times can often lead to significant improvements in user engagement. A common technique to improve responsiveness is to precompute data ahead of time for specific activities. However, simply precomputing data for all user and activity combinations is prohibitive at scale due to both network constraints and server-side computational costs. It is therefore important to accurately predict per-user application usage in order to minimize wasted precomputation ("predictive precompute"). In this paper, we describe the novel application of recurrent neural networks (RNNs) for predictive precompute. We compare their performance with traditional machine learning models and share findings from their large-scale production use at Facebook. We demonstrate that RNN models improve prediction accuracy, eliminate most feature engineering steps, and reduce the computational cost of serving predictions by an order of magnitude.
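
A hedged sketch of the basic setup, not Facebook's production system: a GRU reads a sequence of recent per-user usage signals and outputs the probability that an activity will be used in the next time bucket, which then gates the precompute decision. Bucket count, signal layout, and the threshold are all assumptions.

```python
# Sketch: predictive precompute as a sequence model over per-user usage history.
import numpy as np
import tensorflow as tf

N_BUCKETS = 48     # e.g. 48 half-hour buckets of history (assumed)
N_SIGNALS = 8      # per-bucket usage signals, e.g. visit counts per surface (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_BUCKETS, N_SIGNALS)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(activity used in next bucket)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Synthetic stand-in data; a real system would stream actual usage histories.
x = np.random.poisson(1.0, size=(2048, N_BUCKETS, N_SIGNALS)).astype("float32")
y = (x[:, -1, 0] > 1).astype("float32")               # toy label for illustration only
model.fit(x, y, epochs=2, batch_size=64, validation_split=0.2)

# Precompute only when the predicted probability clears a cost-aware threshold.
p = model.predict(x[:4])
print(p[:, 0] > 0.5)
```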

Machine learning for intelligent mobile user interfaces using TensorFlow

Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, 2017

One key feature of TensorFlow is the ability to compile a trained model to run efficiently on mobile phones. This opens up a wide range of opportunities for researchers and developers. In this tutorial, we teach attendees the two basic steps needed to run neural networks on a mobile phone: first, we show how to develop neural network architectures and train them in TensorFlow; second, we show how to run the trained models on a mobile phone.
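
The first step of that workflow, sketched with the Keras API on MNIST as a stand-in task (the model name and dataset are illustrative choices, not the tutorial's): define a small network, train it, and save it for later export to a mobile format. The export step is sketched after the next abstract.

```python
# Step 1: define and train a small network in TensorFlow, then save it for mobile export.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))

model.save("mobile_demo_model.keras")   # saved for conversion to an on-device format
```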

Machine learning with TensorFlow for mobile and ubiquitous interaction

Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia, 2017

Due to the increasing number of sensors integrated into the environment and worn by users, a vast amount of context-sensitive data becomes available. Interpreting these data with traditional methods (e.g., formulas and simple heuristics) is challenging, whereas the latest machine learning techniques require only a set of labeled data. TensorFlow is an open-source library for machine learning that implements a wide range of neural network models. With TensorFlow Mobile, researchers and developers can further deploy trained models on low-end mobile devices for ubiquitous scenarios: it facilitates model export and offers techniques to optimize models for mobile deployment. In this tutorial, we teach attendees the two basic steps for deploying neural networks on smartphones: first, we show how to develop neural network architectures and train them in TensorFlow; second, we show how to run the trained models on a mobile phone using TensorFlow Mobile.
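
The second step, sketched with TensorFlow Lite (the successor to the TensorFlow Mobile path described in this 2017 tutorial) and assuming the model file saved in the previous sketch: convert the trained model to a FlatBuffer and run it with the lightweight interpreter, whose API is the same one used inside an Android or iOS app.

```python
# Step 2: convert the trained model for on-device use and run it with the TFLite interpreter.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("mobile_demo_model.keras")

# Convert the Keras model to the FlatBuffer format the on-device runtime understands.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("mobile_demo_model.tflite", "wb") as f:
    f.write(tflite_model)

# Desktop-side check using the same interpreter API that ships on the phone.
interpreter = tf.lite.Interpreter(model_path="mobile_demo_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

sample = np.random.rand(1, 28, 28).astype(np.float32)   # one fake 28x28 input
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))              # class probabilities
```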

Predicting Delay in IoT Using Deep Learning: A Multiparametric Approach

IEEE Access, 2019

The proliferation of the Internet of Things (IoT) requires accommodating diverse applications with stringent performance requirements. Delay is one of the key metrics in the IoT, particularly for domains such as health care, where critical cases requiring an emergency response frequently occur. In this paper, we analyze performance data generated using the IEEE 802.15.4 standard to derive an accurate predictive model for delay-sensitive applications. A deep neural network (DNN) is adopted to model the relationship between diverse communication parameters (e.g., queue size, application traffic rate, and transmission power) and delay. Evaluation reveals that the DNN model achieves a prediction accuracy of over 98% and outperforms other popular regression models. In addition, a fine-grained analysis of the size of the training data, depth (number of layers), width (number of neurons per layer), and epochs (number of iterations) is carried out in an attempt to achieve the best possible prediction results with a minimally complex DNN. The statistics show that the derived model achieves comparable accuracy even when trained with a small fraction (≥10%) of the data. The proposed model recommends values for the different controllable communication parameters to the transmitter, which can be fine-tuned considering the desired delay bounds.

Index terms: delay prediction, deep learning, e-health, Internet of Things, multi-layer neural networks, wireless sensor networks.
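
An illustrative sketch of such a regressor, not the paper's exact architecture or data: a small DNN mapping the listed communication parameters to an estimated delay, trained here on a synthetic delay surface purely so the example runs.

```python
# Sketch: DNN regression from communication parameters (queue size, traffic rate,
# transmission power) to an estimated delay, on synthetic data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)
n = 5000

queue = rng.uniform(0, 50, n)       # queue size (packets), hypothetical range
rate = rng.uniform(1, 20, n)        # application traffic rate (pkt/s), hypothetical range
power = rng.uniform(-25, 0, n)      # transmission power (dBm), hypothetical range
X = np.stack([queue, rate, power], axis=1).astype("float32")

# Toy delay surface just to make the example runnable.
y = (5 + 0.8 * queue + 2.0 * rate - 0.3 * power + rng.normal(0, 1, n)).astype("float32")

norm = tf.keras.layers.Normalization()
norm.adapt(X)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    norm,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),          # predicted delay (ms)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)

print(model.predict(np.array([[30.0, 10.0, -10.0]], dtype="float32")))
```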

Mobile Deep Learning: Exploring Deep Neural Network for Predicting Context-Aware Smartphone Usage

2021

In this paper, we formulate the problem of predicting smartphone usage based on contextual information, involving both user-centric and device-centric contexts. In the area of mobile analytics, traditional machine learning techniques such as Decision Trees, Random Forests, and Support Vector Machines are popular for building context-aware prediction models. However, real-life smartphone usage data may contain many context dimensions and can be huge in size given the daily behavioral data of users, so traditional machine learning models may not be effective for building the context-aware model. In this paper, we explore "Mobile Deep Learning", an artificial neural network learning-based model with multiple hidden layers for predicting context-aware smartphone usage. Our model first performs context correlation analysis to reduce the number of neurons and simplify the network by filtering out irrelevant or less significant contexts, and then builds the deep learning model with the selected contexts. Experimental results on smartphone usage datasets show the effectiveness of the model.
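
A hedged sketch of the two-stage idea: drop contextual features whose correlation with the usage label is negligible, then train a multi-layer network on what remains. The feature names, toy target, and correlation threshold are illustrative assumptions, not the paper's.

```python
# Sketch: correlation-based context filtering followed by a multi-layer network.
import numpy as np
import pandas as pd
import tensorflow as tf

rng = np.random.default_rng(7)
n = 4000

df = pd.DataFrame({
    "hour_of_day":   rng.integers(0, 24, n),
    "day_of_week":   rng.integers(0, 7, n),
    "battery_level": rng.uniform(0, 100, n),
    "is_charging":   rng.integers(0, 2, n),
    "screen_events": rng.poisson(3, n),
})
# Toy target: whether the user opens a given app in the next hour.
df["opens_app"] = (df["hour_of_day"].between(18, 22) & (df["screen_events"] > 2)).astype(int)

# Stage 1: keep only contexts with non-negligible correlation to the target.
corr = df.corr()["opens_app"].drop("opens_app").abs()
selected = corr[corr > 0.05].index.tolist()
print("selected contexts:", selected)

# Stage 2: deep model trained on the selected contexts only.
X = df[selected].to_numpy(dtype="float32")
y = df["opens_app"].to_numpy(dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```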

Towards a Hybrid Model for CPU Usage Prediction of Smartphone Users

2018

The increasing complexity of mobile applications leads to rapid battery drain in mobile devices. Limited improvements in battery technology have forced system designers to use the limited energy efficiently, making energy management one of the foremost concerns in mobile devices. Our analysis reveals that users differ in their context and CPU usage patterns, which can be exploited for energy savings. However, predicting CPU usage is challenging due to the ever-increasing size of user data coupled with varying usage behavior. In this work, we develop a hybrid model using time series and deep neural networks to predict future usage, which in turn can be leveraged for power savings. We start by studying the varying usage patterns of users, then describe our hybrid model, and finally evaluate it on user traces from the LiveLab dataset. A proof-of-concept evaluation for a single user shows that the proposed hybrid model incurs lower errors than the individual models.
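
One way to realize such a time-series plus neural-network hybrid (a sketch under assumptions, not the authors' model): fit a classical seasonal forecaster to an hourly CPU-usage series, then train a small network to predict the residual from recent history; the final forecast is the sum of the two. The LiveLab traces are replaced by a synthetic series.

```python
# Sketch: hybrid forecast = seasonal exponential smoothing + NN on the residuals.
import numpy as np
import tensorflow as tf
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
hours = 24 * 60                                  # 60 days of hourly samples (synthetic)
t = np.arange(hours)
cpu = 30 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, hours)   # % CPU usage

train, test = cpu[:-24], cpu[-24:]

# Stage 1: seasonal exponential smoothing captures the daily cycle.
ts_model = ExponentialSmoothing(train, trend=None, seasonal="add",
                                seasonal_periods=24).fit()
residual = train - ts_model.fittedvalues

# Stage 2: a small network predicts the next residual from the previous 24 residuals.
WINDOW = 24
X = np.stack([residual[i:i + WINDOW] for i in range(len(residual) - WINDOW)]).astype("float32")
y = residual[WINDOW:].astype("float32")

nn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
nn.compile(optimizer="adam", loss="mse")
nn.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Hybrid forecast for the next hour = time-series forecast + predicted residual.
ts_forecast = ts_model.forecast(1)[0]
nn_residual = nn.predict(residual[-WINDOW:].reshape(1, -1).astype("float32"))[0, 0]
print("hybrid forecast:", ts_forecast + nn_residual, "actual:", test[0])
```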

Deep learning on mobile devices: a review

Mobile Multimedia/Image Processing, Security, and Applications 2019, 2019

Recent breakthroughs in deep learning and artificial intelligence technologies have enabled numerous mobile applications. While traditional computation paradigms rely on mobile sensing and cloud computing, deep learning implemented on mobile devices provides several advantages: low communication bandwidth, small cloud computing resource cost, quick response time, and improved data privacy. Research and development of deep learning on mobile and embedded devices has recently attracted much attention. This paper provides a timely review of this fast-paced field to give researchers, engineers, practitioners, and graduate students a quick grasp of the recent advancements of deep learning on mobile devices. We discuss hardware architectures for mobile deep learning, including Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), and recent mobile Graphics Processing Units (GPUs). We present Size, Weight, Area and Power (SWAP) considerations and their relation to algorithm optimizations, such as quantization, pruning, compression, and approximations that simplify computation while retaining performance accuracy. We cover existing systems and give a state-of-the-industry review of TensorFlow, MXNet, Mobile AI Compute Engine (MACE), and the Paddle-mobile deep learning platform. We discuss resources for mobile deep learning practitioners, including tools, libraries, models, and performance benchmarks. We present applications of various mobile sensing modalities to industries ranging from robotics, healthcare, multimedia, and biometrics to autonomous driving and defense. We address the key deep learning challenges to overcome, including low-quality data and small training/adaptation data sets. In addition, the review provides numerous citations and links to existing code bases implementing various technologies. These resources lower the user's barrier to entry into the field of mobile deep learning.
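
A concrete instance of one optimization the review surveys: post-training dynamic-range quantization with the TensorFlow Lite converter, which typically shrinks a model by roughly 4x by storing weights as 8-bit integers. The sketch reuses the hypothetical model file saved in the earlier tutorial sketch.

```python
# Sketch: post-training dynamic-range quantization with the TFLite converter.
import tensorflow as tf

model = tf.keras.models.load_model("mobile_demo_model.keras")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]      # enable weight quantization
quantized = converter.convert()

with open("mobile_demo_model_quant.tflite", "wb") as f:
    f.write(quantized)

print("quantized size (KB):", len(quantized) / 1024)
```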

PyTouch: A Machine Learning Library for Touch Processing

2021 IEEE International Conference on Robotics and Automation (ICRA)

With the increased availability of rich tactile sensors, there is an equally proportional need for open-source, integrated software capable of efficiently and effectively processing raw touch measurements into high-level signals that can be used for control and decision-making. In this paper, we present PyTouch, the first machine learning library dedicated to the processing of touch sensing signals. PyTouch is designed to be modular and easy to use, and provides state-of-the-art touch processing capabilities as a service, with the goal of unifying the tactile sensing community by providing a library of scalable, proven, and performance-validated modules on which applications and research can be built. We evaluate PyTouch on real-world data from several tactile sensors on touch-processing tasks such as touch detection, slip, and object pose estimation. PyTouch is open-sourced at https://github.com/facebookresearch/pytouch.
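
PyTouch's own API is not reproduced here; as a generic stand-in, the sketch below shows the kind of touch-detection task the library provides as a service: a small CNN labeling tactile-sensor images as contact versus no contact. Image resolution and labels are assumptions, and the data is synthetic.

```python
# Generic stand-in for a touch-detection model (not PyTouch's API).
import numpy as np
import tensorflow as tf

IMG = 64   # assumed tactile image resolution

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(IMG, IMG, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(sensor is in contact)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in for labeled tactile frames.
x = np.random.rand(256, IMG, IMG, 3).astype("float32")
y = np.random.randint(0, 2, size=(256,)).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```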