Li Wang - Academia.edu

Papers by Li Wang

DITIR: Distributed Index for High Throughput Trajectory Insertion and Real-time Temporal Range Query

The prosperity of mobile social networks and location-based services, e.g., Uber, is driving the explosive growth of spatio-temporal streams on the Internet. This raises new challenges for the underlying data store, which is expected to support extremely high-throughput trajectory insertion and low-latency querying with spatial and temporal constraints. State-of-the-art solutions, e.g., HBase, do not deliver satisfactory performance, due to the high overhead of index updates. In this demonstration, we present DITIR, our new system prototype tailored to efficiently process temporal and spatial queries over historical data as well as the latest updates. Our system provides better performance guarantees by physically partitioning the incoming data tuples on arrival and exploiting a template-based insertion scheme to reach the desired ingestion throughput. A load-balancing mechanism is also introduced to DITIR, allowing the system to achieve reliable performance under workload dynamics. Our demonstration shows that DITIR sustains over 1 million tuple insertions per second when running on a 10-node cluster. It also significantly outperforms HBase, with 7x higher ingestion throughput and 5x lower query latency.
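
The abstract does not give implementation details, so the following Python sketch only illustrates the partition-on-arrival idea it describes: incoming trajectory tuples are bucketed by arrival time into append-only partitions, so closed partitions never need index maintenance. All names (`ArrivalPartitioner`, `TemplateBucket`, the record layout) are hypothetical and not taken from DITIR.

```python
from collections import namedtuple

# Hypothetical trajectory record; DITIR's actual template layout is not
# described in the abstract.
Point = namedtuple("Point", ["obj_id", "x", "y", "ts"])

class TemplateBucket:
    """Append-only store for tuples whose timestamps fall in one time window."""
    def __init__(self, start_ts, width):
        self.start_ts = start_ts
        self.width = width
        self.rows = []                      # insertion stays O(1), no reindexing

    def insert(self, p):
        self.rows.append(p)

    def range_query(self, lo, hi):
        return [r for r in self.rows if lo <= r.ts <= hi]

class ArrivalPartitioner:
    """Routes each tuple to the bucket covering its timestamp window."""
    def __init__(self, width_sec=60):
        self.width = width_sec
        self.buckets = {}

    def insert(self, p):
        key = int(p.ts // self.width)
        bucket = self.buckets.setdefault(
            key, TemplateBucket(key * self.width, self.width))
        bucket.insert(p)

    def temporal_range(self, lo, hi):
        out = []
        for key in range(int(lo // self.width), int(hi // self.width) + 1):
            if key in self.buckets:
                out.extend(self.buckets[key].range_query(lo, hi))
        return out

# Toy usage
p = ArrivalPartitioner(width_sec=60)
p.insert(Point("car-7", 103.85, 1.29, 125.0))
print(p.temporal_range(60, 180))            # -> [Point(obj_id='car-7', ...)]
```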

Mercury: Metro Density Prediction with Recurrent Neural Network on Streaming CDR Data

Telecommunication companies possess mobility information about their phone users, containing accurate locations and velocities of commuters travelling in public transportation systems. Although the value of telecommunication data is widely recognized under the smart city vision, there is no existing solution that transforms the data into actionable items for better transportation, mainly due to the lack of an appropriate data utilization scheme and the limited capability to process massive data. This paper presents the first system implementation of real-time public transportation crowd prediction based on telecommunication data, relying on the analytical power of advanced neural network models and the computational power of parallel streaming analytic engines. By analyzing feeds of call detail records (CDR) from mobile users in regions of interest, our system is able to predict the number of metro passengers entering stations, the number of passengers waiting on the platforms, and other important metrics of crowd density. New techniques, including geo-spatial data processing, a weight-sharing recurrent neural network, and parallel streaming analytical programming, are employed in the system. These techniques enable accurate and efficient predictions that meet the real-world business requirements of public transportation systems.
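
The abstract mentions a weight-sharing recurrent neural network for per-station crowd prediction but gives no model details. The sketch below is a minimal, assumption-laden illustration of that idea in PyTorch: a single GRU plus linear head whose weights are shared across all stations. The feature count, window length, and batch layout are invented for illustration.

```python
import torch
import torch.nn as nn

class SharedStationRNN(nn.Module):
    """One GRU + linear head whose parameters are shared by every station.

    Input:  (batch, window, features) per-station counts derived from CDR feeds.
    Output: (batch,) predicted passengers entering each station next interval.
    The feature layout here is hypothetical.
    """
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        _, h = self.rnn(x)          # h: (num_layers, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

# Toy usage: 32 (station, window) samples, 12 past intervals, 4 features each.
model = SharedStationRNN()
pred = model(torch.randn(32, 12, 4))    # shape: (32,)
```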

A Hybrid Framework for Product Normalization in Online Shopping

The explosive growth of products in both variety and quantity is clear evidence of the boom in C2C (Customer-to-Customer) e-commerce. Product normalization, which determines whether products refer to the same underlying entity, is a fundamental data management task in the C2C market. However, product normalization in the C2C market is challenging because the data is noisy and lacks a uniform schema. In this paper, we propose a hybrid framework which achieves product normalization through schema integration and data cleaning. In the framework, a graph-based method is proposed to integrate the schemas. Missing data is filled in and incorrect data is repaired using evidence extracted from surrounding information, such as the title and textual description. We distinguish products by clustering on a product similarity matrix which is learned through logistic regression. We conduct experiments on real-world data, and the results confirm the effectiveness of our design in comparison with existing methods.
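
As a rough sketch of the "logistic regression similarity, then clustering" step described above (the schema-integration and data-repair stages are omitted), the code below learns pairwise match probabilities with scikit-learn and clusters products with SciPy's hierarchical clustering. The feature construction and the 0.5 threshold are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_products(train_pair_feats, train_pair_labels, all_pair_feats,
                     threshold=0.5):
    """train_pair_feats/labels: labelled product pairs (1 = same entity).
    all_pair_feats: features for every product pair, ordered like SciPy's
    condensed distance matrix (row-major upper triangle).
    Returns one cluster id per product."""
    clf = LogisticRegression(max_iter=1000).fit(train_pair_feats,
                                                train_pair_labels)
    sim = clf.predict_proba(all_pair_feats)[:, 1]   # similarity = P(match)
    dist = 1.0 - sim                                # condensed distance vector
    Z = linkage(dist, method="average")
    return fcluster(Z, t=1.0 - threshold, criterion="distance")

# Toy usage: 3 products -> pairs (0,1), (0,2), (1,2); 2 pairwise features.
labels = cluster_products(
    np.array([[0.9, 1.0], [0.1, 0.0]]), np.array([1, 0]),
    np.array([[0.8, 1.0], [0.2, 0.0], [0.1, 0.0]]))
print(labels)    # e.g. [1 1 2]: products 0 and 1 normalized to one entity
```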

NUMA-Aware Scalable and Efficient In-Memory Aggregation on Large Domains

Business Intelligence (BI) is recognized as one of the most important IT applications in the coming big data era. In recent years, non-uniform memory access (NUMA) has become the de facto architecture of multiprocessors on the new generation of enterprise servers. This new architecture brings new challenges to optimization techniques for traditional operators in BI. Aggregation, for example, is one of the basic building blocks of BI, yet its processing performance with existing hash-based algorithms scales poorly with the number of cores under NUMA architectures. In this paper, we provide new solutions to the problem of parallel hash-based aggregation, especially targeting domains of extremely large cardinality. We propose a NUMA-aware radix partitioning (NaRP) method which divides the original huge relation table into subsets without invoking expensive remote memory accesses between NUMA nodes. We also present a new efficient aggregation algorithm (EAA) to aggregate the partitioned data in parallel with low cache-coherence-miss and locking costs. Theoretical analysis as well as an empirical study on an IBM X5 server show that our proposals are at least two times faster than existing methods.
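
NaRP and EAA are not specified in the abstract, so the sketch below only shows the generic two-phase pattern they build on: radix-partition rows by key bits so that each worker aggregates a disjoint partition without locks or shared hash tables. A real NUMA-aware implementation would be written in C/C++ with NUMA-local allocation; Python is used here purely to show the control flow, and all names are illustrative.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

RADIX_BITS = 6                          # 2**6 = 64 partitions (illustrative)
MASK = (1 << RADIX_BITS) - 1

def radix_partition(rows):
    """Phase 1: split (key, value) rows by the low bits of the key hash,
    so each partition can later be aggregated by one worker, lock-free."""
    parts = [[] for _ in range(1 << RADIX_BITS)]
    for key, value in rows:
        parts[hash(key) & MASK].append((key, value))
    return parts

def aggregate_partition(part):
    """Phase 2: purely local hash aggregation; no other worker touches it."""
    acc = defaultdict(float)
    for key, value in part:
        acc[key] += value
    return acc

def parallel_aggregate(rows, workers=8):
    parts = radix_partition(rows)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        locals_ = pool.map(aggregate_partition, parts)
    merged = {}
    for local in locals_:
        merged.update(local)            # partitions are key-disjoint: no clashes
    return merged

# Toy usage
print(parallel_aggregate([("a", 1.0), ("b", 2.0), ("a", 3.0)]))  # {'a': 4.0, 'b': 2.0}
```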

Optimizing Pipelined Execution for Distributed In-Memory OLAP System

In the coming big data era, the demand for data analysis capability in real applications is growing at an amazing pace. Memory's increasing capacity and decreasing price make it possible and attractive for a distributed OLAP system to load all the data into memory and thus significantly improve data processing performance. In this paper, we model the performance of pipelined execution in a distributed in-memory OLAP system and show that the data communication among the computation nodes, which is carried out by the data exchange operator, is the performance bottleneck. Consequently, we explore pipelined data exchange in depth and give a novel solution that is efficient, scalable, and skew-resilient. Experimental results show the effectiveness of our proposals in comparison with state-of-the-art techniques.
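
The abstract identifies the data exchange operator as the bottleneck but does not describe the proposed solution in detail. The hypothetical sketch below shows the baseline being optimized: an exchange operator that hash-partitions producer output across downstream consumers and batches tuples per destination to amortize network round-trips. The `send` callback stands in for the cluster's transport layer; the paper's skew-resilient routing is not reproduced here.

```python
from collections import defaultdict

class ExchangeOperator:
    """Hash-partitions rows to `n_consumers` downstream nodes in batches."""
    def __init__(self, n_consumers, send, batch_size=1024):
        self.n = n_consumers
        self.send = send                    # send(dest, batch): transport stub
        self.batch_size = batch_size
        self.buffers = defaultdict(list)

    def push(self, row, key):
        dest = hash(key) % self.n
        buf = self.buffers[dest]
        buf.append(row)
        if len(buf) >= self.batch_size:     # ship a full batch, then reset
            self.send(dest, buf)
            self.buffers[dest] = []

    def flush(self):
        for dest, buf in self.buffers.items():
            if buf:
                self.send(dest, buf)
        self.buffers.clear()

# Toy usage: record what each "consumer" would receive.
received = defaultdict(list)
ex = ExchangeOperator(4, lambda d, b: received[d].extend(b), batch_size=2)
for i in range(10):
    ex.push({"order_id": i}, key=i)
ex.flush()
print({d: len(rows) for d, rows in received.items()})
```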

Elastic Pipelining in an In-Memory Database Cluster

An in-memory database cluster consists of multiple interconnected nodes with a large capacity of RAM and modern multi-core CPUs. As a conventional query processing strategy, pipelining remains a promising solution for in-memory parallel database systems, as it avoids expensive intermediate result materialization and parallelizes the data processing among nodes. However, to fully unleash the power of pipelining in a cluster with multi-core nodes, it is crucial for the query optimizer to generate good query plans with appropriate intra-node parallelism, in order to maximize CPU and network bandwidth utilization. A suboptimal plan, on the contrary, causes load imbalance in the pipelines and consequently degrades query performance. Optimizing parallelism assignment at compile time is nearly impossible, as the workload in each node is affected by numerous factors and is highly dynamic during query evaluation. To tackle this problem, we propose elastic pipelining, which makes it possible to optimize intra-node parallelism assignments in the pipelines based on the actual workload at runtime. This is achieved with the adoption of a new elastic iterator model and a fully optimized dynamic scheduler. The elastic iterator model upgrades the traditional iterator model with a new dynamic multi-core execution adjustment capability, and the dynamic scheduler efficiently provisions CPU cores to query execution segments in the pipelines based on lightweight measurements of the operators. Extensive experiments on real and synthetic (TPC-H) data show that our proposal achieves almost full CPU utilization on typical decision-making analytical queries, outperforming state-of-the-art open-source systems by a huge margin.
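
The elastic iterator model and scheduler are only named in the abstract, so the following toy Python function merely illustrates the kind of runtime decision a dynamic scheduler makes: redistributing a fixed core budget across pipeline segments in proportion to their measured backlog. The metrics, policy, and names are assumptions, not the paper's algorithm.

```python
def reassign_cores(segments, total_cores):
    """segments: name -> (queued_rows, rows_per_sec) from runtime measurements.
    Gives every segment one core, then splits the spare cores in proportion
    to backlog (queued work / observed throughput). A toy policy only."""
    backlog = {name: queued / max(rate, 1e-9)
               for name, (queued, rate) in segments.items()}
    total = sum(backlog.values()) or 1.0
    alloc = {name: 1 for name in segments}          # minimum one core each
    spare = max(total_cores - len(segments), 0)
    remaining = spare
    for name, b in sorted(backlog.items(), key=lambda kv: -kv[1]):
        extra = min(remaining, int(round(spare * b / total)))
        alloc[name] += extra
        remaining -= extra
    if remaining:                                   # leftovers to the straggler
        alloc[max(backlog, key=backlog.get)] += remaining
    return alloc

# Example: the join segment is the straggler, so it receives most of the cores.
print(reassign_cores({"scan": (1_000, 50_000),
                      "join": (400_000, 20_000),
                      "agg": (30_000, 60_000)}, total_cores=16))
# -> {'scan': 1, 'join': 14, 'agg': 1}
```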
