Publications:

Under Preparation / Submitted

  1. Latent Consistency Models: Synthesizing High-Resolution Images with Few-step Inference. Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, Hang Zhao. [Paper][Github][Huggingface][LCM-Lora][ Show Abstract ]

  2. Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling is computationally intensive and leads to slow generation. Inspired by Consistency Models, we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768×768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference.
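
     A minimal usage sketch (illustrative, not from the paper): assuming a recent diffusers release where the LCM pipeline is supported natively and the publicly released LCM Dreamshaper checkpoint on the Hugging Face Hub, few-step sampling looks roughly like this; the model id, step count, and guidance scale are illustrative choices.

         import torch
         from diffusers import DiffusionPipeline

         # Assumption: diffusers >= 0.22 with native LCM support and the public LCM checkpoint.
         pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16)
         pipe.to("cuda")

         # Few-step inference: LCMs need only 2-8 denoising steps instead of the usual 25-50.
         image = pipe(
             prompt="a photo of an astronaut riding a horse",
             num_inference_steps=4,
             guidance_scale=8.0,
         ).images[0]
         image.save("lcm_sample.png")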

  3. Coresets for clustering with general assignment constraints. Lingxiao Huang, Jian Li, Shaofeng Jiang, Xuan Wu. 2023 [ArXiv] [ Show Abstract ]

  4. Designing small-sized \emph{coresets}, which approximately preserve the costs of the solutions for large datasets, has been an important research direction for the past decade. We consider coreset construction for a variety of general constrained clustering problems. We significantly extend and generalize the results of a very recent paper (Braverman et al., FOCS'22), by demonstrating that the idea of hierarchical uniform sampling (Chen, SICOMP'09; Braverman et al., FOCS'22) can be applied to efficiently construct coresets for a very general class of constrained clustering problems with general assignment constraints, including capacity constraints on cluster centers, and assignment structure constraints for data points (modeled by a convex body B). Our main theorem shows that a small-sized ϵ-coreset exists as long as a complexity measure Lip(B) of the structure constraint, and the \emph{covering exponent} Λϵ(X) for metric space (X,d) are bounded. The complexity measure Lip(B) for convex body B is the Lipschitz constant of a certain transportation problem constrained in B, called \emph{optimal assignment transportation problem}. We prove nontrivial upper bounds of Lip(B) for various polytopes, including the general matroid basis polytopes, and laminar matroid polytopes (with better bound). As an application of our general theorem, we construct the first coreset for the fault-tolerant clustering problem (with or without capacity upper/lower bound) for the above metric spaces, in which the fault-tolerance requirement is captured by a uniform matroid basis polytope.

Survey

  1. [Survey] Approximation Algorithms for Stochastic Combinatorial Optimization Problems. Jian Li and Yu Liu. Journal of the Operations Research Society of China (invited survey paper, excellent paper award), 2016. [paper] [ Show Abstract ]

  2. Stochastic optimization has established itself as a major method to handle uncertainty in various optimization problems, by modeling the uncertainty by a probability distribution over possible realizations. Traditionally, the main focus in stochastic optimization has been on various stochastic mathematical programming formulations (such as linear programming and convex programming). In recent years, there has been a surge of interest in stochastic combinatorial optimization problems from the theoretical computer science community. In this article, we survey some of the recent results on various stochastic versions of classical combinatorial optimization problems. Since most problems in this domain are NP-hard (or \#P-hard, or even PSPACE-hard), we focus on the results which provide polynomial time approximation algorithms, with provable approximation guarantees. Our discussions are centered around a few representative problems, such as stochastic knapsack, stochastic matching, multi-armed bandit etc. We use these examples to introduce several popular stochastic models, such as the fixed set model, 2-stage stochastic optimization model, stochastic adaptive probing model etc., as well as some useful techniques for designing approximation algorithms for stochastic combinatorial optimization problems, including the linear programming relaxation approach, boosted sampling, contention resolution schemes, Poisson approximation etc. We also provide some open research questions along the way. Our purpose is to provide the readers a quick glimpse of the models, problems and techniques in this area, and hopefully inspire new contributions.

Selected Refereed Conference/Journal Papers (full list on my Google Scholar page)

  1. On Optimal Coreset Construction for (k,z)-Clustering. Lingxiao Huang, Jian Li, Xuan Wu. The 56th ACM Symposium on Theory of Computing (STOC 2024). [ArXiv] [ Show Abstract ]

  2. Constructing small-sized coresets for various clustering problems in different metric spaces has attracted significant attention for the past decade. A central problem in the coreset literature is to understand what is the best possible coreset size for (k,z)-clustering in Euclidean space. While there has been significant progress in the problem, there is still a gap between the state-of-the-art upper and lower bounds. For instance, the best known upper bound for k-means (z=2) is min{O(k^{3/2}ε^{−2}), O(kε^{−4})} [1,2], while the best known lower bound is Ω(kε^{−2}) [1]. In this paper, we make progress on both upper and lower bounds. For a large range of parameters (i.e., ε, k), we have a complete understanding of the optimal coreset size. In particular, we obtain the following results: (1) We present a new coreset lower bound Ω(kε^{−z−2}) for Euclidean (k,z)-clustering when ε ≥ Ω(k^{−1/(z+2)}). In view of the prior upper bound Õ_z(kε^{−z−2}) [1], the bound is optimal. The new lower bound is surprising since Ω(kε^{−2}) [1] is "conjectured" to be the correct bound in some recent works (see e.g., [1,2]). (2) For the upper bound, we provide efficient coreset construction algorithms for (k,z)-clustering with improved or optimal coreset sizes in several metric spaces. [1] Cohen-Addad, Larsen, Saulpic, Schwiegelshohn. STOC'22. [2] Cohen-Addad, Larsen, Saulpic, Schwiegelshohn, Sheikh-Omar, NeurIPS'22.
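
     For reference, the standard notion behind these bounds (stated for orientation, not as a new result): a weighted subset S ⊆ X with weights w is an ε-coreset for (k,z)-clustering if, for every set C of at most k centers,

         \sum_{x \in S} w(x)\, d(x,C)^z \;\in\; (1 \pm \epsilon) \sum_{x \in X} d(x,C)^z,
         \qquad \text{where } d(x,C) = \min_{c \in C} d(x,c).

     The coreset sizes discussed above bound |S| as a function of k, ε, and z.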

  3. GLIME: General, Stable and Local LIME Explanation. Zeren Tan, Tian Yang, Jian Li. The Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023, spotlight). [paper] [ Show Abstract ]

  4. Although Local Interpretable Model-agnostic Explanations (LIME) \cite{ribeiro2016should} is a widely adopted method for understanding model behavior, it suffers from instability with respect to random seeds \cite{zafar2019dlime, shankaranarayana2019alime, bansal2020sam} and exhibits low local fidelity (i.e., how the explanation explains model's local behaviors) \cite{rahnama2019study, laugel2018defining}. Our study demonstrates that this instability is caused by small sample weights, resulting in the dominance of regularization and slow convergence. Additionally, LIME's sampling approach is non-local and biased towards the reference, leading to diminished local fidelity and instability to references. To address these challenges, we propose \textsc{Glime}, an enhanced framework that extends LIME and unifies several previous methods. Within the \textsc{Glime} framework, we derive an equivalent formulation of LIME that achieves significantly faster convergence and improved stability. By employing a local and unbiased sampling distribution, \textsc{Glime} generates explanations with higher local fidelity compared to LIME, while being independent of the reference choice. Moreover, \textsc{Glime} offers users the flexibility to choose sampling distribution based on their specific scenarios.

  5. Generative Table Pre-training Empowers Models for Tabular Prediction. Tianping Zhang, Shaowen Wang, Shuicheng Yan, Jian Li, Qian Liu. The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023). [Paper][Github] [ Show Abstract ]

  6. Recently, the topic of table pre-training has attracted considerable research interest. However, how to employ table pre-training to boost the performance of tabular prediction remains an open challenge. In this paper, we propose TapTap, the first attempt that leverages table pre-training to empower models for tabular prediction. After pre-training on a large corpus of real-world tabular data, TapTap can generate high-quality synthetic tables to support various applications on tabular data, including privacy protection, low resource regime, missing value imputation, and imbalanced classification. Extensive experiments on 12 datasets demonstrate that TapTap outperforms a total of 16 baselines in different scenarios. Meanwhile, it can be easily combined with various backbone models, including LightGBM, Multilayer Perceptron (MLP) and Transformer. Moreover, with the aid of table pre-training, models trained using synthetic data generated by TapTap can even compete with models using the original dataset on half of the experimental datasets, marking a milestone in the development of synthetic tabular data generation.

  7. Efficient Algorithms for Sparse Moment Problems without Separation. Zhiyuan Fan, Jian Li. The 36th Annual Conference on Learning Theory (COLT 2023). [ArXiv] [ Show Abstract ]

  8. We consider the sparse moment problem of learning a k-spike mixture in high dimensional space from its noisy moment information in any dimension. We measure the accuracy of the learned mixtures using transportation distance. Previous algorithms either assume certain separation assumptions, use more recovery moments, or run in (super) exponential time. Our algorithm for the 1-dimension problem (also called the sparse Hausdorff moment problem) is a robust version of the classic Prony's method, and our contribution mainly lies in the analysis. We adopt a global and much tighter analysis than previous work (which analyzes the perturbation of the intermediate results of Prony's method). A useful technical ingredient is a connection between the linear system defined by the Vandermonde matrix and the Schur polynomial, which allows us to provide tight perturbation bound independent of the separation and may be useful in other contexts. To tackle the high dimensional problem, we first solve the 2-dimensional problem by extending the 1-dimension algorithm and analysis to complex numbers. Our algorithm for the high dimensional case determines the coordinates of each spike by aligning a 1-d projection of the mixture to a random vector and a set of 2d-projections of the mixture. Our results have applications to learning topic models and Gaussian mixtures, implying improved sample complexity results or running time over prior work.
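
     As a concrete point of reference, a minimal noiseless sketch of the classic Prony's method that the paper robustifies: the spike locations are roots of a polynomial whose coefficients solve a Hankel system in the moments, and the weights then solve a Vandermonde system. The helper name and toy data below are illustrative.

         import numpy as np

         def prony(moments, k):
             # Recover a k-spike mixture sum_j w_j * delta(x_j) from its first 2k moments
             # m_t = sum_j w_j x_j^t, t = 0, ..., 2k-1 (noiseless sketch; the paper handles noise).
             m = np.asarray(moments, dtype=float)
             H = np.array([[m[t + i] for i in range(k)] for t in range(k)])   # Hankel matrix
             c = np.linalg.solve(H, -m[k:2 * k])                              # Prony polynomial coefficients c_0..c_{k-1}
             locations = np.roots(np.concatenate(([1.0], c[::-1])))           # roots = spike locations
             V = np.vander(locations, N=k, increasing=True).T                 # V[t, j] = x_j^t
             weights = np.linalg.solve(V, m[:k])
             return locations, weights

         # toy usage: 3 spikes on [0, 1]
         xs, ws = np.array([0.2, 0.5, 0.9]), np.array([0.3, 0.5, 0.2])
         moments = [float(np.sum(ws * xs ** t)) for t in range(6)]
         print(prony(moments, k=3))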

  9. OpenFE: Automated Feature Generation with Expert-level Performance. Tianping Zhang, Zheyu Zhang, Zhiyuan Fan, Haoyan Luo, Fengyuan Liu, Qian Liu, Wei Cao, Jian Li. The 40th International Conference on Machine Learning (ICML 2023) [Github] [ Show Abstract ]

  10. The goal of automated feature generation is to liberate machine learning experts from the laborious task of manual feature generation, which is crucial for improving the learning performance of tabular data. The major challenge in automated feature generation is to efficiently and accurately identify useful features from a vast pool of candidate features. In this paper, we present OpenFE, an automated feature generation tool that provides competitive results against machine learning experts. OpenFE achieves efficiency and accuracy with two components: 1) a novel feature boosting method for accurately estimating the incremental performance of candidate features. 2) a feature-scoring framework for retrieving effective features from a large number of candidates through successive featurewise halving and feature importance attribution. Extensive experiments on seven benchmark datasets show that OpenFE outperforms existing baseline methods. We further evaluate OpenFE in two famous Kaggle competitions with thousands of data science teams participating. In one of the competitions, features generated by OpenFE with a simple baseline model can beat 99.3\% data science teams. In addition to the empirical results, we provide a theoretical perspective to show that feature generation is beneficial in a simple yet representative setting.
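
     A hedged sketch of the feature-wise successive halving idea (generic successive halving over candidate features; the scoring function, data-fraction schedule, and all names below are illustrative assumptions, not OpenFE's exact procedure):

         import numpy as np

         def successive_featurewise_halving(candidates, score_fn, data, n_keep=8, frac0=0.125):
             # Halve the candidate-feature pool each round while doubling the data fraction.
             # score_fn(feature, subset) returns an estimate of the feature's incremental value.
             rng = np.random.default_rng(0)
             pool, frac = list(candidates), frac0
             while len(pool) > n_keep:
                 subset = rng.choice(len(data), size=max(1, int(frac * len(data))), replace=False)
                 scores = {f: score_fn(f, data[subset]) for f in pool}
                 pool = sorted(pool, key=scores.get, reverse=True)[: max(n_keep, len(pool) // 2)]
                 frac = min(1.0, 2 * frac)
             return pool

         # toy usage: candidate "features" are column indices; column 32 is the target
         rng = np.random.default_rng(1)
         data = rng.normal(size=(2000, 33))
         data[:, -1] = 3 * data[:, 0] + 0.5 * data[:, 7] + 0.1 * rng.normal(size=2000)
         score = lambda j, d: abs(np.corrcoef(d[:, j], d[:, -1])[0, 1])   # crude proxy for usefulness
         print(successive_featurewise_halving(range(32), score, data))    # features 0 and 7 should survive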

  11. Not All Tasks Are Born Equal: Understanding Zero-Shot Generalization. Jing Zhou, Zongyu Lin, Yanan Zheng, Jian Li, Zhilin Yang. The Eleventh International Conference on Learning Representations (ICLR 2023 spotlight). [Paper] [ Show Abstract ]

  12. Recent work has achieved remarkable zero-shot performance with multi-task prompted pretraining, but little has been understood. For the first time, we show that training on a small number of key tasks beats using all the training tasks, while removing these key tasks substantially hurts performance. We also find that these key tasks are mostly question answering (QA) tasks. These novel findings combined deepen our understanding about zero-shot generalization: training on certain tasks such as QA encodes general knowledge transferable to a wide range of tasks. In addition, to automate this procedure, we devise a method that (1) identifies key training tasks without observing the test tasks by examining the pairwise generalization results and (2) resamples training tasks for better data distribution. Empirically, our approach achieves improved results across various model scales and tasks.

  13. Unbiased Gradient Boosting Decision Tree with Unbiased Feature Importance. Zheyu Zhang, Tianping Zhang, Jian Li. The 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023) [Paper] [ Show Abstract ]

  14. Gradient Boosting Decision Tree (GBDT) has achieved remarkable success in a wide variety of applications. The split finding algorithm, which determines the tree construction process, is one of the most crucial components of GBDT. However, the split finding algorithm has long been criticized for its bias towards features with a larger number of potential splits. This bias introduces severe interpretability and overfitting issues in GBDT. To this end, we provide a fine-grained analysis of bias in GBDT and demonstrate that the bias originates from 1) the systematic bias in the gain estimation of each split and 2) the bias in the split finding algorithm resulting from the use of the same data to evaluate the split improvement and determine the best split. Based on the analysis, we propose unbiased gain, a new unbiased measurement of gain importance using out-of-bag samples. Moreover, we incorporate the unbiased property into the split finding algorithm during tree construction and develop UnbiasedGBM to solve the overfitting issue of GBDT. We empirically assess the performance of UnbiasedGBM and unbiased gain in a large-scale empirical study comprising 60 tabular datasets and demonstrate that: 1) UnbiasedGBM exhibits better performance than popular GBDT implementations such as LightGBM, XGBoost, and Catboost on average on the 60 datasets and 2) unbiased gain achieves better average performance in feature selection than popular feature importance methods including gain importance, permutation feature importance, and SHAP importance.
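
     A small illustration of the out-of-bag idea (a generic XGBoost-style second-order gain, with the split chosen on in-bag data and then re-evaluated on held-out data; this mirrors the spirit of the paper's unbiased gain rather than its exact estimator, and all names and data are illustrative):

         import numpy as np

         def split_gain(g, h, left_mask, lam=1.0):
             # XGBoost-style gain of splitting a node into left/right using gradients g and hessians h.
             GL, HL = g[left_mask].sum(), h[left_mask].sum()
             GR, HR = g[~left_mask].sum(), h[~left_mask].sum()
             return GL**2 / (HL + lam) + GR**2 / (HR + lam) - (GL + GR)**2 / (HL + HR + lam)

         rng = np.random.default_rng(0)
         x = rng.normal(size=1000)
         y = (x > 0).astype(float) + rng.normal(scale=0.5, size=1000)
         g, h = -(y - 0.0), np.ones_like(y)      # gradients/hessians of squared loss at prediction 0
         inbag = rng.random(1000) < 0.7          # in-bag vs. out-of-bag samples

         # Choose the threshold on in-bag data only ...
         thresholds = np.quantile(x[inbag], np.linspace(0.1, 0.9, 17))
         best = max(thresholds, key=lambda t: split_gain(g[inbag], h[inbag], x[inbag] <= t))
         # ... then compare the (optimistically biased) in-bag gain with the held-out estimate.
         print("in-bag gain    :", split_gain(g[inbag], h[inbag], x[inbag] <= best))
         print("out-of-bag gain:", split_gain(g[~inbag], h[~inbag], x[~inbag] <= best))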

  15. Towards Generalizable Reinforcement Learning for Trade Execution. Chuheng Zhang, Yitong Duan, Xiaoyu Chen, Jianyu Chen, Jian Li, Li Zhao. The 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023) [Paper] [ Show Abstract ]

  16. Optimized trade execution is to sell (or buy) a given amount of assets in a given time with the lowest possible trading cost. Recently, reinforcement learning (RL) has been applied to optimized trade execution to learn smarter policies from market data. However, we find that many existing RL methods exhibit considerable overfitting which prevents them from real deployment. In this paper, we provide an extensive study on the overfitting problem in optimized trade execution. First, we model the optimized trade execution as offline RL with dynamic context, where the context represents market variables that cannot be influenced by the trading policy and are collected in an offline manner. Under this framework, we derive the generalization bound and find that the overfitting issue is caused by large context space and limited context samples in the offline setting. Accordingly, we propose to learn compact representations for context to address the overfitting problem, either by leveraging prior knowledge or in an end-to-end manner. To evaluate our algorithms, we also implement a carefully designed simulator based on historical limit order book (LOB) data to provide a high-fidelity benchmark for different algorithms. Our experiments on the high-fidelity simulator demonstrate that our algorithms can effectively alleviate overfitting and achieve better performance.

  17. AEC-GAN: Adversarial Error Correction GANs for Auto-Regressive Long Time-series Generation. Lei Wang, Liang Zeng, Jian Li. The Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2023) [Paper] [ Show Abstract ]

  18. Large-scale high-quality data is critical for training modern deep neural networks. However, data acquisition can be costly or time-consuming for many time-series applications, thus researchers turn to generative models for generating synthetic time-series data. In particular, recent generative adversarial networks (GANs) have achieved remarkable success in time-series generation. Despite their success, existing GAN models typically generate the sequences in an auto-regressive manner, and we empirically observe that they suffer from severe distribution shifts and bias amplification, especially when generating long sequences. To resolve this problem, we propose Adversarial Error Correction GAN (AEC-GAN), which is capable of dynamically correcting the bias in the past generated data to alleviate the risk of distribution shifts and thus can generate high-quality long sequences. AEC-GAN contains two main innovations: (1) We develop an error correction module to mitigate the bias. In the training phase, we adversarially perturb the realistic time-series data and then optimize this module to reconstruct the original data. In the generation phase, this module can act as an efficient regulator to detect and mitigate the bias. (2) We propose an augmentation method to facilitate GAN's training by introducing adversarial examples. Thus, AEC-GAN can generate high-quality sequences of arbitrary lengths, and the synthetic data can be readily applied to downstream tasks to boost their performance. We conduct extensive experiments on six widely used datasets and three state-of-the-art time-series forecasting models to evaluate the quality of our synthetic time-series data in different lengths and downstream tasks. Both the qualitative and quantitative experimental results demonstrate the superior performance of AEC-GAN over other deep generative models for time-series generation.

  19. ImGCL: Revisiting Graph Contrastive Learning on Imbalanced Node Classification. Liang Zeng, Lanqing Li, Ziqi Gao, Peilin Zhao, Jian Li. The Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2023) [Paper] [ Show Abstract ]

  20. Graph contrastive learning (GCL) has attracted a surge of attention due to its superior performance for learning node/graph representations without labels. However, in practice, the underlying class distribution of unlabeled nodes for the given graph is usually imbalanced. This highly imbalanced class distribution inevitably deteriorates the quality of learned node representations in GCL. Indeed, we empirically find that most state-of-the-art GCL methods cannot obtain discriminative representations and exhibit poor performance on imbalanced node classification. Motivated by this observation, we propose a principled GCL framework on Imbalanced node classification (ImGCL), which automatically and adaptively balances the representations learned from GCL without labels. Specifically, we first introduce the online clustering based progressively balanced sampling (PBS) method with theoretical rationale, which balances the training sets based on pseudo-labels obtained from learned representations in GCL. We then develop the node centrality based PBS method to better preserve the intrinsic structure of graphs, by upweighting the important nodes of the given graph. Extensive experiments on multiple imbalanced graph datasets and imbalanced settings demonstrate the effectiveness of our proposed framework, which significantly improves the performance of the recent state-of-the-art GCL methods. Further experimental ablations and analyses show that the ImGCL framework consistently improves the representation quality of nodes in under-represented (tail) classes.

  21. Symphony in the Latent Space: Provably Integrating High-dimensional Techniques with Non-linear Machine Learning Models. Qiong Wu, Jian Li, Zhenming Liu, Yanhua Li, Mihai Cucuringu. The Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2023) [Paper] [ Show Abstract ]

  22. This paper revisits building machine learning algorithms that involve interactions between entities, such as those between financial assets in an actively managed portfolio, or interactions between users in a social network. Our goal is to forecast the future evolution of ensembles of multivariate time series in such applications (e.g., the future return of a financial asset or the future popularity of a Twitter account). Designing ML algorithms for such systems requires addressing the challenges of high-dimensional interactions and non-linearity. We propose a novel framework, which we dub as the additive influence model. Under our modeling assumption, we show that it is possible to decouple the learning of high-dimensional interactions from the learning of non-linear feature interactions. To learn the high-dimensional interactions, we leverage kernel-based techniques, with provable guarantees, to embed the entities in a low-dimensional latent space. To learn the non-linear feature-response interactions, we generalize prominent machine learning techniques, including designing a new statistically sound non-parametric method and an ensemble learning algorithm optimized for vector regressions.

  23. Generalized Unrelated Machine Scheduling Problem. Shichuan Deng, Jian Li, Yuval Rabani. ACM-SIAM Symposium on Discrete Algorithms (SODA 2023). [ArXiv] [ Show Abstract ]

  24. We study the generalized load-balancing (GLB) problem, where we are given n jobs, each of which needs to be assigned to one of m unrelated machines with processing times {p_{ij}}. The load of each machine i is a symmetric monotone norm function of the processing times. Our goal is to minimize the generalized makespan, which is another symmetric monotone norm over the m-dimensional machine load vector. This problem significantly generalizes many classic optimization problems, e.g., makespan minimization, set cover, minimum-norm load-balancing, etc. We obtain a polynomial time randomized algorithm that achieves an approximation factor of O(log n), matching the lower bound of set cover up to a constant factor.
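
     In symbols (a restatement of the problem from the abstract; g_i and ψ denote the inner and outer symmetric monotone norms, a notation chosen here for exposition): an assignment σ: J → [m] of the n jobs induces machine loads, and the goal is

         \min_{\sigma : J \to [m]} \; \psi\Bigl( g_1\bigl((p_{1j})_{j:\sigma(j)=1}\bigr), \ldots, g_m\bigl((p_{mj})_{j:\sigma(j)=m}\bigr) \Bigr).

     Classic makespan minimization is recovered when each g_i is the ℓ1 norm and ψ is the ℓ∞ norm.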

  25. Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation. Jin Xu, Xiaojiang Liu, Jianhao Yan, Deng Cai, Huayang Li, Jian Li. Proceedings of the Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022). [ArXiv] [ Show Abstract ]

  26. While large-scale neural language models, such as GPT2 and BART, have achieved impressive results on various text generation tasks, they tend to get stuck in undesirable sentence-level loops with maximization-based decoding algorithms (\textit{e.g.}, greedy search). This phenomenon is counter-intuitive since there are few consecutive sentence-level repetitions in human corpora (e.g., 0.02\% in Wikitext-103). To investigate the underlying reasons for generating consecutive sentence-level repetitions, we study the relationship between the probabilities of the repetitive tokens and their previous repetitions in the context. Through our quantitative experiments, we find that 1) Language models have a preference to repeat the previous sentence; 2) The sentence-level repetitions have a \textit{self-reinforcement effect}: the more times a sentence is repeated in the context, the higher the probability of continuing to generate that sentence; 3) The sentences with higher initial probabilities usually have a stronger self-reinforcement effect. Motivated by our findings, we propose a simple and effective training method \textbf{DITTO} (Pseu\underline{D}o-Repet\underline{IT}ion Penaliza\underline{T}i\underline{O}n), where the model learns to penalize probabilities of sentence-level repetitions from pseudo repetitive data. Although our method is motivated by mitigating repetitions, experiments show that DITTO not only mitigates the repetition issue without sacrificing perplexity, but also achieves better generation quality. Extensive experiments on open-ended text generation (Wikitext-103) and text summarization (CNN/DailyMail) demonstrate the generality and effectiveness of our method.

  27. Analyzing Sharpness along GD Trajectory: Progressive Sharpening and Edge of Stability. Zhouzi Li, Zixuan Wang, Jian Li. Proceedings of the Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022). [ArXiv] [ Show Abstract ]

  28. Recent findings (e.g., arXiv:2103.00065) demonstrate that modern neural networks trained by full-batch gradient descent typically enter a regime called Edge of Stability (EOS). In this regime, the sharpness, i.e., the maximum Hessian eigenvalue, first increases to the value 2/(step size) (the progressive sharpening phase) and then oscillates around this value (the EOS phase). This paper aims to analyze the GD dynamics and the sharpness along the optimization trajectory. Our analysis naturally divides the GD trajectory into four phases depending on the change of the sharpness. We empirically identify the norm of output layer weight as an interesting indicator of sharpness dynamics. Based on this empirical observation, we attempt to theoretically and empirically explain the dynamics of various key quantities that lead to the change of sharpness in each phase of EOS. Moreover, based on certain assumptions, we provide a theoretical proof of the sharpness behavior in EOS regime in two-layer fully-connected linear neural networks. We also discuss some other empirical findings and the limitation of our theoretical results.

  29. Generalization Bounds for Gradient Methods via Discrete and Continuous Prior. Jian Li, Xuanyuan Luo. Proceedings of the Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022). [ArXiv] [ Show Abstract ]

  30. Proving algorithm-dependent generalization error bounds for gradient-type optimization methods has attracted significant attention recently in learning theory. However, most existing trajectory-based analyses require either restrictive assumptions on the learning rate (e.g., fast decreasing learning rate), or continuous injected noise (such as the Gaussian noise in Langevin dynamics). In this paper, we introduce a new discrete data-dependent prior to the PAC-Bayesian framework, and prove a high probability generalization bound for Floored GD (i.e. a finite precision version of gradient descent). We remark that our bound holds for nonconvex and nonsmooth scenarios. Moreover, our theoretical results provide numerically favorable upper bounds of testing errors (e.g., 0.037 on MNIST). Using a similar technique, we can also obtain new generalization bounds for certain variants of SGD. Furthermore, we study the generalization bounds for gradient Langevin Dynamics (GLD). Using the same framework with a carefully constructed continuous prior, we show a new high probability tighter generalization bound for GLD. The new faster rate is due to the concentration of the difference between the gradient of training samples and that of the prior.

  31. Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization. Zhize Li, Jian Li. Journal of Machine Learning Research (JMLR), 2022. [ArXiv] [ Show Abstract ]

    We propose and analyze several stochastic gradient algorithms for finding stationary points or local minima in nonconvex, possibly with nonsmooth regularizer, finite-sum and online optimization problems. First, we propose a simple proximal stochastic gradient algorithm based on variance reduction called ProxSVRG+. We provide a clean and tight analysis of ProxSVRG+, which shows that it outperforms the deterministic proximal gradient descent (ProxGD) for a wide range of minibatch sizes, hence solves an open problem proposed in Reddi et al. (2016). Also, ProxSVRG+ uses far fewer proximal oracle calls than ProxSVRG (Reddi et al. 2016) and extends to the online setting by avoiding full gradient computations. Then, we further propose an optimal algorithm, called SSRGD, based on SARAH (Nguyen et al. 2017) and show that SSRGD further improves the gradient complexity of ProxSVRG+ and achieves the optimal upper bound, matching the known lower bound. Moreover, we show that both ProxSVRG+ and SSRGD enjoy automatic adaptation to the local structure of the objective function such as the Polyak-Łojasiewicz (PL) condition for nonconvex functions in the finite-sum case, i.e., we prove that both of them can automatically switch to faster global linear convergence without any restart performed in prior work ProxSVRG. Finally, we focus on the more challenging problem of finding an $(\epsilon, \delta)$-local minimum instead of just finding an $\epsilon$-approximate (first-order) stationary point (which may be some bad unstable saddle points). We show that SSRGD can find an $(\epsilon, \delta)$-local minimum by simply adding some random perturbations. Our algorithm is almost as simple as its counterpart for finding stationary points, and achieves similar optimal rates.
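
     A minimal sketch of the ProxSVRG+-style template (variance-reduced stochastic gradients around a periodic snapshot, followed by a proximal step for the nonsmooth regularizer; the step size, epoch/minibatch schedule, and the l1 regularizer are illustrative choices, not the paper's tuned parameters):

         import numpy as np

         def prox_l1(x, t):
             # Proximal operator of t * ||x||_1 (soft-thresholding).
             return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

         def prox_svrg_plus(grad_i, n, x0, eta=0.05, reg=1e-3, epochs=20, inner=50, batch=8, seed=0):
             # Sketch of a proximal SVRG loop: v = grad_i(x) - grad_i(snapshot) + full_grad, then a prox step.
             rng = np.random.default_rng(seed)
             x = np.array(x0, dtype=float)
             for _ in range(epochs):
                 snap = x.copy()
                 full_grad = np.mean([grad_i(snap, i) for i in range(n)], axis=0)   # snapshot gradient
                 for _ in range(inner):
                     idx = rng.integers(0, n, size=batch)
                     v = np.mean([grad_i(x, i) - grad_i(snap, i) for i in idx], axis=0) + full_grad
                     x = prox_l1(x - eta * v, eta * reg)
             return x

         # toy usage: nonconvex component functions f_i(x) = 1 - cos(a_i . x)
         A = np.random.default_rng(1).normal(size=(100, 5))
         grad_i = lambda x, i: np.sin(A[i] @ x) * A[i]
         print(prox_svrg_plus(grad_i, n=100, x0=np.ones(5)))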

  32. DeepScalper: A Risk-Aware Reinforcement Learning Framework to Capture Fleeting Intraday Trading Opportunities. Shuo Sun, Rundong Wang, Wanqi Xue, Xu He, Junlei Zhu, Jian Li and Bo An. The 31st ACM International Conference on Information and Knowledge Management (CIKM 2022). [ Show Abstract ]

    Reinforcement learning (RL) techniques have shown great success in many challenging quantitative trading tasks, such as portfolio management and algorithmic trading. Especially, intraday trading is one of the most profitable and risky tasks because of the intraday behaviors of the financial market that reflect billions of rapidly fluctuating capitals. However, a vast majority of existing RL methods focus on the relatively low frequency trading scenarios (e.g., day-level) and fail to capture the fleeting intraday investment opportunities due to two major challenges: 1) how to effectively train profitable RL agents for intraday investment decision-making, which involves high-dimensional fine-grained action space; 2) how to learn meaningful multi-modality market representation to understand the intraday behaviors of the financial market at tick-level. Motivated by the efficient workflow of professional human intraday traders, we propose DeepScalper, a deep reinforcement learning framework for intraday trading to tackle the above challenges. Specifically, DeepScalper includes four components: 1) a dueling Q-network with action branching to deal with the large action space of intraday trading for efficient RL optimization; 2) a novel reward function with a hindsight bonus to encourage RL agents making trading decisions with a long-term horizon of the entire trading day; 3) an encoder-decoder architecture to learn multi-modality temporal market embedding, which incorporates both macro-level and micro-level market information; 4) a risk-aware auxiliary task to maintain a striking balance between maximizing profit and minimizing risk. Through extensive experiments on real-world market data spanning over three years on six financial futures (2 stock index and 4 treasury bond), we demonstrate that DeepScalper significantly outperforms many state-of-the-art baselines in terms of four financial criteria. Furthermore, we conduct a series of exploratory and ablative studies to analyze the contributions of each component in DeepScalper.

  33. Integrating Diverse Policies for Portfolio Management via Combining Imitation Learning and Reinforcement Learning. Hui Niu, Siyuan Li and Jian Li. The 31st ACM International Conference on Information and Knowledge Management (CIKM 2022). [ Show Abstract ]

    Portfolio management is a fundamental problem in finance. It involves periodic reallocations of assets to maximize the expected returns within an appropriate level of risk exposure. Deep reinforcement learning (RL) has been considered a promising approach to solving this problem owing to its strong ability in sequential decision making. However, due to the non-stationary nature of financial markets, applying RL techniques to portfolio optimization remains a challenging problem. Extracting trading knowledge from various expert strategies could be helpful for agents to accommodate the changing markets. In this paper, we propose \textit{MetaTrader}, a novel two-stage RL-based approach for portfolio management, which learns to integrate diverse trading policies to adapt to various market conditions. In the first stage, MetaTrader incorporates an imitation learning objective into the reinforcement learning framework. Through imitating different expert demonstrations, MetaTrader acquires a set of trading policies with great diversity. In the second stage, MetaTrader learns a meta-policy to recognize the market conditions and decide on the most proper learned policy to follow. We evaluate the proposed approach on three real-world index datasets and compare it to state-of-the-art baselines. The empirical results demonstrate that MetaTrader significantly outperforms those baselines in balancing profits and risks. Furthermore, thorough ablation studies validate the effectiveness of the components in the proposed approach.

  34. Analyzing and Mitigating Interference in Neural Architecture Search. Jin Xu, Xu Tan, Kaitao Song, Renqian Luo, Yichong Leng, Tao Qin, Tie-Yan Liu, Jian Li. The 39th International Conference on Machine Learning (ICML 2022, spotlight) [ Show Abstract ]

    Weight sharing is a popular approach to reduce the cost of neural architecture search (NAS) by reusing the weights of shared operators from previously trained child models. However, the rank correlation between the estimated accuracy and ground truth accuracy of those child models is low due to the interference among different child models caused by weight sharing. In this paper, we investigate the interference issue by sampling different child models and calculating the gradient similarity of shared operators, and observe: 1) the interference on a shared operator between two child models is positively correlated with the number of different operators; 2) the interference is smaller when the inputs and outputs of the shared operator are more similar. Inspired by these two observations, we propose two approaches to mitigate the interference: 1) MAGIC-T: rather than randomly sampling child models for optimization, we propose a gradual modification scheme by modifying one operator between adjacent optimization steps to minimize the interference on the shared operators; 2) MAGIC-A: forcing the inputs and outputs of the operator across all child models to be similar to reduce the interference. Experiments on a BERT search space verify that mitigating interference via each of our proposed methods improves the rank correlation of the supernet, and combining both methods can achieve better results. Our discovered architecture outperforms RoBERTa by 1.1 and 0.6 points and ELECTRA by 1.6 and 1.1 points on the dev and test set of the GLUE benchmark. Extensive results on the BERT compression, reading comprehension and ImageNet task demonstrate the effectiveness and generality of our proposed methods.
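
     A toy sketch of the measurement described above (two child models that share one operator but differ elsewhere; the architectures and data are made up purely to illustrate how gradient similarity on the shared weights can be computed):

         import torch
         import torch.nn as nn
         import torch.nn.functional as F

         shared = nn.Linear(16, 16)                                     # operator shared by all child models
         childA = nn.Sequential(shared, nn.ReLU(), nn.Linear(16, 4))
         childB = nn.Sequential(shared, nn.Tanh(), nn.Linear(16, 4))    # differs in its other operators

         x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))

         def shared_grad(child):
             # Gradient of this child's loss with respect to the shared operator's parameters.
             child.zero_grad()
             F.cross_entropy(child(x), y).backward()
             return torch.cat([p.grad.flatten() for p in shared.parameters()])

         gA, gB = shared_grad(childA), shared_grad(childB)
         print("cosine similarity on the shared operator:", F.cosine_similarity(gA, gB, dim=0).item())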

  35. FactorVAE: A Probabilistic Dynamic Factor Model Based on Variational Autoencoder for Predicting Cross-sectional Stock Returns. Yitong Duan, Lei Wang, Qizhong Zhang, Jian Li. AAAI Conference on Artificial Intelligence (AAAI 2022). [ Show Abstract ]

    As an asset pricing model in economics and finance, factor model has been widely used in quantitative investment. Towards building more effective factor models, recent years have witnessed the paradigm shift from linear models to more flexible nonlinear data-driven machine learning models. However, due to low signal-to-noise ratio of the financial data, it is quite challenging to learn effective factor models. In this paper, we propose a novel factor model, FactorVAE, as a probabilistic model with inherent randomness for noise modeling. Essentially, our model integrates the dynamic factor model (DFM) with the variational autoencoder (VAE) in machine learning, and we propose a prior-posterior learning method based on VAE, which can effectively guide the learning of model by approximating an optimal posterior factor model with future information. Particularly, considering that risk modeling is important for the noisy stock data, FactorVAE can estimate the variances from the distribution over the latent space of VAE, in addition to predicting returns. The experiments on the real stock market data demonstrate the effectiveness of FactorVAE, which outperforms various baseline methods.

  36. FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning. Jing Zhou, Yanan Zheng, Jie Tang, Jian Li, and Zhilin Yang. 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022). [ArXiv] [ Show Abstract ]

    Most previous methods for text data augmentation are limited to simple tasks and weak baselines. We explore data augmentation on hard tasks (i.e., few-shot natural language understanding) and strong baselines (i.e., pretrained models with over one billion parameters). Under this setting, we reproduced a large number of previous augmentation methods and found that these methods bring marginal gains at best and sometimes degrade the performance much. To address this challenge, we propose a novel data augmentation method FlipDA that jointly uses a generative model and a classifier to generate label-flipped data. Central to the idea of FlipDA is the discovery that generating label-flipped data is more crucial to the performance than generating label-preserved data. Experiments show that FlipDA achieves a good tradeoff between effectiveness and robustness--- it substantially improves many tasks while not negatively affecting the others.

  37. FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding. Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Jian Li, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, Zhilin Yang. 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022). [ArXiv] [ Show Abstract ]

    The few-shot natural language understanding (NLU) task has attracted much recent attention. However, prior methods have been evaluated under a disparate set of protocols, which hinders fair comparison and measuring progress of the field. To address this issue, we introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability. Under this new evaluation framework, we re-evaluate several state-of-the-art few-shot methods for NLU tasks. Our framework reveals new insights: (1) both the absolute performance and relative gap of the methods were not accurately estimated in prior literature; (2) no single method dominates most tasks with consistent performance; (3) improvements of some methods diminish with a larger pretrained model; and (4) gains from different methods are often complementary and the best combined model performs close to a strong fully-supervised baseline. We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.

  38. Synthesizing Entity Resolution Datasets. Xuedi Qin, Chengliang Chai, Nan Tang, Jian Li, Yuyu Luo, Guoliang Li, Yaoyu Zhu. The 38th IEEE International Conference on Data Engineering (ICDE 2022). [ Show Abstract ]

    Entity resolution (ER) is a core problem in data integration. Many companies have lots of datasets where ER needs to be conducted to integrate the data. On the one hand, it is nontrivial for non-ER experts within companies to design ER solutions. On the other hand, most companies are reluctant to release their real datasets for multiple reasons (e.g., privacy issues). A typical solution from the machine learning (ML) and the statistical community is to create surrogate (a.k.a. analogous) datasets based on the real dataset, release these surrogate datasets to the public to train ML models, such that these models trained on surrogate datasets can be either directly used or be adapted for the real dataset by the companies. In this paper, we study a new problem of synthesizing surrogate ER datasets using transformer models, with the goal that the ER model trained on the synthesized dataset can be used directly on the real dataset. We propose methods to solve this problem: we first learn the true similarity distributions of both matching and non-matching entity pairs from real dataset. We then devise algorithms that can synthesize fake but semantically meaningful entities, add matching and non-matching labels to these fake entity pairs, and ensure that the fake and real datasets have similar distributions. We also describe a method for entity rejection to avoid synthesizing bad fake entities that may destroy the original distributions. Extensive experiments show that ER matchers trained on real and synthetic ER datasets have very close performance on the same test sets – their F1 scores differ within 6% on 3 commonly used ER datasets, and their average precision, recall differences are less than 5%.

  39. AutoHEnsGNN: Winning Solution to AutoGraph Challenge for KDD Cup 2020. Jin Xu, Mingjian Chen, Jianqiang Huang, Tangxing Yuan, Ke Hu, Jian Li, Jia Cheng, Jun Lei. The 38th IEEE International Conference on Data Engineering (ICDE 2022). [paper] [ Show Abstract ]

    Graph Neural Networks (GNNs) have become increasingly popular and achieved impressive results in many graph-based applications. However, extensive manual work and domain knowledge are required to design effective architectures, and the results of GNN models have high variance with different training setups, which limits the application of existing GNN models. In this paper, we present AutoHEnsGNN, a framework to build effective and robust models for graph tasks without any human intervention. AutoHEnsGNN won first place in the AutoGraph Challenge for KDD Cup 2020, and achieved the best rank score of five real-life datasets in the final phase. Given a task, AutoHEnsGNN first applies a fast proxy evaluation to automatically select a pool of promising GNN models. Then it builds a hierarchical ensemble framework: 1) We propose graph self-ensemble (GSE), which can reduce the variance of weight initialization and efficiently exploit the information of local and global neighborhoods; 2) Based on GSE, a weighted ensemble of different types of GNN models is used to effectively learn more discriminative node representations. To efficiently search the architectures and ensemble weights, we propose AutoHEnsGNN$_{\text{Gradient}}$, which treats the architectures and ensemble weights as architecture parameters and uses gradient-based architecture search to obtain optimal configurations, and AutoHEnsGNN$_{\text{Adaptive}}$, which can adaptively adjust the ensemble weight based on the model accuracy. Extensive experiments on KDD Cup datasets and commonly used datasets Cora, Citeseer, Pubmed and ogbn-arxiv demonstrate the effectiveness and robustness of AutoHEnsGNN.

  40. Multi-token Markov Game with Switching Costs. Jian Li, Daogao Liu. ACM-SIAM Symposium on Discrete Algorithms (SODA 2022). [paper] [ Show Abstract ]

    We study a general Markov game with metric switching costs: in each round, the player adaptively chooses one of several Markov chains to advance with the objective of minimizing the expected cost for at least k chains to reach their target states. If the player decides to play a different chain, an additional switching cost is incurred. The special case in which there is no switching cost was solved optimally by Dumitriu, Tetali, and Winkler [DTW03] by a variant of the celebrated Gittins Index for the classical multi-armed bandit (MAB) problem with Markovian rewards [Gittins 74, Gittins79]. However, for multi-armed bandit (MAB) with nontrivial switching cost, even if the switching cost is a constant, the classic paper by Banks and Sundaram [BS94] showed that no index strategy can be optimal. In this paper, we complement their result and show there is a simple index strategy that achieves a constant approximation factor if the switching cost is constant and k=1. To the best of our knowledge, this is the first index strategy that achieves a constant approximation factor for a general MAB variant with switching costs. For the general metric, we propose a more involved constant-factor approximation algorithm, via a nontrivial reduction to the stochastic k-TSP problem, in which a Markov chain is approximated by a random variable. Our analysis makes extensive use of various interesting properties of the Gittins index.

  41. Simple Combinatorial Algorithms for Combinatorial Bandits: Corruptions and Approximations. Haike Xu, Jian Li. Uncertainty in Artificial Intelligence (UAI 2021). [paper] [ Show Abstract ]

    We consider the stochastic combinatorial semi-bandit problem with adversarial corruptions.

  42. NAS-BERT: Task-Agnostic and Adaptive-Size BERT Compression with Neural Architecture Search. Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, Tie-Yan Liu. In Proceedings of the 27th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2021). [paper] [ Show Abstract ]

    While pre-trained language models (e.g., BERT) have achieved impressive results on different natural language processing tasks, they have large numbers of parameters and suffer from big computational and memory costs, which make them difficult for real-world deployment. Therefore, model compression is necessary to reduce the computation and memory cost of pre-trained models. In this work, we aim to compress BERT and address the following two challenging practical issues: (1) The compression algorithm should be able to output multiple compressed models with different sizes and latencies, in order to support devices with different memory and latency limitations; (2) The algorithm should be downstream task agnostic, so that the compressed models are generally applicable for different downstream tasks. We leverage techniques in neural architecture search (NAS) and propose NAS-BERT, an efficient method for BERT compression. NAS-BERT trains a big supernet on a search space containing a variety of architectures and outputs multiple compressed models with adaptive sizes and latency. Furthermore, the training of NAS-BERT is conducted on standard self-supervised pre-training tasks (e.g., masked language model) and does not depend on specific downstream tasks. Thus, the compressed models can be used across various downstream tasks. The technical challenge of NAS-BERT is that training a big supernet on the pre-training task is extremely costly. We employ several techniques including block-wise search, search space pruning, and performance approximation to improve search efficiency and accuracy. Extensive experiments on GLUE and SQuAD benchmark datasets demonstrate that NAS-BERT can find lightweight models with better accuracy than previous approaches, and can be directly applied to different downstream tasks with adaptive model sizes for different requirements of memory or latency.

  43. Return-Based Contrastive Representation Learning for Reinforcement Learning. Guoqing Liu, Chuheng Zhang, Li Zhao, Tao Qin, Jinhua Zhu, Jian Li, Nenghai Yu, Tie-Yan Liu. 2021 International Conference on Learning Representations (ICLR 2021) [paper] [ Show Abstract ]

    Recently, various auxiliary tasks have been proposed to accelerate representation learning and improve sample efficiency in deep reinforcement learning (RL). However, existing auxiliary tasks do not take the characteristics of RL problems into consideration and are unsupervised. By leveraging returns, the most important feedback signals in RL, we propose a novel auxiliary task that forces the learnt representations to discriminate state-action pairs with different returns. Our auxiliary loss is theoretically justified to learn representations that capture the structure of a new form of state-action abstraction, under which state-action pairs with similar return distributions are aggregated together. Empirically, our algorithm outperforms strong baselines on complex tasks in Atari games and DeepMind Control suite, and achieves even better performance when combined with existing auxiliary tasks.

  44. Exploration by Maximizing Renyi Entropy for Reward-Free RL Framework. Chuheng Zhang, Yuanying Cai, Longbo Huang, Jian Li. AAAI Conference on Artificial Intelligence (AAAI 2021). [ArXiv] [ Show Abstract ]

    We consider a reward-free meta RL framework that completely separates exploration from exploitation and is suitable for the meta RL setting where there are many reward functions of interest. In the exploration phase, the agent learns an exploratory policy by interacting with a reward-free environment and collects a dataset of transitions by executing the policy. In the planning phase, the agent computes a good policy for any reward function based on the dataset without further interacting with the environment. This framework brings new challenges for exploration algorithms. In the exploration phase, we propose to maximize the Rényi entropy over the state-action space and justify this objective theoretically. We further deduce a policy gradient formulation for this objective and design a practical exploration algorithm that can deal with complex environments based on PPO. In the planning phase, we use a batch RL algorithm, batch constrained deep Q-learning (BCQ), to solve for good policies given arbitrary reward functions. Empirically, we show that our exploration algorithm is effective and sample efficient, and results in superior policies for arbitrary reward functions in the planning phase.
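
     For orientation, the exploration objective is the Rényi entropy of the state-action visitation distribution d^π induced by the exploratory policy (standard definition; α is the order, and α → 1 recovers the Shannon entropy):

         H_\alpha\!\left(d^\pi\right) \;=\; \frac{1}{1-\alpha}\,\log \sum_{(s,a)} d^\pi(s,a)^{\alpha},
         \qquad \pi_{\mathrm{explore}} \in \arg\max_{\pi} H_\alpha\!\left(d^\pi\right).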

  45. Improved Algorithms for Convex-Concave Minimax Optimization. Yuanhao Wang, Jian Li. 2020 Conference on Neural Information Processing Systems (NeurIPS 2020). [ArXiv] [ Show Abstract ]

    This paper studies minimax optimization problems min_x max_y f(x,y), where f(x,y) is m_x-strongly convex with respect to x, m_y-strongly concave with respect to y and (L_x, L_{xy}, L_y)-smooth. This paper proposes a new algorithm with better gradient complexity upper bound, which improves over the best known upper bound by Lin et al. Our bound achieves linear convergence rate and tighter dependency on condition numbers, especially when L_{xy} ≪ L (i.e., when the interaction between x and y is weak). Via reduction, our new bound also implies improved bounds for strongly convex-concave and convex-concave minimax optimization problems. When f is quadratic, we can further improve the upper bound, which matches the lower bound up to a small sub-polynomial factor.

  46. DoubleEnsemble: A New Ensemble Method Based on Sample Reweighting and Feature Selection for Financial Data Analysis. Chuheng Zhang, Yuanqi Li, Xi Chen, Yifei Jin, Pingzhong Tang, Jian Li. The IEEE International Conference on Data Mining (ICDM 2020). [ArXiv] [ Show Abstract ]

    Modern machine learning models (such as deep neural networks and boosting decision tree models) have become increasingly popular in financial market prediction, due to their superior capacity to extract complex non-linear patterns. However, since financial datasets have very low signal-to-noise ratio and are non-stationary, complex models are often very prone to overfitting and suffer from instability issues. Moreover, as various machine learning and data mining tools become more widely used in quantitative trading, many trading firms have been producing an increasing number of features (aka factors). Therefore, how to automatically select effective features becomes an imminent problem. To address these issues, we propose DoubleEnsemble, an ensemble framework leveraging learning trajectory based sample reweighting and shuffling based feature selection. Specifically, we identify the key samples based on the training dynamics on each sample and elicit key features based on the ablation impact of each feature via shuffling. Our model is applicable to a wide range of base models, capable of extracting complex patterns, while mitigating the overfitting and instability issues for financial market prediction. We conduct extensive experiments, including price prediction for cryptocurrencies and stock trading, using both DNN and gradient boosting decision tree as base models. Our experiment results demonstrate that DoubleEnsemble achieves a superior performance compared with several baseline methods.
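
     A compact sketch of the shuffling idea behind the feature selection component (generic permutation-style importance: shuffle one feature column and measure how much the validation loss degrades; the model, loss, and data are illustrative, and DoubleEnsemble's exact ablation procedure differs in detail):

         import numpy as np
         from sklearn.linear_model import Ridge
         from sklearn.metrics import mean_squared_error

         rng = np.random.default_rng(0)
         X = rng.normal(size=(5000, 10))
         y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.normal(size=5000)   # only features 0 and 3 matter

         model = Ridge().fit(X[:4000], y[:4000])
         X_val, y_val = X[4000:], y[4000:]
         base = mean_squared_error(y_val, model.predict(X_val))

         def shuffle_importance(j):
             # Increase in validation loss after shuffling column j (larger = more important).
             Xp = X_val.copy()
             Xp[:, j] = rng.permutation(Xp[:, j])
             return mean_squared_error(y_val, model.predict(Xp)) - base

         scores = np.array([shuffle_importance(j) for j in range(X.shape[1])])
         print("ranked features:", np.argsort(-scores))   # features 0 and 3 should rank first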

  47. Approximation Algorithms for Clustering with Dynamic Points. Shichuan Deng, Jian Li, Yuval Rabani. The European Symposium on Algorithms (ESA 2020). Journal version in Journal of Computer and System Sciences, 2022. [ArXiv] [ Show Abstract ]

  48. In many classic clustering problems, we seek to sketch a massive data set of n points in a metric space, by segmenting them into k categories or clusters, each cluster represented concisely by a single point in the metric space. Two notable examples are the k-center/k-supplier problem and the k-median problem. In practical applications of clustering, the data set may evolve over time, reflecting an evolution of the underlying clustering model. In this paper, we initiate the study of a dynamic version of clustering problems that aims to capture these considerations. In this version there are T time steps, and in each time step t in {1,2,…,T}, the set of clients needed to be clustered may change, and we can move the k facilities between time steps. More specifically, we study two concrete problems in this framework: the Dynamic Ordered k-Median and the Dynamic k-Supplier problem. We first consider the Dynamic Ordered k-Median problem, where the objective is to minimize the weighted sum of ordered distances over all time steps, plus the total cost of moving the facilities between time steps. We present one constant-factor approximation algorithm for T=2 and another approximation algorithm for fixed T>=3. Then we consider the Dynamic k-Supplier problem, where the objective is to minimize the maximum distance from any client to its facility, subject to the constraint that between time steps the maximum distance moved by any facility is no more than a given threshold. When the number of time steps T is 2, we present a simple constant factor approximation algorithm and a bi-criteria constant factor approximation algorithm for the outlier version, where some of the clients can be discarded. We also show that it is NP-hard to approximate the problem with any factor for T>=3.

  49. LRSpeech: Extremely Low-Resource Speech Synthesis and Recognition. Jin Xu, Xu Tan, Yi Ren, Tao Qin, Jian Li, Sheng Zhao, and Tie-Yan Liu. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2020). [Paper] [ Show Abstract ]

  50. Gradient Descent Maximizes the Margin of Homogeneous Neural Networks. Kaifeng Lyu, Jian Li. 2020 International Conference on Learning Representations (ICLR 2020, oral) [ArXiv] [ Show Abstract ]

  51. Recent works on implicit regularization have shown that gradient descent converges to the max-margin direction for logistic regression with one-layer or multi-layer linear networks. In this paper, we generalize this result to homogeneous neural networks, including fully-connected and convolutional neural networks with ReLU or LeakyReLU activations. In particular, we study the gradient flow (gradient descent with infinitesimal step size) optimizing the logistic loss or cross-entropy loss of any homogeneous model (possibly non-smooth), and show that if the training loss decreases below a certain threshold, then we can define a smoothed version of the normalized margin which increases over time. We also formulate a natural constrained optimization problem related to margin maximization, and prove that both the normalized margin and its smoothed version converge to the objective value at a KKT point of the optimization problem. Furthermore, we extend the above results to a large family of loss functions. We conduct several experiments to justify our theoretical finding on MNIST and CIFAR-10 datasets. For gradient descent with constant learning rate, we observe that the normalized margin indeed keeps increasing after the dataset is fitted, but the speed is very slow. However, if we schedule the learning rate more carefully, we can observe a more rapid growth of the normalized margin. Finally, as margin is closely related to robustness, we discuss potential benefits of training longer for improving the robustness of the model.
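
     For reference, the key quantity is the normalized margin: for an L-homogeneous network f(θ; x) (i.e., f(cθ; x) = c^L f(θ; x) for all c > 0) on a binary classification dataset {(x_i, y_i)}, it is defined as

         \bar{\gamma}(\theta) \;=\; \frac{\min_{i}\, y_i\, f(\theta; x_i)}{\|\theta\|_2^{\,L}},

     and the paper shows that a smoothed version of this quantity increases along the gradient flow once the training loss falls below a threshold.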

  52. On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning. Jian Li, Xuanyuan Luo, Mingda Qiao. 2020 International Conference on Learning Representations (ICLR2020) [ArXiv] [ Show Abstract ]

  53. Generalization error (also known as the out-of-sample error) measures how well the hypothesis obtained from the training data can generalize to previously unseen data. Obtaining tight generalization error bounds is central to statistical learning theory. In this paper, we study the generalization error bound in learning general non-convex objectives, which has attracted significant attention in recent years. In particular, we study the (algorithm-dependent) generalization bounds of various iterative gradient based methods.

    (1) We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds. The new framework combines ideas from both the PAC-Bayesian theory and the notion of algorithmic stability. Applying the Bayes-Stability method, we obtain new data-dependent generalization bounds for stochastic gradient Langevin dynamics (SGLD) and several other noisy gradient methods (e.g., with momentum, mini-batch and acceleration, Entropy-SGD). Our result recovers (and is typically tighter than) a recent result in Mou et al. (2018) and improves upon the results in Pensia et al. (2018). Our experiments demonstrate that our data-dependent bounds can distinguish randomly labelled data from normal data, which provides an explanation for the intriguing phenomena observed in Zhang et al. (2017a).

    (2) We also study the setting where the total loss is the sum of a bounded loss and an additional \ell_2 regularization term. We obtain new generalization bounds for the continuous Langevin dynamics in this setting by developing a new Log-Sobolev inequality for the parameter distribution at any time. Our new bounds are more desirable when the noise level of the process is not small, and do not become vacuous even when T tends to infinity.
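
    For concreteness, the following is a minimal sketch of the SGLD update these bounds concern: a minibatch gradient step plus isotropic Gaussian noise. The toy ridge-regularized least-squares objective, step size, and inverse temperature are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 0.1                      # \ell_2 regularization weight
eta, beta = 1e-3, 1e4          # step size and inverse temperature (illustrative values)
batch = 10

def minibatch_grad(w, idx):
    Xb, yb = X[idx], y[idx]
    # minibatch estimate of the gradient of (1/n) sum_i (x_i^T w - y_i)^2 / 2 + lam*||w||^2 / 2
    return Xb.T @ (Xb @ w - yb) / len(idx) + lam * w

w = np.zeros(d)
for t in range(5000):
    idx = rng.choice(n, size=batch, replace=False)
    # SGLD: gradient step plus Gaussian noise with variance 2*eta/beta
    w = w - eta * minibatch_grad(w, idx) + np.sqrt(2 * eta / beta) * rng.normal(size=d)

print("SGLD iterate:", w)
```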

  54. Algorithms and Adaptivity Gaps for Stochastic k-TSP. Haotian Jiang, Jian Li, Daogao Liu, Sahil Singla. The 11th Innovations in Theoretical Computer Science (ITCS 2020). [ArXiv] [ Show Abstract ]

  55. Given a metric $(V,d)$ and a root $\rho \in V$, the classical $k$-TSP problem is to find a tour originating at $\rho$ of minimum length that visits at least $k$ nodes in $V$. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of $k$-TSP. In Stochastic Reward $k$-TSP, originally defined by Ene-Nagarajan-Saket, each vertex $v$ in the given metric $(V,d)$ contains a stochastic reward $R_v$. The goal is to adaptively find a tour of minimum expected length that collects at least reward $k$; Ene et al. gave an $O(\log k)$-approximation adaptive algorithm for this problem, and left open whether there is an $O(1)$-approximation algorithm. We completely resolve their open question, and even give an $O(1)$-approximation \emph{non-adaptive} algorithm for Stochastic Reward $k$-TSP. We also introduce and obtain similar results for the Stochastic Cost $k$-TSP problem. In this problem each vertex $v$ has a stochastic cost $C_v$, and the goal is to visit and select at least $k$ vertices to minimize the expected \emph{sum} of tour length and cost of selected vertices. Besides being a natural stochastic generalization of $k$-TSP, this problem is also interesting because it generalizes the Price of Information framework by Singla from deterministic probing costs to metric probing costs. Our techniques are based on two crucial ideas: ``repetitions'' and ``critical scaling''. In general, replacing a random variable with its expectation leads to very poor results. We show that for our problems, if we truncate the random variables at an ideal threshold, then their expected values form a good surrogate. Here, we rely on running several repetitions of our algorithm with the same threshold, and then argue concentration using Freedman's and Jogdeo-Samuels inequalities. Unfortunately, this ideal threshold depends on how far we are from achieving our target $k$, which a non-adaptive algorithm does not know. To overcome this barrier, we truncate the random variables at various different scales and identify a ``critical'' scale.

  56. Stochastic Gradient Hamiltonian Monte Carlo with Variance Reduction for Bayesian Inference. Zhize Li, Tianyi Zhang, Shuyu Cheng, Jun Zhu, Jian Li. Machine Learning, 2019. [ArXiv] [ Show Abstract ]

  57. Gradient-based Monte Carlo sampling algorithms, like Langevin dynamics and Hamiltonian Monte Carlo, are important methods for Bayesian inference. In large-scale settings, full gradients are not affordable and thus stochastic gradients evaluated on mini-batches are used as a replacement. In order to reduce the high variance of noisy stochastic gradients, Dubey et al. [2016] applied the standard variance reduction technique to stochastic gradient Langevin dynamics and obtained both theoretical and experimental improvements. In this paper, we apply variance reduction to Hamiltonian Monte Carlo and achieve better theoretical convergence results compared with the variance-reduced Langevin dynamics. Moreover, we apply the symmetric splitting scheme in our variance-reduced Hamiltonian Monte Carlo algorithms to further improve the theoretical results. The experimental results are also consistent with the theoretical results. As our experiments show, variance-reduced Hamiltonian Monte Carlo demonstrates better performance than variance-reduced Langevin dynamics in Bayesian regression and classification tasks on real-world datasets.
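
    The sketch below illustrates the SVRG-style control-variate gradient estimator that variance-reduced SG-MCMC methods of this kind build on; it is a generic illustration rather than the paper's algorithm, and the toy quadratic target and the grad_term callback are made up for the demo.

```python
import numpy as np

def svrg_gradient(theta, snapshot, full_grad_at_snapshot, batch_idx, grad_term, n):
    """Unbiased, variance-reduced estimate of sum_{i=1}^n grad f_i(theta).

    The minibatch gradient at theta is corrected by the minibatch gradient at a
    snapshot point, plus the exact full gradient computed once at that snapshot.
    """
    correction = sum(grad_term(theta, i) - grad_term(snapshot, i) for i in batch_idx)
    return full_grad_at_snapshot + (n / len(batch_idx)) * correction

# Tiny demo on a toy target U(theta) = sum_i ||theta - a_i||^2 / 2.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 2))
grad_term = lambda theta, i: theta - A[i]          # gradient of the i-th data term
theta, snapshot = np.ones(2), 0.9 * np.ones(2)
full_snap = 50 * snapshot - A.sum(axis=0)          # exact full gradient at the snapshot
idx = rng.choice(50, size=5, replace=False)
print(svrg_gradient(theta, snapshot, full_snap, idx, grad_term, 50))
print(50 * theta - A.sum(axis=0))                  # exact full gradient, for comparison
```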

  58. Gradient Boosting With Piece-Wise Linear Regression Trees. Yu Shi, Jian Li, Zhize Li. The 28th International Joint Conference on Artificial Intelligence (IJCAI 2019). [ArXiv] [ Show Abstract ]

  59. Gradient boosting using decision trees as base learners, so-called Gradient Boosted Decision Trees (GBDT), is a very successful ensemble learning algorithm widely used across a variety of applications. Recently, various GBDT construction algorithms and implementations have been designed and heavily optimized in some very popular open-source toolkits such as XGBoost and LightGBM. In this paper, we show that both the accuracy and efficiency of GBDT can be further enhanced by using more complex base learners. Specifically, we extend gradient boosting to use \textit{piecewise linear regression trees} (PL Trees), instead of \textit{piecewise constant regression trees}. We show PL Trees can accelerate the convergence of GBDT. Moreover, our new algorithm fits better to modern computer architectures with powerful Single Instruction Multiple Data (SIMD) parallelism. We propose optimization techniques to speed up our algorithm. The experimental results show that GBDT with PL Trees can provide very competitive testing accuracy with comparable or less training time. Our algorithm also produces much more concise tree ensembles and thus can often reduce testing time.

  60. NetSMF: Large-Scale Network Embedding as Sparse Matrix Factorization. Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Chi Wang, Kuansan Wang and Jie Tang. The 2019 Web Conference (WWW 2019, oral). [paper] [ Show Abstract ]

  61. We study the problem of large-scale network embedding, which aims to learn latent representations for network mining applications. Previous research shows that 1) popular network embedding benchmarks, such as DeepWalk, are in essence implicitly factorizing a matrix with a closed form, and 2) the explicit factorization of such matrix generates more powerful embeddings than existing methods. However, directly constructing and factorizing this matrix, which is dense, is prohibitively expensive in terms of both time and space, making it not scalable for large networks. In this work, we present the algorithm of large-scale network embedding as sparse matrix factorization (NetSMF). NetSMF leverages theories from spectral sparsification to efficiently sparsify the aforementioned dense matrix, enabling significantly improved efficiency in embedding learning. The sparsified matrix is spectrally close to the original dense one with a theoretically bounded approximation error, which helps maintain the representation power of the learned embeddings. We conduct experiments on networks of various scales and types. Results show that among both popular benchmarks (i.e., DeepWalk and LINE) and factorization based methods, NetSMF is the only method that achieves both high efficiency and effectiveness. We show that NetSMF requires only 24 hours to generate effective embeddings for a large-scale academic collaboration network with tens of millions of nodes, while it would cost DeepWalk months and is computationally infeasible for the dense matrix factorization solution.

  62. Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems; Jian Li, and Amol Deshpande; Mathematics of Operations Research (MOR), Vol. 44, No. 1, 2018. Conference version in Proceedings of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2011), Palm Springs, California, 2011. [Paper] [ArXiv]. [ Show Abstract ]

  63. We consider the problem of maximizing the expected utility E[u(X(S))], where X(S)=\sum_{i \in S}X_i and S is a feasible set for some combinatorial optimization problem (such as shortest path, minimum spanning tree, knapsack, etc.). (1) We present an additive PTAS for bounded Lipschitz utility functions. This has implications for the VaR problem, such as maximizing Pr[X(S)\leq 1] (which corresponds to a step utility function). (2) We give a PTAS for increasing concave functions (which are widely used to model risk-averse behaviors) and increasing functions with bounded derivative.

  64. BRITS: Bidirectional Recurrent Imputation for Time Series. Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei Li, Yitan Li. Thirty-second Conference on Neural Information Processing Systems (NeurIPS 2018) [paper] [ Show Abstract ]

  65. Time series are widely used as signals in many classification/regression tasks, and it is common for time series to contain many missing values. Given multiple correlated time series, how do we fill in the missing values and predict their class labels? Existing imputation methods often impose strong assumptions on the underlying data-generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing value imputation in time series data. Our proposed method directly learns the missing values in a bidirectional recurrent dynamical system, without any specific assumption. The imputed values are treated as variables of the RNN graph and can be effectively updated during backpropagation. BRITS has three advantages: (a) it can handle multiple correlated missing values in time series; (b) it generalizes to time series with underlying nonlinear dynamics; (c) it provides a data-driven imputation procedure and applies to general settings with missing data. We evaluate our model on three real-world datasets, including an air-quality dataset, a health-care dataset, and a human-activity localization dataset. Experiments show that our model outperforms the state-of-the-art methods in both imputation and classification/regression accuracy.

  66. A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization. Zhize Li, Jian Li. Thirty-second Conference on Neural Information Processing Systems (NeurIPS 2018 spotlight) [ArXiv] [ Show Abstract ]

  67. We analyze stochastic gradient algorithms for optimizing nonconvex, nonsmooth finite-sum problems. In particular, the objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a possibly non-differentiable but convex component. We propose a proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. The algorithm is a slight variant of the ProxSVRG algorithm [Reddi et al. NIPS2016]. Our main contribution lies in the analysis of ProxSVRG+. It recovers several existing convergence results (in terms of the number of stochastic gradient oracle calls and proximal operations), and improves/generalizes some others. In particular, ProxSVRG+ generalizes the best results given by the SCSG algorithm, recently proposed by [Lei et al. NIPS2017] for the smooth nonconvex case. ProxSVRG+ is more straightforward than SCSG and yields a simpler analysis. Moreover, ProxSVRG+ outperforms the deterministic proximal gradient descent (ProxGD) for a wide range of minibatch sizes, which partially solves an open problem proposed in \cite{reddi2016proximal}. Finally, for nonconvex functions satisfying the Polyak-\L{}ojasiewicz condition, we show that ProxSVRG+ achieves a global linear convergence rate without restart. ProxSVRG+ is always no worse than ProxGD and ProxSVRG/SAGA, and sometimes outperforms them (and generalizes the results of SCSG) in this case.
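
    A minimal sketch of the general proximal variance-reduced pattern that ProxSVRG+ belongs to: an SVRG-style gradient step on the smooth part followed by the proximal operator of the convex nonsmooth part (here the \ell_1 norm, whose prox is soft-thresholding). The lasso-style objective, step size, and epoch structure are illustrative assumptions rather than the paper's exact pseudocode.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.0, 0.5]                      # sparse ground truth
y = X @ w_true + 0.05 * rng.normal(size=n)

lam, eta, batch = 0.05, 0.1, 20
# objective: (1/n) sum_i 0.5*(x_i^T w - y_i)^2  +  lam * ||w||_1

def grad_smooth(w, idx):
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)         # minibatch gradient of the smooth part

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # prox of t*||.||_1 (soft-thresholding)

w = np.zeros(d)
for epoch in range(30):
    snapshot = w.copy()
    full_grad = X.T @ (X @ snapshot - y) / n       # exact gradient at the snapshot
    for _ in range(n // batch):
        idx = rng.choice(n, size=batch, replace=False)
        v = grad_smooth(w, idx) - grad_smooth(snapshot, idx) + full_grad   # SVRG estimator
        w = prox_l1(w - eta * v, eta * lam)        # proximal step after the gradient step

print("recovered (rounded):", np.round(w, 2))
```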

  68. eps-Coresets for Clustering (with Outliers) in Doubling Metrics. Lingxiao Huang, Shaofeng H.-C. Jiang, Jian Li, Xuan Wu. The 59th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2018) [Full version in ArXiv] [ Show Abstract ]

  69. We present the first efficient algorithm that constructs an eps-coreset for a general class of clustering problems in a doubling metric.

    To this end, we establish the first relation between the doubling dimension of a metric space and the shattering dimension (or VC-dimension) of the range space induced by the distance. Such a relation was not known before, since one can easily construct instances in which neither one can be bounded by (some function of) the other. Surprisingly, we show that if we allow a small (1+eps)-distortion of the distance function $d$ (the distorted distance is called the smoothed distance function), the shattering dimension can be upper bounded. We also introduce the notion of $\tau$-error {\em probabilistic shattering dimension}, and prove a (drastically better) upper bound for the probabilistic shattering dimension for weighted doubling metrics. We believe the new relation between doubling and shattering dimensions is of independent interest and may find other applications.

    Furthermore, we study robust coresets for (k,z)-clustering with outliers in a doubling metric. We show an improved connection between $\alpha$-approximation and robust coresets. This also leads to an improvement upon the previous best known bound on the size of robust coresets for Euclidean space [Feldman and Langberg, STOC 11]. The new bound entails a few new results in clustering and property testing.

    As another application, we show constant-sized (\eps, k, z)-centroid sets in doubling metrics can be constructed by extending our coreset construction. Prior to our result, constant-sized centroid sets for general clustering problems were only known for Euclidean spaces. We can apply our centroid set to accelerate the local search algorithm (studied in [Friggstad et al., FOCS 2016]) for the (k, z)-clustering problem in doubling metrics.

  70. A PTAS for a Class of Stochastic Dynamic Programs. Hao Fu, Jian Li and Pan Xu. The 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018) [ArXiv] [ Show Abstract ]

  71. We develop a framework for obtaining polynomial time approximation schemes (PTAS) for a class of stochastic dynamic programs. Using our framework, we obtain the first PTAS for several stochastic combinatorial optimization problems. In particular, we obtain a PTAS for the ProbeMax problem: We are given a set of n items, each item i has a value Xi which is an independent random variable with a known (discrete) distribution. We can probe a subset P of items sequentially. Each time after probing an item i, we observe its value realization, which follows the distribution of Xi. We can adaptively probe at most m items and each item can be probed at most once. The reward is the maximum among the m realized values. Our goal is to design an adaptive probing policy such that the expected value of the reward is maximized. To the best of our knowledge, the best known approximation ratio is 1-1/e, due to Asadpour et al. We also obtain PTAS for several other problems: Committed Pandora's Box, Stochastic Target, and Stochastic Blackjack Knapsack.

  72. Odd Yao-Yao Graphs are Not Spanners. Yifei Jin, Jian Li, Wei Zhan. In 34th International Symposium on Computational Geometry (SoCG 2018). [ArXiv] [ Show Abstract ]

  73. It is a long-standing open problem whether Yao-Yao graphs YYk are all spanners. Bauer and Damian showed that all YY6k for k \geq 6 are spanners. Li and Zhan generalized their result and proved that all even Yao-Yao graphs YY2k are spanners (for k \geq 42). However, their technique cannot be extended to odd Yao-Yao graphs, and whether they are spanners is still elusive. In this paper, we show that, surprisingly, for any integer k \geq 1, there exist odd Yao-Yao graph YY2k+1 instances that are not spanners. This provides a negative answer to Problem 70 in [http://cs.smith.edu/~orourke/TOPP/P70.html]

  74. When Will You Arrive? Estimating Travel Time Based on Deep Neural Networks. Dong Wang, Junbo Zhang, Wei Cao, Jian Li, Yu Zheng. The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018) [Paper] [Code and Data] [ Show Abstract ]

  75. Estimating the travel time of any path (denoted by a sequence of connected road segments) in a city is of great importance to traffic monitoring, route planning, ridesharing, taxi/Uber dispatching, etc. However, it is a very challenging problem, affected by diverse complex factors, including spatial correlations, temporal dependencies, and external conditions (e.g., weather, traffic lights). Prior work usually focuses on estimating the travel times of individual road segments or sub-paths and then summing up these times, which leads to an inaccurate estimation because such approaches do not consider road intersections/traffic lights, and local errors may accumulate. To address these issues, we propose an end-to-end Deep learning framework for Travel Time Estimation (called DeepTTE) that estimates the travel time of the whole path directly. More specifically, we present a geo-convolution operation by integrating the geographic information into the classical convolution, capable of capturing spatial correlations. By stacking recurrent units on the geo-convolution layer, DeepTTE can capture the temporal dependencies as well. A multi-task learning component is given on top of DeepTTE, which learns to estimate the travel time of both the entire path and each local path simultaneously during the training phase. Extensive experiments on two trajectory datasets show our DeepTTE significantly outperforms the state-of-the-art methods.

  76. Optimal In-Place Suffix Sorting. Zhize Li, Jian Li, Hongwei Huo. Journal version in Information and Computation (I&C) 2021. [Full version in ArXiv] [ Show Abstract ]

  77. The suffix array is a fundamental data structure for many applications that involve string searching and data compression. We obtain the \emph{first} in-place suffix array construction algorithms that are optimal both in time and space for (read-only) integer alphabets. We provide the first linear time in-place algorithm for read-only integer alphabets with |Sigma|=O(n) (i.e., the input string cannot be modified). This algorithm settles the open problem posed by [Franceschini and Muthukrishnan, ICALP'07]. For the read-only general alphabets (i.e., only comparisons are allowed), we present an optimal O(nlogn) time in-place suffix sorting algorithm, recovering the result obtained by Franceschini and Muthukrishnan, which was an open problem posed by [Manzini and Ferragina, ESA'02].

  78. Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec. Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, Jie Tang. The 11th ACM International Conference on Web Search and Data Mining (WSDM 2018). [ArXiv] [ Show Abstract ]

  79. Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.
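
    As a rough illustration of the closed-form view above, the sketch below builds a dense, small-graph version of the matrix that DeepWalk is shown to implicitly factorize and takes a truncated SVD as the embedding (the NetMF approach). The window size, number of negative samples, and truncation details follow one reading of the result and may differ from the paper's exact pseudocode.

```python
import numpy as np

def netmf_embedding(A, dim=2, window=3, neg=1):
    n = A.shape[0]
    deg = A.sum(axis=1)
    vol = deg.sum()
    P = A / deg[:, None]                        # row-stochastic transition matrix D^{-1} A
    S = np.zeros_like(P)
    Pr = np.eye(n)
    for _ in range(window):                     # sum of P^r for r = 1..window
        Pr = Pr @ P
        S += Pr
    M = (vol / (neg * window)) * S / deg[None, :]   # ... times D^{-1} on the right
    M = np.log(np.maximum(M, 1.0))              # element-wise truncated logarithm
    U, sigma, _ = np.linalg.svd(M)
    return U[:, :dim] * np.sqrt(sigma[:dim])    # rank-dim factorization as embeddings

# toy usage: a 6-node graph made of two triangles joined by one edge
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
print(netmf_embedding(A))
```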

  80. Learning Gradient Descent: Better Generalization and Longer Horizons. Kaifeng Lv, Shunhua Jiang, Jian Li. The 34th International Conference on Machine Learning (ICML 2017). [ArXiv] [ Show Abstract ][Code]

  81. Training deep neural networks is a highly nontrivial task, involving carefully selecting appropriate training algorithms, scheduling step sizes and tuning other hyperparameters. Trying different combinations can be quite labor-intensive and time consuming. Recently, researchers have tried to use deep learning algorithms to exploit the landscape of the loss function of the training problem of interest, and learn how to optimize over it in an automatic way. In this paper, we propose a new learning-to-learn model and some useful and practical tricks. Our optimizer outperforms generic, hand-crafted optimization algorithms and state-of-the-art learning-to-learn optimizers by DeepMind in many tasks. We demonstrate the effectiveness of our algorithms on a number of tasks, including deep MLPs, CNNs, and simple LSTMs.

  82. Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration. Lijie Chen, Anupam Gupta, Jian Li, Mingda Qiao, Ruosong Wang. In the 30th Annual Conference on Learning Theory (COLT 2017) [Paper] [ Show Abstract ]

  83. We study the combinatorial pure exploration problem (CPE) in a stochastic multi-armed bandit game. In a CPE instance, we are given $n$ stochastic arms with unknown reward distributions, as well as a family $\mathcal{F}$ of feasible subsets over the arms. Let the weight of an arm be the mean of its reward distribution. Our goal is to identify the feasible subset in $\mathcal{F}$ with the maximum total weight, using as few samples as possible. The problem generalizes the classical best arm identification problem and the top-$k$ arm identification problem, both of which have attracted significant attention in recent years. We provide a novel \emph{instance-wise} lower bound for the sample complexity of the problem, as well as a nontrivial sampling algorithm, matching the lower bound up to a factor of $\ln|\mathcal{F}|$. For an important class of combinatorial families (including spanning trees, matchings, and path constraints), we also provide a polynomial time implementation of the sampling algorithm, using the equivalence of separation and optimization for convex programs, and the notion of approximate Pareto curves in multi-objective optimization (note that $|\mathcal{F}|$ can be exponential in $n$). We also show that the $\ln|\mathcal{F}|$ factor is inevitable in general, through a nontrivial lower bound construction utilizing a combinatorial structure resembling the Nisan-Wigderson design. Our results significantly improve several previous results for several important combinatorial constraints, and provide a tighter understanding of the general combinatorial pure exploration problem. We further introduce an even more general problem, formulated in geometric terms. We are given $n$ Gaussian arms with unknown means and unit variance. Consider the $n$-dimensional Euclidean space $\mathbb{R}^n$, and a collection $\mathcal{C}$ of disjoint subsets. Our goal is to determine the subset in $\mathcal{C}$ that contains the mean profile (which is the $n$-dimensional vector of the means), using as few samples as possible. The problem generalizes most pure exploration bandit problems studied in the literature. We provide the first nearly optimal sample complexity upper and lower bounds for the problem.

  84. Towards Instance Optimal Bounds for Best Arm Identification. Lijie Chen, Jian Li, Mingda Qiao. In the 30th Annual Conference on Learning Theory (COLT 2017) [ArXiv] [ Show Abstract ]

  85. We study the best arm identification (BEST-1-ARM) problem, which is defined as follows. We are given n stochastic bandit arms. The ith arm has a reward distribution Di with an unknown mean \mu_i. Upon each play of the ith arm, we get a reward sampled i.i.d. from Di. We would like to identify the arm with the largest mean with probability at least 1-\delta, using as few samples as possible. We obtain nearly instance optimal sample complexity bounds for BEST-1-ARM, thereby making significant progress towards a complete resolution of the gap-entropy conjecture, from both the upper and lower bound sides. This paper is a follow-up of our earlier paper: On the Optimal Sample Complexity for Best Arm Identification. Lijie Chen, Jian Li. [ArXiv] The gap-entropy conjecture concerns the instance optimality of the BEST-1-ARM problem; see the COLT'16 open problem.

  86. CDB: optimizing queries with crowd-based selections and joins. Guoliang Li, Chengliang Chai, Ju Fan, Jian Li, Yudian Zheng, et al. 2017 ACM International Conference on Management of Data (SIGMOD 2017). [paper][ Show Abstract ]

  87. Crowdsourcing database systems have been proposed to leverage crowd-powered operations to encapsulate the complexities of interacting with the crowd. Existing systems suffer from two major limitations. Firstly, in order to optimize a query, they often adopt the traditional tree model to select an optimized table-level join order. However, the tree model provides a coarse-grained optimization, which generates the same order for different joined tuples and limits the optimization potential that different joined tuples can be optimized by different orders. Secondly, they mainly focus on optimizing the monetary cost. In fact, there are three optimization goals (i.e., smaller monetary cost, lower latency, and higher quality) in crowdsourcing, and it calls for a system to enable multi-goal optimization. To address these limitations, we develop a crowd-powered database system CDB that supports crowd-based query optimizations. CDB has the following fundamental differences compared with the existing systems. Firstly, CDB employs a graph-based query model that provides more fine-grained query optimization. Secondly, CDB adopts a unified framework to perform the multi-goal optimization based on the graph model. We have implemented our system and deployed it on AMT and CrowdFlower. We have also created a benchmark for evaluating crowd-powered databases. We have conducted both simulated and real experiments, and the experimental results demonstrate the performance superiority of CDB on cost, latency and quality.

  88. Capacitated Center Problems with Two-Sided Bounds and Outliers, Hu Ding, Lunjia Hu, Lingxiao Huang and Jian Li. Workshop on Algorithms and Data Structures (WADS 2017) [ArXiv][ Show Abstract ]

  89. In recent years, the capacitated center problems have attracted a lot of research interest. Given a set of vertices V, we want to find a subset of vertices S, called centers, such that the maximum cluster radius is minimized. Moreover, each center in S should satisfy some capacity constraint, which could be an upper or lower bound on the number of vertices it can serve. Capacitated k-center problems with one-sided bounds (upper or lower) have been well studied in previous work, and a constant factor approximation was obtained. We are the first to study the capacitated center problem with both capacity lower and upper bounds (with or without outliers). We assume each vertex has a uniform lower bound and a non-uniform upper bound. For the case of opening exactly k centers, we note that a generalization of a recent LP approach can achieve constant factor approximation algorithms for our problems. Our main contribution is a simple combinatorial algorithm for the case where there is no cardinality constraint on the number of open centers. Our combinatorial algorithm is simpler and achieves a better constant approximation factor compared to the LP approach.

  90. DeepSD: Supply-Demand Prediction for Online Car-hailing Services using Deep Neural Networks. Dong Wang, Wei Cao, Jian Li, Jieping Ye. The 33rd IEEE International Conference on Data Engineering (ICDE 2017). [paper] [ Show Abstract ]

  91. The online car-hailing service has gained great popularity all over the world. As more passengers and more drivers use the service, it becomes increasingly important for the car-hailing service providers to effectively schedule the drivers to minimize the waiting time of passengers and maximize driver utilization, thus improving the overall user experience. In this paper, we study the problem of predicting the real-time car-hailing supply-demand, which is one of the most important components of an effective scheduling system. Our objective is to predict the gap between the car-hailing supply and demand in a certain area in the next few minutes. Based on the prediction, we can balance the supply-demand by scheduling the drivers in advance. We present an end-to-end framework called Deep Supply-Demand (DeepSD) using a novel deep neural network structure. Our approach can automatically discover complicated supply-demand patterns from the car-hailing service data while requiring only a minimal amount of hand-crafted features. Moreover, our framework is highly flexible and extendable. Based on our framework, it is very easy to utilize multiple data sources (e.g., car-hailing orders, weather and traffic data) to achieve a high accuracy. We conduct extensive experimental evaluations, which show that our framework provides more accurate prediction results than the existing methods.

  92. Nearly Instance Optimal Sample Complexity Bounds for Top-k Arm Selection. Lijie Chen, Jian Li, Mingda Qiao. The 20th International Conference on Artificial Intelligence and Statistics (AISTATS 2017). [full version in ArXiv] [ Show Abstract ]

  93. In the Best-k-Arm problem, we are given n stochastic bandit arms, each associated with an unknown reward distribution. We are required to identify the k arms with the largest means by taking as few samples as possible. In this paper, we make progress towards a complete characterization of the instance-wise sample complexity bounds for the Best-k-Arm problem. On the lower bound side, we obtain a novel complexity term to measure the sample complexity that every Best-k-Arm instance requires. This is derived by an interesting and nontrivial reduction from the Best-1-Arm problem. We also provide an elimination-based algorithm that matches the instance-wise lower bound within doubly-logarithmic factors. The sample complexity of our algorithm is strictly better than the state-of-the-art for Best-k-Arm (modulo constant factors).

  94. k-Regret Minimizing Set: Efficient Algorithms and Hardness. Wei Cao, Jian Li, Haitao Wang, Kangning Wang, Ruosong Wang, Raymond Chi-Wing Wong and Wei Zhan. The 20th International Conference on Database Theory (ICDT 2017), Venice, Italy. (best newcomer award) [paper] [full version] [ Show Abstract ]

  95. We study the k-regret minimizing query (k-RMS), which is a useful operator for supporting multi-criteria decision-making. Given two integers k and r, a k-RMS returns r tuples from the database which minimize the k-regret ratio, defined as one minus the worst ratio between the k-th maximum utility score among all tuples in the database and the maximum utility score of the r tuples returned. A solution set contains only r tuples, enjoying the benefits of both top-k queries and skyline queries. Proposed in 2012, the query has been studied extensively in recent years. In this paper, we advance the theory and the practice of k-RMS in the following aspects. First, we develop efficient algorithms for k-RMS (and its decision version) when the dimensionality is 2. The running time of our algorithms outperforms those of previous ones. Our experimental results show that our algorithms are more efficient than previous ones on both synthetic and real datasets up to three orders of magnitude. Second, we show that k-RMS is NP-hard even when the dimensionality is 3. This provides a complete characterization of the complexity of k-RMS, and answers an open question in previous studies. In addition, we present approximation algorithms for the problem when the dimensionality is 3 or larger.

  96. Stochastic k-Center and j-Flat-Center Problems. Lingxiao Huang, Jian Li. ACM-SIAM Symposium on Discrete Algorithms (SODA17). [ArXiv] [ Show Abstract ]

  97. Solving geometric optimization problems over uncertain data has become increasingly important in many applications and has attracted a lot of attention in recent years. In this paper, we study two important geometric optimization problems, the k-center problem and the j-flat-center problem, over stochastic/uncertain data points in Euclidean spaces. For the stochastic k-center problem, we would like to find k points in a fixed dimensional Euclidean space, such that the expected value of the k-center objective is minimized. For the stochastic j-flat-center problem, we seek a j-flat (i.e., a j-dimensional affine subspace) such that the expected value of the maximum distance from any point to the j-flat is minimized. We consider both problems under two popular stochastic geometric models, the existential uncertainty model, where the existence of each point may be uncertain, and the locational uncertainty model, where the location of each point may be uncertain. We provide the first PTAS (Polynomial Time Approximation Scheme) for both problems under the two models. Our results generalize the previous results for stochastic minimum enclosing ball and stochastic enclosing cylinder.

  98. Combinatorial Multi-Armed Bandit with General Reward Functions, Wei Chen, Wei Hu, Fu Li, Jian Li, Yu Liu, Pinyan Lu. Neural Information Processing Systems (NIPS 2016, Oral). [Full version in ArXiv] [ Show Abstract ]

  99. In this paper, we study the stochastic combinatorial multi-armed bandit (CMAB) framework that allows a general nonlinear reward function, whose expected value may not depend only on the means of the input random variables but possibly on the entire distributions of these variables. Our framework enables a much larger class of reward functions such as the $\max()$ function and nonlinear utility functions. Existing techniques relying on accurate estimations of the means of random variables, such as the upper confidence bound (UCB) technique, do not work directly on these functions. We propose a new algorithm called stochastically dominant confidence bound (SDCB), which estimates the distributions of underlying random variables and their stochastically dominant confidence bounds. We prove that if the underlying variables have known finite supports, SDCB can achieve $O(\log T)$ distribution-dependent regret and $\tilde{O}(\sqrt{T})$ distribution-independent regret, where $T$ is the time horizon. For general arbitrary distributions, we further use a discretization technique and show an $\tilde{O}(\sqrt{T})$ regret bound. We apply our results to the $K$-MAX problem and the expected utility maximization problems. In particular, for $K$-MAX, we provide the first polynomial-time approximation scheme (PTAS) for its offline problem, and give the first $\tilde{O}(\sqrt{T})$ bound on the $(1-\epsilon)$-approximation regret of its online problem, for any $\epsilon > 0$.

  100. Demand Driven Store Placement via Multiple Spatial-temporal Data. Mengwen Xu, Tianyi Wang, Zhengwei Wu, Jingbo Zhou, Jian Li and Haishan Wu. ACM SIGSPATIAL 2016. [paper][ Show Abstract ]

  101. Choosing a good location when opening a new store is crucial for the future success of a business. Traditional methods include online manual surveys and analytic models based on census data, which are either unable to adapt to the dynamic market or very time consuming. The rapid increase of the availability of big data from various types of mobile devices, such as online query data and online positioning data, provides us with the possibility to develop automatic and accurate data-driven prediction models for business store placement. In this paper, we propose a Demand Distribution Driven Store Placement (D3SP) framework for business store placement by mining search query data from Baidu Maps. D3SP detects the spatial-temporal distributions of customer demands on different business services via query data from Baidu Maps, the largest online map search engine in China, and detects the gaps between demand and supply. Then we determine candidate locations via clustering such gaps. In the final stage, we solve the location optimization problem by predicting and ranking the number of customers. We not only deploy supervised regression models to predict the number of customers, but also use a learning-to-rank model to directly rank the locations. We evaluate our framework on various types of businesses in real-world cases, and the experimental results demonstrate the effectiveness of our methods. D3SP as the core function for store placement has already been implemented as a core component of our business analytics platform and could be potentially used by chain store merchants on Baidu Nuomi.

  102. DESTPRE : A Data-Driven Approach to Destination Prediction for Taxi Rides. Mengwen Xu, Dong Wang, Jian Li. UbiComp 2016. [ Paper ] [ Show Abstract ]

  103. With the wide use of mobile devices, predicting the destination of moving vehicles has become an increasingly important problem for location based recommendation systems and destination-based advertising. Most existing approaches are based on various Markov chain models, in which the historical trajectories are used to train the model and the top-k most probable destinations are returned. We identify certain limitations of the previous approaches. Instead, we propose a new data-driven framework, called DESTPRE, which is not based on a probabilistic model, but directly operates on the trajectories and makes the prediction. We make use of only historic trajectories, without individual identity information. Our design of DESTPRE, although simple, is a result of several useful observations from the real trajectory data. DESTPRE involves an index based on Bucket PR Quadtree and Minwise hashing, for efficiently retrieving similar trajectories, and a clustering on destinations for predictions. By incorporating some additional ideas, we show that the prediction accuracy can be further improved. We have conducted extensive experiments on real Beijing Taxi dataset. The experimental results demonstrate the effectiveness of DESTPRE.

  104. epsilon-Kernel Coresets for Stochastic Points. Lingxiao Huang, Jian Li, Jeff Phillips and Haitao Wang. The 24th Annual European Symposium on Algorithms (ESA 2016). [ArXiv] [ Show Abstract ]

  105. We consider the standard stochastic geometry model in which the existence or the location of each point may be stochastic. We show there exists a constant-size kernel coreset (a kernel can approximate either the expected value or the distribution of the width of the point set in every direction) and we can construct such kernel coresets in nearly linear time. We show its applications to several function extent problems, tracking moving points, and shape fitting problems in the stochastic setting.

  106. Almost All Even Yao-Yao Graphs Are Spanners. Jian Li, Wei Zhan. The 24th Annual European Symposium on Algorithms (ESA 2016). [ArXiv] [ Show Abstract ]

  107. It is an open problem whether Yao-Yao graphs YYk (also known as sparse-Yao graphs) are all spanners when the integer parameter k is large enough. In this paper we show that, for any integer k\geq 42, the Yao-Yao graph YY2k is a tk-spanner, with stretch factor tk = 4.27 + O(1/k) when k tends to infinity. Our result generalizes the best known result, which asserts that all YY6k are spanners for k large enough [Bauer and Damian, SODA'13]. Our proof is also somewhat simpler.

  108. Pure Exploration of Multi-armed Bandit Under Matroid Constraints. Lijie Chen, Anupam Gupta, Jian Li. Conference on Learning Theory (COLT 2016). [Full version] [ Show Abstract ]

  109. We study the pure exploration problem subject to a matroid constraint (BEST-BASIS) in a stochastic multi-armed bandit game. In a BEST-BASIS instance, we are given n stochastic arms with unknown reward distributions, as well as a matroid M over the arms. Let the weight of an arm be the mean of its reward distribution. Our goal is to identify a basis of M with the maximum total weight, using as few samples as possible. The problem is a significant generalization of the best arm identification problem and the top-k arm identification problem, which have attracted significant attention in recent years. We study both the exact and PAC versions of BEST-BASIS, and provide algorithms with nearly-optimal sample complexities for these versions. Our results generalize and/or improve on several previous results for the top-k arm identification problem and the combinatorial pure exploration problem (when the combinatorial constraint is a matroid).

  110. K-Means Clustering with Distributed Dimension. Hu Ding, Yu Liu, Lingxiao Huang, Jian Li. The 33rd International Conference on Machine Learning (ICML 2016). [Paper] [Supplementary] [ Show Abstract ]

  111. Distributed clustering has attracted significant attention in recent years. In this paper, we study the k-means problem in the distributed dimension setting, where the dimensions of the data are partitioned across multiple machines. We provide new approximation algorithms, which incur low communication costs and achieve constant approximation factors. The communication complexity of our algorithms significantly improves on that of existing algorithms. We also provide the first communication lower bound, which nearly matches our upper bound in a wide range of parameter settings. Our experimental results show that our algorithms outperform existing algorithms on real data sets in the distributed dimension setting.

  112. Cost-Effective Crowdsourced Entity Resolution: A Partial-Order Approach, Chengliang Chai, Guoliang Li, Jian Li, Dong Deng, Jianhua Feng. The annual ACM SIGMOD conference 2016. Journal version in VLDB Journal 2018. [Paper] [ Show Abstract ][Journal link]

  113. Crowdsourced entity resolution has recently attracted significant attention because it can harness the wisdom of crowds to improve the quality of entity resolution. However, existing techniques either cannot achieve perfect quality or incur huge monetary costs. To address these problems, we propose a cost-effective crowdsourced entity resolution framework, which can significantly reduce the monetary cost while keeping high quality. We first define a partial order on the pairs of records. Then we select a pair as a question and ask the crowd to check whether the records in the pair refer to the same entity. After getting the answer of this pair, we infer the answers of other pairs based on the partial order. Next we iteratively select pairs without answers to ask until all pairs have answers. We devise effective algorithms to judiciously select the pairs to ask in order to reduce the number of asked pairs. To further reduce the cost, we propose a grouping technique to group the pairs such that we only ask one pair instead of all pairs in each group. We develop error-tolerant techniques to tolerate the errors introduced by the partial order and the crowd. Experimental results show that our method reduces the cost to 1.25\% of existing approaches (i.e., existing approaches cost more than 80 times as much as our method) while not sacrificing quality.

  114. On Top-k Selection in Multi-Armed Bandits and Hidden Bipartite Graphs. Wei Cao, Jian Li,  Yufei Tao, Zhize Li. Neural Information Processing Systems (NIPS), 2015. [full paper] [ Show Abstract ]

  115. This paper discusses how to efficiently choose from n unknown distributions the k ones whose means are the greatest by a certain metric, up to a small relative error. We study the topic under two standard settings: multi-armed bandits and hidden bipartite graphs. For both settings, we prove lower bounds on the total number of samples needed, and propose optimal algorithms whose sample complexities match those lower bounds.

  116. Stochastic Online Greedy Learning with Semi-bandit Feedbacks. Tian Lin, Jian Li, Wei Chen. Neural Information Processing Systems (NIPS), 2015. [full paper] [ Show Abstract ]

  117. We study the online learning problem when the input to the greedy algorithm is stochastic with unknown parameters that have to be learned over time. We first propose the greedy regret and eps-quasi greedy regret as learning metrics, compared with the performance of the offline greedy algorithm. We propose two online greedy learning algorithms with semi-bandit feedback, which achieve an O(log T) problem-dependent regret bound for a general class of combinatorial structures and reward functions that allow greedy solutions.

  118. Efficient Algorithms for One-Dimensional k-Center Problems. Danny Z. Chen, Jian Li, Haitao Wang. Theoretical Computer Science (TCS), 2015 [ArXiv] [ Show Abstract ]

  119. We consider the problem of finding k centers for n weighted points on a real line. This (weighted) k-center problem was solved in O(n log n) time previously by using Cole's parametric search and other complicated approaches. In this paper, we present a simpler O(n log n) time algorithm that avoids the parametric search, and in certain special cases our algorithm solves the problem in O(n) time.

  120. A PTAS for the Weighted Unit Disk Cover Problem, Jian Li, Yifei Jin. The 42nd International Colloquium on Automata, Languages, and Programming (ICALP 2015) [ArXiv] [ Show Abstract ]

  121. We are given a set of weighted unit disks and a set of points in the Euclidean plane. The minimum weight unit disk cover (UDC) problem asks for a subset of disks of minimum total weight that covers all given points. For the weighted UDC problem, several constant approximations have been developed. However, whether the problem admits a PTAS has been an open question. We present the first PTAS for weighted UDC. Our result implies the first PTAS for the minimum weight dominating set problem in unit disk graphs, the first PTAS for the maximum lifetime coverage problem, and an improved constant approximation ratio for the connected dominating set problem in unit disk graphs.

  122. Approximating the Expected Values for Combinatorial Optimization Problems over Stochastic Points. Lingxiao Huang, Jian Li. The 42nd International Colloquium on Automata, Languages, and Programming (ICALP 2015). [ArXiv] [ Show Abstract ]

  123. We consider the stochastic geometry model where the location of each node is a random point in a given metric space, or the existence of each node is uncertain. We study the problems of computing the expected lengths of several combinatorial or geometric optimization problems over stochastic points, including closest pair, minimum spanning tree, k-clustering, minimum perfect matching, and minimum cycle cover. We also consider the problem of estimating the probability that the length of the closest pair, or the diameter, is at most, or at least, a given threshold. Most of the above problems are known to be #P-hard. We obtain FPRAS for most of them in both the existential and the locational uncertainty models.
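
    As a point of reference for the quantity being approximated, here is a naive Monte Carlo estimator (not the paper's FPRAS) of the expected minimum spanning tree length under the existential uncertainty model, where point i exists independently with probability p_i; the point set and probabilities are made-up toy data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
pts = rng.uniform(size=(12, 2))             # candidate point locations in the unit square
p = rng.uniform(0.3, 0.9, size=len(pts))    # existence probability of each point

def mst_length(points):
    if len(points) < 2:
        return 0.0
    D = squareform(pdist(points))           # complete graph of pairwise Euclidean distances
    return minimum_spanning_tree(D).sum()

samples = []
for _ in range(2000):
    mask = rng.uniform(size=len(pts)) < p   # realize which points exist in this sample
    samples.append(mst_length(pts[mask]))
print("Monte Carlo estimate of E[MST length]:", np.mean(samples))
```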

  124. Learning Arbitrary Statistical Mixtures of Discrete Distributions. Jian Li, Yuval Rabani, Leonard J. Schulman, Chaitanya Swamy. In ACM Symposium on the Theory of Computing (STOC 2015). [ArXiv] [ Show Abstract ]

  125. We study the problem of learning from unlabeled samples very general statistical mixture models on large finite sets. Specifically, the model to be learned is a probability distribution over probability distributions p, where each such p is a probability distribution over [n] = {1,2,...,n}. When we sample from the model, we do not observe p directly, but only indirectly and in a very noisy fashion, by sampling from [n] repeatedly, independently K times from the distribution p. The problem is to infer the mixture to high accuracy in transportation (earthmover) distance. We give the first efficient algorithms for learning this mixture model without making any restricting assumptions on the structure of the distribution. We bound the quality of the solution as a function of the size of the samples K and the number of samples used. Our model and results have applications to a variety of unsupervised learning scenarios, including learning topic models and collaborative filtering.

  126. Approximation Algorithms for the Connected Sensor Cover Problem. Lingxiao Huang, Jian Li, Qicai Shi. Theoretical Computer Science (TCS 2020). Preliminary version appeared in The 21st Annual International Computing and Combinatorics Conference (COCOON'15). (In the conference version, the construction of the Steiner tree LP and Lemma 4 have some problems; please read the journal version here.)  [paper] [ Show Abstract ]

  127. We are given a set of sensors and a set of target points in the Euclidean plane. In MIN-ConnectSensorCover, our goal is to find a set of sensors of minimum cardinality, such that all target points are covered, and all sensors can communicate with each other (i.e., the communication graph is connected). In the Budgeted-CSC problem, our goal is to choose a set of B sensors, such that the number of targets covered by the chosen sensors is maximized and the communication graph is connected. We obtain constant approximations for both problems.

  128. Near-Linear Time Approximation Schemes for Geometric Maximum Coverage, Kai Jin, Jian Li, Haitao Wang, Bowei Zhang, Ningye Zhang. Theoretical Computer Science (TCS 2018). [conference version] [ Show Abstract ] [full version]

  129. We study approximation algorithms for the following geometric version of the maximum coverage problem: Let P be a set of n weighted points in the plane. We would like to place m rectangles such that the total weight of the points in P covered is maximized. For any m, we present linear time approximation schemes that can find a (1+eps)-approximation to the optimal solution.

  130. Range Queries on Uncertain Data. Jian Li, Haitao Wang. International Symposium on Algorithms and Computation (ISAAC 2014). Journal version accepted in TCS, 2015 [arXiv]. [ Show Abstract ]

  131. Given a set P of stochastic points on the real line, we consider the problem of building data structures on P to answer range queries: find the top-k points that lie in a given interval with the highest probability.

  132. On the Energy Efficiency of Device Discovery in Mobile Opportunistic Networks: A Systematic Approach. Bo Han, Jian Li,  Aravind Srinivasan. IEEE Transactions on Mobile Computing (TMC), 2014 [Paper]

  133. Optimal PAC Multiple Arm Identification with Applications to Crowdsourcing, Yuan Zhou, Xi Chen, Jian Li. International Conference on Machine Learning. ICML 2014. [full version] [ Show Abstract ]

  134. We study the problem of selecting K arms with the highest expected rewards in a stochastic N-armed bandit game. We propose a new PAC algorithm, which, with probability at least 1-delta, identifies a set of K arms with average reward at most \epsilon away from that of the top-K arms. Naive uniform sampling requires O(n ln n) samples. We show it is possible to achieve linear sample complexity. We also establish a matching lower bound (meaning our upper bound is worst-case optimal).

  135. The Multi-shop Ski Rental Problem, Lingqing Ai, Xian Wu, Lingxiao Huang, Longbo Huang, Pingzhong Tang, and Jian Li, Proceedings of ACM SIGMETRICS, 2014. [paper]

  136. We consider the multi-shop ski rental problem. This problem generalizes the classic ski rental problem to a multi-shop setting, in which each shop has different prices for renting and purchasing a pair of skis, and a consumer has to make decisions on when and where to buy. We give an optimal closed-form online competitive strategy from the consumer's perspective, and a linear time algorithm for computing the optimal strategy. The problem finds applications in cost management in IaaS clouds and scheduling in distributed computing.

  137. Fully Polynomial Approximation Scheme for Approximating a Sum of Random Variables, Jian Li and Tianlin Shi. In Operations Research Letters (ORL), 2014 [ArXiv] [Code (by Tianlin)] [ Show Abstract ]

  138. We show there is an FPTAS for approximating Pr[\sum_i X_i \leq 1] for a set of independent (not necessarily identically distributed) random variables X_i.
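
    To make the quantity concrete, the sketch below computes Pr[sum_i X_i <= 1] exactly by convolving small discrete distributions. It is only an illustration: exact convolution blows up for large supports and general distributions, which is precisely what the FPTAS's discretization addresses.

```python
def prob_sum_at_most(dists, threshold=1.0):
    """dists: list of dicts {value: probability} for independent X_i."""
    acc = {0.0: 1.0}                          # distribution of the running partial sum
    for dist in dists:
        nxt = {}
        for s, ps in acc.items():
            for v, pv in dist.items():
                nxt[s + v] = nxt.get(s + v, 0.0) + ps * pv
        acc = nxt
    return sum(p for s, p in acc.items() if s <= threshold)

# X_1 uniform on {0, 0.4, 0.8};  X_2 equals 0.3 or 0.9 with probability 1/2 each
dists = [{0.0: 1/3, 0.4: 1/3, 0.8: 1/3}, {0.3: 0.5, 0.9: 0.5}]
print(prob_sum_at_most(dists))                # exact value (0.5) for this toy instance
```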

  139. A Pruned Exhaustive Search Algorithm for Nearly Optimal Diversified Result Ranking, Fei Chen, Yiqun Liu, Jian Li, Min Zhang and Shaoping Ma, 23rd International World Wide Web Conference, Seoul, Korea, April 7-11, 2014 (Poster)

  140. Egalitarian Pairwise Kidney Exchange: Fast Algorithms via Linear Programming and Parametric Flow. Jian Li, Yicheng Liu, Lingxiao Huang, Pingzhong Tang. In the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014) [paper] [ Show Abstract ]

  141. We obtain an efficient poly-time algorithm for the pairwise kidney exchange problem proposed by Roth et al. [J. Econ.Th. 05]. Their original algorithm runs in exponential time. We also provide an alternative short proof of the fact that there exists a majorizing vector in a polymatroid (original proof due to Tamir [MOR95]).

  142. A Constant Factor Approximation Algorithm for Fault-Tolerant k-Median. Mohammadtaghi Hajiaghayi, Wei Hu, Jian Li, Shi Li, Barna Saha. In the ACM-SIAM Symposium on Discrete Algorithms (SODA 2014), Portland, Oregon, USA. Journal version in ACM Transactions on Algorithms, 2016. [ArXiv] [ Show Abstract ]

  143. (1) We present the first constant approximation for the fault-tolerant k-median problem, even when the demands are nonuniform (each demand point should be assigned to a given number of facilities). This generalizes a previous constant approximation for the uniform case and improves on a previous logarithmic approximation for the general nonuniform case. (2) We present the first constant approximation for the fault-tolerant facility location problem, even when the weight function is non-monotone. This generalizes a previous constant approximation for the monotone case.

  144. Your Friends Have More Friends Than You Do: Identifying Influential Mobile Users Through Random Walks. Bo Han, Jian Li and Aravind Srinivasan. IEEE/ACM Transactions on Networking (TON), 2013 [Paper] [ Show Abstract ]

  145. We investigate the problem of identifying influential users in mobile social networks. Influential users are individuals with high centrality in their social-contact graphs. Traditional approaches find these users through centralized algorithms. However, the computational complexity of these algorithms is known to be very high, making them unsuitable for large-scale networks. We propose a lightweight and distributed protocol, iWander, to identify influential users through fixed-length random-walk sampling. We prove that random-walk sampling with O(log n) steps, where n is the number of nodes in a graph, samples vertices approximately according to their degrees. The most attractive feature of iWander is its extremely low control-message overhead, which lends itself well to mobile applications.
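
    A minimal sketch of the fixed-length random-walk sampling idea behind iWander (the toy graph, walk length, and sample counts are illustrative; the actual protocol is distributed and message-based): on a connected non-bipartite graph, the endpoint of a sufficiently long walk is distributed roughly in proportion to node degree, so frequently hit nodes are the high-degree, influential ones.

```python
import random
from collections import Counter

# toy graph: node 0 is a hub of degree 4, all other nodes have degree 2
adj = {
    0: [1, 2, 3, 4],
    1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3],
}

def random_walk_endpoint(start, steps):
    node = start
    for _ in range(steps):
        node = random.choice(adj[node])     # move to a uniformly random neighbor
    return node

random.seed(0)
counts = Counter(random_walk_endpoint(start=1, steps=8) for _ in range(20000))
degrees = {v: len(nbrs) for v, nbrs in adj.items()}
for v in sorted(adj):
    # the endpoint frequency should roughly track degree(v) / sum of degrees
    print(v, "degree", degrees[v], "endpoint frequency", counts[v] / 20000)
```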

  146. Scalable Column Concept Determination for Web Tables Using Large Knowledge Bases. Dong Deng, Yu Jiang, Guoliang Li, Jian Li, Cong Yu. In the 39th International Conference on Very Large Data Bases (VLDB 2013), Italy, 2013 [Paper][Full version].

  147. Trinary-Projection Trees for Approximate Nearest Neighbor Search. Jingdong Wang, Naiyan Wang, You Jia, Jian Li, Gang Zeng, Hongbin Zha, Xian-Sheng Hua. The IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2013 [Paper]

  148. Stochastic Combinatorial Optimization via Poisson Approximation. Jian Li and Wen Yuan. In the 45th ACM Symposium on the Theory of Computing (STOC 2013), USA, 2013 [Paper] [Full version in ArXiv] [ Show Abstract ]

  149. We propose a new technique, called Poisson approximation, for approximating/optimizing the sum of a set of random variables. In many cases, it reduces the stochastic problem to a deterministic multidimensional packing/covering problem. We apply it to several problems, such as (1) Expected utility maximization (we can reproduce and generalize one result in our FOCS11 paper). (2) Stochastic knapsack (proposed in [Dean et al. MOR08]): we are given a set of items. The size and/or the profit of an item may not be fixed values, and only their probability distributions are known to us in advance. The actual size and profit of an item are revealed to us as soon as it is inserted into the knapsack. We want to find a policy to insert the items such that the total expected profit is maximized. We obtain a bicriterion PTAS (i.e., a 1+eps approximation using 1+eps capacity), even when we allow correlation and cancellation of jobs. (3) Bayesian online selection with a knapsack constraint: a variant of stochastic knapsack where the size and the profit of an item are revealed before the decision whether to select the item is made.
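
A toy illustration of the probabilistic intuition behind the technique (Le Cam-style Poisson approximation of a sum of independent Bernoulli variables); this is only the underlying intuition, not the paper's reduction to deterministic packing/covering problems, and the probabilities below are made up:

```python
# Compare the exact distribution of a sum of independent Bernoulli(p_i)
# variables with the Poisson(sum p_i) distribution (Le Cam's inequality says
# they are close in total variation when the p_i are small).
import math

def exact_sum_of_bernoullis(ps):
    """Exact pmf of the sum of independent Bernoulli(p_i), via convolution."""
    pmf = [1.0]
    for p in ps:
        pmf = [(pmf[k] if k < len(pmf) else 0.0) * (1 - p)
               + (pmf[k - 1] * p if k >= 1 else 0.0)
               for k in range(len(pmf) + 1)]
    return pmf

def poisson_pmf(lam, kmax):
    return [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(kmax + 1)]

ps = [0.05, 0.1, 0.02, 0.08, 0.03, 0.07]       # made-up small probabilities
exact = exact_sum_of_bernoullis(ps)
approx = poisson_pmf(sum(ps), len(ps))
# (approximate) total variation distance, truncated at n successes
tv = 0.5 * sum(abs(a - b) for a, b in zip(exact, approx))
print("truncated total variation distance:", tv)
```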

  150. Matroid and Knapsack Center Problems. Danny Z. Chen, Jian Li, Hongyu Liang, and Haitao Wang. In the 16th Conference on Integer Programming and Combinatorial Optimization (IPCO 2013), Chile, 2013 [Paper] [Full version in ArXiv]. Journal version in Algorithmica, 2015. [ Show Abstract ]

  151. We consider the matroid center and knapsack center problems, which generalize the classical k-center problem by requiring the set of opened centers to be an independent set of a given matroid or to satisfy a knapsack constraint, respectively, and we obtain constant-factor approximation algorithms for them.

  152. Algorithms on Minimizing the Maximum Sensor Movement for Barrier Coverage of a Linear Domain. Danny Z. Chen, Yan Gu, Jian Li and Haitao Wang. [ArXiv]. Journal version in Discrete & Computational Geometry (DCG), 2013 [Journal doi] [ Show Abstract ]

  153. Given a set of small intervals, and a target interval I, we would like to move the small intervals to cover the target interval. Our goal is to minimize the maximum movement of any interval. We obtain an O(n^2 log n) algorithm for this problem.
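
A simplified sketch of the natural approach to this problem, assuming binary search over the movement bound d plus a greedy left-to-right feasibility test (illustration only; the paper's O(n^2 log n) algorithm is more refined, e.g. it searches over a discrete set of candidate values of d rather than using numeric binary search):

```python
# Decide whether the target [0, L] can be covered when every interval may move
# by at most d, then binary-search the smallest such d (to numeric precision).

def coverable(intervals, L, d):
    """intervals: list of (left_endpoint, length). Greedy left-to-right check."""
    covered = 0.0                           # prefix [0, covered] is covered so far
    for x, length in sorted(intervals):
        if x - d > covered:                 # no remaining interval can plug this gap
            return False
        left = min(x + d, covered)          # place as far right as possible, gap-free
        covered = max(covered, left + length)
        if covered >= L:
            return True
    return covered >= L

def min_max_movement(intervals, L, eps=1e-6):
    lo, hi = 0.0, L + max(abs(x) for x, _ in intervals)   # hi is a safe upper bound
    if not coverable(intervals, L, hi):
        return None                         # not enough total length to cover [0, L]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if coverable(intervals, L, mid):
            hi = mid
        else:
            lo = mid
    return hi

# Made-up instance: three unit-length intervals covering [0, 3]; answer ~0.5.
print(min_max_movement([(0.0, 1.0), (0.5, 1.0), (2.5, 1.0)], 3.0))
```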

  154. The load-distance balancing problem. Edward Bortnikov, Samir Khuller, Jian Li, Yishay Mansour and Seffi Naor. Networks, 2012. [Paper] [doi] [ Show Abstract ]

  155. We study the load-distance balancing (LDB) problem, where the objective is to assign a set of clients to a set of given servers. Each client suffers a delay equal to the sum of its network delay (which is proportional to the distance to its server) and the congestion delay at that server, a nondecreasing function of the number of clients assigned to the server. We address two flavors of LDB: the first seeks to minimize the maximum incurred delay, and the second minimizes the average delay. We also provide bounds on the price of anarchy for the game-theoretic version of the problem.
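
For concreteness, a tiny sketch of how the two LDB objectives are evaluated for a fixed assignment (the clients, distances, and linear congestion function below are made-up illustration values, not from the paper):

```python
# Delay of a client = network delay (distance to its server) + congestion delay
# at that server, where congestion is a nondecreasing function of the load.

def delays(assignment, dist, congestion):
    """assignment: client -> server; dist[client][server]: network delay."""
    load = {}
    for c, s in assignment.items():
        load[s] = load.get(s, 0) + 1
    return {c: dist[c][s] + congestion(load[s]) for c, s in assignment.items()}

dist = {"c1": {"s1": 1.0, "s2": 4.0},
        "c2": {"s1": 2.0, "s2": 1.0},
        "c3": {"s1": 3.0, "s2": 2.0}}
congestion = lambda load: 2.0 * load              # made-up linear congestion delay
assignment = {"c1": "s1", "c2": "s2", "c3": "s2"}

d = delays(assignment, dist, congestion)
print("max delay:", max(d.values()))              # objective of the min-max flavor
print("avg delay:", sum(d.values()) / len(d))     # objective of the min-average flavor
```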

  156. DataSynth: Generating Synthetic Data using Declarative Constraints. Arvind Arasu, Raghav Kaushik, and Jian Li. In the 37th International Conference on Very Large Data Bases (VLDB 2011), Seattle, Washington, 2011. (Demo)

  157. Data Generation using Declarative Constraints. Arvind Arasu, Raghav Kaushik, and Jian Li. In Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD 2011), Athens, Greece, 2011. [Paper] [ Show Abstract ]

  158. We study the problem of generating synthetic databases having declaratively specified characteristics. This problem is motivated by database system and application testing, data masking, and benchmarking. We argue that a natural, expressive, and declarative mechanism for specifying data characteristics is through cardinality constraints; a cardinality constraint specifies that the output of a query over the generated database have a certain cardinality. While the data generation problem is intractable in general, we present efficient algorithms that can handle a large and useful class of constraints.
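
A toy sketch of what a single cardinality constraint asks for, assuming a hypothetical one-column table and one predicate (real workloads have many overlapping constraints over multiple queries, which is what makes the general problem intractable; this sketch handles only the trivial single-constraint case):

```python
# Generate n rows so that the query "COUNT(*) WHERE amount > 100" returns
# exactly k, i.e., the generated database satisfies one cardinality constraint.
import random

def generate(n, k, threshold=100.0):
    rows = []
    for i in range(n):
        if i < k:
            amount = random.uniform(threshold + 1, threshold + 500)  # satisfies predicate
        else:
            amount = random.uniform(0, threshold)                    # does not satisfy it
        rows.append({"id": i, "amount": round(amount, 2)})
    random.shuffle(rows)
    return rows

table = generate(n=1000, k=250)
print(sum(1 for r in table if r["amount"] > 100))    # prints 250
```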

  159. Sensitivity Analysis and Explanations for Robust Query Evaluation in Probabilistic Databases. Bhargav Kanagal, Jian Li, Amol Deshpande. In Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD 2011), Athens, Greece, 2011. [Paper] [ Show Abstract ]

  160. We extend our VLDB09 paper to continuous distributions. We give an exact algorithm for polynomial PDFs and approximation algorithms for arbitrary continuous distributions, using techniques from approximation theory such as splines and quadrature, with convergence guarantees better than naive Monte Carlo.

  161. Generalized Machine Activation Problems. Jian Li and Samir Khuller. In the ACM-SIAM Symposium on Discrete Algorithms (SODA 2011),  San Francisco, USA, 2011. [Paper][slides] [ Show Abstract ]

  162. We obtain tight approximation results for the machine activation problem proposed in our SODA10 work. We also obtain the first ln n-approximation for the universal facility location problem in non-metric spaces.

  163. Ranking Continuous Probabilistic Datasets. Jian Li and Amol Deshpande. In the 36th International Conference on Very Large Data Bases (VLDB 2010), Singapore, 2010. [Paper] [slides] [ Show Abstract ]

  164. We extend our VLDB09 paper to continuous distributions. We give an exact algorithm for polynomial PDFs and approximation algorithms for arbitrary continuous distributions, using techniques from approximation theory such as splines and quadrature, with convergence guarantees better than naive Monte Carlo.
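
A small illustration of the numerical flavor of this work, assuming SciPy is available: computing the probability that one continuous score exceeds another by one-dimensional quadrature versus naive Monte Carlo (the Gaussian score distributions are made up, and this is not the paper's spline-based algorithm, just a contrast of convergence behavior):

```python
# Pr[X > Y] = integral of f_X(x) * F_Y(x) dx; quadrature converges far faster
# than naive Monte Carlo sampling of the same quantity.
from scipy import integrate
from scipy.stats import norm

X = norm(loc=5.0, scale=1.0)     # made-up score distribution of tuple A
Y = norm(loc=4.0, scale=2.0)     # made-up score distribution of tuple B

quad_val, _ = integrate.quad(lambda x: X.pdf(x) * Y.cdf(x), -20, 30)

n = 100_000
mc_val = (X.rvs(size=n) > Y.rvs(size=n)).mean()   # error decays only as 1/sqrt(n)

print("quadrature :", quad_val)
print("monte carlo:", mc_val)
```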

  165. Densest $k$-Subgraph Approximation on Intersection Graphs. Danny Z. Chen, Rudolf Fleischer, Jian Li. In the 8th Workshop on Approximation and Online Algorithms (WAOA 2010). [Paper][slides] [ Show Abstract ]

  166. We provide constant-factor approximation algorithms for the densest k-subgraph problem on several graph classes, such as chordal graphs and disk graphs.

  167. New Models and Algorithms for Throughput Maximization in Broadcast Scheduling. Chandra Chekuri, Avigdor Gal, Sungjin Im, Samir Khuller, Jian Li, Richard McCutchen, Benjamin Moseley, Louiqa Raschid. In the 8th Workshop on Approximation and Online Algorithms (WAOA 2010). [slides] [full version]

  168. When LP is the Cure for Your Matching Woes: Improved Bounds for Stochastic Matchings. Nikhil Bansal, Anupam Gupta, Jian Li, Julian Mestre, Viswanath Nagarajan, Atri Rudra. In the 18th Annual European Symposium on Algorithms (ESA 2010). (Best Paper Award) [Paper] [Slides] Journal version in Algorithmica, 2011. [Journal Version] [ Show Abstract ]

  169. We study the stochastic matching problem, which finds applications in kidney exchange, online dating and online ads. Consider a random graph model where each possible edge e is present independently with some probability p_e. We are given these numbers p_e, and want to build a large/heavy matching in the randomly generated graph. However, the only way we can find out whether an edge is present or not is to query it, and if the edge is indeed present in the graph, we are forced to add it to our matching. Further, each vertex i is allowed to be queried at most t_i times. How should we adaptively query the edges to maximize the expected weight of the matching? Our main result is the first constant-factor approximation for weighted stochastic matching.
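
To make the model concrete, a small simulation of a simple greedy query policy (probe edges in decreasing p_e * w_e order while respecting the patience bounds t_i); this heuristic is only for intuition, the instance is made up, and the paper's constant-factor guarantee comes from an LP-based policy rather than this rule:

```python
# Simulate adaptive edge querying: probing an edge consumes one unit of
# patience at both endpoints, and if the edge is present it must be matched.
import random

def simulate(edges, patience, trials=20000):
    """edges: list of (u, v, p, w); patience: dict vertex -> max #queries."""
    order = sorted(edges, key=lambda e: e[2] * e[3], reverse=True)
    total = 0.0
    for _ in range(trials):
        left = dict(patience)
        matched = set()
        weight = 0.0
        for u, v, p, w in order:
            if u in matched or v in matched or left[u] == 0 or left[v] == 0:
                continue
            left[u] -= 1
            left[v] -= 1
            if random.random() < p:      # the edge turns out to be present...
                matched |= {u, v}        # ...so we are forced to take it
                weight += w
        total += weight
    return total / trials

edges = [("a", "b", 0.5, 3.0), ("a", "c", 0.9, 1.0),
         ("b", "d", 0.4, 2.0), ("c", "d", 0.7, 2.5)]
patience = {"a": 1, "b": 2, "c": 1, "d": 2}
print("estimated expected matching weight:", simulate(edges, patience))
```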

  170. Clustering with Diversity. Jian Li, Ke Yi, Qin Zhang. In the 37th International Colloquium on Automata, Languages and Programming (ICALP 2010), July 5-10, 2010. [full version in arXiv] [ Show Abstract ]

  171. We consider the clustering with diversity problem: given a set of colored points in a metric space, partition them into clusters such that each cluster has at least ℓ points, all of which have distinct colors. We give a 2-approximation to this problem for any ℓ when the objective is to minimize the maximum radius of any cluster. We show that the approximation ratio is optimal (assuming P ≠ NP) by providing a matching lower bound. Several extensions of our algorithm have also been developed for handling outliers. This problem can be considered as a metric variant of the ℓ-diversity problem, a popular problem in privacy-preserving data publication.

  172. On Computing Compression Trees for Data Collection in Wireless Sensor Networks. Jian Li, Amol Deshpande and Samir Khuller. In the 29th Conference on Computer Communications (INFOCOM 2010), San Diego, USA, 2010 [Paper] [slides]

  173. Energy Efficient Scheduling via Partial Shutdown. Samir Khuller, Jian Li, Barna Saha. In the ACM-SIAM Symposium on Discrete Algorithms (SODA 2010),  Austin, USA, 2010. [Paper] [ Show Abstract ]

  174. The central framework we introduce considers a collection of m machines (unrelated or related), with each machine i having an activation cost of a_i. There is also a collection of n jobs that need to be performed, and p_{i,j} is the processing time of job j on machine i. Standard scheduling models assume that the set of machines is fixed and all machines are available. However, in our setting, we assume that there is an activation cost budget A: we would like to select a subset S of the machines to activate with total cost a(S) ≤ A and find a schedule for the n jobs on the machines in S minimizing the makespan (or any other metric).

  175. A Unified Approach to Ranking in Probabilistic Databases. Jian Li, Barna Saha and Amol Deshpande. In the 35th International Conference on Very Large Data Bases (VLDB 2009), Lyon, France, 2009. (Best Paper Award) [Paper] [Slides long short] Journal version: The VLDB Journal, 2011. [Journal Version] [ Show Abstract ]

  176. We propose two parameterized ranking functions, called PRF and PRFe, for ranking probabilistic datasets. PRF and PRFe generalize, or can approximate, many of the previously proposed ranking functions. We present novel generating-function-based algorithms for efficiently ranking large datasets according to these ranking functions.
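
A simplified sketch of the generating-function idea for the special case of independent tuples (the tuple scores and probabilities below are made up; the paper's algorithms cover more general probabilistic models and ranking functions): the probability that exactly j higher-scored tuples exist is the coefficient of x^j in the product of (1 - p_s + p_s x) over those tuples, which yields each tuple's rank distribution.

```python
# Compute Pr[tuple t appears at rank j+1] for independent tuples by maintaining
# the generating function of "how many higher-scored tuples exist" as a
# coefficient list, processing tuples in decreasing score order.

def rank_distribution(tuples):
    """tuples: list of (name, score, prob). Returns {name: [Pr[rank=1], Pr[rank=2], ...]}."""
    order = sorted(tuples, key=lambda t: t[1], reverse=True)
    coeffs = [1.0]                        # coeffs[j] = Pr[exactly j higher-scored tuples exist]
    result = {}
    for name, _, p in order:
        result[name] = [p * c for c in coeffs]        # t must exist, with j tuples above it
        new = [0.0] * (len(coeffs) + 1)               # fold t into the polynomial
        for j, c in enumerate(coeffs):
            new[j] += c * (1 - p)
            new[j + 1] += c * p
        coeffs = new
    return result

dist = rank_distribution([("t1", 9.0, 0.6), ("t2", 7.5, 0.9), ("t3", 5.0, 0.4)])
for name, probs in dist.items():
    print(name, [round(q, 3) for q in probs])
```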

  177. Consensus Answers for Queries over Probabilistic Databases. Jian Li and Amol Deshpande. In the 28th ACM Symposium on Principles of Database Systems (PODS 2009). Providence, USA, 2009 [Paper][slides] [ Show Abstract ]

  178. We address the problem of finding a “best” deterministic query answer to a query over a probabilistic database. We propose the notion of a consensus world (or a consensus answer), which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). We consider this problem for various types of queries, including SPJ queries, top-k ranking queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial-time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness).
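
A toy special case of the consensus-answer idea, assuming a set-valued answer, symmetric-difference distance, and known per-tuple marginals (all made up below): the expected distance of a candidate set S is the sum of (1 - Pr[t]) over tuples in S plus the sum of Pr[t] over tuples outside S, so the consensus answer keeps exactly the tuples with marginal probability at least 1/2. The paper handles many more query types and distance metrics.

```python
# Consensus answer under symmetric-difference distance: keep tuples whose
# marginal probability of being in the true answer is at least 0.5.

def consensus_set(marginals):
    return {t for t, p in marginals.items() if p >= 0.5}

def expected_symdiff(candidate, marginals):
    """Expected |candidate symmetric-difference true_answer|, by linearity."""
    return sum((1 - p) if t in candidate else p for t, p in marginals.items())

marginals = {"t1": 0.9, "t2": 0.55, "t3": 0.2, "t4": 0.48}   # made-up marginals
best = consensus_set(marginals)
print(best, expected_symdiff(best, marginals))
print(expected_symdiff({"t1", "t2", "t4"}, marginals))       # no better than `best`
```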

  179. Minimizing communication cost in distributed multi-query processing. Jian Li, Amol Deshpande and Samir Khuller. In International Conference on Data Engineering (ICDE 2009), Shanghai, China, 2009 [Paper] [slides] [ Show Abstract ]

  180. We are given a set of queries, Q_1, ..., Q_m, with query Q_i requiring access to a subset of relations distributed over different nodes of the network. For each query, a query plan is provided in the form of a rooted tree, which specifies the operations to be performed on the relations and the order in which to perform them. Given this input, our goal is to find a data movement plan that minimizes the total communication cost incurred while executing the queries. We provide an efficient exact algorithm when the communication network is a tree, and approximation algorithms for more general networks.

  181. An O(log n / log log n) Upper Bound on the Price of Stability for Undirected Shapley Network Design Games. Jian Li. In Information Processing Letters (IPL), 2009. [Paper][slides] [ Show Abstract ]

  182. We have an edge-weighted undirected network G(V, E) and n selfish players, where player i wants to choose a low-cost path from source vertex s_i to destination vertex t_i. The cost of each edge is equally split among the players who use it. The price of stability is defined as the ratio of the cost of the best Nash equilibrium to that of the optimal solution. We present an O(log n / log log n) upper bound on the price of stability for the single-sink case, i.e., t_i = t for all i.

  183. More Efficient Algorithms and Analyses for Unequal Letter Cost Prefix-Free Coding. Mordecai Golin, Jian Li. In IEEE Transactions on Information Theory, Volume 54, Issue 8, pages 3412-3424, August 2008 [Journal Version] [ Show Abstract ]

  184. There is a large literature devoted to the problem of finding an optimal (min-cost) prefix-free code with an unequal letter-cost encoding alphabet. While there is no known polynomial-time algorithm for solving it optimally, there are many good heuristics that all provide additive errors relative to optimal. The additive error in these algorithms usually depends linearly upon the largest encoding letter size.
    This paper was motivated by the problem of finding optimal codes when the encoding alphabet is infinite. Because the largest letter cost is infinite, the previous analyses could give infinite error bounds. We provide a new algorithm that works with infinite encoding alphabets. When restricted to the finite-alphabet case, our algorithm often provides better error bounds than the best previously known ones.