NCRN-Cornell graduate student Anshumali Shrivastava joining Rice University in August 2015

Anshumali Shrivastava, a PhD student in Computer Science at Cornell University advised by Prof. Ping Li and part of the NCRN-Cornell node since its inception, will join Rice University as an assistant professor in August 2015, after graduation. He continues to pursue his research interests in Large-Scale Machine Learning, Randomized Algorithms for Big Data, and Graph and Social Network Mining.

Latest publications at the NCRN-Cornell node

2014
  • Shrivastava, Anshumali and Ping Li, “Graph Kernels via Functional Embedding,” CoRR, vol. abs/1404.5214, 2014.
    [Abstract] [URL] [Bibtex]

    We propose a representation of graph as a functional object derived from the power iteration of the underlying adjacency matrix. The proposed functional representation is a graph invariant, i.e., the functional remains unchanged under any reordering of the vertices. This property eliminates the difficulty of handling exponentially many isomorphic forms. Bhattacharyya kernel constructed between these functionals significantly outperforms the state-of-the-art graph kernels on 3 out of the 4 standard benchmark graph classification datasets, demonstrating the superiority of our approach. The proposed methodology is simple and runs in time linear in the number of edges, which makes our kernel more efficient and scalable compared to many widely adopted graph kernels with running time cubic in the number of vertices.

    @Article{DBLP:journals/corr/Shrivastava014,
    Title = {Graph Kernels via Functional Embedding},
    Author = {Anshumali Shrivastava and Ping Li},
    Journal = {CoRR},
    Year = {2014},
    Volume = {abs/1404.5214},
    URL = {http://arxiv.org/abs/1404.5214},
    Owner = {vilhuber},
    Abstract = {We propose a representation of graph as a functional object derived from the power iteration of the underlying adjacency matrix. The proposed functional representation is a graph invariant, i.e., the functional remains unchanged under any reordering of the vertices. This property eliminates the difficulty of handling exponentially many isomorphic forms. Bhattacharyya kernel constructed between these functionals significantly outperforms the state-of-the-art graph kernels on 3 out of the 4 standard benchmark graph classification datasets, demonstrating the superiority of our approach. The proposed methodology is simple and runs in time linear in the number of edges, which makes our kernel more efficient and scalable compared to many widely adopted graph kernels with running time cubic in the number of vertices.},
    Timestamp = {2014.07.09}
    }
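
    To make the abstract's ingredients concrete (power iteration of the adjacency matrix, a vertex-order-invariant representation, and a Bhattacharyya kernel between the resulting objects), here is a minimal Python sketch. It is a hedged illustration only: sorting each normalized iterate is this sketch's own way of obtaining invariance, the equal-size assumption is for brevity, and the paper's actual functional embedding differs in its details.

    import numpy as np

    def power_iteration_profile(A, steps=4):
        """Illustrative graph descriptor: run a few power-iteration steps
        on the adjacency matrix and record each normalized iterate, sorted.
        Sorting makes the descriptor invariant to vertex reordering, since
        a relabelling only permutes the entries of each iterate."""
        n = A.shape[0]
        x = np.ones(n) / n
        profile = []
        for _ in range(steps):
            x = A @ x
            x = x / (x.sum() + 1e-12)      # keep the iterate a distribution
            profile.append(np.sort(x))
        return profile

    def bhattacharyya_kernel(prof_g, prof_h):
        """Bhattacharyya coefficient sum(sqrt(p*q)), averaged over the
        iterates. Assumes equal vertex counts; real graph kernels handle
        differing sizes (e.g., by binning), omitted here for brevity."""
        return np.mean([np.sqrt(p * q).sum() for p, q in zip(prof_g, prof_h)])

    # Toy check: a graph and a vertex-relabelled copy give kernel value 1.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    perm = np.random.permutation(4)
    B = A[np.ix_(perm, perm)]
    print(bhattacharyya_kernel(power_iteration_profile(A),
                               power_iteration_profile(B)))  # -> 1.0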
  • Shrivastava, Anshumali and Ping Li, “In Defense of MinHash Over SimHash,” in Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), Reykjavik, Iceland, 2014.
    [Abstract] [PDF] [URL] [Bibtex]

    MinHash and SimHash are the two widely adopted Locality Sensitive Hashing (LSH) algorithms for large-scale data processing applications. Deciding which LSH to use for a particular problem at hand is an important question, which has no clear answer in the existing literature. In this study, we provide a theoretical answer (validated by experiments) that MinHash virtually always outperforms SimHash when the data are binary, as common in practice such as search. The collision probability of MinHash is a function of resemblance similarity (R), while the collision probability of SimHash is a function of cosine similarity (S). To provide a common basis for comparison, we evaluate retrieval results in terms of S for both MinHash and SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH with respect to S, by using a general inequality S^2 ≤ R ≤ S/(2−S). Our worst case analysis can show that MinHash significantly outperforms SimHash in high similarity region. Interestingly, our intensive experiments reveal that MinHash is also substantially better than SimHash even in datasets where most of the data points are not too similar to each other. This is partly because, in practical data, often R ≥ S/(z−S) holds where z is only slightly larger than 2 (e.g., z ≤ 2.1). Our restricted worst case analysis by assuming S/(z−S) ≤ R ≤ S/(2−S) shows that MinHash indeed significantly outperforms SimHash even in low similarity region. We believe the results in this paper will provide valuable guidelines for search in practice, especially when the data are sparse.

    @InProceedings{Ping2014,
    Title = {In Defense of MinHash Over SimHash},
    Author = {Anshumali Shrivastava and Ping Li},
    Booktitle = {Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS)},
    Year = {2014},
    Address = {Reykjavik, Iceland},
    Volume = {33},
    Owner = {vilhuber},
    URL = {http://jmlr.org/proceedings/papers/v33/shrivastava14.html},
    Abstract = {MinHash and SimHash are the two widely adopted Locality Sensitive Hashing (LSH) algorithms for large-scale data processing applications. Deciding which LSH to use for a particular problem at hand is an important question, which has no clear answer in the existing literature. In this study, we provide a theoretical answer (validated by experiments) that MinHash virtually always outperforms SimHash when the data are binary, as common in practice such as search. The collision probability of MinHash is a function of resemblance similarity (R), while the collision probability of SimHash is a function of cosine similarity (S). To provide a common basis for comparison, we evaluate retrieval results in terms of S for both MinHash and SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH with respect to S, by using a general inequality S^2 ≤ R ≤ S/(2−S). Our worst case analysis can show that MinHash significantly outperforms SimHash in high similarity region. Interestingly, our intensive experiments reveal that MinHash is also substantially better than SimHash even in datasets where most of the data points are not too similar to each other. This is partly because, in practical data, often R ≥ S/(z−S) holds where z is only slightly larger than 2 (e.g., z ≤ 2.1). Our restricted worst case analysis by assuming S/(z−S) ≤ R ≤ S/(2−S) shows that MinHash indeed significantly outperforms SimHash even in low similarity region. We believe the results in this paper will provide valuable guidelines for search in practice, especially when the data are sparse.},
    pdf = {http://jmlr.org/proceedings/papers/v33/shrivastava14.pdf},
    Timestamp = {2014.07.09}
    }
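
    The comparison hinges on the two collision probabilities named in the abstract: MinHash collides with probability equal to the resemblance R, while SimHash (sign random projections) collides with probability 1 − θ/π, a monotone function of the cosine S. A minimal Python sketch, with illustrative vectors and sample sizes, checks both empirically:

    import numpy as np

    rng = np.random.default_rng(0)

    def minhash_collision_rate(a_idx, b_idx, universe, k=2000):
        """Fraction of k random permutations under which both sets attain
        the same minimum; converges to the resemblance R = |A∩B|/|A∪B|."""
        hits = 0
        for _ in range(k):
            pi = rng.permutation(universe)
            hits += pi[a_idx].min() == pi[b_idx].min()
        return hits / k

    def simhash_collision_rate(x, y, k=2000):
        """Fraction of k random hyperplanes with both vectors on the same
        side; converges to 1 - theta/pi, a monotone function of the
        cosine similarity S = cos(theta)."""
        w = rng.standard_normal((k, len(x)))
        return np.mean(np.sign(w @ x) == np.sign(w @ y))

    # Toy binary data over a universe of 50 features.
    x = np.zeros(50); x[:30] = 1          # A = {0, ..., 29}
    y = np.zeros(50); y[10:40] = 1        # B = {10, ..., 39}
    a_idx, b_idx = np.flatnonzero(x), np.flatnonzero(y)

    R = 20 / 40                           # |A∩B| / |A∪B|
    S = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    print(R, minhash_collision_rate(a_idx, b_idx, 50))
    print(1 - np.arccos(S) / np.pi, simhash_collision_rate(x, y))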
2013
  • Li, Ping, Anshumali Shrivastava, and Arnd Christian König, “b-Bit Minwise Hashing in Practice,” in Internetware 2013, 2013.
    [Abstract] [PDF] [URL] [Bibtex]

    Minwise hashing is a standard technique in the context of search for approximating set similarities. The recent work [26, 32] demonstrated a potential use of b-bit minwise hashing [23, 24] for efficient search and learning on massive, high-dimensional, binary data (which are typical for many applications in Web search and text mining). In this paper, we focus on a number of critical issues which must be addressed before one can apply b-bit minwise hashing to the volumes of data often used in industrial applications. Minwise hashing requires an expensive preprocessing step that computes k (e.g., 500) minimal values after applying the corresponding permutations for each data vector. We developed a parallelization scheme using GPUs and observed that the preprocessing time can be reduced by a factor of 20 ~ 80 and becomes substantially smaller than the data loading time. Reducing the preprocessing time is highly beneficial in practice, e.g., for duplicate Web page detection (where minwise hashing is a major step in the crawling pipeline) or for increasing the testing speed of online classifiers. Another critical issue is that for very large data sets it becomes impossible to store a (fully) random permutation matrix, due to its space requirements. Our paper is the first study to demonstrate that b-bit minwise hashing implemented using simple hash functions, e.g., the 2-universal (2U) and 4-universal (4U) hash families, can produce very similar learning results as using fully random permutations. Experiments on datasets of up to 200GB are presented.

    @InProceedings{PingShrivastava2013,
    Title = {b-Bit Minwise Hashing in Practice},
    Author = {Ping Li and Anshumali Shrivastava and Arnd Christian K{\"o}nig},
    Booktitle = {Internetware 2013},
    Year = {2013},
    Month = {October},
    Abstract = {Minwise hashing is a standard technique in the context of search for approximating set similarities. The recent work [26, 32] demonstrated a potential use of b-bit minwise hashing [23, 24] for efficient search and learning on massive, high-dimensional, binary data (which are typical for many applications in Web search and text mining). In this paper, we focus on a number of critical issues which must be addressed before one can apply b-bit minwise hashing to the volumes of data often used in industrial applications. Minwise hashing requires an expensive preprocessing step that computes k (e.g., 500) minimal values after applying the corresponding permutations for each data vector. We developed a parallelization scheme using GPUs and observed that the preprocessing time can be reduced by a factor of 20 ~ 80 and becomes substantially smaller than the data loading time. Reducing the preprocessing time is highly beneficial in practice, e.g., for duplicate Web page detection (where minwise hashing is a major step in the crawling pipeline) or for increasing the testing speed of online classifiers. Another critical issue is that for very large data sets it becomes impossible to store a (fully) random permutation matrix, due to its space requirements. Our paper is the first study to demonstrate that b-bit minwise hashing implemented using simple hash functions, e.g., the 2-universal (2U) and 4-universal (4U) hash families, can produce very similar learning results as using fully random permutations. Experiments on datasets of up to 200GB are presented.},
    Owner = {vilhuber},
    Timestamp = {2013.10.07},
    PDF = {http://ecommons.library.cornell.edu/bitstream/1813/37986/2/a13-li.pdf},
    Url = {http://www.nudt.edu.cn/internetware2013/}
    }
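
    A central claim is that cheap 2-universal (2U) hash functions can replace fully random permutations, and that only the lowest b bits of each of the k minimum hash values need to be stored. A hedged Python sketch of that pipeline follows; the hash family and all parameters are illustrative, not the paper's exact implementation.

    import numpy as np

    P = (1 << 31) - 1                     # Mersenne prime for the 2U family
    rng = np.random.default_rng(1)
    k, b = 500, 2                         # k hash functions, keep b bits of each
    a = rng.integers(1, P, size=k)
    c = rng.integers(0, P, size=k)

    def bbit_minwise(idx):
        """b-bit minwise sketch of a set of feature indices. A 2-universal
        hash h(x) = (a*x + c) mod P stands in for a fully random
        permutation (the substitution the paper validates); only the
        lowest b bits of each of the k minimum hash values are kept."""
        h = (a[:, None] * np.asarray(idx)[None, :] + c[:, None]) % P
        return h.min(axis=1) & ((1 << b) - 1)

    # The raw b-bit match rate exceeds the 1/2^b chance level when the
    # sets overlap; an explicit bias correction turns it into an estimate
    # of the resemblance.
    s1 = bbit_minwise(range(0, 300))
    s2 = bbit_minwise(range(100, 400))
    print(np.mean(s1 == s2))              # about R + (1-R)/2^b, R = 0.5 here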
  • Shrivastava, Anshumali and Ping Li, “Beyond Pairwise: Provably Fast Algorithms for Approximate k-Way Similarity Search,” in Advances in Neural Information Processing Systems 26, 2013, pp. 791-799.
    [Abstract] [PDF] [URL] [Bibtex]

    We go beyond the notion of pairwise similarity and look into search problems with k-way similarity functions. In this paper, we focus on problems related to 3-way Jaccard similarity: R3way = |S1∩S2∩S3| / |S1∪S2∪S3|, S1, S2, S3 ∈ C, where C is a size n collection of sets (or binary vectors). We show that approximate R3way similarity search problems admit fast algorithms with provable guarantees, analogous to the pairwise case. Our analysis and speedup guarantees naturally extend to k-way resemblance. In the process, we extend traditional framework of locality sensitive hashing (LSH) to handle higher-order similarities, which could be of independent theoretical interest. The applicability of R3way search is shown on the “Google Sets” application. In addition, we demonstrate the advantage of R3way resemblance over the pairwise case in improving retrieval quality.

    @InProceedings{ShrivastavaLi2013a,
    title = {Beyond Pairwise: Provably Fast Algorithms for Approximate k-Way Similarity Search},
    author = {Shrivastava, Anshumali and Li, Ping},
    booktitle = {Advances in Neural Information Processing Systems 26},
    editor = {C.J.C. Burges and L. Bottou and M. Welling and Z. Ghahramani and K.Q. Weinberger},
    pages = {791--799},
    year = {2013},
    publisher = {Curran Associates, Inc.},
    abstract = {We go beyond the notion of pairwise similarity and look into search problems with k-way similarity functions. In this paper, we focus on problems related to 3-way Jaccard similarity: R3way = |S1∩S2∩S3| / |S1∪S2∪S3|, S1, S2, S3 ∈ C, where C is a size n collection of sets (or binary vectors). We show that approximate R3way similarity search problems admit fast algorithms with provable guarantees, analogous to the pairwise case. Our analysis and speedup guarantees naturally extend to k-way resemblance. In the process, we extend traditional framework of locality sensitive hashing (LSH) to handle higher-order similarities, which could be of independent theoretical interest. The applicability of R3way search is shown on the “Google Sets” application. In addition, we demonstrate the advantage of R3way resemblance over the pairwise case in improving retrieval quality.},
    url = {http://papers.nips.cc/paper/5216-beyond-pairwise-provably-fast-algorithms-for-approximate-k-way-similarity-search/},
    pdf = {http://papers.nips.cc/paper/5216-beyond-pairwise-provably-fast-algorithms-for-approximate-k-way-similarity-search.pdf},
    Owner = {vilhuber},
    Timestamp = {2013.09.06}
    }
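
    The estimator behind the speedup extends the classic minwise argument: under a random permutation, the minima of S1, S2, S3 all coincide exactly when the smallest element of the union lies in the triple intersection, which happens with probability R3way. A small Monte Carlo sketch in Python (toy sets, illustrative sample size):

    import numpy as np

    rng = np.random.default_rng(2)

    def r3way_exact(s1, s2, s3):
        """3-way Jaccard: |S1∩S2∩S3| / |S1∪S2∪S3|."""
        return len(s1 & s2 & s3) / len(s1 | s2 | s3)

    def r3way_minhash(s1, s2, s3, universe, k=5000):
        """Minwise estimator: the three minima coincide exactly when the
        overall smallest element of the union lies in the triple
        intersection, which happens with probability R3way."""
        hits = 0
        for _ in range(k):
            pi = rng.permutation(universe)
            m1 = min(pi[i] for i in s1)
            m2 = min(pi[i] for i in s2)
            m3 = min(pi[i] for i in s3)
            hits += (m1 == m2 == m3)
        return hits / k

    S1, S2, S3 = set(range(0, 25)), set(range(10, 35)), set(range(20, 45))
    print(r3way_exact(S1, S2, S3),                   # 5/45 ≈ 0.111
          r3way_minhash(S1, S2, S3, universe=45))    # close to it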
2012
  • Li, Ping, Anshumali Shrivastava, and Arnd Christian König, “GPU-based minwise hashing,” in Proceedings of the 21st World Wide Web Conference (WWW 2012) (Companion Volume), 2012, pp. 565-566.
    [Abstract] [DOI] [URL] [Bibtex]

    Minwise hashing is a standard technique for efficient set similarity estimation in the context of search. The recent work of b-bit minwise hashing provided a substantial improvement by storing only the lowest b bits of each hashed value. Both minwise hashing and b-bit minwise hashing require an expensive preprocessing step for applying k (e.g., k=500) permutations on the entire data in order to compute k minimal values as the hashed data. In this paper, we developed a parallelization scheme using GPUs, which reduced the processing time by a factor of 20-80. Reducing the preprocessing time is highly beneficial in practice, for example, for duplicate web page detection (where minwise hashing is a major step in the crawling pipeline) or for increasing the testing speed of online classifiers (when the test data are not preprocessed).

    @InProceedings{LiSK12,
    Title = {GPU-based minwise hashing},
    Author = {Ping Li and Anshumali Shrivastava and Arnd Christian K{\"o}nig},
    Booktitle = {Proceedings of the 21st World Wide Web Conference (WWW 2012) (Companion Volume)},
    Year = {2012},
    Pages = {565-566},
    Abstract ={Minwise hashing is a standard technique for efficient set similarity estimation in the context of search. The recent work of b-bit minwise hashing provided a substantial improvement by storing only the lowest b bits of each hashed value. Both minwise hashing and b-bit minwise hashing require an expensive preprocessing step for applying k (e.g., k=500) permutations on the entire data in order to compute k minimal values as the hashed data. In this paper, we developed a parallelization scheme using GPUs, which reduced the processing time by a factor of 20-80. Reducing the preprocessing time is highly beneficial in practice, for example, for duplicate web page detection (where minwise hashing is a major step in the crawling pipeline) or for increasing the testing speed of online classifiers (when the test data are not preprocessed).},
    Bibsource = {DBLP, http://dblp.uni-trier.de},
    Doi = {10.1145/2187980.2188129},
    Url = {http://doi.acm.org/10.1145/2187980.2188129}
    }
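
    The step being parallelized is easy to state: for each data vector, evaluate k hash functions on every nonzero index and reduce to k minima, and every (hash function, element) pair is independent. The hedged Python sketch below uses numpy vectorization as a stand-in for the GPU kernel, with an illustrative 2U hash family rather than the paper's exact implementation.

    import numpy as np

    P = (1 << 31) - 1
    rng = np.random.default_rng(3)
    k = 500
    a = rng.integers(1, P, size=k)
    c = rng.integers(0, P, size=k)

    def min_hashes(doc_indices):
        """The preprocessing step the paper offloads to the GPU: evaluate
        k hash functions on every nonzero index of one data vector and
        reduce to k minima. Each of the k columns is independent, which
        is exactly the parallelism a GPU kernel exploits; numpy
        vectorization stands in for it here."""
        idx = np.asarray(doc_indices)
        return ((a[:, None] * idx[None, :] + c[:, None]) % P).min(axis=1)

    docs = (range(0, 200), range(150, 350), range(300, 500))
    sketches = np.stack([min_hashes(d) for d in docs])
    print(sketches.shape)                 # (3 documents, k=500 minima each)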
  • Shrivastava, Anshumali and Ping Li, “Fast Near Neighbor Search in High-Dimensional Binary Data,” in The European Conference on Machine Learning (ECML 2012), 2012.
    [Abstract] [PDF] [Bibtex]

    Numerous applications in search, databases, machine learning, and computer vision can benefit from efficient algorithms for near neighbor search. This paper proposes a simple framework for fast near neighbor search in high-dimensional binary data, which are common in practice (e.g., text). We develop a very simple and effective strategy for sub-linear time near neighbor search, by creating hash tables directly using the bits generated by b-bit minwise hashing. The advantages of our method are demonstrated through thorough comparisons with two strong baselines: spectral hashing and sign (1-bit) random projections.

    @InProceedings{ShrivastavaL12,
    Title = {Fast Near Neighbor Search in High-Dimensional Binary Data},
    Author = {Anshumali Shrivastava and Ping Li},
    Booktitle = {The European Conference on Machine Learning (ECML 2012)},
    Year = {2012},
    Abstract ={Numerous applications in search, databases, machine learning, and computer vision can benefit from efficient algorithms for near neighbor search. This paper proposes a simple framework for fast near neighbor search in high-dimensional binary data, which are common in practice (e.g., text). We develop a very simple and effective strategy for sub-linear time near neighbor search, by creating hash tables directly using the bits generated by b-bit minwise hashing. The advantages of our method are demonstrated through thorough comparisons with two strong baselines: spectral hashing and sign (1-bit) random projections.},
    PDF = {http://www.cs.bris.ac.uk/~flach/ECMLPKDD2012papers/1125548.pdf},
    }
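
    The construction in the abstract is direct: concatenate the lowest b bits of m minwise hash values into a single table index, then probe the query's bucket at search time. A hedged Python sketch, where 2U hashes stand in for random permutations and all parameters are illustrative:

    import numpy as np
    from collections import defaultdict

    P = (1 << 31) - 1
    rng = np.random.default_rng(4)
    m, b = 4, 2                     # m minwise hashes per table, b bits each
    a = rng.integers(1, P, size=m)
    c = rng.integers(0, P, size=m)

    def bucket_key(idx):
        """Concatenate the lowest b bits of m minwise hash values into one
        m*b-bit table index, as the abstract describes; the 2U hash in
        place of a random permutation is this sketch's assumption."""
        h = ((a[:, None] * np.asarray(idx)[None, :] + c[:, None]) % P).min(axis=1)
        key = 0
        for v in h & ((1 << b) - 1):
            key = (key << b) | int(v)
        return key

    # Index three binary "documents" (sets of active feature indices),
    # then probe with a near-duplicate of document 0. Real deployments
    # use several independent tables to boost recall.
    table = defaultdict(list)
    for doc_id, idx in {0: range(0, 100), 1: range(90, 190), 2: range(400, 500)}.items():
        table[bucket_key(idx)].append(doc_id)
    print(table[bucket_key(range(0, 98))])   # very likely [0]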
  • Sun, Xu, Anshumali Shrivastava, and Ping Li, “Fast Multi-task Learning for Query Spelling Correction,” in The 21st ACM International Conference on Information and Knowledge Management (CIKM 2012), 2012, pp. 285-294.
    [Abstract] [DOI] [URL] [Bibtex]

    In this paper, we explore the use of a novel online multi-task learning framework for the task of search query spelling correction. In our procedure, correction candidates are initially generated by a ranker-based system and then re-ranked by our multi-task learning algorithm. With the proposed multi-task learning method, we are able to effectively transfer information from different and highly biased training datasets, for improving spelling correction on all datasets. Our experiments are conducted on three query spelling correction datasets including the well-known TREC benchmark dataset. The experimental results demonstrate that our proposed method considerably outperforms the existing baseline systems in terms of accuracy. Importantly, the proposed method is about one order of magnitude faster than baseline systems in terms of training speed. Compared to the commonly used online learning methods which typically require more than (e.g.,) 60 training passes, our proposed method is able to closely reach the empirical optimum in about 5 passes.

    @InProceedings{CIKM-SunSL12,
    Title = {Fast Multi-task Learning for Query Spelling Correction},
    Author = {Xu Sun and Anshumali Shrivastava and Ping Li},
    Booktitle = {The 21st ACM International Conference on Information and Knowledge Management (CIKM 2012) },
    Year = {2012},
    Pages = {285--294},
    Abstract ={In this paper, we explore the use of a novel online multi-task learning framework for the task of search query spelling correction. In our procedure, correction candidates are initially generated by a ranker-based system and then re-ranked by our multi-task learning algorithm. With the proposed multi-task learning method, we are able to effectively transfer information from different and highly biased training datasets, for improving spelling correction on all datasets. Our experiments are conducted on three query spelling correction datasets including the well-known TREC benchmark dataset. The experimental results demonstrate that our proposed method considerably outperforms the existing baseline systems in terms of accuracy. Importantly, the proposed method is about one order of magnitude faster than baseline systems in terms of training speed. Compared to the commonly used online learning methods which typically require more than (e.g.,) 60 training passes, our proposed method is able to closely reach the empirical optimum in about 5 passes.},
    Doi = {10.1145/2396761.2396800},
    Url = {http://dx.doi.org/10.1145/2396761.2396800}
    }
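
    The paper's online multi-task algorithm is more involved than can be shown here, but the transfer idea it exploits can be illustrated with the generic shared-plus-task-specific weight decomposition common in multi-task learning: each dataset (task) scores a correction candidate with w_shared + w_task, so updates on one biased dataset also inform the others. A hedged, simplified Python sketch (perceptron-style updates; all names are illustrative, and the same idea underlies the companion WWW 2012 poster below):

    import numpy as np

    class MultiTaskReranker:
        """Illustration of multi-task re-ranking: each task scores a
        candidate with shared weights plus a small task-specific offset.
        This is the generic w_shared + w_task decomposition, not the
        paper's exact online algorithm."""
        def __init__(self, dim, n_tasks, lr=0.1):
            self.w = np.zeros(dim)               # shared across tasks
            self.v = np.zeros((n_tasks, dim))    # one offset per task
            self.lr = lr

        def score(self, task, feats):
            return feats @ (self.w + self.v[task])

        def update(self, task, good_feats, bad_feats):
            """Perceptron-style: if a bad candidate outscores the good
            one, push both shared and task-specific weights apart."""
            if self.score(task, bad_feats) >= self.score(task, good_feats):
                g = good_feats - bad_feats
                self.w += self.lr * g
                self.v[task] += self.lr * g

    # Tiny usage: 3 tasks (datasets), 5 candidate features per correction.
    model = MultiTaskReranker(dim=5, n_tasks=3)
    rng = np.random.default_rng(5)
    good, bad = rng.random(5), rng.random(5)
    model.update(task=0, good_feats=good, bad_feats=bad)
    print(model.score(0, good) > model.score(0, bad))   # True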
  • Sun, Xu, Anshumali Shrivastava, and Ping Li, “Query spelling correction using multi-task learning,” in Proceedings of the 21st World Wide Web Conference (WWW 2012) (Companion Volume), 2012, pp. 613-614.
    [Abstract] [DOI] [URL] [Bibtex]

    This paper explores the use of online multi-task learning for search query spelling correction, by effectively transferring information from different and biased training datasets for improving spelling correction across datasets. Experiments were conducted on three query spelling correction datasets, including the well-known TREC benchmark data. Our experimental results demonstrate that the proposed method considerably outperforms existing baseline systems in terms of accuracy. Importantly, the proposed method is about one-order of magnitude faster than baseline systems in terms of training speed. In contrast to existing methods which typically require more than (e.g.,) 50 training passes, our algorithm can very closely approach the empirical optimum in around five passes.

    @InProceedings{WWW-SunSL12,
    Title = {Query spelling correction using multi-task learning},
    Author = {Xu Sun and Anshumali Shrivastava and Ping Li},
    Booktitle = {Proceedings of the 21st World Wide Web Conference (WWW 2012) (Companion Volume)},
    Year = {2012},
    Pages = {613-614},
    Abstract ={This paper explores the use of online multi-task learning for search query spelling correction, by effectively transferring information from different and biased training datasets for improving spelling correction across datasets. Experiments were conducted on three query spelling correction datasets, including the well-known TREC benchmark data. Our experimental results demonstrate that the proposed method considerably outperforms existing baseline systems in terms of accuracy. Importantly, the proposed method is about one-order of magnitude faster than baseline systems in terms of training speed. In contrast to existing methods which typically require more than (e.g.,) 50 training passes, our algorithm can very closely approach the empirical optimum in around five passes.},
    Bibsource = {DBLP, http://dblp.uni-trier.de},
    Doi = {10.1145/2187980.2188153},
    Url = {http://doi.acm.org/10.1145/2187980.2188153}
    }