TZstat



Our current interests are: #learning, #network, #computation, #TopicModeling, and related topics.

Fall 2016
9/15 Chengliang
Representation learning: A review and new perspectives
__Y Bengio__, __A Courville__, __P Vincent__ - IEEE transactions on pattern …, 2013 - ieeexplore.ieee.org
... Among the various ways of learning representations, this paper focuses on deep learning methods:
those that are formed by the composition of multiple nonlinear transformations with the goal of
yielding more abstract—and ultimately more useful—representations. ...
9/22 Nathan
A toolbox for representational similarity analysis
__H Nili__, __C Wingfield__, A Walther, L Su… - PLoS Comput …, 2014 - journals.plos.org
Abstract Neuronal population codes are increasingly being investigated with multivariate
pattern-information analyses. A key challenge is to use measured brain-activity patterns to
test computational models of brain information processing. One approach to this problem ...
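The core computation behind the toolbox is easy to sketch: build representational dissimilarity matrices (RDMs) from brain and model activity patterns, then compare them with a rank correlation. A minimal numpy/scipy sketch on synthetic data (this illustrates the idea only; it is not the toolbox's API, and all names and sizes are ours):

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(patterns):
    # RDM: 1 - Pearson correlation between condition patterns
    # (rows = experimental conditions, columns = response channels).
    return 1.0 - np.corrcoef(patterns)

# Hypothetical data: 20 conditions measured over 100 channels.
rng = np.random.default_rng(0)
brain = rng.normal(size=(20, 100))           # e.g., voxel patterns
model = brain + rng.normal(size=(20, 100))   # a model's predicted patterns

# Compare the two RDMs on their upper triangles with a rank correlation,
# which keeps the comparison invariant to monotonic distortions.
iu = np.triu_indices(20, k=1)
rho, _ = spearmanr(rdm(brain)[iu], rdm(model)[iu])
print(f"model-brain RDM rank correlation: {rho:.3f}")
```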
9/29 Tim
Nonlinear component analysis as a kernel eigenvalue problem
__B Schölkopf__, __A Smola__, __KR Müller__ - Neural computation, 1998 - MIT Press
The paper that introduced kernel PCA: a kernel function implicitly defines a nonlinear map into a feature space, and principal component analysis is carried out there by solving an eigenvalue problem on the kernel (Gram) matrix of the data. ...
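The algorithm itself fits in a few lines: form the kernel matrix, center it in feature space, and eigendecompose it in place of the covariance matrix. A minimal numpy sketch, with an RBF kernel chosen for illustration:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # RBF kernel matrix on the n training points.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel in feature space.
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose the centered Gram matrix (eigh returns ascending order).
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Scores of the training points on the leading nonlinear components.
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0))
```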
10/6 Jing
__Deep Boltzmann Machines__
__R Salakhutdinov__, __GE Hinton__ - AISTATS, 2009 - jmlr.org
Abstract We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and data-independent expectations ...
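The data-dependent expectations mentioned in the abstract come from a mean-field fixed point. A rough numpy sketch of that inner loop for a two-hidden-layer DBM (only the variational half of the algorithm; the data-independent expectations, which the paper handles with persistent Markov chains, are omitted, and all names and shapes are ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(v, W1, W2, b1, b2, n_iters=25):
    # Shapes: v (d,), W1 (d, h1), W2 (h1, h2), b1 (h1,), b2 (h2,).
    mu2 = np.full(b2.shape, 0.5)        # initialize top-layer hidden means
    for _ in range(n_iters):
        # Layer 1 gets input from below (v) AND above (mu2); this coupling
        # is what distinguishes a DBM from a stack of RBMs.
        mu1 = sigmoid(v @ W1 + mu2 @ W2.T + b1)
        mu2 = sigmoid(mu1 @ W2 + b2)
    return mu1, mu2
```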
10/13 Yang
Sparse kernel principal component analysis
__ME Tipping__ - Advances in neural information processing …, 2001 - pdfs.semanticscholar.org
Abstract 'Kernel' principal component analysis (PCA) is an elegant nonlinear generalisation
of the popular linear data analysis method, where a kernel function implicitly defines a
nonlinear transformation into a feature space wherein standard PCA is performed. ...
10/20 Swupnil
Learning eigenfunctions links spectral embedding and kernel PCA
__Y Bengio__, __O Delalleau__, __N Le Roux__, JF Paiement… - Neural …, 2004 - MIT Press
In this letter, we show a direct relation between spectral embedding methods and kernel
principal components analysis and how both are special cases of a more general learning
problem: learning the principal eigenfunctions of an operator defined from a kernel and ...
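The operator view yields a concrete payoff: a formula for evaluating the learned eigenfunctions at points outside the training set, i.e., an out-of-sample extension for spectral embeddings. A rough numpy sketch of that Nystrom-style extension, using an uncentered RBF kernel for brevity (the paper works with a centered, data-dependent kernel):

```python
import numpy as np

def rbf(x, X, gamma=1.0):
    return np.exp(-gamma * np.sum((x - X) ** 2, axis=1))

def fit_embedding(X, gamma=1.0):
    # Eigendecompose the Gram matrix on the n training points.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    vals, vecs = np.linalg.eigh(K)
    return vals[::-1], vecs[:, ::-1]      # sort eigenpairs descending

def embed_new(x, X, vals, vecs, k=2, gamma=1.0):
    # Estimated eigenfunctions evaluated at a new point x:
    # f_j(x) = sqrt(n) / lambda_j * sum_i v_ij * K(x, x_i).
    kx = rbf(x, X, gamma)
    return np.sqrt(len(X)) * (vecs[:, :k].T @ kx) / vals[:k]
```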
11/3 Chengliang
Kernel methods for deep learning
Y Cho, __LK Saul__ - Advances in neural information processing systems, 2009 - papers.nips.cc
Abstract We introduce a new family of positive-definite kernel functions that mimic the
computation in large, multilayer neural nets. These kernel functions can be used in shallow
architectures, such as support vector machines (SVMs), or in deep kernel-based ...
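The degree-1 member of this kernel family has a closed form, and the "deep" kernels come from composing it with itself. A minimal numpy sketch following the degree-1 recursion (variable names and the depth parameter are ours):

```python
import numpy as np

def deep_arccos_kernel(X, depth=3):
    # Degree-1 arc-cosine kernel:
    #   k(x, y) = (1/pi) * |x| * |y| * (sin t + (pi - t) * cos t),
    # where t is the angle between x and y. Composing it `depth` times
    # mimics the computation of a multilayer net of rectified linear units.
    K = X @ X.T                                   # base case: linear kernel
    for _ in range(depth):
        norms = np.sqrt(np.diag(K))               # feature-space norms
        cos_t = np.clip(K / np.outer(norms, norms), -1.0, 1.0)
        t = np.arccos(cos_t)
        K = (np.outer(norms, norms) / np.pi) * (np.sin(t) + (np.pi - t) * cos_t)
    return K
```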
11/10 Nathan
Unsupervised feature learning and deep learning: A review and new perspectives
__Y Bengio__, __AC Courville__… - CoRR, abs/1206.5538, 2012 - pdfs.semanticscholar.org
Abstract—The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different representations can
entangle and hide more or less the different explanatory factors of variation behind the ...
11/17 Tim
Kernel analysis of deep networks
__G Montavon__, __ML Braun__, KR Müller - Journal of Machine Learning …, 2011 - jmlr.org
Abstract When training deep networks it is common knowledge that an efficient and well
generalizing representation of the problem is formed. In this paper we aim to elucidate what
makes the emerging representation successful. We analyze the layer-wise evolution of the ...
12/1 Jing
Scaling learning algorithms towards AI
__Y Bengio__, __Y LeCun__ - Large-scale kernel machines, 2007 - iro.umontreal.ca
... Although a number of learning algorithms for deep architectures have been available for some
time, training such architectures is still largely perceived as a difficult challenge. ...
12/8 Yang
Deep Restricted Kernel Machines using Conjugate Feature Duality
__JAK Suykens__ - 2016 - esat.kuleuven.be
... The aim of this paper is to propose a theory of Deep Restricted Kernel Machines offering new
foundations for deep learning with kernel machines. ... Keywords: deep learning, kernel methods, least squares support vector machines, kernel PCA, restricted Boltzmann machines, duality. ...
12/15 Swupnil
A novel supervised approach to learning efficient kernel descriptors for high accuracy object recognition
B Xie, Y Liu, H Zhang, __J Yu__ - Neurocomputing, 2016 - Elsevier
... In addition to methods for learning low-level image representations described above, methods
for learning higher level feature representations have been proposed. For example, Deep belief
nets (DBNs) [18] and [19] and convolutional neural networks [20] can be used to ...

Spring 2015
This semester, our reading group has been working through a Kaggle challenge together.

Fall 2014

This semester, we are going to experiment with a new format for our reading group. We will watch a video presentation and read a related paper every week.
  • Nov 20th, 2014 [Led by Yuting Ma]
    Mauro Maggioni (Duke) on Geometric Methods for Statistical Learning and High-Dimensional Data, given at the UAM-ICMAT Colloquium in Madrid in September 2014. Here are links to the talk and to slides that are related to, though not exactly from, the talk:
    Video:


  • December 4th, 2014: we will continue discussing the video presentation from Nov 20th.

  • December 11th, 2014 "The pursuit of low-dimensional structures in high-dimensional data", MMDS 2012, Yi Ma, UIUC, Shanghai Tech University, MSR Asia [video][slides][references]

Spring 2014

  1. Jan 30th, 2014 [Lu Meng] Robust Principal Component Analysis (Candès et al., 2011) http://dl.acm.org/citation.cfm?id=1970395 presentation (a minimal solver is sketched after this list)
    #Learning #Computation
  2. Feb 6th, 2014 [Yuting Ma] Optimal weighted nearest neighbour classifiers (AOS, Samworth, 2012) http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1359987536
    #Learning #Geometric
  3. Feb 13th, 2014. Cancelled due to snow.
  4. Feb 20th, 2014 [Rachel Fan] SKAT, the sequence kernel association test (from Xihong Lin's group at Harvard; focus on motivation and implementation)
    #Learning #kernel
    References:
  5. Feb 27th, 2014 [Ran He] Graph limits and ERGM modeling http://dl.acm.org/citation.cfm?id=2492523
    #Network
  6. March 6th, 2014 [Tony Tong] On topic modeling.
    #TopicModeling
  7. March 13th, 2014 [Cancelled]
  8. March 27th, 2014 [Ran He] Mixed membership stochastic blockmodels http://jmlr.org/papers/v9/airoldi08a.html (a generative sampler is sketched after this list)
    #Network
  9. April 3rd, 2014 [Yang Kang] Stochastic variational inference http://arxiv.org/abs/1206.7051
    #Computation #Inference
  10. April 10th, 2014 [Swupnil Sahai] How Many People Do You Know?: Efficiently Estimating Personal Network Size. http://www.tandfonline.com/doi/abs/10.1198/jasa.2009.ap08518#.Uxi76-ewJdQ
    #Network
  11. April 17th, 2014 [Yuting] On deep learning http://www.nature.com/news/computer-science-the-learning-machines-1.14481 presentation
    #Learning #Computation
  12. April 24th, 2014 [Shuaiwen] Adaptive Subgradient Methods for Online Learning and Stochastic Optimization http://dl.acm.org/citation.cfm?id=2021068 (the update rule is sketched after this list)
    #Learning #Computation
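For item 1, Robust PCA (Principal Component Pursuit) splits an observed matrix M into a low-rank part L plus a sparse part S by minimizing ||L||_* + λ||S||_1 subject to L + S = M. A minimal numpy sketch of an inexact augmented-Lagrangian solver (the λ choice follows the paper; the μ heuristic and stopping rule are common conventions, not necessarily the authors' code):

```python
import numpy as np

def shrink(X, tau):
    # Elementwise soft-thresholding.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(M, n_iters=200, tol=1e-7):
    n1, n2 = M.shape
    lam = 1.0 / np.sqrt(max(n1, n2))          # lambda recommended in the paper
    mu = n1 * n2 / (4.0 * np.abs(M).sum())    # common step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                      # dual variable
    for _ in range(n_iters):
        L = svt(M - S + Y / mu, 1.0 / mu)     # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)  # sparse update
        R = M - L - S                         # primal residual
        Y = Y + mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```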
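For item 8, the mixed membership stochastic blockmodel is easiest to grasp through its generative process: each node draws a membership vector over roles, and each directed pair draws sender and receiver roles before flipping an edge coin. A minimal numpy sampler (the example parameters are arbitrary):

```python
import numpy as np

def sample_mmsb(n, alpha, B, seed=0):
    rng = np.random.default_rng(seed)
    k = len(alpha)
    pi = rng.dirichlet(alpha, size=n)         # per-node role memberships
    Y = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            zi = rng.choice(k, p=pi[i])       # role i adopts toward j
            zj = rng.choice(k, p=pi[j])       # role j adopts toward i
            Y[i, j] = rng.binomial(1, B[zi, zj])
    return Y, pi

# Two roles with strong within-role connectivity.
Y, pi = sample_mmsb(30, alpha=[0.3, 0.3],
                    B=np.array([[0.9, 0.05],
                                [0.05, 0.9]]))
```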
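For item 12, the AdaGrad update divides each coordinate's step by the root of that coordinate's accumulated squared gradients, so rarely updated coordinates keep larger effective learning rates. A minimal numpy sketch on a toy objective:

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.5, n_steps=500, eps=1e-8):
    x = np.asarray(x0, dtype=float).copy()
    g2 = np.zeros_like(x)                   # running sum of squared gradients
    for _ in range(n_steps):
        g = grad_fn(x)
        g2 += g * g
        x -= lr * g / (np.sqrt(g2) + eps)   # per-coordinate step size
    return x

# Toy usage: minimize the poorly scaled quadratic x1^2 + 100 * x2^2.
grad = lambda x: np.array([2.0 * x[0], 200.0 * x[1]])
print(adagrad(grad, [5.0, 5.0]))            # approaches [0, 0]
```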

To be resumed in Fall 2014
  1. Roderick J. Little (2013) In Praise of Simplicity not Mathematistry! Ten Simple Powerful Ideas for the Statistical Scientist, Journal of the American Statistical Association, 108:502, 359-369 http://amstat.tandfonline.com/doi/pdf/10.1080/01621459.2013.787932
  2. A comparative study of social network models: Network evolution models and nodal attribute models. http://www.sciencedirect.com/science/article/pii/S0378873309000331
    #Network