# An Analysis of Single-Layer Networks in Unsupervised Feature Learning

@inproceedings{Coates2011AnAO, title={An Analysis of Single-Layer Networks in Unsupervised Feature Learning}, author={Adam Coates and A. Ng and Honglak Lee}, booktitle={AISTATS}, year={2011} }

A great deal of research has focused on algorithms for learning features from unlabeled data. [...] Key Method: Specifically, we apply several off-the-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to the CIFAR, NORB, and STL datasets using only single-layer networks.
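The simplest pipeline the paper analyzes — K-means on normalized image patches, followed by a soft "triangle" encoding — can be sketched as below. This is a minimal illustration, not the authors' implementation: the synthetic images, patch counts, and the plain Lloyd's-algorithm k-means are stand-ins (the paper additionally whitens patches and uses far more centroids).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset of small grayscale images
# (the paper uses patches from CIFAR, NORB, and STL images).
images = rng.random((100, 32, 32))
patch_size, n_patches, n_centroids = 6, 2000, 50

# 1. Extract random patches; normalize each for brightness and contrast.
patches = np.empty((n_patches, patch_size * patch_size))
for i in range(n_patches):
    img = images[rng.integers(len(images))]
    r, c = rng.integers(0, 32 - patch_size, size=2)
    p = img[r:r + patch_size, c:c + patch_size].ravel()
    patches[i] = (p - p.mean()) / (p.std() + 1e-8)

# 2. Plain k-means (Lloyd's algorithm) learns the patch "dictionary".
centroids = patches[rng.choice(n_patches, n_centroids, replace=False)]
for _ in range(10):
    dists = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(1)
    for k in range(n_centroids):
        members = patches[assign == k]
        if len(members):
            centroids[k] = members.mean(0)

# 3. "Triangle" encoding from the paper: f_k(x) = max(0, mu(z) - z_k),
#    where z_k is the distance to centroid k and mu(z) is the mean
#    distance; activations below average distance are set to zero,
#    giving a sparse, nonnegative feature vector.
def encode(x):
    z = np.sqrt(((centroids - x) ** 2).sum(1))
    return np.maximum(0.0, z.mean() - z)

features = encode(patches[0])
```

In the full pipeline, this encoding is applied convolutionally over every patch location in an image and the resulting feature maps are pooled before training a linear classifier.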

#### Supplemental Presentations

Presentation Slides: An Analysis of Single-Layer Networks in Unsupervised Feature Learning


#### 2,362 Citations

C-SVDDNet: An Effective Single-Layer Network for Unsupervised Feature Learning

- Computer Science
- ArXiv
- 2014

Explores an alternative network architecture with far fewer nodes but much finer pooling, emphasizing the local details of the object; the architecture is also extended with multiple receptive field scales and multiple pooling sizes.

Convolutional Clustering for Unsupervised Learning

- Computer Science
- ArXiv
- 2015

This work proposes training a deep convolutional network with an enhanced version of the k-means clustering algorithm, which reduces the number of correlated parameters (in the form of similar filters), increases test categorization accuracy, and outperforms other techniques that learn filters in an unsupervised way.

Hierarchical Extreme Learning Machine for unsupervised representation learning

- Computer Science
- 2015 International Joint Conference on Neural Networks (IJCNN)
- 2015

Compared to traditional deep learning methods, the proposed trans-layer representation method with ELM-AE-based learning of local receptive filters trains much faster; it is validated on several standard benchmarks, such as digit recognition on MNIST and its variations and object recognition on Caltech 101.

Selecting Receptive Fields in Deep Networks

- Computer Science
- NIPS
- 2011

This paper proposes a fast method for choosing local receptive fields that groups together the low-level features most similar to each other under a pairwise similarity metric, and shows that this allows even simple unsupervised training algorithms to build successful multi-layered networks that achieve state-of-the-art results on the CIFAR and STL datasets.

Unsupervised representation learning based on the deep multi-view ensemble learning

- Computer Science
- Applied Intelligence
- 2019

This work proposes a novel deep multi-view ensemble model that restricts the number of connections between successive layers while enhancing discriminative power, using a data-driven approach to feature learning.

A New Method of Multi-Scale Receptive Fields Learning

- Computer Science
- 2015

This paper proposes a method to limit the number of features via multi-scale receptive field (MSRF) learning, which chooses the most effective receptive fields at multiple scales and improves classification performance on object recognition tasks.

Building high-level features using large scale unsupervised learning

- Computer Science, Mathematics
- 2013 IEEE International Conference on Acoustics, Speech and Signal Processing
- 2013

Contrary to what appears to be a widely-held intuition, the experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not.

Unsupervised and Transfer Learning Challenge: a Deep Learning Approach

- Computer Science
- ICML Unsupervised and Transfer Learning
- 2012

This paper describes the different kinds of layers the authors trained for representation learning in the Unsupervised and Transfer Learning Challenge, and the particular one-layer learning algorithms that feed a simple linear classifier trained on a tiny number of labeled samples.

A linear approach for sparse coding by a two-layer neural network

- Computer Science, Physics
- Neurocomputing
- 2015

The overall results suggest that linear encoders can be profitably used to obtain sparse data representations in the context of machine learning problems, provided that an appropriate error function is used during the learning phase.

Deep Learning using Support Vector Machines

- Computer Science
- ArXiv
- 2013

This paper proposes training all layers of a deep network by backpropagating gradients through a top-level SVM, thereby learning features at every layer, and demonstrates a small but consistent advantage of replacing the softmax layer with a linear support vector machine.

#### References

Showing 10 of 40 references.

Learning Multiple Layers of Features from Tiny Images

- Computer Science
- 2009

It is shown how to train a multi-layer generative model that learns to extract meaningful features resembling those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple networked machines.

Sparse Feature Learning for Deep Belief Networks

- Computer Science
- NIPS
- 2007

This work proposes a simple criterion to compare and select different unsupervised machines based on the trade-off between the reconstruction error and the information content of the representation, and describes a novel and efficient algorithm to learn sparse representations.

Measuring Invariances in Deep Networks

- Computer Science, Mathematics
- NIPS
- 2009

A number of empirical tests are proposed that directly measure the degree to which learned features are invariant to different input transformations, finding that stacked autoencoders learn modestly more invariant features with depth when trained on natural images, while convolutional deep belief networks learn substantially more invariant features in each layer.

Convolutional Deep Belief Networks on CIFAR-10

- 2010

We describe how to train a two-layer convolutional Deep Belief Network (DBN) on the 1.6 million tiny images dataset. When training a convolutional DBN, one must decide what to do with the edge pixels…

Sparse deep belief net model for visual area V2

- Computer Science
- NIPS
- 2007

An unsupervised learning model is presented that faithfully mimics certain properties of visual area V2; the encoding of the more complex "corner" features matches well with the results from Ito & Komatsu's study of biological V2 responses, suggesting that this sparse variant of deep belief networks holds promise for modeling higher-order features.

Extracting and composing robust features with denoising autoencoders

- Mathematics, Computer Science
- ICML '08
- 2008

This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.

Efficient Learning of Sparse Representations with an Energy-Based Model

- Computer Science
- NIPS
- 2006

Presents a novel unsupervised method for learning sparse, overcomplete features using a linear encoder and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector.

Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations

- Computer Science
- ICML '09
- 2009

Presents the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes, is translation-invariant, and supports efficient bottom-up and top-down probabilistic inference.

Modeling pixel means and covariances using factorized third-order boltzmann machines

- Mathematics, Computer Science
- 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
- 2010

This approach provides a probabilistic framework for the widely used simple-cell/complex-cell architecture, produces very realistic samples of natural images, and extracts features that yield state-of-the-art recognition accuracy on the challenging CIFAR-10 dataset.

A Fast Learning Algorithm for Deep Belief Nets

- Mathematics, Computer Science
- Neural Computation
- 2006

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. Expand