Unsupervised learning


Unsupervised learning is a type of machine learning in which the algorithm is not provided with any pre-assigned labels or scores for the training data.[1][2] As a result, unsupervised learning algorithms must first self-discover any naturally occurring patterns in that training data set. Common examples include clustering, where the algorithm automatically groups its training examples into categories with similar features, and principal component analysis, where the algorithm finds ways to compress the training data set by identifying which features are most useful for discriminating between different training examples, and discarding the rest. This contrasts with supervised learning, in which the training data include pre-assigned category labels (often assigned by a human, or taken from the output of a non-learning classification algorithm).[3] Other intermediate levels in the supervision spectrum include reinforcement learning, where only numerical scores are available for each training example instead of detailed tags, and semi-supervised learning, where only a portion of the training data have been tagged.
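
As a concrete illustration of the clustering case, the following is a minimal k-means sketch in NumPy; the two synthetic blobs, the random seed, and the iteration cap are toy assumptions for illustration, not part of any standard recipe:

```python
# A minimal k-means clustering sketch (toy data; illustrative only).
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic blobs with no labels attached (our own toy data).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])

k = 2
centers = X[rng.choice(len(X), k, replace=False)]  # random initial centroids
for _ in range(20):
    # Assignment step: each point joins its nearest centroid's cluster.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Update step: centroids move to the mean of their assigned points.
    new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

print(centers)  # should land near (0, 0) and (3, 3)
```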

Advantages of unsupervised learning include a minimal workload to prepare and audit the training set, in contrast to supervised learning techniques, where a considerable amount of expert human labor is required to assign and verify the initial tags. Unsupervised learning also has greater freedom to identify and exploit previously undetected patterns that may not have been noticed by the "experts". These advantages often come at a cost: unsupervised techniques may require more training data and converge more slowly to acceptable performance, have increased computational and storage requirements during the exploratory process, and be more susceptible to artifacts or anomalies in the training data that a human would recognize as obviously irrelevant or erroneous, but to which the unsupervised learning algorithm assigns undue importance.

Approaches

Common families of algorithms used in unsupervised learning include: (1) clustering, (2) anomaly detection, (3) neural networks (note that not all neural networks are unsupervised; they can be trained by supervised, unsupervised, semi-supervised, or reinforcement methods), and (4) latent variable models.

Method of moments

One statistical approach for unsupervised learning is the method of moments.[7] In the method of moments, the unknown parameters of interest in the model are related to the moments of one or more random variables. These moments are empirically estimated from the available data samples and used to solve for the parameter estimates. The method of moments has been shown to be effective in learning the parameters of latent variable models, where in addition to the observed variables available in the training and input data sets, a number of unobserved latent variables are also assumed to exist and to determine the categorization of each sample. One practical example of latent variable models in machine learning is topic modeling, which is a statistical model for predicting the words (observed variables) in a document based on the topic (latent variable) of the document. The method of moments (using tensor decomposition techniques) has been shown to consistently recover the parameters of a large class of latent variable models under certain assumptions.[8]
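
As a minimal illustration of the idea (not the tensor-decomposition machinery of [8]), the following sketch assumes the unlabeled data follow a Gamma distribution and solves the two moment equations E[X] = kθ and Var[X] = kθ² for the parameters; the distribution choice, seed, and sample size are our own toy assumptions:

```python
# A minimal method-of-moments sketch, assuming a Gamma model for the data.
import numpy as np

rng = np.random.default_rng(1)
data = rng.gamma(shape=2.0, scale=3.0, size=10_000)  # unlabeled samples

m1 = data.mean()   # first empirical moment, estimates E[X] = k * theta
m2 = data.var()    # second central moment, estimates Var[X] = k * theta^2

k_hat = m1**2 / m2      # solve the two moment equations for k and theta
theta_hat = m2 / m1

print(k_hat, theta_hat)  # close to the true values (2.0, 3.0)
```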

The expectation–maximization algorithm is another practical method for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed to converge to the true unknown parameters of the model. In contrast, using the method of moments, global convergence is guaranteed under some conditions.[8]
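
As a sketch of how EM behaves in practice, the following fits a two-component 1-D Gaussian mixture with hand-rolled E and M steps; the synthetic data, initial guesses, and iteration count are illustrative assumptions, and a different initialization can land in a different local optimum:

```python
# A minimal EM sketch for a 1-D mixture of two Gaussians (toy example).
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

# Initial guesses; EM's answer can depend on these (local optima).
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])   # mixture weights

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    r = pi * gauss(x[:, None], mu, sigma)      # shape (n, 2)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibility-weighted data.
    n_k = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    pi = n_k / len(x)

print(mu, sigma, pi)   # near (-2, 3), (1, 1), (0.5, 0.5)
```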

Neural networks

The next five subsections contain basic material. Intermediate-level material follows in Specific Networks and Comparison of Networks. Advanced material has its own Wikipedia entries.

Tasks vs. Methods

Tasks vary in their tendency to employ supervised versus unsupervised methods, and the separation between the two can be blurred.

Traditionally, Supervised methods are used for recognition tasks and Unsupervised methods are used for generative tasks. As progress marches onward, some tasks employ both methods, and some tasks swing from one method to another. For example, image recognition started off as heavily supervised, but became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of dropout, ReLU activations, and adaptive learning rates.

Training

During the learning phase, an unsupervised network tries to mimic the data it is given and uses the error in its mimicked output to correct itself (i.e., to correct its weights and biases). This resembles the mimicry behavior of children as they learn a language. Sometimes the error is expressed as a low probability that the erroneous output occurs, or it might be expressed as an unstable high-energy state in the network.

In contrast to supervised methods' dominant use of backpropagation, unsupervised methods employ various learning algorithms, including: the Hopfield learning rule, the Boltzmann learning rule, Contrastive Divergence, Wake-Sleep, Variational Inference, Maximum A Posteriori estimation, Gibbs Sampling, backpropagating the reconstruction error, and backpropagating the hidden-state reparameterization. See the table below for more details.

Energy

In Boltzmann machines, Energy plays the role of the Cost function. An energy function is a macroscopic measure of a network's state. This analogy with physics is inspired by Ludwig Boltzmann's analysis of a gas's macroscopic energy from the microscopic probabilities of particle motion, p ∝ e^(−E/kT), where k is the Boltzmann constant and T is temperature. In the RBM network the relation is p = e^(−E) / Z,[2] where p and E vary over every possible activation pattern and Z = Σ_(all patterns) e^(−E(pattern)). To be more precise, p(a) = e^(−E(a)) / Z, where a is an activation pattern of all neurons (visible and hidden). Hence, early neural networks bear the name Boltzmann Machine. Paul Smolensky calls −E the Harmony. A network seeks low energy, which is high Harmony.
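
As a sanity check on this formula, the following sketch enumerates every activation pattern of a tiny RBM and confirms that p(a) = e^(−E(a)) / Z sums to 1. The weight matrix W and the zero biases are made-up toy values, and exhaustive enumeration is only feasible at this scale:

```python
# Brute-force check of the Gibbs measure p(a) = exp(-E(a)) / Z on a tiny RBM.
import itertools
import numpy as np

W = np.array([[ 0.5, -0.3],
              [ 0.2,  0.8],
              [-0.6,  0.1]])   # 3 visible x 2 hidden, toy weights
b_v = np.zeros(3)              # visible biases
b_h = np.zeros(2)              # hidden biases

def energy(v, h):
    # Standard RBM energy: E = -v'Wh - b_v'v - b_h'h
    return -(v @ W @ h + b_v @ v + b_h @ h)

patterns = [(np.array(v), np.array(h))
            for v in itertools.product([0, 1], repeat=3)
            for h in itertools.product([0, 1], repeat=2)]

Z = sum(np.exp(-energy(v, h)) for v, h in patterns)    # partition function
probs = [np.exp(-energy(v, h)) / Z for v, h in patterns]

print(sum(probs))   # 1.0 -- the Gibbs measure is a proper distribution
```

Real RBMs avoid computing Z directly, since the number of patterns grows exponentially, which is why training relies on sampling-based schemes such as contrastive divergence.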

Networks

The following summarizes the connection architecture of various unsupervised networks, the details of which are given in the section Comparison of Networks. Of the networks bearing people's names, only Hopfield worked directly with neural networks. Boltzmann and Helmholtz lived before the invention of artificial neural networks, but they did inspire the analytical methods that were used.

Hopfield
A network based on magnetic domains in iron with a single self-connected layer.
Boltzmann
2 layers. Uses symmetric 2-way weights. Following Boltzmann's thermodynamics, individual probabilities give rise to macroscopic energies.
RBM
Restricted Boltzmann Machine. This is a Boltzmann machine where lateral connections within a layer are prohibited to make analysis tractable.
Helmholtz
Instead of the bidirectional symmetric connections of a Boltzmann machine, there are separate one-way connections that form a loop. It does both generation and discrimination.
Autoencoder
A feed-forward network that aims to find a good middle-layer representation of its input world.
VAE
Applies Variational Inference to the Autoencoder. The middle layer is a set of means & variances for Gaussian distributions.

History

1969 Perceptrons by Minsky & Papert shows a perceptron without hidden layers fails on XOR
1970s (approximate dates) AI winter I
1974 Ising magnetic model proposed by WA Little for cognition
1980 Fukushima introduces the neocognitron, which is later called a convolutional neural network. It is mostly used in supervised learning, but deserves a mention here.
1982 Ising variant Hopfield net described as content-addressable memories (CAMs) and classifiers by John Hopfield.
1983 Ising variant Boltzmann machine with probabilistic neurons described by Hinton & Sejnowski following Sherrington & Kirkpatrick's 1975 work.
1986 Paul Smolensky publishes Harmony Theory, which is an RBM with practically the same Boltzmann energy function. Smolensky did not give a practical training scheme; Hinton did in the mid-2000s.
1995 Schmidhuber introduces the LSTM neuron for languages.
1995 Dayan & Hinton introduce the Helmholtz machine
1995-2005 (approximate dates) AI winter II
2013 Kingma, Rezende, & co. introduce Variational Autoencoders as a Bayesian graphical probability network, with neural nets as components.

Specific Networks

Here, we highlight some characteristics of each network. Ferromagnetism inspired Hopfield networks, Boltzmann machines, and RBMs. A neuron corresponds to an iron domain with binary magnetic moments Up and Down, and neural connections correspond to the domains' influence on each other. Symmetric connections enable a global energy formulation. During inference the network updates each state using the standard activation step function. Symmetric weights guarantee convergence to a stable activation pattern.
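
A minimal sketch of that inference loop, assuming a single pattern stored with the Hebbian rule Δw_ij = s_i*s_j from the comparison table below; the pattern and the corrupted probe are toy values:

```python
# A minimal Hopfield sketch: store one pattern, then recall it from a
# corrupted probe using step-function updates (toy example).
import numpy as np

pattern = np.array([1, -1, 1, -1, 1])          # +1/-1 neurons
W = np.outer(pattern, pattern).astype(float)   # Hebbian outer product
np.fill_diagonal(W, 0)                         # no self-connections

state = np.array([1, -1, -1, -1, 1])           # pattern with one bit flipped
for _ in range(5):                             # asynchronous-style sweeps
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1   # step activation

print(np.array_equal(state, pattern))          # True: the CAM recalled it
```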

Hopfield
networks are used as CAMs and are guaranteed to settle to some pattern. Without symmetric weights, the network is very hard to analyze. With the right energy function, a network will converge.
Boltzmann machines
These are stochastic Hopfield nets. Each neuron's state value is sampled from its probability density function as follows: suppose a binary neuron fires with the Bernoulli probability p(1) = 1/3 and rests with p(0) = 2/3. One samples from it by taking a uniformly distributed random number y and plugging it into the inverted cumulative distribution function, which in this case is the step function thresholded at 2/3. The inverse function = { 0 if x <= 2/3, 1 if x > 2/3 }. A short code sketch of this sampling appears after this list.
Helmholtz
These are early inspirations for the Variational Autoencoders. Two networks are combined into one: forward weights operate recognition and backward weights implement imagination. It is perhaps the first network to do both. Helmholtz did not work in machine learning, but he inspired the view of a "statistical inference engine whose function is to infer probable causes of sensory input" (3). The stochastic binary neuron outputs a probability that its state is 0 or 1. The data input is normally not considered a layer, but in the Helmholtz machine's generation mode, the data layer receives input from the middle layer and has separate weights for this purpose, so it is considered a layer. Hence this network has 3 layers.
Variational Autoencoder
These are inspired by Helmholtz machines and combine probability networks with neural networks. An Autoencoder is a 3-layer CAM network, where the middle layer is supposed to be some internal representation of input patterns. The encoder neural network is a probability distribution qφ(z|x) and the decoder network is pθ(x|z). The weights are named phi & theta rather than W and V as in Helmholtz, a cosmetic difference. These 2 networks can be fully connected, or use another NN scheme.
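
The following is a minimal sketch of the inverse-CDF sampling described in the Boltzmann entry above, using the same toy probability p(1) = 1/3; the seed and sample count are arbitrary choices:

```python
# Sampling a stochastic binary neuron via the inverted CDF (toy example).
import numpy as np

rng = np.random.default_rng(3)

def sample_neuron(p_fire):
    y = rng.uniform()                  # uniformly distributed random number
    return 1 if y > 1 - p_fire else 0  # inverse CDF: threshold at p(0) = 2/3

samples = [sample_neuron(1/3) for _ in range(100_000)]
print(np.mean(samples))                # close to 1/3
```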

Comparison of Networks

Hopfield
usage & notables: CAM, traveling salesman problem
neuron: deterministic binary state. Activation = { 0 (or -1) if x is negative, 1 otherwise }
connections: 1 layer with symmetric weights. No self-connections.
inference & energy: energy is given by the Gibbs probability measure
training: Δw_ij = s_i*s_j, for +1/-1 neurons
strength: resembles physical systems, so it inherits their equations
Boltzmann
usage & notables: CAM. The freedom of connections makes this network difficult to analyze.
neuron: stochastic binary Hopfield neuron
connections: 2 layers, 1 hidden & 1 visible, symmetric weights
inference & energy: same as Hopfield
training: Δw_ij = e*(p_ij − p'_ij), derived from minimizing the KL divergence, where e = learning rate, p' = predicted distribution, and p = actual distribution
strength: same as Hopfield; hidden neurons act as an internal representation of the external world
weakness: hard to train due to lateral connections
RBM
usage & notables: pattern recognition (MNIST, speech recognition)
neuron: stochastic binary. Extended to real-valued in the mid-2000s.
connections: 2 layers as in the Boltzmann machine, but with no lateral connections within a layer
inference & energy: same as Hopfield
training: contrastive divergence with Gibbs sampling
strength: faster, more practical training scheme than Boltzmann machines
Helmholtz
usage & notables: imagination, mimicry
neuron: stochastic, binary, sigmoid
connections: 3 layers, asymmetric weights; 2 networks combined into 1
inference & energy: minimize KL divergence
training: wake-sleep, 2-phase training
strength: mildly anatomical; analyzable with information theory & statistical mechanics
Autoencoder
usage & notables: language: creative writing, translation. Vision: enhancing blurry images.
neuron: language: LSTM. Vision: local receptive fields. Usually real-valued ReLU activation.
connections: 3 layers. The input is considered a layer even though it has no inbound weights. Recurrent layers for NLP; feedforward convolutions for vision. Input & output have the same neuron counts.
inference & energy: inference is only feed-forward; previous UL networks ran forwards and backwards
training: backpropagate the reconstruction error
VAE
usage & notables: generate realistic data
neuron: middle-layer neurons encode means & variances for Gaussians. In run mode (inference), the outputs of the middle layer are sampled values from the Gaussians.
connections: 3 layers: input, encoder, distribution sampler, decoder. The sampler is not considered a layer.
inference & energy: minimize error = reconstruction error + KLD
training: reparameterize the hidden state for backprop
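
The VAE entries above mention reparameterizing the hidden state. A minimal sketch of that trick follows, writing the sample as z = mu + sigma * eps so that gradients can flow to mu and sigma; the encoder outputs mu and log_var are made-up toy values, and only the NumPy forward computation is shown (real training would differentiate through it with autodiff):

```python
# The reparameterization trick, forward pass only (toy values).
import numpy as np

rng = np.random.default_rng(4)

mu = np.array([0.2, -0.5])        # means produced by the encoder (toy)
log_var = np.array([0.1, -0.3])   # log-variances produced by the encoder

eps = rng.standard_normal(2)             # noise, independent of parameters
z = mu + np.exp(0.5 * log_var) * eps     # differentiable w.r.t. mu, log_var

print(z)   # the sampled hidden state fed to the decoder
```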

Hebbian Learning, ART, SOM
The classical example of unsupervised learning in the study of neural networks is Donald Hebb's principle, that is, neurons that fire together wire together.[9] In Hebbian learning, the connection is reinforced irrespective of an error, but is exclusively a function of the coincidence of action potentials between the two neurons.[10] A similar version that modifies synaptic weights takes into account the time between the action potentials (spike-timing-dependent plasticity or STDP). Hebbian Learning has been hypothesized to underlie a range of cognitive functions, such as pattern recognition and experiential learning.
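
A minimal sketch of Hebbian-style learning follows, here using Oja's rule, a normalized variant of Hebb's rule under which a single linear neuron converges to the data's first principal component; the toy covariance, seed, and learning rate are our own choices:

```python
# Oja's rule: a stabilized Hebbian update that extracts the first
# principal component of zero-mean data (toy example).
import numpy as np

rng = np.random.default_rng(5)
# Zero-mean 2-D data stretched along the (1, 1) direction.
X = rng.standard_normal((5000, 2)) @ np.array([[2.0, 1.5], [1.5, 2.0]])

w = rng.standard_normal(2)
eta = 0.001
for x in X:
    y = w @ x                    # post-synaptic activity
    w += eta * y * (x - y * w)   # Hebbian term minus decay keeps |w| near 1

print(w / np.linalg.norm(w))     # approx +/-(0.707, 0.707), the top PC
```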

Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used in unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same cluster by means of a user-defined constant called the vigilance parameter. ART networks are used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing.[11]
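
A minimal SOM sketch follows, with a 1-D chain of units mapping a 1-D input range so that nearby units come to represent similar inputs; the map size, decay schedules, and seed are illustrative assumptions:

```python
# A minimal 1-D self-organizing map (toy example).
import numpy as np

rng = np.random.default_rng(6)
weights = rng.uniform(0, 1, 10)   # 10 map units with scalar weights
positions = np.arange(10)         # their fixed locations on the map

for t in range(2000):
    x = rng.uniform(0, 1)                      # unlabeled input sample
    winner = np.abs(weights - x).argmin()      # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)               # decaying learning rate
    radius = 3.0 * np.exp(-t / 1000)           # shrinking neighborhood
    # Units near the winner (on the map) move toward the input together,
    # which is what makes neighboring locations represent similar inputs.
    h = np.exp(-((positions - winner) ** 2) / (2 * radius ** 2))
    weights += lr * h * (x - weights)

# The weights typically end up monotonically ordered across [0, 1],
# i.e., a topographic map of the input range.
print(np.round(weights, 2))
```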

References

  1. Hinton, Geoffrey; Sejnowski, Terrence (1999). Unsupervised Learning: Foundations of Neural Computation. MIT Press. ISBN 978-0262581684.
  2. Hinton, G. (2010-08-02). "A Practical Guide to Training Restricted Boltzmann Machines".
  3. Salian, Isha (2018-08-02). "NVIDIA Blog: Supervised Vs. Unsupervised Learning". The Official NVIDIA Blog. Retrieved 2021-01-15.
  4. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer. pp. 485–586. ISBN 978-0-387-84857-0.
  5. Garbade, Michael J. (2018-09-12). "Understanding K-means Clustering in Machine Learning". Medium. Retrieved 2019-10-31.
  6. Roman, Victor (2019-04-21). "Unsupervised Machine Learning: Clustering Analysis". Medium. Retrieved 2019-10-01.
  7. Jordan, Michael I.; Bishop, Christopher M. (2004). "Neural Networks". In Allen B. Tucker (ed.). Computer Science Handbook, Second Edition (Section VII: Intelligent Systems). Boca Raton, Florida: Chapman & Hall/CRC Press LLC. ISBN 1-58488-360-X.
  8. Anandkumar, Animashree; Ge, Rong; Hsu, Daniel; Kakade, Sham; Telgarsky, Matus (2014). "Tensor Decompositions for Learning Latent Variable Models" (PDF). Journal of Machine Learning Research. 15: 2773–2832. arXiv:1210.7559. Bibcode:2012arXiv1210.7559A.
  9. Buhmann, J.; Kuhnel, H. (1992). "Unsupervised and supervised data clustering with competitive neural networks". [Proceedings 1992] IJCNN International Joint Conference on Neural Networks. 4. IEEE. pp. 796–801. doi:10.1109/ijcnn.1992.227220. ISBN 0780305590. S2CID 62651220.
  10. Comesaña-Campos, Alberto; Bouza-Rodríguez, José Benito (June 2016). "An application of Hebbian learning in the design process decision-making". Journal of Intelligent Manufacturing. 27 (3): 487–506. doi:10.1007/s10845-014-0881-z. ISSN 0956-5515. S2CID 207171436.
  11. Carpenter, G.A.; Grossberg, S. (1988). "The ART of adaptive pattern recognition by a self-organizing neural network" (PDF). Computer. 21 (3): 77–88. doi:10.1109/2.33. S2CID 14625094.
