4: The main data mining algorithms
One of the aims of the KDD-IG is to build up an inventory of data mining algorithms that are of use to astronomy. We don't attempt to duplicate that here, but instead provide descriptions of some of the most well-known data mining algorithms, many of which have been fairly extensively used in astronomy.
- Artificial neural network
- Decision tree
- Genetic algorithms
- k-nearest neighbor
- k-means clustering
- Kernel density estimation
- Kohonen self-organizing map
- Independent component analysis
- Mixture models and EM algorithm
- Support vector machine
Artificial Neural Networks
Artificial Neural Networks (ANNs) are one of the oldest data-mining algorithms, and one of the first to be applied in astronomy. Modelled after the mammalian brain, ANNs consist of a large number of interconnected processing units. The interconnections carry weights (numerical values, typically in the range 0 to 1 or -1 to 1), and the model learns by adjusting these weights. ANNs can be used for both supervised (predictive) and unsupervised (descriptive) data mining.
ANNs come in a large variety of flavors, both in the architecture by which the processing units, so-called perceptrons, are connected, and in the learning algorithm used. One typical architecture is the feedforward network, in which a distinct input layer is connected to a distinct output layer via one or more hidden layers. Connections in this architecture point in the forward direction only. Each node in the input layer represents one attribute of each sample, while each node in the output layer typically represents a class (with the exception of a single output node for a two-class problem). ANNs require numerical values as input, which should be normalized.
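As a concrete illustration of this architecture, the following sketch passes one sample (its normalized attributes) through a single hidden layer to a two-node output layer. The layer sizes, activation function, and variable names are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashes each weighted sum into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# One sample with three numerical attributes, normalized to [0, 1]
# (e.g. colors of an astronomical source; purely illustrative).
x = np.array([0.2, 0.7, 0.5])

# Weights drawn from [-1, 1]: input -> hidden (4 nodes),
# hidden -> output (2 nodes, one per class).
W_hidden = rng.uniform(-1, 1, size=(4, 3))
W_output = rng.uniform(-1, 1, size=(2, 4))

hidden = sigmoid(W_hidden @ x)    # hidden-layer activations
output = sigmoid(W_output @ hidden)  # one score per class
print(output)
```

With untrained random weights the two output scores are arbitrary; learning consists of adjusting `W_hidden` and `W_output` so the score of the correct class dominates.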
For a feedforward network, learning typically occurs through the backpropagation algorithm. Input values are presented to the input layer and passed through the hidden layers to produce a value at each node in the output layer. The produced output is compared to the desired output (the correct class); the resulting error is then propagated in the reverse direction and used to adjust the weights so as to minimize the error.
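The forward pass, error computation, and reverse weight adjustment described above can be sketched as follows for a one-hidden-layer network trained by gradient descent on squared error. The toy XOR problem, layer sizes, and learning rate are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy two-class problem (XOR of two inputs), which a network
# with no hidden layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.uniform(-1, 1, size=(2, 3))  # input -> hidden (3 nodes)
W2 = rng.uniform(-1, 1, size=(3, 1))  # hidden -> output
eta = 1.0                             # learning rate (assumed)

for epoch in range(10000):
    # Forward pass: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Compare produced output to desired output.
    err = y - out
    # Backpropagate: output-layer delta, then hidden-layer delta
    # (the sigmoid derivative is out * (1 - out)).
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Adjust the weights to reduce the error.
    W2 += eta * h.T @ d_out
    W1 += eta * X.T @ d_hid

print(out.ravel())
```

Note the characteristics listed below in action: convergence is slow (many epochs over four samples), and a poor random initialization can leave the network stuck in a local minimum of the error surface.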
Main characteristics:
- convergence of the weights is slow
- the model (the numerical values associated with the weights) is hard to interpret
- prone to settle into a local minimum because of the complexity of the error surface
- sensitive to noise
- choice of the architecture is non-trivial
- able to approximate any function, provided there are no restrictions on the number of hidden layers and the number of nodes per hidden layer
- easy to parallelize
Under construction by group members
--
NickBall - 05 Sep 2010
--
SabineMcConnell - 16 Jan 2011