5 Data Classification
Data classification is the process that finds the common properties among a set of objects in a
database and classifies them into different classes, according to a classification model. To construct
such a classification model, a sample database E is treated as the training set, in which each tuple
consists of the same set of multiple attributes (or features) as the tuples in a large database W, and
additionally, each tuple has a known class identity (label) associated with it. The objective of the
classification is to first analyze the training data and develop an accurate description or a model for
each class using the features available in the data. Such class descriptions are then used to classify
future test data in the database W or to develop a better description (called classification rules)
for each class in the database. Applications of classification include medical diagnosis, performance
prediction, and selective marketing, to name a few.
Data classification has been studied substantially in statistics, machine learning, neural networks,
and expert systems [82] and is an important theme in data mining [30].
5.1 Classification based on decision trees
A decision-tree-based classification method, such as [71, 72], has been influential in machine learning
studies. It is a supervised learning method that constructs decision trees from a set of examples.
The quality of a tree depends on both its classification accuracy and its size.
The method first chooses a subset of the training examples (a window) to form a decision tree. If
the tree does not give the correct answer for all the objects, a selection of the exceptions is added to
the window and the process continues until the correct decision set is found. The eventual outcome
is a tree in which each leaf carries a class name, and each interior node specifies an attribute with
a branch corresponding to each possible value of that attribute.
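The windowing loop described above can be sketched as follows. This is an illustrative sketch only: the one-attribute "stump" learner stands in for a full decision-tree inducer, and the function names (`fit_stump`, `learn_with_window`) and the toy weather data are invented for the example, not taken from [71, 72].

```python
def fit_stump(window):
    """Illustrative stand-in for a tree inducer: a one-attribute
    value -> majority-class lookup, chosen to minimize window error."""
    best = None
    n_attrs = len(window[0][0])
    for a in range(n_attrs):
        by_value = {}
        for x, y in window:
            by_value.setdefault(x[a], []).append(y)
        mapping = {v: max(set(ys), key=ys.count) for v, ys in by_value.items()}
        errors = sum(1 for x, y in window if mapping[x[a]] != y)
        if best is None or errors < best[0]:
            best = (errors, a, mapping)
    _, a, mapping = best
    default = window[0][1]  # fallback class for unseen attribute values
    return lambda x: mapping.get(x[a], default)

def learn_with_window(examples, initial_size=2):
    """Grow a window of training examples until the induced classifier
    answers correctly on all examples (or no new exceptions remain)."""
    window = list(examples[:initial_size])
    while True:
        tree = fit_stump(window)
        misclassified = [(x, y) for x, y in examples if tree(x) != y]
        if not misclassified:
            return tree  # correct decision set found
        new = [e for e in misclassified if e not in window]
        if not new:  # learner cannot fit its own window; stop anyway
            return tree
        window.extend(new)

# Toy data: (outlook, windy) -> class; the class depends only on outlook.
data = [(("sunny", "t"), "yes"), (("sunny", "f"), "yes"),
        (("rain", "t"), "no"), (("rain", "f"), "no"),
        (("overcast", "t"), "no")]
tree = learn_with_window(data)
```

Starting from a two-example window, the first stump misclassifies the non-sunny examples; these exceptions are added to the window, and the second iteration fits all the data.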
A typical decision tree learning system, ID-3 [71], adopts a top-down irrevocable strategy that
searches only part of the search space. It guarantees that a simple, but not necessarily the simplest,
tree is found. ID-3 uses an information-theoretic approach aimed at minimizing the expected
number of tests to classify an object. The attribute selection part of ID-3 is based on the plausible
assumption that the complexity of the decision tree is strongly related to the amount of information
needed to identify an object's class. An information-based heuristic selects the attribute providing the highest
information gain, i.e., the attribute which minimizes the information needed in the resulting subtrees
to classify the elements. An extension to ID-3, C4.5 [72], extends the domain of classification from
categorical attributes to numerical ones.
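A standard way to handle a numeric attribute, in the spirit of C4.5, is to sort its values and evaluate binary splits at midpoints between consecutive distinct values, keeping the cut with the highest information gain. The sketch below is illustrative (the function name `best_threshold` is invented here and assumes at least two distinct values); entropy is measured in natural-log units:

```python
from collections import Counter
from math import log

def entropy(labels):
    # info(T) = -sum_i p_i ln(p_i)
    n = len(labels)
    return -sum(c / n * log(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    """Pick the binary cut on a numeric attribute maximizing information
    gain. Candidate cuts are midpoints between consecutive distinct
    sorted values; assumes at least two distinct values."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    parent = entropy(labels)
    best = None
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no cut between equal values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for v, y in pairs if v <= t]
        right = [y for v, y in pairs if v > t]
        gain = parent - (len(left) / n * entropy(left)
                         + len(right) / n * entropy(right))
        if best is None or gain > best[0]:
            best = (gain, t)
    return best[1]
```

For values 1, 2, 8, 9 with classes a, a, b, b, the cut at 5.0 separates the classes perfectly and is chosen over the other midpoints.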
The ID-3 system [71] uses information gain as the evaluation function for classification, based on
the expected information (entropy)

    info = - \sum_i p_i \ln(p_i),

where p_i is the probability that an object is in class i. There are many other evaluation functions,
such as the Gini index, the chi-square test, and so forth [14, 52, 68, 82]. For example, for the Gini
index [14, 59], if a data set T contains examples from n classes, gini(T) is defined as

    gini(T) = 1 - \sum_{j=1}^{n} p_j^2,

where p_j is the relative frequency of class j in T.
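As a concrete illustration, both measures can be computed directly from a list of class labels. This is a minimal sketch (the helper `information_gain`, which evaluates a candidate partition into subtrees, is named here for the example):

```python
from collections import Counter
from math import log

def entropy(labels):
    # info(T) = -sum_i p_i ln(p_i)
    n = len(labels)
    return -sum(c / n * log(c / n) for c in Counter(labels).values())

def gini(labels):
    # gini(T) = 1 - sum_j p_j^2
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(labels, partition):
    # Reduction in entropy from splitting `labels` into the sublists
    # of `partition` (the subtrees' class distributions).
    n = len(labels)
    return entropy(labels) - sum(len(p) / n * entropy(p) for p in partition)
```

A pure set has zero entropy and zero Gini index, and a split that separates the classes perfectly recovers the full entropy of the parent set as gain.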