An Empirical Study of Performance Metrics for Classifier Evaluation in Machine Learning

Date Issued:
2008
Abstract/Description:
A variety of classifiers for solving classification problems is available from the domain of machine learning. Commonly used classifiers include support vector machines, decision trees, and neural networks. These classifiers can be configured by modifying internal parameters. The large number of available classifiers and the many possible configurations result in a large number of combinations of classifier and configuration settings, leaving the practitioner with the problem of evaluating the performance of different classifiers. This problem can be addressed with performance metrics. However, the large number of available metrics makes it difficult to decide which metrics to use and to compare classifiers on the basis of multiple metrics. This paper uses the statistical method of factor analysis to investigate the relationships between several performance metrics and introduces the concept of relative performance, which has the potential to ease the process of comparing several classifiers. The relative performance metric is also used to evaluate different support vector machine classifiers and to determine whether the default settings in the Weka data mining tool are reasonable.
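To illustrate the kind of analysis the abstract describes (not the thesis's actual implementation), the sketch below computes several common performance metrics from hypothetical binary confusion matrices for a few classifier configurations and then inspects the correlations among those metrics; such a correlation matrix is the usual starting point for a factor analysis of metric relationships. All confusion-matrix counts are invented for illustration.

```python
import numpy as np

def metrics(tp, fp, fn, tn):
    """Return a dict of common performance metrics for one binary confusion matrix."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)              # a.k.a. sensitivity / true positive rate
    spec = tn / (tn + fp)
    f1 = 2 * prec * rec / (prec + rec)
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "specificity": spec, "f1": f1}

# Hypothetical confusion matrices (tp, fp, fn, tn) for four classifier configurations.
configs = [(80, 10, 5, 105), (70, 20, 15, 95), (90, 30, 2, 78), (60, 5, 25, 110)]
rows = [metrics(*c) for c in configs]
names = list(rows[0])
X = np.array([[r[n] for n in names] for r in rows])

# Pearson correlations between metrics across configurations; metrics that are
# highly correlated tend to load on the same factor in a factor analysis.
corr = np.corrcoef(X, rowvar=False)
for i, n in enumerate(names):
    print(n, np.round(corr[i], 2))
```

Strongly correlated metrics carry largely redundant information, which is why a factor analysis can reduce a long list of metrics to a few underlying factors.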
Name(s): Bruhns, Stefan
Khoshgoftaar, Taghi M., Thesis advisor
Florida Atlantic University, Degree grantor
Type of Resource: text
Genre: Electronic Thesis Or Dissertation
Date Created: 2008
Publisher: Florida Atlantic University
Place of Publication: Boca Raton, Fla.
Physical Form: application/pdf
Extent: 164 p.
Language(s): English
Identifier: FA00012508 (IID)
Degree granted: Thesis (M.S.)--Florida Atlantic University, 2008.
Collection: FAU Electronic Theses and Dissertations Collection
Note(s): College of Engineering and Computer Science
Subject(s): Machine learning
Computer algorithms
Pattern recognition systems
Data structures (Computer science)
Kernel functions
Pattern perception--Data processing
Held by: Florida Atlantic University Libraries
Sublocation: Digital Library
Persistent Link to This Record: http://purl.flvc.org/fau/fd/FA00012508
Use and Reproduction: Copyright © is held by the author with permission granted to Florida Atlantic University to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Use and Reproduction: http://rightsstatements.org/vocab/InC/1.0/
Host Institution: FAU
Is Part of Series: Florida Atlantic University Digital Library Collections.