Data Condensation in Large Databases by Incremental Learning with Support Vector Machines. Mitra, P., Murthy, C. A., & Pal, S. K. In Proceedings of the International Conference on Pattern Recognition, volume 2, pages 708-711, 2000. DOI: 10.1109/ICPR.2000.906173. Abstract: An algorithm for data condensation using support vector machines (SVMs) is presented. The algorithm extracts data points lying close to the class boundaries, which form a much reduced but critical set for classification. The problem of the large memory requirements of training an SVM in batch mode is circumvented by adopting an active incremental learning algorithm. The learning strategy is motivated by the condensed nearest neighbor classification technique. The experimental results presented show that such active incremental learning enjoys superiority, in terms of computation time and condensation ratio, over related methods.
@InProceedings{Mitra2000,
Title = {Data Condensation in Large Databases by Incremental Learning with Support Vector Machines},
Author = {Mitra, P. and Murthy, C. A. and Pal, S. K.},
Booktitle = {Proceedings of the International Conference on Pattern Recognition},
Year = {2000},
Pages = {708--711},
Volume = {2},
Abstract = {An algorithm for data condensation using support vector machines (SVMs) is presented. The algorithm extracts data points lying close to the class boundaries, which form a much reduced but critical set for classification. The problem of the large memory requirements of training an SVM in batch mode is circumvented by adopting an active incremental learning algorithm. The learning strategy is motivated by the condensed nearest neighbor classification technique. The experimental results presented show that such active incremental learning enjoys superiority, in terms of computation time and condensation ratio, over related methods.},
Doi = {10.1109/ICPR.2000.906173},
ISSN = {1051-4651},
Keywords = {computational complexity;data warehouses;learning (artificial intelligence);learning automata;pattern classification;SVM;active incremental learning algorithm;batch mode training;class boundaries;computation time;condensed nearest neighbor classification technique;data condensation;data point extraction;incremental learning;large databases;large memory requirements;pattern classification;support vector machines;Data mining;Databases;Machine intelligence;Machine learning;Machine learning algorithms;Nearest neighbor searches;Quadratic programming;Sampling methods;Support vector machine classification;Support vector machines},
Review = {Trains on a subset, then samples the rest of the data; misclassified points are added to the training pool and the SVM is retrained. Sounds like boosting.},
Timestamp = {2014.10.24}
}
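The scheme summarized in the review note (seed with a subset, scan the remainder, add only misclassified points, retrain) can be sketched as follows. This is an illustrative sketch using scikit-learn on synthetic data, not the authors' implementation; the chunk count, seed size, and kernel choice are arbitrary assumptions.

```python
# Hypothetical sketch of the active incremental condensation loop:
# train an SVM on a small seed subset, scan the remaining data in
# chunks, keep only misclassified points, and retrain on the pool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Seed pool: a small random subset of the full data set.
seed = rng.choice(len(X), size=100, replace=False)
pool_X, pool_y = X[seed], y[seed]
rest = np.setdiff1d(np.arange(len(X)), seed)

clf = SVC(kernel="rbf").fit(pool_X, pool_y)
for chunk in np.array_split(rest, 10):           # scan remainder in batches
    wrong = chunk[clf.predict(X[chunk]) != y[chunk]]
    if len(wrong):                               # add only the errors
        pool_X = np.vstack([pool_X, X[wrong]])
        pool_y = np.concatenate([pool_y, y[wrong]])
        clf = SVC(kernel="rbf").fit(pool_X, pool_y)

print(f"condensed set: {len(pool_X)} of {len(X)} points")
```

The pool that results is the condensed set: points near the class boundaries plus the initial seed, which is why only a fraction of the data ever enters SVM training, sidestepping the batch-mode memory cost the abstract mentions.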