Self-Trained LMT for Semisupervised Learning

Nikos Fazakis et al. Comput Intell Neurosci. 2016;2016:3057481.
doi: 10.1155/2016/3057481. Epub 2015 Dec 29.

Abstract

The key asset of semisupervised classification methods is their use of available unlabeled data, combined with a much smaller set of labeled examples, to increase classification accuracy relative to purely supervised methods, which use only the labeled data during training. Both the absence of automated mechanisms for producing labeled data and the high cost of the human effort required for labeling in many scientific domains create a need for semisupervised methods that counterbalance this scarcity. In this work, a self-trained Logistic Model Trees (LMT) algorithm is presented, which combines the characteristics of logistic model trees with the scenario of scarce labeled data. We performed an in-depth comparison with other well-known semisupervised classification methods on standard benchmark datasets and found that the presented technique achieved better accuracy in most cases.


Figures

Figure 1. Comparison of average accuracy on benchmark datasets.
Algorithm 1. The self-trained LMT algorithm.
Algorithm 2. LMT classifier.

