The goal of binary classification is to learn a model that is able to distinguish between positive and negative examples. To do so, an algorithm has access to training data. In the most traditional setting, this data contains both positive and negative examples and is fully labeled; that is, the class value is not missing for any training example. This is among the most widely studied problems in machine learning. Learning from positive and unlabeled data, or PU learning, is a variant of this classical setup where the training data consists of positive and unlabeled examples. The assumption is that each unlabeled example could belong to either the positive or the negative class. The term PU learning first began to appear in the early 2000s, and there has been a surge of interest in this setting in recent years (Liu et al.). It fits within the long-standing interest in developing learning algorithms that do not require fully supervised data, such as learning from positive-only or one-class data (Khan and Madden 2014) and semi-supervised learning (Chapelle et al. 2005; Li and Liu 2005; Elkan and Noto 2008; Mordelet and Vert 2014; Du Plessis et al.).
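To make the setting concrete, here is a minimal sketch of how a PU dataset relates to a fully labeled one. It simulates the common "selected completely at random" scenario, in which only a fraction of the positives receive labels; the labeling frequency `c` and the array names are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully labeled binary data: y = 1 (positive) or y = 0 (negative).
y = rng.integers(0, 2, size=1000)

# Build a PU view of the same data: only a fraction c of the positives
# are labeled; everything else, positive or negative, becomes unlabeled.
c = 0.3  # labeling frequency (an illustrative assumption)
s = np.zeros_like(y)  # s = 1 means "labeled positive", s = 0 means "unlabeled"
positives = np.flatnonzero(y == 1)
labeled = rng.choice(positives, size=int(c * len(positives)), replace=False)
s[labeled] = 1

# A PU learner only ever sees s: a set of labeled positives plus a set of
# unlabeled examples that mixes the remaining positives with all negatives.
print("labeled positives:", int(s.sum()))
print("unlabeled (mix of positive and negative):", int((s == 0).sum()))
```

Note that every labeled example is genuinely positive, while the unlabeled set contains both classes; this is exactly the assumption stated above.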