An empirical demonstration of the No Free Lunch Theorem
In this paper, we provide a substantial empirical demonstration of the statistical machine learning result known as the No Free Lunch Theorem (NFLT). Specifically, we compare the predictive performances of a wide variety of machine learning algorithms/methods on a wide variety of qualitatively and quantitatively different datasets. Our work provides strong evidence in favor of the NFLT via an overall ranking of methods and their corresponding learning machines, revealing that none of the learning machines considered predictively outperforms all the other machines on all the widely different datasets analyzed. It is noteworthy, however, that while evidence from the various datasets and methods supports the NFLT rather emphatically, some learning machines such as Random Forest, Adaptive Boosting, and Support Vector Machines (SVM) emerge as methods whose predictive performances are almost always among the best.
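The comparison described above can be illustrated with a minimal sketch (not the paper's actual experimental protocol): cross-validated accuracy of a few scikit-learn classifiers, including the Random Forest, AdaBoost, and SVM methods the abstract highlights, on a few small benchmark datasets, followed by a per-dataset ranking. The choice of datasets, methods, and five-fold cross-validation here is an illustrative assumption, not taken from the paper.

```python
# Illustrative sketch of a "no free lunch" style comparison:
# rank several classifiers by mean cross-validated accuracy on
# several datasets and observe the ranking per dataset.
# Datasets, methods, and cv=5 are assumptions for illustration only.
from sklearn.datasets import load_breast_cancer, load_iris, load_wine
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

datasets = {
    "iris": load_iris(),
    "wine": load_wine(),
    "breast_cancer": load_breast_cancer(),
}
methods = {
    "RandomForest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
}

rankings = {}
for dname, data in datasets.items():
    # mean 5-fold cross-validated accuracy of each method on this dataset
    scores = {
        mname: cross_val_score(clf, data.data, data.target, cv=5).mean()
        for mname, clf in methods.items()
    }
    # order methods from best to worst mean accuracy on this dataset
    rankings[dname] = sorted(scores, key=scores.get, reverse=True)
    print(dname, rankings[dname])
```

Aggregating such per-dataset rankings across many more datasets and methods is one way to arrive at the kind of overall ranking the paper reports, with no single method topping every dataset.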
Document type: Peer reviewed
Document version: Final PDF
Source: Mathematics for Applications. 2019, vol. 8, no. 2, pp. 173-188. ISSN 1805-3629