Distributed Framework for Testing Machine Learning Methods
When designing new machine learning (ML) methods, thorough testing is an essential part of the process. This article describes a framework created for the automated testing and comparison of different ML methods. The framework automates most of the tedious, recurring tasks involved in comparing and testing new methods. It consists of two parts. The first part works with the results of ML methods: it compares the results of different methods under different settings, produces tables and graphs from these results, and performs statistical tests on them. Comparison and statistical testing typically require a considerable number of test runs on different datasets and with different settings, so it is advisable to automate these tasks as much as possible. This automation is the purpose of the second part of the framework, which divides, schedules, and executes tasks on remote machines. The whole framework is written in Python using the Django framework, which makes it easy to extend and customize for a particular task.
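As a minimal illustration of the kind of statistical comparison the framework automates, the sketch below applies a two-sided sign test to paired accuracy scores of two hypothetical ML methods across several datasets. The function name, the scores, and the choice of the sign test are assumptions for illustration only; the article does not specify which statistical tests the framework performs.

```python
from math import comb

def sign_test_p(scores_a, scores_b):
    """Two-sided sign test p-value on paired scores (ties are dropped)."""
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    losses = sum(a < b for a, b in zip(scores_a, scores_b))
    n = wins + losses
    if n == 0:
        return 1.0  # all pairs tied: no evidence of a difference
    k = min(wins, losses)
    # two-sided binomial tail probability under the null hypothesis p = 0.5
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# hypothetical accuracies of two methods on ten datasets
method_a = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94, 0.90, 0.91]
method_b = [0.85, 0.86, 0.90, 0.88, 0.84, 0.89, 0.87, 0.91, 0.88, 0.89]
print(sign_test_p(method_a, method_b))  # method A wins all 10 pairs
```

A small p-value here would suggest the difference between the two methods is unlikely to be due to chance; running many such paired tests over datasets and settings is exactly the recurring work the framework is meant to take over.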
Document type: Peer reviewed
Document version: Final PDF
Source: Proceedings of the 22nd Conference STUDENT EEICT 2016, pp. 421-425. ISBN 978-80-214-5350-0