Distributed Framework for Testing Machine Learning Methods

Date
2016
Publisher
Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií
Abstract
When designing new machine learning (ML) methods, solid testing is a very important part of the process. This article describes a framework created for the automated testing and comparison of different ML methods. The framework automates most of the tedious and recurrent tasks related to comparing and testing new methods. It consists of two parts. The first part is intended for working with the results of ML methods: it compares the results of different methods under different settings, creates tables and graphs from these results, and performs statistical tests on them. Comparison and statistical testing often require a considerable number of test runs on different datasets and with different settings, so it is advisable to automate these tasks as much as possible. Automating them is the purpose of the second part of the framework, which divides, plans, and executes tasks on remote machines. The whole framework is written in Python using the Django framework, which makes it easy to extend and customize for a particular task.
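The kind of paired comparison the first part performs can be illustrated with a minimal sketch. This is not the framework's actual API: the variable names and scores are hypothetical, and the Wilcoxon signed-rank test from scipy stands in for whichever statistical tests the framework applies to per-dataset results of two ML methods.

    # Minimal sketch of a paired statistical comparison of two ML methods,
    # not the framework's actual API. Scores are hypothetical per-dataset
    # accuracies of two methods evaluated on the same six datasets.
    from scipy.stats import wilcoxon

    method_a = [0.91, 0.85, 0.78, 0.88, 0.93, 0.81]  # hypothetical results
    method_b = [0.89, 0.84, 0.80, 0.85, 0.90, 0.79]  # hypothetical results

    # Paired, non-parametric test: do the two methods differ significantly
    # across the datasets?
    statistic, p_value = wilcoxon(method_a, method_b)
    print(f"Wilcoxon statistic = {statistic}, p-value = {p_value:.4f}")

A paired test is the natural choice here because both methods are run on the same datasets, so each pair of scores shares the dataset as a common factor.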
Citation
Proceedings of the 22nd Conference STUDENT EEICT 2016. s. 421-425. ISBN 978-80-214-5350-0
http://www.feec.vutbr.cz/EEICT/
Document type
Peer-reviewed
Document version
Published version
Language of document
en
Document licence
© Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií