IRTUM – Institutional Repository of the Technical University of Moldova

Practical benchmark of open-source MLOps platforms: Comparing MLflow, Metaflow and ZenML across model type


dc.contributor.author BADEA, Dan Gabriel
dc.contributor.author MONEA, Damian
dc.contributor.author SAVA, Lilia
dc.date.accessioned 2026-02-18T16:04:20Z
dc.date.available 2026-02-18T16:04:20Z
dc.date.issued 2025
dc.identifier.citation BADEA, Dan Gabriel; MONEA, Damian; SAVA, Lilia. Practical benchmark of open-source MLOps platforms: Comparing MLflow, Metaflow and ZenML across model type. In: 24th RoEduNet International Conference Networking in Education and Research, Chisinau, Republic of Moldova, 17-19 September, 2025. Universitatea Politehnică din Bucureşti. IEEE, 2025, pp. 1-6. ISBN 979-8-3315-5714-0, eISBN 979-8-331-55713-3, ISSN 2068-1038, eISSN 2247-5443. en_US
dc.identifier.isbn 979-8-3315-5714-0
dc.identifier.isbn 979-8-331-55713-3
dc.identifier.issn 2068-1038
dc.identifier.issn 2247-5443
dc.identifier.uri https://doi.org/10.1109/RoEduNet68395.2025.11208376
dc.identifier.uri https://repository.utm.md/handle/5014/35303
dc.description Access full text: https://doi.org/10.1109/RoEduNet68395.2025.11208376 en_US
dc.description.abstract This paper presents a comparison of three popular open-source MLOps frameworks: MLflow, Metaflow, and ZenML, studied in three real-world machine learning scenarios: extractive text summarization using a BERT-based model, image analysis using ResNet, and tabular data classification using Random Forest. The comparison was carried out by developing MLOps-enhanced versions of the baseline code with each framework, for each of the three models. Of the three frameworks studied, MLflow is notable for its low integration overhead: less than 1.2% additional runtime and fewer than 104 lines of additional code. ZenML requires about 208 additional lines and increases execution time by about 19.6%, but significantly improves traceability in exchange. Metaflow provides strong automatic artifact versioning, which adds approximately 195 lines of code and increases runtime by about 110.7%. Despite these variations, reproducibility was confirmed: all platforms maintained consistent model performance under the same conditions, within a margin of 0.1% (Table IV). Disk usage increased by about 220.4 MB for MLflow, 220 MB for ZenML, and 143.4 MB for Metaflow. These findings indicate that Metaflow provides thorough provenance at the cost of additional code and runtime overhead, ZenML strikes a reasonable balance between control and usability, and MLflow is best suited for fast, low-overhead experiment tracking. en_US
dc.language.iso en en_US
dc.publisher IEEE (Institute of Electrical and Electronics Engineers) en_US
dc.rights Attribution-NonCommercial-NoDerivs 3.0 United States *
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/us/ *
dc.subject mlflow en_US
dc.subject metaflow en_US
dc.subject zenml en_US
dc.title Practical benchmark of open-source MLOps platforms: Comparing MLflow, Metaflow and ZenML across model type en_US
dc.type Article en_US



