
Implementing a geodistributed data store with leaderless replication


dc.contributor.author TIORA, Irina
dc.date.accessioned 2026-01-14T08:09:09Z
dc.date.available 2026-01-14T08:09:09Z
dc.date.issued 2026
dc.identifier.citation TIORA, Irina. Implementing a geodistributed data store with leaderless replication. In: Conferinţa Tehnico-Ştiinţifică a Colaboratorilor, Doctoranzilor şi Studenţilor = The Technical Scientific Conference of Undergraduate, Master and PhD Students, 14-16 Mai 2025. Universitatea Tehnică a Moldovei. Chişinău: Tehnica-UTM, 2026, vol. 1, pp. 576-581. ISBN 978-9975-64-612-3, ISBN 978-9975-64-613-0 (PDF). en_US
dc.identifier.isbn 978-9975-64-612-3
dc.identifier.isbn 978-9975-64-613-0
dc.identifier.uri https://repository.utm.md/handle/5014/34344
dc.description.abstract Most distributed data storage systems use leader-based replication. In a geo-distributed environment, however, this architecture introduces significant latency, because all writes must be directed to a single leader node that may be located far away. The aim of this study was to develop a distributed key-value data store with leaderless replication, which allows write operations to be served by the geographically nearest node, reducing the latency caused by physical distance. The system was implemented in the Go programming language and employs the leaderless consensus algorithm Caesar to ensure data consistency. The resulting system can be configured to use either HTTP or gRPC as the communication protocol, and either a B-tree or an LSM-tree (log-structured merge-tree) as the storage index. Benchmarking was performed on a single local machine to assess the performance of the implemented system. The system achieved a peak throughput of 14,000 read requests per second across three nodes. Under peak load, the maximum observed read latency was 10 ms, and 95% of read requests completed in under 1.58 ms, as monitored via Grafana dashboards. Write operations showed higher latency, with 95% of write requests completing within 150 ms at a sustained load of 120 writes per second. The benchmark confirmed the system's capacity for high-throughput, low-latency operation, although the results are constrained by the non-distributed testing setup. (An illustrative configuration sketch follows this record.) en_US
dc.language.iso en en_US
dc.publisher Universitatea Tehnică a Moldovei en_US
dc.relation.ispartofseries Conferinţa tehnico-ştiinţifică a studenţilor, masteranzilor şi doctoranzilor = The Technical Scientific Conference of Undergraduate, Master and PhD Students: 14-16 mai 2025;
dc.rights Attribution-NonCommercial-NoDerivs 3.0 United States en_US
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/us/ en_US
dc.subject caesar consensus algorithm en_US
dc.subject distributed data storage en_US
dc.subject leaderless replication en_US
dc.subject geo-distributed database en_US
dc.subject geo-replication en_US
dc.title Implementing a geodistributed data store with leaderless replication en_US
dc.type Article en_US
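
The abstract above describes two configuration axes for each replica: HTTP vs. gRPC as the inter-node communication protocol, and B-tree vs. LSM-tree as the storage index. The following is a minimal Go sketch of what such a node configuration might look like; all identifiers (NodeConfig, Transport, IndexKind, and the example addresses) are hypothetical illustrations, not the paper's actual API.

package main

import "fmt"

// Transport and IndexKind mirror the two configuration axes described in
// the abstract: HTTP vs. gRPC for inter-node communication, and B-tree
// vs. LSM-tree for the storage index. Names here are illustrative only.
type Transport string
type IndexKind string

const (
	TransportHTTP Transport = "http"
	TransportGRPC Transport = "grpc"

	IndexBTree IndexKind = "btree"
	IndexLSM   IndexKind = "lsm"
)

// NodeConfig is a hypothetical configuration for one replica in the
// geo-distributed cluster. With leaderless replication every node accepts
// writes, so clients can contact the geographically nearest peer.
type NodeConfig struct {
	ID        string
	Addr      string    // listen address of this node
	Peers     []string  // addresses of the other replicas
	Transport Transport // "http" or "grpc"
	Index     IndexKind // "btree" or "lsm"
}

func main() {
	cfg := NodeConfig{
		ID:        "eu-west-1",
		Addr:      "10.0.1.5:7000",
		Peers:     []string{"10.0.2.5:7000", "10.0.3.5:7000"},
		Transport: TransportGRPC,
		Index:     IndexLSM,
	}
	fmt.Printf("starting node %s (%s transport, %s index), peers: %v\n",
		cfg.ID, cfg.Transport, cfg.Index, cfg.Peers)
}

Keeping both axes as plain configuration values, as sketched here, would let the same binary be benchmarked in all four protocol/index combinations, which matches the comparative setup implied by the abstract.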

