D 2017

Provenance-aware optimization of workload for distributed data production

MAKATUN, Dzmitry, Jerome LAURET, Hana RUDOVÁ and Michal ŠUMBERA

Basic information

Original name

Provenance-aware optimization of workload for distributed data production

Authors

MAKATUN, Dzmitry (112 Belarus), Jerome LAURET (840 United States), Hana RUDOVÁ (203 Czech Republic, guarantor, home institution) and Michal ŠUMBERA (203 Czech Republic)

Published

United Kingdom, Journal of Physics: Conference Series, vol. 898, pp. 1-8, 8 pp., 2017

Publisher

Institute of Physics Publishing

Other information

Language

English

Result type

Proceedings paper

Field

10201 Computer sciences, information science, bioinformatics

Publisher's country

United Kingdom of Great Britain and Northern Ireland

Confidentiality

is not subject to a state or trade secret

Form of publication

printed version "print"

RIV code

RIV/00216224:14330/17:00098484

Organizational unit

Faculty of Informatics

ISSN

1742-6588

Keywords in English

data transfer planning; distributed data processing; Grid; network flows; data production

Tags

International impact, Reviewed
Changed: 27 Aug 2019, 12:19, RNDr. Pavel Šmerk, Ph.D.

Abstract

In the original

Distributed data processing in High Energy and Nuclear Physics (HENP) is a prominent example of big data analysis. With petabytes of data processed at tens of computational sites with thousands of CPUs, standard job scheduling approaches either do not address the problem complexity well or are dedicated to only one specific aspect of the problem (CPU, network, or storage). Previously, we developed a new job scheduling approach dedicated to distributed data production, an essential part of data processing in HENP (pre-processing in big data terminology). In this contribution, we discuss load balancing with multiple data sources and data replication, present recent improvements made to our planner, and provide results of simulations which demonstrate its advantage over standard scheduling policies for the new use case. Multi-source provenance is common in the computing models of many applications, where data may be copied to several destinations. The initial input data set is hence already partially replicated to multiple locations, and the task of the scheduler is to maximize the overall computational throughput while considering possible data movements and CPU allocation. The studies have shown that our approach can provide a significant gain in overall computational performance in a wide scope of simulations with a realistic size of the computational Grid and various input data distributions.
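
The network-flow view hinted at by the keywords can be pictured with a small sketch. The following Python example (using the networkx library) is a minimal, hypothetical illustration of such a formulation, not the authors' actual planner: all site names and capacities are made up for the example. Sites holding input replicas are fed from a common source, bandwidth-limited links connect the sites, and each site's CPU throughput drains processed data to a sink; the maximum flow then bounds the achievable processing rate and yields a transfer plan.

import networkx as nx

# Hypothetical sketch: sites A and B already hold replicas of the
# input data set (the "partially replicated" initial data set from
# the abstract). All capacities are in files per hour, invented for
# illustration only.
G = nx.DiGraph()

# Input replicas available at each site.
G.add_edge("source", "A", capacity=500)
G.add_edge("source", "B", capacity=300)

# Bandwidth-limited network links between sites.
G.add_edge("A", "B", capacity=100)
G.add_edge("A", "C", capacity=200)
G.add_edge("B", "C", capacity=150)

# CPU throughput of each site drains processed data to the sink.
G.add_edge("A", "sink", capacity=250)
G.add_edge("B", "sink", capacity=200)
G.add_edge("C", "sink", capacity=300)

flow_value, flow_plan = nx.maximum_flow(G, "source", "sink")
print("Achievable throughput:", flow_value, "files/hour")
# flow_plan[u][v] gives the data rate to move (or process) on each
# edge, i.e. a load-balanced transfer and CPU-allocation plan.

In this toy instance the maximum flow saturates all three sites' CPUs by routing part of the data held at A to the otherwise idle site C, which is the kind of load balancing across data movements and CPU allocation that the planner performs at Grid scale.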