D 2017

Provenance-aware optimization of workload for distributed data production

MAKATUN, Dzmitry, Jerome LAURET, Hana RUDOVÁ and Michal ŠUMBERA

Basic information

Original name

Provenance-aware optimization of workload for distributed data production

Authors

MAKATUN, Dzmitry (112 Belarus), Jerome LAURET (840 United States of America), Hana RUDOVÁ (203 Czech Republic, guarantor, belonging to the institution) and Michal ŠUMBERA (203 Czech Republic)

Edition

United Kingdom, Journal of Physics: Conference Series, vol. 898, pp. 1-8, 8 pp., 2017

Publisher

Institute of Physics Publishing

Other information

Language

English

Type of outcome

Article in proceedings

Field of Study

10201 Computer sciences, information science, bioinformatics

Country of publisher

United Kingdom of Great Britain and Northern Ireland

Confidentiality degree

Is not subject to a state or trade secret

Publication form

Printed version ("print")

RIV identification code

RIV/00216224:14330/17:00098484

Organization unit

Faculty of Informatics

ISSN

Keywords in English

data transfer planning; distributed data processing; Grid; network flows; data production

Tags

International impact, Reviewed
Changed: 27/8/2019 12:19, RNDr. Pavel Šmerk, Ph.D.

Abstract

In the original

Distributed data processing in High Energy and Nuclear Physics (HENP) is a prominent example of big data analysis. With petabytes of data processed at tens of computational sites with thousands of CPUs, standard job scheduling approaches either do not address the problem's complexity well or are dedicated to only one specific aspect of the problem (CPU, network or storage). Previously we developed a new job scheduling approach dedicated to distributed data production, an essential part of data processing in HENP (pre-processing in big data terminology). In this contribution, we discuss load balancing with multiple data sources and data replication, present recent improvements made to our planner, and provide results of simulations which demonstrate the advantage over standard scheduling policies for the new use case. Multiple data sources (provenance) are common in the computing models of many applications, where the data may be copied to several destinations. The initial input data set would hence already be partially replicated to multiple locations, and the task of the scheduler is to maximize overall computational throughput considering possible data movements and CPU allocation. The studies have shown that our approach can provide a significant gain in overall computational performance across a wide range of simulations considering a realistic size of the computational Grid and various input data distributions.
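To illustrate the idea behind the keywords "network flows" and "data transfer planning", the following is a minimal sketch (not the authors' planner) of how distributed data production with partially replicated input can be cast as a maximum-flow problem: data replicas feed a source, CPU slots drain into a sink, and inter-site links carry bounded transfers. The site names, capacities and bandwidth figures are hypothetical, and the networkx maximum-flow solver stands in for the paper's dedicated planning algorithm.

```python
# Sketch: distributed data production as a max-flow problem (hypothetical numbers).
import networkx as nx

G = nx.DiGraph()

# Hypothetical input-data replicas: units of data already present at each site.
replicas = {"site_A": 400, "site_B": 250}
# Hypothetical CPU capacities: units of data each site can process per interval.
cpu = {"site_A": 300, "site_B": 100, "site_C": 500}
# Hypothetical network links with bandwidth-derived capacities.
links = [("site_A", "site_C", 200), ("site_B", "site_C", 150),
         ("site_A", "site_B", 50)]

# Source edges: data available at each site.
for site, amount in replicas.items():
    G.add_edge("source", site, capacity=amount)
# Sink edges: CPU slots that consume data.
for site, slots in cpu.items():
    G.add_edge(site, "sink", capacity=slots)
# Transfer edges: possible data movements between sites.
for u, v, bw in links:
    G.add_edge(u, v, capacity=bw)

# The maximum flow bounds the data processed in one planning interval;
# the per-edge flows say how much to process locally vs. transfer elsewhere.
throughput, plan = nx.maximum_flow(G, "source", "sink")
print("planned throughput:", throughput)
for u, nbrs in plan.items():
    for v, flow in nbrs.items():
        if flow > 0:
            print(f"{u} -> {v}: {flow}")
```

In this toy instance all 650 units of input can be processed: site_A and site_B work at their CPU limits and ship their surplus data to the idle CPUs at site_C, which is the kind of load balancing with multiple data sources that the abstract describes.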