J 2021

Searching CUDA code autotuning spaces with hardware performance counters: data from benchmarks running on various GPU architectures

HOZZOVÁ, Jana; Jiří FILIPOVIČ; Amin NEZARAT; Jaroslav OĽHA; Filip PETROVIČ et al.

Basic information

Original name

Searching CUDA code autotuning spaces with hardware performance counters: data from benchmarks running on various GPU architectures

Authors

HOZZOVÁ, Jana (703 Slovakia, domestic); Jiří FILIPOVIČ (203 Czech Republic, guarantor, domestic); Amin NEZARAT (364 Iran, domestic); Jaroslav OĽHA (703 Slovakia, domestic) and Filip PETROVIČ (703 Slovakia, domestic)

Published in

Data in Brief, Elsevier, 2021, ISSN 2352-3409

Other information

Language

English

Type of result

Article in a journal

Field of study

10201 Computer sciences, information science, bioinformatics

Country of publisher

Kingdom of the Netherlands

Confidentiality

is not subject to a state or trade secret

Links

RIV code

RIV/00216224:14610/21:00123013

Organizational unit

Institute of Computer Science

UT WoS

000725561900057

EID Scopus

2-s2.0-85101952751

Keywords in English

Auto-tuning; Tuning spaces; Performance counters; CUDA

Tags

Flags

International impact, Reviewed
Changed: 2 Feb 2022, 14:05, doc. RNDr. Jiří Filipovič, Ph.D.

Abstract

In the original language

We have developed several autotuning benchmarks in CUDA that take into account performance-relevant source-code parameters and reach near-peak performance on various GPU architectures. We used them during the development and evaluation of a search method for tuning spaces proposed in [1]. With our framework Kernel Tuning Toolkit, freely available on GitHub, we measured computation times and hardware performance counters on several GPUs for the complete tuning spaces of five benchmarks. The data we provide here may benefit research on search algorithms for the tuning spaces of GPU codes, as well as research on the relation between applied code optimizations, hardware performance counters, and the performance of GPU kernels. Moreover, we describe in detail the scripts we used for a robust evaluation of our searcher and its comparison to others. In particular, the script that simulates tuning, i.e., replaces the time-consuming compilation and execution of the tuned kernels with a quick lookup of the computation time in our measured data, makes it possible to inspect the convergence of the tuning search over a large number of experiments. These scripts, freely available together with our other code, make it easier to experiment with search algorithms and to compare them in a robust and reproducible way. During our research, we also generated models that predict the values of performance counters from the values of the tuning parameters of our benchmarks. Here, we provide the models themselves and describe the scripts we implemented for their training. These data may benefit researchers who want to reproduce or build on our research.
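
To illustrate the simulated-tuning idea described in the abstract, the following is a minimal Python sketch: instead of compiling and running a kernel for each tried configuration, it looks up the measured computation time in a precomputed tuning-space dump. The file name (gemm_tuning_space.csv), the column names ("config", "time_ms"), and the random searcher are illustrative assumptions only; they do not reflect the actual format of the published data set or the scripts in the Kernel Tuning Toolkit repository.

import csv
import random

def load_tuning_space(path):
    """Read a precomputed tuning space: configuration -> measured kernel time (ms)."""
    space = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            space[row["config"]] = float(row["time_ms"])
    return space

def simulate_random_search(space, budget, seed=0):
    """Replace compile-and-run with a dictionary lookup of the measured time."""
    rng = random.Random(seed)
    configs = list(space)
    best_config, best_time = None, float("inf")
    convergence = []
    for _ in range(budget):
        cfg = rng.choice(configs)      # a real searcher would pick configurations more cleverly
        t = space[cfg]                 # instant "measurement" read from the dump
        if t < best_time:
            best_config, best_time = cfg, t
        convergence.append(best_time)  # best-so-far curve over the tuning budget
    return best_config, best_time, convergence

if __name__ == "__main__":
    space = load_tuning_space("gemm_tuning_space.csv")  # hypothetical file name
    best_cfg, best_t, curve = simulate_random_search(space, budget=200)
    print(f"best configuration: {best_cfg} ({best_t:.3f} ms)")

Because the "measurement" is only a dictionary lookup, a convergence curve over hundreds of repeated searches can be collected in seconds, which is the property the evaluation scripts exploit.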

Related projects

EF16_013/0001802, R&D project
Title: CERIT Scientific Cloud
LM2018140, R&D project
Title: e-Infrastruktura CZ (acronym: e-INFRA CZ)
Investor: Ministry of Education, Youth and Sports of the Czech Republic, e-Infrastruktura CZ