At least elementary knowledge of the programming languages FORTRAN, C, and possibly C++ is expected.
Course Enrollment Limitations
The course is also offered to students of fields other than those with which it is directly associated.
Fields of study the course is directly associated with
There are 18 fields of study the course is directly associated with.
The main goal of this lecture is to provide information about supercomputing architectures and basic programming methods for vector and parallel computers. The first part focuses on the hardware; the second part discusses general optimization methods and programming methodology for parallel computers. The last part of the lecture is aimed to
Graduate will be able to understand and explain the properties of modern processors.
Graduate will also be able to analyze program code and propose optimizations for a particular processor.
Graduate will be able to design and implement a simple parallel program to solve a particular problem.
Graduate will be able to design and realize benchmarks of computer systems or applications.
Graduate will be able to design a specially optimized system for a specific application.
High performance vector and superscalar processors.
Uniprocessor computers, computers with a small number of processors,
massively parallel computers; distributed systems.
Performance measurements, LINPACK test, TOP 500 list.
High performance uniprocessor systems, programming languages, methodology
of efficient program writing, basic optimization methods for vector and
superscalar processors.
Distributed systems, data and task decomposition, coarse grain
parallelism, programming systems (PVM, LINDA, ...).
Multiprocessor systems with shared memory, programming languages,
decomposition of algorithms, basic optimization methods for a small number
of processors.
Massively parallel systems, parallel algorithms, fine grain parallelism.
Shared, distributed, and distributed shared memory; other alternatives.
Scalability of computers and tasks.
WOLFE, Michael Joseph. High performance compilers for parallel computing. Redwood City: Addison-Wesley Publishing Company, 1996. xiii, 570 p. ISBN 0-8053-2730-4.
PROTIC, Jelica, Milo TOMASEVIC and Veljko MILUTINOVIC. Distributed shared memory. Los Alamitos: IEEE Computer Society, 1998. x, 365 p. ISBN 0-8186-7737-6.
FOSDICK, Lloyd D. An introduction to high-performance scientific computing. Cambridge: MIT Press, 1996. ix, 760 p. ISBN 0-262-06181-3.
DOWD, Kevin. High performance computing. Sebastopol: O'Reilly & Associates, 1993. xxv, 371 p. ISBN 1-56592-032-5.
WILSON, Gregory V. Practical parallel programming. Cambridge: MIT Press, 1995. viii, 564 p. ISBN 0-262-23186-7.
Standard lecture; no drills or homework.
No continuous evaluation during the semester; only a final exam in written form (11 questions/topics to be explicitly answered or discussed, 110 points in total).