Accession Number:

AD1003134

Title:

FRPA: A Framework for Recursive Parallel Algorithms

Corporate Author:

University of California at Berkeley, Berkeley, United States

Report Date:

2015-05-01

Abstract:

Recursion continues to play an important role in high-performance computing. However, parallelizing recursive algorithms while achieving high performance is nontrivial and can result in complex, hard-to-maintain code. In particular, assigning processors to subproblems is complicated by recent observations that communication costs often dominate computation costs. Previous work [1, 3] demonstrates that carefully choosing which divide-and-conquer steps to execute in parallel (breadth-first steps) and which to execute sequentially (depth-first steps) can result in significant performance gains over naive scheduling. Our Framework for Recursive Parallel Algorithms (FRPA) allows for the separation of an algorithm's implementation from its parallelization. The programmer must simply define how to split a problem, solve the base case, and merge solved subproblems; FRPA handles parallelizing the code and tuning the recursive parallelization strategy, enabling algorithms to achieve high performance. To demonstrate FRPA's performance capabilities, we present a detailed analysis of two algorithms: Strassen-Winograd [1] and Communication-Optimal Parallel Recursive Rectangular Matrix Multiplication (CARMA) [3]. Our single-precision CARMA implementation is fewer than 80 lines of code and achieves a speedup of up to 11x over Intel's Math Kernel Library (MKL) [4] matrix multiplication routine on skinny matrices. Our double-precision Strassen-Winograd implementation, at just 150 lines of code, is up to 45% faster than MKL for large square matrix multiplications. To show FRPA's generality and simplicity, we implement six additional algorithms: mergesort, quicksort, TRSM, SYRK, Cholesky decomposition, and Delaunay triangulation [5]. FRPA is implemented in C++, runs in shared-memory environments, uses Intel's Cilk Plus [6] for task-based parallelism, and leverages OpenTuner [7] to tune the parallelization strategy.
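
To illustrate the programming model the abstract describes, the following is a minimal C++ sketch of the split / base-case / merge pattern and of choosing, per recursion level, between a parallel breadth-first step and a sequential depth-first step. The Problem-style class, its method names, and the interleaving string are illustrative assumptions for this sketch, not FRPA's actual API.

#include <cilk/cilk.h>
#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical problem type in the spirit of the split / base-case / merge
// hooks described in the abstract (names are assumptions, not FRPA's API).
struct SortProblem {
    std::vector<int> data;

    bool canRunBaseCase() const { return data.size() <= 64; }
    void runBaseCase() { std::sort(data.begin(), data.end()); }

    // Split the problem into two independent subproblems (the two halves).
    std::vector<SortProblem> split() const {
        size_t mid = data.size() / 2;
        return { SortProblem{ {data.begin(), data.begin() + mid} },
                 SortProblem{ {data.begin() + mid, data.end()} } };
    }

    // Merge the solved subproblems back into this problem.
    void merge(const std::vector<SortProblem>& subs) {
        data.clear();
        std::merge(subs[0].data.begin(), subs[0].data.end(),
                   subs[1].data.begin(), subs[1].data.end(),
                   std::back_inserter(data));
    }
};

// Recursive solver: the interleaving string selects, per level, a parallel
// breadth-first step ('B', subproblems spawned with Cilk Plus) or a
// sequential depth-first step ('D').
void solve(SortProblem& p, const std::string& interleaving, size_t depth = 0) {
    if (p.canRunBaseCase()) { p.runBaseCase(); return; }
    std::vector<SortProblem> subs = p.split();
    bool parallel = depth < interleaving.size() && interleaving[depth] == 'B';
    for (size_t i = 0; i < subs.size(); ++i) {
        if (parallel && i + 1 < subs.size())
            cilk_spawn solve(subs[i], interleaving, depth + 1);  // breadth-first
        else
            solve(subs[i], interleaving, depth + 1);             // depth-first
    }
    cilk_sync;  // wait for spawned subproblems before merging
    p.merge(subs);
}

A caller would construct a SortProblem and invoke, for example, solve(problem, "BBD"), running the first two recursion levels in parallel and the rest sequentially; in FRPA this interleaving is the parallelization strategy that the framework tunes automatically.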

Descriptive Note:

Technical Report

Pages:

0020

Communities Of Interest:

Distribution Statement:

Approved For Public Release

File Size:

0.91 MB