Accession Number : AD1018371

Title :   OpenMP Parallelization and Optimization of Graph-based Machine Learning Algorithms

Descriptive Note : Technical Report

Corporate Author : University of California, Los Angeles, Los Angeles, United States

Personal Author(s) : Meng, Zhaoyi ; Koniges, Alice ; He, Yun ; Williams, Samuel ; Kurth, Thorsten ; Cook, Brandon ; Deslippe, Jack ; Bertozzi, Andrea L.

Report Date : 01 May 2016

Pagination or Media Count : 12

Abstract : We investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nyström extension to calculate eigenvalues and eigenvectors of the graph Laplacian; this is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods such as spectral clustering. We use performance tools to identify the hotspots and memory-access patterns of the serial codes, and use OpenMP directives to parallelize the most time-consuming parts. Where possible, we also use library routines. We then optimize the OpenMP implementations and detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and predict behavior on emerging testbed systems based on Intel's Knights Corner and Knights Landing processors. We show both performance improvement and strong scaling behavior. A large number of optimization techniques and analyses are necessary before the algorithm reaches near-ideal scaling.

Descriptors : classification ; learning machines ; eigenvectors ; algorithms ; optimization ; unsupervised machine learning ; digital data ; graphs

Distribution Statement : APPROVED FOR PUBLIC RELEASE