Neuromorphic computing algorithms have attracted strong interest for their inference capabilities. These algorithms are compute intensive and require high-performance processing. This study examined the parallelization of several neuromorphic algorithms and their acceleration on a variety of highly parallel computing platforms. While the Bayesian algorithms examined exhibited thread-level parallelism, the neural algorithms exhibited both data- and thread-level parallelism. As a result, the Bayesian algorithms were mapped to chip multiprocessors, such as Xeon processors, while the neural algorithms were mapped to both chip multiprocessors and SIMD platforms, such as GPGPUs. Large compute clusters built from these processing architectures were also examined. The results indicate that these algorithms have a high degree of parallelism and are well suited to multicore architectures, as well as to large compute clusters of such multicore processors. In follow-on work, we are designing novel multicore neuromorphic computing architectures intended to be several orders of magnitude more efficient than current systems.
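The distinction between the two parallelism styles can be illustrated with a minimal sketch. The neuron update below (`update_one`, a simple leaky-accumulate rule) is a hypothetical stand-in, not one of the paper's actual kernels: the thread-pool version treats each independent neuron as a task (thread-level parallelism, as on a chip multiprocessor), while the vectorized version applies one array operation to all neurons at once (data-level parallelism, as on a SIMD platform).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-neuron update used only to illustrate the two
# parallelism styles; not the study's actual neuromorphic kernels.
def update_one(v, i, leak=0.9):
    # Leaky accumulation of an input current into a membrane potential.
    return leak * v + i

def update_threaded(v, inputs, workers=4):
    # Thread-level parallelism: each neuron is an independent task
    # scheduled across worker threads (chip-multiprocessor style).
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return np.fromiter(ex.map(update_one, v, inputs), dtype=float)

def update_vectorized(v, inputs, leak=0.9):
    # Data-level parallelism: a single array operation over all
    # neurons at once (SIMD / GPGPU style).
    return leak * v + inputs

v = np.zeros(8)       # initial membrane potentials
i = np.ones(8)        # input currents
a = update_threaded(v, i)
b = update_vectorized(v, i)
assert np.allclose(a, b)  # both mappings compute the same update
```

Either mapping computes the same result; the choice only affects which hardware resources do the work, which is the mapping decision the study explores.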