
A Case Study: Exploiting Neural Machine Translation to Translate CUDA to OpenCL

Kim Yonghae, Kim Hyesoon. arXiv 2019

[Paper]    
Applications, Tools, Training Techniques

The sequence-to-sequence (seq2seq) model for neural machine translation has significantly improved the accuracy of natural language translation. There have been new efforts to use seq2seq models for programming language translation and program comparison. In this work, we present the detailed steps of using a seq2seq model to translate CUDA programs to OpenCL programs, two languages with very similar programming styles. Our work presents (i) a training input set generation method, (ii) pre/post-processing, and (iii) a case study using the Polybench-gpu-1.0, NVIDIA SDK, and Rodinia benchmarks.
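The premise that CUDA and OpenCL share very similar programming styles can be illustrated with a toy, rule-based token substitution. This is a hypothetical sketch, not the paper's seq2seq pipeline or its actual pre-processing; the `CUDA_TO_OPENCL` table and `naive_translate` helper are illustrative names, and the mappings shown are standard correspondences between the two APIs:

```python
# Hypothetical sketch: many common CUDA builtins have direct OpenCL
# counterparts, which is one reason token-level translation between
# the two languages is plausible. Real translation must also handle
# address-space qualifiers on pointer arguments, kernel launch syntax,
# host-side API calls, etc.

CUDA_TO_OPENCL = {
    "__global__": "__kernel",                       # kernel qualifier
    "__shared__": "__local",                        # on-chip scratchpad memory
    "__syncthreads()": "barrier(CLK_LOCAL_MEM_FENCE)",
    "threadIdx.x": "get_local_id(0)",
    "blockIdx.x": "get_group_id(0)",
    "blockDim.x": "get_local_size(0)",
}

def naive_translate(cuda_src: str) -> str:
    """Apply direct keyword substitutions to a CUDA source string."""
    out = cuda_src
    for cuda_tok, ocl_tok in CUDA_TO_OPENCL.items():
        out = out.replace(cuda_tok, ocl_tok)
    return out

kernel = ("__global__ void add(float* a) { "
          "int i = blockIdx.x * blockDim.x + threadIdx.x; a[i] += 1.0f; }")
print(naive_translate(kernel))
```

A learned seq2seq model goes beyond such fixed substitutions, but the close one-to-one correspondence of these builtins is what makes the language pair a natural fit for neural translation.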
