The standard message-passing library, MPI, is commonly used for parallel programming on heterogeneous networks of computers (NoCs). However, MPI does not address some key challenges posed by NoCs: in particular, it supports neither fault-tolerant parallel computing nor the optimal distribution of computations and communications across the heterogeneous processors and links of a NoC. While some preliminary research has been carried out to improve MPI, further effort is needed to produce a version of MPI well suited to heterogeneous NoCs. An ideal heterogeneous MPI would combine the features separately provided by the Nexus implementation of MPI (multiprotocol communication), FT-MPI (fault tolerance), and HMPI (optimal heterogeneous distribution of computations and communications).