Languages and Compilers for Parallel Computing. Asenjo, R.; Plata, O.; Zapata, E.; Touriño, J.; Doallo, R.; Chatterjee, S.; Prins, J. F.; Carter, L.; Ferrante, J.; Li, Z.; Sehr, D.; and Yew, P. C. Volume 1656 of Lecture Notes in Computer Science, pages 230–246, May 1999.
@incollection{Asenjo1999a,
  author    = {R. Asenjo and O. Plata and E. Zapata and J. Touri{\~n}o and R. Doallo},
  editor    = {Siddhartha Chatterjee and Jan F. Prins and Larry Carter and Jeanne Ferrante and Zhiyuan Li and David Sehr and Pen-Chung Yew},
  title     = {Languages and Compilers for Parallel Computing},
  booktitle = {Languages and Compilers for Parallel Computing},
  series    = {Lecture Notes in Computer Science},
  volume    = {1656},
  pages     = {230--246},
  month     = {May},
  year      = {1999},
  isbn      = {978-3-540-66426-0},
  url       = {http://www.springerlink.com/content/dd8uk1lp4pvmclta},
  abstract  = {There is a class of sparse matrix computations, such as direct solvers of systems of linear equations, that change the fill-in (nonzero entries) of the coefficient matrix and involve row and column operations (pivoting). This paper addresses the parallelization of these sparse computations from the point of view of the parallel language and the compiler. Dynamic data structures for sparse matrix storage are analyzed, making it possible to deal efficiently with fill-in and pivoting. All of the data representations considered require handling indirection in data accesses, pointer referencing, and dynamic data creation. All of these elements go beyond current data-parallel compilation technology. We propose a small set of new extensions to HPF-2 to parallelize these codes, supporting part of the new capabilities through a runtime library. This approach has been evaluated on a Cray T3E, implementing, in particular, the sparse LU factorization.}
}
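The abstract's central point is that sparse LU needs dynamic storage: elimination creates new nonzeros (fill-in), and partial pivoting reorders rows, so a static compressed format is awkward. As a rough illustration only (this is not the paper's HPF-2 extensions or its Cray T3E runtime library; all names here are invented for the sketch), a minimal row-wise dynamic sparse structure that absorbs fill-in during LU with partial pivoting might look like this:

```python
# Illustrative sketch: a per-row dynamic sparse structure ("list of
# lists" style) that tolerates fill-in and row pivoting during LU.

class SparseRow:
    """One matrix row as a sorted list of (col, value) nonzeros.
    Inserting at a previously-zero column models fill-in."""
    def __init__(self):
        self.entries = []  # sorted by column index

    def get(self, col):
        for c, v in self.entries:
            if c == col:
                return v
        return 0.0

    def set(self, col, value):
        for i, (c, _) in enumerate(self.entries):
            if c == col:
                if value == 0.0:
                    del self.entries[i]          # entry cancelled out
                else:
                    self.entries[i] = (col, value)
                return
            if c > col:
                if value != 0.0:
                    self.entries.insert(i, (col, value))  # fill-in
                return
        if value != 0.0:
            self.entries.append((col, value))

def lu_in_place(rows, n):
    """Right-looking in-place LU with partial (row) pivoting on the
    dynamic structure. L's multipliers overwrite the lower triangle;
    returns the row permutation. Fill-in appears as new entries."""
    perm = list(range(n))
    for k in range(n):
        # partial pivoting: bring the largest |a[i][k]| to row k
        p = max(range(k, n), key=lambda i: abs(rows[i].get(k)))
        rows[k], rows[p] = rows[p], rows[k]
        perm[k], perm[p] = perm[p], perm[k]
        pivot = rows[k].get(k)
        for i in range(k + 1, n):
            m = rows[i].get(k) / pivot
            if m == 0.0:
                continue
            rows[i].set(k, m)  # store the L multiplier
            for c, v in rows[k].entries:
                if c > k:      # update trailing submatrix; may fill in
                    rows[i].set(c, rows[i].get(c) - m * v)
    return perm
```

Unlike a packed CSR array, each row can grow independently, so fill-in and row swaps cost only local list operations; this is the flavor of trade-off the paper's analysis of dynamic representations (with their attendant indirection and pointer handling) is about.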