A Tool to Display Array Access Patterns in OpenMP Programs
Abstract
A program analysis tool can play an important role in helping users understand and improve OpenMP codes, and array privatization is one of the most effective ways to improve the performance and scalability of OpenMP programs. In this paper we present an extension to the Open64 compiler and to Dragon, a program analysis tool built on top of this compiler, that enables them to collect and represent information on the manner in which threads access the elements of shared arrays at run time. This information can help the programmer restructure code to maximize data locality, reduce false sharing, identify program errors caused by unintended true sharing, or apply aggressive privatization.
Author information
Authors and Affiliations
- Computer Science Department, University of Houston, 4800 Calhoun Rd, Houston, TX 77204-3010, USA
Oscar R. Hernandez, Chunhua Liao & Barbara M. Chapman
Authors
- Oscar R. Hernandez
- Chunhua Liao
- Barbara M. Chapman
Editor information
Editors and Affiliations
- Computer Science Department, University of Tennessee, Knoxville, TN 37996-3450, USA
Jack Dongarra - Department of Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark
Kaj Madsen - Informatics & Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark
Jerzy Waśniewski
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Hernandez, O.R., Liao, C., Chapman, B.M. (2006). A Tool to Display Array Access Patterns in OpenMP Programs. In: Dongarra, J., Madsen, K., Waśniewski, J. (eds) Applied Parallel Computing. State of the Art in Scientific Computing. PARA 2004. Lecture Notes in Computer Science, vol 3732. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11558958_58
- DOI: https://doi.org/10.1007/11558958_58
- Publisher Name: Springer, Berlin, Heidelberg
- Print ISBN: 978-3-540-29067-4
- Online ISBN: 978-3-540-33498-9