Computación y Sistemas

On-line version ISSN 2007-9737; Print version ISSN 1405-5546

Comp. y Sist. vol.17 n.3 Ciudad de México Jul./Sep. 2013

 

Articles

 

A New LU Decomposition on Hybrid GPU-Accelerated Multicore Systems

 

Una nueva descomposición LU calculada en sistemas multi-core acelerados con GPU

 

Héctor Eduardo González and Juan Carmona

 

Information Technology Department, Instituto Nacional de Investigaciones Nucleares (ININ), AP 18-1027, 11801, D.F., Mexico. eduardo.gonzalez@inin.gob.mx, juan.carmona@inin.gob.mx

 

Article received on 15/02/2013; accepted on 10/08/2013.

 

Abstract

In this paper, we postulate a new decomposition theorem of a matrix A into two matrices, namely, a lower triangular matrix M in which all entries are determinants, and an upper triangular matrix U whose entries are also in determinant form. From a well-known theorem on the pivot elements of the Doolittle-Gauss elimination process, we deduce a corollary to obtain a diagonal matrix D. With it, we scale the elementary lower triangular matrix of the Doolittle-Gauss elimination process and deduce a new elementary lower triangular matrix. Applying this linear transformation to A by means of both minimum and complete pivoting strategies, we obtain the determinant of A as if it had been calculated by means of a Laplace expansion. If we apply this new linear transformation and the above pivoting strategy to an augmented matrix (A|b), we obtain the Cramer's-rule solution of the linear system of equations. These algorithms have O(n³) computational complexity when (A, b) ⊂ ℝⁿ is computed on hybrid GPU-accelerated multicore systems.

Keywords: New LU theorem, Cramer's rule, Gauss elimination, Laplace expansion, determinants, GPU, multicore systems.
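The derivation itself is only available in the full PDF, but the flavor of the result, triangular factors whose entries are determinants and a determinant obtained as a by-product of the elimination, can be illustrated with the classical fraction-free (Bareiss) elimination, in which every intermediate entry is a minor of A and the last pivot equals det(A). The short Python sketch below shows that related, standard technique on the CPU only; it is not the authors' algorithm, and the function name bareiss_det, the pivoting rule, and the test matrix are choices made here for illustration.

import numpy as np

def bareiss_det(A):
    # Fraction-free (Bareiss) elimination: every intermediate entry is a
    # minor (a determinant) of A, and the final pivot equals det(A).
    M = np.array(A, dtype=object)              # object dtype keeps integer arithmetic exact
    n = M.shape[0]
    sign = 1                                   # flips with every row swap
    prev_pivot = 1
    for k in range(n - 1):
        if M[k, k] == 0:                       # bring a nonzero pivot into position
            swap = next((i for i in range(k + 1, n) if M[i, k] != 0), None)
            if swap is None:
                return 0                       # whole column is zero: A is singular
            M[[k, swap]] = M[[swap, k]]
            sign = -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # 2x2 determinant update; division by the previous pivot is exact
                M[i, j] = (M[i, j] * M[k, k] - M[i, k] * M[k, j]) // prev_pivot
            M[i, k] = 0
        prev_pivot = M[k, k]
    return sign * M[n - 1, n - 1]

A = [[2, 1, 1],
     [4, -6, 0],
     [-2, 7, 2]]
print(bareiss_det(A))                          # exact integer determinant: -16
print(round(np.linalg.det(A)))                 # floating-point cross-check: -16

For integer input the result is exact; the doubly nested update loop is the part that a GPU implementation, such as the one the paper targets, would parallelize.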

 

Resumen

In this work, we postulate a new decomposition theorem of a matrix A into two matrices: a lower triangular matrix M, whose entries are all expressed in determinant form, and an upper triangular matrix U whose entries are also expressed in determinant form. From a well-known theorem on the pivot elements of the Doolittle-Gauss elimination process, we deduce a corollary to obtain a diagonal matrix D. Using this matrix, we scale the elementary lower triangular matrix obtained during the Doolittle-Gauss elimination process and deduce a new elementary lower triangular matrix. Applying this linear transformation to the matrix A by means of a complete pivoting strategy yields the determinant of A as if it had been calculated through the Laplace expansion. If we apply this new linear transformation and the aforementioned pivoting strategy to the augmented matrix (A|b), we obtain the Cramer's-rule solution of a system of linear equations. These algorithms have O(n³) computational complexity when (A, b) ⊂ ℝⁿ is computed on hybrid GPU-accelerated multicore systems.

Keywords: new LU theorem, Cramer's rule, Gaussian elimination, Laplace expansion, determinants, GPU, multicore systems.
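Under the same caveat, the Cramer-style solution obtained from an elimination of the augmented matrix (A|b) can be illustrated with fraction-free forward elimination followed by exact rational back substitution: the final pivot is det(A) up to sign, so every unknown comes out as a ratio of determinants. This is a sketch of a related standard technique, not the O(n³) GPU formulation proposed in the paper; the name fraction_free_solve, the use of Python's fractions module, and the test data are assumptions made for the example.

from fractions import Fraction
import numpy as np

def fraction_free_solve(A, b):
    # Build the augmented matrix (A|b) with exact Python integers,
    # then run fraction-free (Bareiss) forward elimination on it.
    # A is assumed square and nonsingular.
    M = np.array([list(row) + [rhs] for row, rhs in zip(A, b)], dtype=object)
    n = len(b)
    prev_pivot = 1
    for k in range(n - 1):
        if M[k, k] == 0:                       # pivot on a nonzero entry further down
            swap = next(i for i in range(k + 1, n) if M[i, k] != 0)
            M[[k, swap]] = M[[swap, k]]
        for i in range(k + 1, n):
            for j in range(k + 1, n + 1):      # the b column is updated as well
                M[i, j] = (M[i, j] * M[k, k] - M[i, k] * M[k, j]) // prev_pivot
            M[i, k] = 0
        prev_pivot = M[k, k]
    # Back substitution in exact rational arithmetic: M[n-1, n-1] is det(A)
    # up to sign, so each unknown is a ratio of determinants (Cramer style).
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = Fraction(int(M[i, n]))
        for j in range(i + 1, n):
            s -= Fraction(int(M[i, j])) * x[j]
        x[i] = s / Fraction(int(M[i, i]))
    return x

A = [[2, 1, 1],
     [4, -6, 0],
     [-2, 7, 2]]
b = [3, 1, 4]
print(fraction_free_solve(A, b))               # [Fraction(7, 16), Fraction(1, 8), Fraction(2, 1)]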

 


 


All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License.