Based on an analysis of the oil/gas transmission network, shortest-path matrix algorithms from graph theory are used to select locations for petroleum gathering/transmission stations, minimizing both the overall transmission distance of the petroleum products and the logistics cost of the 'downstream' operations of the petroleum industry.
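
As a hypothetical sketch of this kind of siting computation (the graph, distances, and demand values below are invented for illustration), the Floyd–Warshall matrix algorithm yields all-pairs shortest distances, from which the station minimizing total transmission distance can be selected:

```python
# Sketch: pick the gathering/transmission station minimizing total distance,
# using the Floyd-Warshall all-pairs shortest-path matrix algorithm.
INF = float("inf")

def floyd_warshall(w):
    """All-pairs shortest paths; w is an n x n weight matrix (INF = no edge)."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def best_station(w, demand):
    """Pick the node minimizing total demand-weighted distance (a 1-median)."""
    d = floyd_warshall(w)
    n = len(w)
    costs = [sum(q * d[j][i] for j, q in enumerate(demand)) for i in range(n)]
    return min(range(n), key=costs.__getitem__), min(costs)
```

With all demands equal, the chosen node is simply the one with the smallest row sum in the shortest-distance matrix.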

Matrix algorithms for network maximum flow and minimum cut sets not only solve these problems but also allow the computer to solve related problems such as minimum cost, distribution, and transportation.
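
A minimal sketch of a max-flow/min-cut computation operating directly on an n × n capacity matrix (the capacities below are invented): the Edmonds–Karp variant of Ford–Fulkerson returns both the maximum flow value and the source side of a minimum cut.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity matrix; returns (flow value, min-cut source side)."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual network
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break  # no augmenting path: flow is maximal
        # find the bottleneck capacity along the path, then augment
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    # nodes still reachable in the residual network form the min-cut source side
    cut = {v for v in range(n) if parent[v] != -1}
    return total, cut
```

By the max-flow/min-cut theorem, the returned flow value equals the capacity of the cut separating `cut` from the rest of the graph.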

The concept of an equivalence matrix, which expresses an equivalence relation, is introduced, and the relations between equivalence matrices and equivalence classifications are discussed. An algorithm for data cleaning and rule extraction in knowledge systems based on matrix computation is proposed, and its computational complexity is analyzed.
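
As an illustrative sketch (the functions here are invented, not the paper's algorithm), an equivalence relation stored as a 0/1 matrix can be verified and its equivalence classes read off directly from the rows:

```python
def is_equivalence(m):
    """Check that a 0/1 relation matrix is reflexive, symmetric and transitive."""
    n = len(m)
    reflexive = all(m[i][i] for i in range(n))
    symmetric = all(m[i][j] == m[j][i] for i in range(n) for j in range(n))
    # transitive: whenever m[i][k] and m[k][j] hold, m[i][j] must hold too
    transitive = all(
        not (m[i][k] and m[k][j]) or m[i][j]
        for i in range(n) for j in range(n) for k in range(n)
    )
    return reflexive and symmetric and transitive

def classes(m):
    """Equivalence classes: row i of the matrix lists the class of element i."""
    seen, out = set(), []
    for row in m:
        cls = frozenset(j for j, v in enumerate(row) if v)
        if cls not in seen:
            seen.add(cls)
            out.append(sorted(cls))
    return out
```

This is the link between equivalence matrices and classifications: distinct rows of a valid equivalence matrix are exactly the characteristic vectors of the classes.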

Based on a Rough Attribute Vector Tree (RAVT), two kinds of fast matrix computation algorithms, the Recursive Matrix Computation (RMC) method and the Parallel Matrix Computation (PMC) method, are proposed so that data cleaning and rule extraction can be completed synchronously in a rough information system.

A matrix algorithm for computing the stationary state probabilities of the system at arbitrary instants and at instants of arrival and completion of service of primary customers is obtained.
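
The paper's algorithm targets a specific queueing model; purely as a generic illustration (the generator values below are invented), stationary state probabilities of a small continuous-time Markov chain can be computed from its generator matrix by replacing one balance equation with the normalization constraint and solving the resulting linear system:

```python
def stationary(Q):
    """Stationary distribution of a CTMC generator Q: solve pi Q = 0, sum(pi) = 1,
    by replacing the last balance equation with the normalization constraint."""
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # transpose: Q^T pi = 0
    A[-1] = [1.0] * n                                     # normalization row
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x
```

For a two-state chain with rates 1 (up) and 2 (down), this recovers the familiar probabilities 2/3 and 1/3.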

The matrix-algorithm expansions of the vectors of these forms are calculated.

The governing equations were discretized in tridiagonal matrix form and solved using the tridiagonal matrix algorithm (TDMA) as well as an alternating direction implicit (ADI) solver.
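
The TDMA (Thomas algorithm) itself is standard; a minimal sketch, which solves a tridiagonal system in O(n) operations rather than the O(n³) of general elimination:

```python
def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Returns the solution vector x."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # forward sweep: eliminate the sub-diagonal
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In an ADI solver, each half-step produces exactly such a tridiagonal system along one coordinate direction, so this routine is called once per grid line.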

A matrix algorithm for computing the free space distance of TCM signal sequences

The matrix algorithm is derived from the Viterbi algorithm and is an implementation of the Viterbi algorithm in matrix form.
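
One way to see the Viterbi recursion as matrix arithmetic (a sketch, not necessarily the paper's exact formulation) is to work in the min-plus semiring: propagating accumulated path metrics through one trellis stage is then a matrix "multiplication" with + in place of × and min in place of +. The trellis below is invented for illustration.

```python
INF = float("inf")

def minplus_mul(A, B):
    """Matrix 'product' in the min-plus semiring:
    (A (*) B)[i][j] = min_k (A[i][k] + B[k][j])."""
    m, p = len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(len(A))]

def viterbi_metric(branch_metrics, start_state=0):
    """Minimum accumulated path metric through a trellis, given one
    branch-metric matrix per received symbol (INF = no branch)."""
    n = len(branch_metrics[0])
    metrics = [[0.0 if j == start_state else INF for j in range(n)]]
    for M in branch_metrics:
        metrics = minplus_mul(metrics, M)  # one Viterbi add-compare-select stage
    return min(metrics[0])
```

Each min-plus product performs exactly the add-compare-select step of the Viterbi algorithm for all states at once.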

We use the Euler, Jacobi, Poincaré, and Brun matrix algorithms, as well as two new algorithms, to evaluate the continued fraction expansions of two vectors L related to two Davenport cubic forms g1 and g2.

Their periods and fundamental domains are found and the expansions of the multiple root vectors of these forms by means of the matrix algorithms due to Euler, Jacobi, Poincaré, Brun, Parusnikov, and Bryuno, are computed.

A recursive approach to cell matrix algorithms is considered and applied to the QR factorization by the rotation method.
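
The paper's cell-matrix recursion is not reproduced here; for reference, a plain sketch of QR factorization by the rotation (Givens) method, zeroing sub-diagonal entries one rotation at a time:

```python
import math

def givens_qr(A):
    """QR factorization by Givens rotations.
    Returns (Q, R) with Q orthogonal, R upper triangular, A = Q R."""
    n = len(A)
    R = [row[:] for row in A]
    Q = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(n - 1, j, -1):
            # rotate rows i-1 and i to zero out R[i][j]
            a, b = R[i - 1][j], R[i][j]
            r = math.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            for k in range(n):
                R[i - 1][k], R[i][k] = c * R[i - 1][k] + s * R[i][k], \
                                       -s * R[i - 1][k] + c * R[i][k]
                Q[i - 1][k], Q[i][k] = c * Q[i - 1][k] + s * Q[i][k], \
                                       -s * Q[i - 1][k] + c * Q[i][k]
    # Q now holds the product of all rotations, i.e. Q^T; transpose it
    return [[Q[j][i] for j in range(n)] for i in range(n)], R
```

Recursive cell-matrix variants apply the same rotations blockwise, which improves data locality on large matrices.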

Parametric and matrix algorithms for calculating heterogeneous states in systems with an incongruently melting binary compound

Models are designed in a geometry language which supports vector and matrix arithmetic, transformations and instancing of primitive parts.

Hierarchical matrices provide a technique for the data-sparse approximation and matrix arithmetic of large, fully populated matrices.
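
The core idea, storing admissible off-diagonal blocks in low-rank factored form so that arithmetic touches far fewer entries, can be sketched with a single rank-1 block (a drastic simplification: real hierarchical matrices use block cluster trees and adaptive ranks):

```python
def lowrank_matvec(u, v, x):
    """Mat-vec with a rank-1 block stored as u v^T, in O(n) instead of O(n^2):
    (u v^T) x = u * (v . x)."""
    s = sum(vi * xi for vi, xi in zip(v, x))  # inner product v . x first
    return [ui * s for ui in u]
```

Storing the block as the pair (u, v) needs 2n numbers instead of n², which is precisely the data-sparsity that hierarchical-matrix arithmetic exploits.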

This is achieved by performing, in advance of each step and in parallel with the integration of earlier steps, as much as possible of the matrix arithmetic associated with the solution of the linear equations at that step.

The PE, which can be configured as a multiplier-accumulator or an inner product step processor, supports several of the most common systolic algorithms in signal processing and matrix arithmetic.

A Sparse Matrix Arithmetic Based on ℋ-Matrices.

By means of the construction theory of self-adjoint operators and matrix computation, we obtain a sufficient and necessary condition to ensure that the product operator is self-adjoint, which extends the results in the second order case.

We present a mathematically rigorous and, at the same time, convenient method for systolic design and derive systolic designs for three matrix computation problems.

Different implementations for this method, both dense and sparse, have been developed, using a number of linear algebra software libraries (including sparse linear equation solvers) and optimized sparse matrix computation strategies.

Covariance matrix computation of the state variable of a stationary Gaussian process

Communication complexity of matrix computation over finite fields

Let A be any n×n matrix and J its Jordan canonical form. A nonsingular matrix T satisfying T^{-1}AT = J is called a transformation matrix. In this paper, an algorithm is developed for obtaining the Jordan canonical form J of a matrix A and for simultaneously producing a transformation matrix T when all the distinct eigenvalues of A are known. In [2], a different algorithm is proposed; unfortunately, some mistakes have been found in it. The basic idea of [2] is as follows: suppose V is the n-dimensional linear space, λ is an eigenvalue of A, and B = A − λI. Setting V_0 = {0} and W_0 = V, one successively constructs spaces V_i and W_i such that W_i = V_{i+1} + W_{i+1}, V_{i+1} ∩ W_{i+1} = {0}, and V_{i+1} = {x ∈ W_i | Bx ∈ V_i}, terminating when dim V_{m+1} = 0. [2] asserts that V′_m = V_1 + V_2 + … + V_m is the subspace of V corresponding to the eigenvalue λ, and that V can be written as the direct sum of invariant subspaces V = V′_m ⊕ W_m. This is not true: a counterexample is given in which the construction satisfies all the above conditions and terminates with dim V_3 = 0, yet V′_2 = V_1 + V_2 is an invariant subspace while W_2 is not. The algorithm proposed in this paper corrects the mistakes of [2]; furthermore, it is proved that for any matrix A, the matrix T produced by our algorithm is indeed a transformation matrix.

In this paper, algorithms for the geometric transformation of image data and for the decomposition of the image matrix into submatrices are introduced. When the image is digitized in line-scanning mode, the geometric transformation of the two-dimensional image matrix can be realized by two one-dimensional geometric transformations: first carrying out the geometric transformation along the scan lines, then rotating the image matrix through 90 degrees and carrying out the geometric transformation in the other direction. These algorithms can be implemented in a minicomputer system with appropriate software and a special-purpose hardwired device, which reduces computing time effectively and makes them well suited to minicomputer digital image processing systems.
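
A toy sketch of this two-pass idea (nearest-neighbour scaling stands in for the 1-D transformation; that choice is illustrative, not the paper's): transform each scan line, rotate the matrix 90 degrees, transform again, and rotate back.

```python
def rotate90(img):
    """Rotate an image matrix 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def stretch_rows(img, f):
    """1-D pass: nearest-neighbour resampling along each scan line."""
    return [[row[int(x / f)] for x in range(int(len(row) * f))] for row in img]

def scale(img, fx, fy):
    """Separable 2-D scaling: a row pass, a 90-degree rotation, a second
    row pass (now acting on the former columns), then rotate back."""
    out = stretch_rows(img, fx)
    out = rotate90(out)
    out = stretch_rows(out, fy)
    for _ in range(3):  # three more quarter turns restore the orientation
        out = rotate90(out)
    return out
```

Because each pass only ever reads one scan line at a time, this structure maps naturally onto line-buffered hardware.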

A sparse matrix algorithm for the modified nodal approach to network analysis is presented. It takes full advantage of both the diagonal dominance of the nodal admittance matrix and the symmetry of the sparseness structure. Its storage requirements are small and its execution time is short.
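
The paper's data structure is not reproduced here; as a generic sketch of the kind of sparse-matrix storage such algorithms rely on, a compressed sparse row (CSR) matrix-vector product (matrix values below are invented, with a symmetric sparsity pattern and dominant diagonal as in a nodal admittance matrix):

```python
def csr_matvec(data, indices, indptr, x):
    """y = A x for a matrix in compressed sparse row (CSR) form:
    data   - nonzero values, row by row
    indices - column index of each nonzero
    indptr  - indptr[i]:indptr[i+1] delimits the nonzeros of row i."""
    y = []
    for i in range(len(indptr) - 1):
        y.append(sum(data[k] * x[indices[k]]
                     for k in range(indptr[i], indptr[i + 1])))
    return y
```

Only the nonzeros are stored and multiplied, so the cost is proportional to the number of nonzero entries rather than n².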