Designing and implementing a spatial overlay method for large data sets based on the object-relational data model, while preserving the advantages of that model, is an important and active technical problem in spatial data model and algorithm research.

Because marine data have complicated spatio-temporal characteristics and research on MGIS has emerged only recently, a number of problems and difficulties arise when traditional GIS is used to manipulate marine information, such as the basic representation of marine spatio-temporal data, the storage of large data sets, and a series of questions concerning the transmission of large data sets and the display of dynamic data.

On the Internet in particular, limited bandwidth makes the transmission of such large data sets a bottleneck for the delivery of GIS information.

However, SVMs have high computational complexity, demand large amounts of memory, and are difficult to apply to large-scale data sets, whereas remote sensing image classification typically involves exactly such large data sets. To overcome these disadvantages, this paper proposes using LS-SVM (least squares support vector machines) and its improved variants, weighted LS-SVM and sparse LS-SVM, to classify multispectral remote sensing images; good classification results are obtained.
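As a minimal sketch of the core idea (not the paper's implementation), the regression form of LS-SVM applied to ±1 labels replaces the SVM quadratic program with a single linear system; the RBF kernel and the regularization constant `gamma` below are illustrative choices:

```python
import numpy as np

def rbf_kernel(A, B, gamma_k=0.5):
    # Pairwise RBF kernel matrix between row-vector sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma_k * d2)

def lssvm_train(X, y, gamma=10.0):
    # LS-SVM solves one linear system instead of a QP:
    #   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    K = rbf_kernel(X, X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new):
    # Class label = sign of the kernel expansion.
    return np.sign(rbf_kernel(X_new, X_train) @ alpha + b)
```

Weighted LS-SVM, mentioned above, would replace the uniform ridge term `I/gamma` with a per-sample diagonal to downweight outliers; sparse LS-SVM prunes small `alpha` entries and retrains.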

(2) Based on traditional mipmap pyramid texture mapping, this paper discusses building a partitioned multi-resolution model for texture mapping to meet the large-data requirements of the exploded images in a digital geological logging system.
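A minimal sketch of such a partitioned pyramid, using plain 2x2 averaging for the mipmap chain and an assumed small tile size (the paper's actual partition scheme may differ):

```python
import numpy as np

TILE = 4  # tile edge length (illustrative; real systems use e.g. 256)

def build_pyramid(img):
    # Classic mipmap chain: each level averages 2x2 blocks of the previous one.
    levels = [img.astype(float)]
    while min(levels[-1].shape) > TILE:
        a = levels[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        a = a[:h, :w]
        levels.append((a[0::2, 0::2] + a[1::2, 0::2] +
                       a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return levels

def tiles_of(level):
    # Partition one pyramid level into TILE x TILE tiles keyed by (row, col),
    # so a viewer can fetch only the tiles covering the current window at the
    # resolution it needs, rather than the whole large image.
    h, w = level.shape
    return {(r // TILE, c // TILE): level[r:r + TILE, c:c + TILE]
            for r in range(0, h, TILE)
            for c in range(0, w, TILE)}
```

The point of combining the two is that a renderer picks the pyramid level matching the on-screen resolution first, and then loads only the tiles of that level that intersect the view.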

Second, we study 3D mesh simplification algorithms and propose a modeling method for simplifying terrain data based on prior knowledge (which can be acquired by statistical methods), resolving the contradiction between limited bandwidth and large data volumes.

Multi-scale representation and multi-scale spatial databases play an important role in such fields as the transmission of streaming media data over the web, the self-adaptive visualization of spatial information, navigation in spatial cognition, scale matching during inter-operation, and other applications. Realizing this technology requires resolving several questions, including large data volumes, slow response, conflicts between data representations, and steep changes across the scale range.

For the multiring and hypercube, a method of conflictless realization of an arbitrary permutation of "large" data items that can be divided into many "smaller" data blocks was considered, and its high efficiency was demonstrated.

A general method of conflictless arbitrary permutation of "large" data elements that can be divided into a multitude of "smaller" data blocks was considered for switches structured as Cayley graphs.

The system unified the operation of various sets of equipment (radiation monitoring, radiometric, wave, materials science, and magnetic) and allowed the transfer of large data arrays from detectors located on the outer surface of the station.

To date, a large data set on the mitochondrial DNA (mtDNA) sequence variation in human populations has been accumulated.

The high-speed compression of large data streams in ultrasonic diagnostics

This paper puts forward algorithms for searching for the expansion point of an expanding edge within the adjacent bounded grids while the computer links a triangular net automatically. It improves and further refines the algorithms discussed in a published paper [1] and is suitable for the net joining of a large data area.
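The selection step at the heart of such expansion-point searches can be sketched with the standard max-angle (Delaunay) rule; gathering the candidate points from only the grid cells adjacent to the edge, and restricting them to the open side of the edge, is assumed to happen outside this function:

```python
import numpy as np

def expansion_point(p1, p2, candidates):
    # Among candidate points (pre-filtered to the grid cells adjacent to the
    # expanding edge p1-p2), choose the one subtending the largest angle over
    # the edge -- the usual criterion for growing a Delaunay triangular net.
    best, best_angle = None, -1.0
    for q in candidates:
        a, b = p1 - q, p2 - q
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na == 0 or nb == 0:
            continue  # candidate coincides with an edge endpoint
        ang = np.arccos(np.clip(a @ b / (na * nb), -1.0, 1.0))
        if ang > best_angle:
            best, best_angle = q, ang
    return best
```

Limiting the search to adjacent grids is what keeps this step cheap for a large data area: only a handful of candidates are examined per edge instead of every remaining point.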

Error analysis and processing for spatial data is one of the key issues in GIS research. In establishing geographic information system databases, map digitization, including manual map digitization and scanned map digitization, is the main method of capturing spatial data, so it is necessary to study the characteristics and processing of errors in map digitization. In land and housing fundamental geographic information systems, the cadastral parcel is one of the most important objects. According to the feature classifications in GIS, a cadastral parcel is a closed polygon object composed of a series of digitized vertices. The area of a parcel is its key attribute and carries legal authority. However, when digitizing cadastral parcels to capture data, errors (both systematic and random) are unavoidable. As a result, through the propagation of errors in the vertices of a parcel, the digitized cadastral area usually differs from the authorized area, which is calculated by higher-accuracy surveying methods. Therefore, minimizing the effects of digitization errors, so as to upgrade the accuracy of the digitized vertices and ensure the precision of the area attribute in the GIS database, is one of the central problems.
In this paper, the error processing of digitized parcel areas is discussed. The digitized data of a parcel can be treated as observations: coordinates in the ground system obtained from digitizer or scanner coordinates by orthogonal or affine transformation. In a parcel, the known authorized area, rectangular angles, and circular arcs constitute constraints on the digitized vertices; for correlated parcels, the constraints multiply. As a result, redundant observations and adjustment problems arise. The principles for adjusting digitized parcel areas are first presented. The adjustment models are then derived, including the condition equations for areas, areas with arcs, rectangular angles, and circular arcs. Methodologies for processing multiple areas are further presented. The first is the adjustment model and method for a single, independent parcel area. The second adjusts parcel areas with "holes": the condition equations of the parcel areas are combined with those of the "holes" and solved together. The third integrally adjusts multiple areas that are correlated with each other; the key problem is to ensure that the shared vertices and boundaries among interrelated parcels move simultaneously, so that the topology between parcel polygons remains undamaged. The fourth processes multiple areas with fixed vertices and fixed parcels, which remain unchanged during adjustment. Lastly, a graded adjustment idea and method are presented for multiple areas with larger data volumes: the outer boundaries of the adjusted areas are processed first to control the whole set of parcels, and the areas are then divided into several parts for processing.
Based on the above theoretical discussion of the parcel area processing models and methods, an area adjustment system for digital cadastral parcels is developed. The implementation of the models and methodologies is illustrated through case studies, and the results are further discussed and analyzed, leading to the conclusion that adjustment processing for digital cadastral areas helps to ensure data quality in GIS data capture and database establishment.
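The single-parcel case of the area condition equation described above can be sketched as one linearized conditional least-squares step (an illustrative sketch, not the paper's full model, which also handles arcs, rectangular angles, holes, and correlated parcels):

```python
import numpy as np

def shoelace_area(P):
    # Signed polygon area from the shoelace formula; P is an (n, 2) array of
    # vertices in order (positive for counterclockwise).
    x, y = P[:, 0], P[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def adjust_parcel(P, authorized_area):
    # One iteration of condition adjustment: minimize the squared vertex
    # corrections ||v||^2 subject to the linearized area condition B v + w = 0,
    # where w is the misclosure between the digitized and authorized area.
    x, y = P[:, 0], P[:, 1]
    # Partial derivatives of the shoelace area w.r.t. each coordinate.
    dAdx = 0.5 * (np.roll(y, -1) - np.roll(y, 1))
    dAdy = 0.5 * (np.roll(x, 1) - np.roll(x, -1))
    B = np.concatenate([dAdx, dAdy])           # 1 x 2n Jacobian row
    w = shoelace_area(P) - authorized_area     # area misclosure
    v = -B * (w / (B @ B))                     # minimum-norm corrections
    n = len(P)
    return P + np.column_stack([v[:n], v[n:]])
```

Because the area condition is quadratic in the coordinates, a few iterations drive the misclosure to numerical zero while moving each vertex as little as possible, which is exactly the goal stated above.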

Terrain models are important models widely used in aerospace, aviation, and military applications, such as battlefield simulation, flight visualization, feature matching, and special effects in film and television. Because a terrain model contains large-scale data, fast rendering of terrain models is very difficult. Several approaches have been put forward to solve this problem, for instance mesh simplification based on vertex clustering, polygon simplification based on triangle collapse, and levels of detail based on simplification. Being based on geometric simplification, these approaches are effective and preserve good geometric topology; however, they lose visual characteristics and thus often lower visual accuracy. A new approach is therefore needed.
This paper gives a new approach to terrain model simplification and fast rendering. First, it analyzes the data characteristics of terrain models, especially digital elevation models (DEMs), and points out that such data contain a large amount of redundancy: by the principles of computer graphics, it is unnecessary to render from all terrain data every time. Second, it gives two criteria for simplifying terrain models and, based on these criteria, presents an approach of viewpoint-based field data extraction and normal-based simplification of the detail model. In this approach, the terrain scope is defined and a terrain mesh is extracted from the large-scale terrain data; this mesh is then repartitioned according to the viewpoint and the image mesh. The resulting terrain mesh, with unequal intervals, is simplified relative to the original mesh. Because the image mesh can express the highest accuracy of the current view, this mesh extraction keeps high visual accuracy while avoiding the loss of essential mesh data. Even so, according to the second criterion, the extracted terrain mesh data still contain redundancy. To remove it, the paper gives a new normal-based simplification approach, which first computes the average normal at each intersection point of the mesh. This simplification preserves most details of the terrain model and is thus effective. Finally, the paper gives two groups of experimental results. The first covers data extraction and mesh reconstruction; the second covers normal-based model simplification, including two rendered images for comparison. The experimental results show that the new approach to simplification and fast rendering of terrain models achieves a large data compression ratio, fast rendering speed, and high accuracy.
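The normal-based redundancy test can be sketched roughly as follows, assuming a regular DEM grid and an illustrative angle threshold (the paper's exact averaging scheme and criterion may differ):

```python
import numpy as np

def dem_normals(z, dx=1.0):
    # Per-vertex surface normals of a DEM via central-difference gradients;
    # z is a 2-D height grid with uniform spacing dx.
    gy, gx = np.gradient(z, dx)
    n = np.dstack([-gx, -gy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def keep_mask(z, angle_thresh_deg=2.0):
    # Keep a vertex only where its normal deviates from the average normal of
    # its 4-neighborhood by more than the threshold, i.e. where the surface
    # actually bends; near-planar vertices are redundant and can be dropped.
    n = dem_normals(z)
    avg = (np.roll(n, 1, 0) + np.roll(n, -1, 0) +
           np.roll(n, 1, 1) + np.roll(n, -1, 1)) / 4.0
    avg /= np.linalg.norm(avg, axis=2, keepdims=True)
    cos = np.clip((n * avg).sum(axis=2), -1.0, 1.0)
    mask = np.degrees(np.arccos(cos)) > angle_thresh_deg
    mask[[0, -1], :] = mask[:, [0, -1]] = True   # always keep the border
    return mask
```

On a flat or gently sloping region every normal agrees with its neighbors' average, so almost all interior vertices are dropped, which is where the large compression ratio reported above comes from.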