Translation results for "large data" in the subject category Physical Geography, Surveying and Mapping
large data
Related sentences
  海量数据 (massive data)
    Methods: Using a modeling approach is the only way to solve the real-time display of large data.
    方法基于四叉树的鄂尔多斯盆地地形三维实时细节层次模型(LOD)表示的方法,从建模的角度来解决海量数据地形的实时显示问题。
    How to design and implement a large-data spatial overlay method within the object-relational data model, while retaining the advantages of that model, is an important and pressing technical problem in spatial data model and algorithm research.
    因此,如何在保留对象关系模型的优势的同时,设计实现对象关系数据模型框架下的海量数据空间叠加方法,是当前空间数据模型和算法研究亟待解决的重要技术问题。
    Because marine data have complex spatio-temporal characteristics and research on MGIS has emerged only relatively recently, handling marine information with traditional GIS still presents problems and difficulties, such as the basic representation of marine spatio-temporal data, the storage of large data sets, and a series of issues in transmitting large data sets and displaying dynamic data.
    由于海洋具有时空变化复杂性的特点,以及MGIS的研究起步比较晚,使得应用GIS处理海洋信息仍然存在一些问题和难点,如海洋时空数据的基本表达问题、海量数据的存储问题以及海量数据传输和海洋动态数据显示等一系列问题。
    Especially on the Internet, because of bandwidth constraints, the transmission of such large data becomes a bottleneck that restricts the dynamic, real-time publishing of GIS information.
    特别是在Internet上,由于受到网络带宽的制约,海量数据的传输更是成了制约动态和实时发布GIS信息的瓶颈。
  Bilingual example sentences in which "large data" is translated as an undetermined term
    Data mining is the process of extracting potential, valuable models or rules from large data sets.
    数据挖掘是从大量的数据中提取潜在、有价值的模式或规则的过程。
    However, SVMs have high computational complexity and large memory demands and are difficult to apply to large-scale data sets, whereas remote sensing image classification usually involves large data sets. To address these drawbacks, this paper proposes using LS-SVM (least squares support vector machines) and its improved algorithms, weighted LS-SVM and sparse LS-SVM, to classify multispectral remote sensing images, and good classification results are obtained.
    但是它计算复杂,内存需求量大,用于大规模数据分类时比较困难,而遥感影像分类数据量一般比较大,为了改善这些不足,本文提出将最小二乘支持向量机及其改进算法——加权最小二乘支持向量机和稀疏最小二乘支持向量机用于多光谱遥感影像分类中,并获得了较好的分类效果。
    (2) To meet the large data volume of the display images in a digital geological logging system, this paper discusses building a tiled multi-resolution model for texture mapping on the basis of traditional Mip-Map pyramid texture mapping.
    (2)针对数字地质编录系统的展示影像数据量大的特点,在传统Mip_Map金字塔模型纹理映射的基础上,探讨建立分块多分辨率模型进行纹理映射。
    Second, we study existing 3D mesh simplification algorithms and propose a terrain-data simplification method based on prior knowledge (which can be acquired by statistical methods), resolving the contradiction between limited network bandwidth and large data volumes.
    其次,研究了现有的三维网格简化算法,并根据网络应用和本文的数据组织方式,提出了一种统计指导的表面简化算法,很好的解决了网络带宽有限和三维景观数据量大的矛盾;
    Multi-scale representation and multi-scale spatial databases play an important role in fields such as streaming-media data transmission over the web, self-adaptive visualization of spatial information, navigation in spatial cognition, and scale matching during interoperation. Realizing this technology requires resolving problems including large data volume, slow response, conflicts between data representations, and steep changes across the scale range.
    多尺度空间数据表达及数据库建立在流媒体网络数据传输、自适应动态可视化、空间认知导航、互操作尺度匹配多个领域都有贡献,但实现该技术面临着数据存储量大、操作响应慢、横向空间一致关系难以维护、尺度变化难以达到真正的连续性等诸多问题。
Example sentences
To help you understand how the query term or its translation is actually used in idiomatic English, the following English example sentences are drawn from original English-language texts.
  large data
For the multiring and hypercube, a method of conflictless realization of an arbitrary permutation of "large" data items that can be divided into many "smaller" data blocks was considered, and its high efficiency was demonstrated.
      
A general method of conflictless arbitrary permutation of "large" data elements that can be divided into a multitude of "smaller" data blocks was considered for switches structured as the Cayley graphs.
      
The system unified the operation of various sets of equipment (radiation monitoring, radiometric, wave, materials science, and magnetic) and allowed the transfer of large data arrays from detectors located on the outer surface of the station.
      
To date, a large data set on the mitochondrial DNA (mtDNA) sequence variation in human populations has been accumulated.
      
The high-speed compression of large data streams in ultrasonic diagnostics
      


This paper puts forward an algorithm for searching, among the adjacent finite rectangular grid cells, for the expansion point of the currently expanded edge while the computer links a triangular net automatically. It improves and further refines the corresponding algorithm discussed in reference [1] and is suitable for net joining over areas with a large data volume.

本文提出了在计算机自动联结三角网时从相邻有限矩形网格中寻找当前扩展边的扩展点的算法,该算法是对文献[1]中相应算法的改进和进一步完善,适合于大数据量区域的联网。
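The abstract above only states that the expansion point of the current edge is searched for among adjacent finite rectangular grid cells; it does not give the selection rule. The following is a minimal Python sketch under assumed details: the max-angle (empty-circumcircle) criterion, the grid bucketing step, and names such as build_grid, expansion_point and cell are illustrative choices, not taken from reference [1].

# Hedged sketch: pick the expansion point for the current TIN edge by searching
# only the rectangular grid cells adjacent to the edge midpoint and choosing the
# candidate that subtends the largest angle (an assumed, common criterion).
import math
from collections import defaultdict

def build_grid(points, cell):
    """Bucket point indices into rectangular grid cells of width `cell`."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def expansion_point(points, grid, cell, p, q):
    """Return the index of the point left of edge p->q that maximises the
    angle subtended by the edge, searching only cells near the edge midpoint."""
    (px, py), (qx, qy) = points[p], points[q]
    cx, cy = (px + qx) / 2.0, (py + qy) / 2.0
    ci, cj = int(cx // cell), int(cy // cell)
    best, best_angle = None, -1.0
    for di in (-1, 0, 1):                      # adjacent finite grid cells only
        for dj in (-1, 0, 1):
            for k in grid.get((ci + di, cj + dj), ()):
                if k in (p, q):
                    continue
                x, y = points[k]
                # keep only candidates on the left side of the directed edge
                if (qx - px) * (y - py) - (qy - py) * (x - px) <= 0:
                    continue
                a = math.hypot(px - x, py - y)
                b = math.hypot(qx - x, qy - y)
                c = math.hypot(qx - px, qy - py)
                angle = math.acos(max(-1.0, min(1.0, (a * a + b * b - c * c) / (2 * a * b))))
                if angle > best_angle:
                    best, best_angle = k, angle
    return best

pts = [(0, 0), (4, 0), (2, 3), (2, 1), (6, 5)]
g = build_grid(pts, cell=4.0)
print(expansion_point(pts, g, 4.0, p=0, q=1))  # -> 3, the best third vertex for edge 0-1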

Error analysis and processing for spatial data is one of the key issues in GIS research. In establishing geographic information system databases, map digitization, including manual map digitization and scanned map digitization, is the main method of capturing spatial data, so it is necessary to study the characteristics and processing of errors in map digitization. In a land and housing fundamental geographic information system, the cadastral parcel is one of the most important objects. According to the feature classifications in GIS, a cadastral parcel is a closed polygon object composed of a series of digitized vertexes, and the area of a parcel is a key attribute with legal authorization. However, errors (both systematic and random) are unavoidable when digitizing cadastral parcels. As a result of the propagation of errors in the parcel vertexes, the digitized cadastral area is usually not equal to the authorized area, which is normally calculated by a higher-accuracy surveying method. Minimizing the effect of digitizing errors, so as to upgrade the accuracy of the digitized vertexes and ensure the precision of the area attribute in the GIS database, is therefore one of the central problems.

In this paper, the processing of errors in the area of a digitized parcel is discussed. The digitized data of a parcel can be treated as observations: coordinates in the ground system obtained from the digitizer or scanner coordinates by an orthogonal or affine transformation. Within a parcel, the known authorized area, the right angles and the circular arcs constitute constraints on the digitized vertexes, and for multiple correlated parcels the constraints increase accordingly. As a result, redundant observations arise and an adjustment problem is posed. The principles for adjusting digitized parcel areas are first presented. The adjustment models are then derived, including the condition equations for areas, areas with arcs, right angles and circular arcs. Methods for processing multiple areas are further presented. The first is the adjustment model and method for a single, independent parcel area. The second adjusts parcel areas with "holes": the condition equations of the parcel areas are combined with those of the "holes" and solved together. The third adjusts, as a whole, multiple areas that are correlated with each other; the key problem is to ensure that the shared vertexes and boundaries among interrelated parcels move simultaneously so that the topology between parcel polygons remains undamaged. The fourth processes multiple areas containing fixed vertexes and fixed parcels, which remain unchanged during adjustment. Finally, a graded adjustment idea and method are presented for multiple areas with a larger data volume: the outer boundaries of the adjusted areas are processed first to control the whole set of parcels, and the areas are then divided into several parts for processing.

Based on the above theoretical discussion of the parcel-area processing models and methods, an area adjustment system for digital cadastral parcels has been developed. The implementation of the models and methods is illustrated through case studies, and the results are discussed and analyzed, leading to the conclusion that adjustment processing for digital cadastral areas helps ensure data quality in GIS data capture and database establishment.

探讨以地图数字化为基础的地理信息系统建库中地籍宗地数字化的面积处理问题 ,阐述宗地面积平差的原理 ,并导出了相应的条件方程 ,讨论宗地面积处理的各种方法和实现 ,提出宗地面积处理的分级平差方法 ;最后通过实例分析 ,认为宗地面积的平差处理有助于实现GIS数据采集和建库过程中实施质量控制 ,保证地理信息基础数据的质量可靠性 ,为系统的数据开发应用奠定基础。
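As a worked illustration of the area condition equation named in the abstract above, here is a minimal sketch in standard least-squares adjustment notation; the symbols $S_0$ (authorized area) and $v_{x_i}, v_{y_i}$ (coordinate corrections) are assumptions for illustration, not the authors' notation. The area computed from the $n$ digitized vertexes is

$$S = \tfrac{1}{2}\sum_{i=1}^{n}\left(x_i\,y_{i+1} - x_{i+1}\,y_i\right), \qquad (x_{n+1}, y_{n+1}) \equiv (x_1, y_1),$$

and the linearized area condition imposed on the corrections is

$$\tfrac{1}{2}\sum_{i=1}^{n}\left[(y_{i+1}-y_{i-1})\,v_{x_i} - (x_{i+1}-x_{i-1})\,v_{y_i}\right] + (S - S_0) = 0,$$

solved by minimizing $\sum_i \left(v_{x_i}^2 + v_{y_i}^2\right)$ subject to this condition together with any right-angle or arc conditions.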

Terrain models are an important class of models, widely used in aerospace, aviation and military fields for applications such as battlefield simulation, flight visualization, feature matching, and special effects in film and television. Because a terrain model contains large-scale data, fast rendering of terrain models is very difficult. Several approaches have been put forward to solve this problem, for instance mesh simplification based on vertex clustering, polygon simplification based on triangle collapse, and levels of detail based on simplification. Based on geometric simplification, these approaches are effective and preserve a good geometric topology; however, they lose visual characteristics and therefore often lower visual accuracy, so a new approach is needed.

This paper gives a new approach to terrain model simplification and fast rendering. First, it analyses the data characteristics of terrain models, especially digital elevation models (DEM), and points out that such data contain a large amount of redundancy; according to the principles of computer graphics, it is unnecessary to draw on all the terrain data every time. Second, it gives two criteria for judging terrain model simplification and, based on these criteria, an approach of viewpoint-based extraction of regional data and normal-based simplification of the detail model. Under this approach, the terrain scope is defined and a terrain mesh is extracted from the large-scale terrain data; this mesh is then repartitioned according to the viewpoint and the image mesh. The resulting terrain mesh with unequal intervals is already simplified relative to the original mesh, and because the image mesh expresses the highest accuracy of the current view, the extraction keeps high visual accuracy while avoiding the loss of essential mesh data. According to the second criterion given in the paper, the extracted terrain mesh still contains redundancy; to remove it, the paper gives a normal-based simplification method that first computes the average normal at the intersection points of the mesh. This simplification preserves most details of the terrain model and is therefore effective.

Finally, the paper gives two groups of experimental results: the first covers data extraction and mesh reconstruction, and the second covers normal-based model simplification, including two rendered images for comparison. The results show that the new approach to simplification and fast rendering of terrain models offers a large data compression ratio, fast rendering speed and high accuracy.

在分析地景模型数据特点的基础上 ,提出了地景模型简化的判决准则 ,并根据该准则提出了基于视点的区域数据抽取与简化方法以及基于法矢量的细节模型简化方法。实验结果表明 ,使用该方法数据压缩量大 ,绘制速度快 ,且逼真度无明显变化
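The normal-based simplification step in the abstract above is described only qualitatively (compute the average normal at mesh intersection points, then simplify). The following is a minimal Python sketch under assumed details: a regular DEM grid, a fixed angle threshold, and the names face_normals, simplify_mask and angle_threshold_deg are all illustrative choices, not the authors' algorithm.

# Hedged sketch of normal-based simplification on a regular DEM height field:
# mark interior vertices whose surrounding cell normals are nearly parallel to
# their average normal; such vertices lie in locally flat areas and can be dropped.
import numpy as np

def face_normals(z, dx=1.0, dy=1.0):
    """Unit normals of a height field z[i, j], one per grid cell."""
    dzdx = (z[:, 1:] - z[:, :-1]) / dx          # slope along x on cell edges
    dzdy = (z[1:, :] - z[:-1, :]) / dy          # slope along y on cell edges
    sx = 0.5 * (dzdx[:-1, :] + dzdx[1:, :])     # average onto cell centres
    sy = 0.5 * (dzdy[:, :-1] + dzdy[:, 1:])
    n = np.stack([-sx, -sy, np.ones_like(sx)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def simplify_mask(z, angle_threshold_deg=2.0):
    """Boolean mask of vertices to keep; interior vertices in flat areas are dropped."""
    n = face_normals(z)
    # Average normal at each interior vertex = mean of its four surrounding cells.
    avg = n[:-1, :-1] + n[:-1, 1:] + n[1:, :-1] + n[1:, 1:]
    avg /= np.linalg.norm(avg, axis=-1, keepdims=True)
    cos_t = np.cos(np.radians(angle_threshold_deg))
    flat = np.ones(avg.shape[:2], dtype=bool)
    for cell in (n[:-1, :-1], n[:-1, 1:], n[1:, :-1], n[1:, 1:]):
        flat &= (np.sum(cell * avg, axis=-1) >= cos_t)
    keep = np.ones_like(z, dtype=bool)
    keep[1:-1, 1:-1] = ~flat                    # drop vertices where all normals agree
    return keep

if __name__ == "__main__":
    x, y = np.meshgrid(np.linspace(0, 10, 64), np.linspace(0, 10, 64))
    dem = np.sin(x) * 0.5 + 0.05 * y            # toy height field
    mask = simplify_mask(dem)
    print("vertices kept:", int(mask.sum()), "of", mask.size)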

 