Translation results for 信息损耗
信息损耗
Related expressions
  information consumption
     Information Consumption, Fusion and Manifestation on Science and Technology Special TV Program
     电视科技专题节目的信息损耗、溶注与表达
  information-loss
     WILD: A Discretization Algorithm Based on Weighted Information-Loss
     WILD:基于加权信息损耗的离散化算法
  Bilingual examples in which 信息损耗 is translated as an undetermined term
     Searching for the Origin of Time Arrow: Information Dissipation Principle
     时间单向性探源:信息损耗原理
     It points out that three major changes take place in the course of communicating cultural information: misunderstanding, loss, and addition of cultural information.
     指出,翻译过程中文化信息在传播过程中通常会发生变化,具体表现为:文化信息误解,文化信息损耗和文化信息附加三种情况。
     Information Loss of Employee Turnover and Its Effects
     人员交替引起的信息损耗及其效应
     Finally, the optimal number of intervals is found by use of the Bayes Factor. The supervised algorithm WILD, Weighted Information Loss Discretization, can be considered an extension of the Decision Tree Discretization algorithm, but uses a bottom-up paradigm as in the ChiMerge algorithm.
     有监督离散化算法——加权信息损耗离散化算法,是决策树离散化算法的一种扩展,但采用了ChiMerge算法中的自底向上离散化方式。
     On the basis of this conclusion, combining systems theory, journalism and communication theory, and practical experience in journalism and communication, the article points out the way to resolve information consumption, namely information fusion and information manifestation, at both the microscopic and macroscopic levels.
     在此基础上,根据系统论原理,结合新闻传播理论与新闻工作实践,本文从微观和宏观两个层次上,提出了解决信息损耗的对策——信息溶注和信息表达。
  Similar matching sentence pairs
     Information model of construction machinery CAID
     信息
     INFO
     信息
Example sentences
To help you better understand the actual usage of the query term and its translations in idiomatic English, we have prepared a large number of example sentences drawn from original English texts for your reference.
  information consumption
Except where noted, data referenced throughout this paper is from Copyright Clearance Center's Academic Information Consumption Study, August 2006.
      
However, the imbalance of information consumption has become a focus of attention.
      
RSS was the first effort to further reduce the granularity of the information consumption on the web that achieved widespread adoption.
      
This component is responsible for providing semantic information consumption mechanisms and tools to facilitate the use of service SMD.
      
The usage level of the intelligence portal and information consumption as well as user satisfaction has to be high.
      
  information-loss
It means that the information-loss due to PCA can be ignored on this dataset.
      
In order to resample the data into new locations we want to reduce the information-loss as well as keep the redundancy minimum.
      
More points in a cell will increase the information-loss since the cell at the end, after the interpolation, will get only one value.
      
To minimize information-loss the visual system applies an efficient compression-algorithm to the images, utilizing their inherent redundancies.
      


Many existing machine learning algorithms expect the attributes to be discrete. However, discretization of attributes might be difficult even for a domain expert. This paper proposes a new discretization algorithm called WILD, which stands for Weighted Information Loss Discretization. This algorithm can be considered an extended counterpart of the Decision Tree Discretization algorithm.

Firstly, WILD assumes that the attribute A to be discretized is ordinal, and initial intervals can be formed from the distinct values of the attribute in the original data set, so that each initial interval contains exactly one attribute value. Secondly, the WILD algorithm uses a bottom-up paradigm as in the ChiMerge algorithm. Starting from the initial intervals, WILD repeatedly calculates a measure for every group of m adjacent intervals (m is a user-specified parameter), and merges the group with the lowest measure, until some stopping criterion is satisfied. Thirdly, the measure in WILD reflects the damage associated with merging each group of m adjacent intervals. The main improvement in WILD lies in the fact that weighted information loss is used as the measure, as opposed to information gain in Decision Tree Discretization, and this adaptation seems more natural and easier to implement in a bottom-up paradigm than in a top-down one. It should be noted that if the measure used when merging is information loss, and the number of adjacent intervals for merging is set to 2, WILD can be thought of as the counterpart of the Decision Tree Discretization algorithm. In effect, Decision Tree Discretization tries to split intervals where much information can be gained, whereas WILD tries to merge adjacent intervals where the information loss is small.

The WILD algorithm has two advantages. First, it can improve the speed of discretization, since it can merge several intervals at a time rather than just two. Secondly, it uses weighted information loss to overcome the deficiencies of the Decision Tree Discretization algorithm. To evaluate the performance of WILD, both WILD and the Decision Tree Discretization algorithm were implemented as a preprocessing step for a Naive Bayes classifier, so that the prediction accuracy of the classifier reflects the relative performance of the two discretization methods. The empirical results indicate that WILD is a promising discretization algorithm.

现实应用中常常涉及许多连续的数值属性,而目前许多机器学习算法则要求所处理的属性具有离散值。基于信息论的基本原理,提出一种新的有监督离散化算法WILD,该算法可以看成是决策树离散化算法的一种扩充,其主要改进在于考虑区间内观测值出现的频度,采用加权信息损耗作为区间离散化的测度,以克服决策树算法离散不均衡的问题。该算法非常自然地采用了自底向上的区间归并方案,可以同时归并多个相邻区间,有利于提高离散化算法的速度。实验结果表明该算法能够提高机器学习算法的精度。
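The bottom-up merging procedure described in this abstract can be sketched in Python. The abstract does not give the exact form of the weighted information-loss measure or the stopping criterion, so this minimal sketch assumes an entropy-based loss weighted by observation count and stops at a user-chosen number of intervals; `wild_discretize` and `weighted_info_loss` are illustrative names, not the authors' API.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def weighted_info_loss(group):
    """Information lost by merging a group of adjacent intervals: entropy of
    the merged interval minus the frequency-weighted entropies of the parts,
    scaled by the group's observation count (the weighting is an assumption;
    the paper's exact measure is not given here)."""
    merged = [y for iv in group for y in iv]
    n = len(merged)
    return n * (entropy(merged) - sum(len(iv) / n * entropy(iv) for iv in group))

def wild_discretize(values, labels, m=2, target_intervals=3):
    """Bottom-up WILD-style discretization: start with one interval per
    distinct attribute value, then repeatedly merge the group of m adjacent
    intervals with the lowest weighted information loss until only
    target_intervals intervals remain."""
    by_value = {}
    for v, y in sorted(zip(values, labels)):
        by_value.setdefault(v, []).append(y)
    cuts = sorted(by_value)                  # left boundary of each interval
    intervals = [by_value[v] for v in cuts]  # class labels per interval
    while len(intervals) > target_intervals:
        best = min(range(len(intervals) - m + 1),
                   key=lambda i: weighted_info_loss(intervals[i:i + m]))
        intervals[best:best + m] = [[y for iv in intervals[best:best + m] for y in iv]]
        cuts[best:best + m] = [cuts[best]]
    return cuts, intervals

if __name__ == "__main__":
    cuts, ivs = wild_discretize([1, 2, 3, 4, 5, 6], list("aaabbb"),
                                m=2, target_intervals=2)
    print(cuts)  # [1, 4]: values below 4 fall in the first interval
```

Note how same-class neighbors merge at zero cost, so the cut point lands exactly on the class boundary; merging several intervals at once (m > 2) is what gives WILD its speed advantage over pairwise schemes.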

Information difference is the phenomenon of inequality between information transmission and reception in verbal communication. It takes two forms, information redundancy and information loss, either of which may be caused by encoding or by decoding. Since information difference has both positive and adverse effects, we should regulate it effectively according to the needs of communication.

信息差是言语交际中信息发送与接受间的不等值现象 ,它主要表现为信息冗余和信息损耗两种类型 ,无论哪一种都可能由编码和解码任何一方引起。信息差有正面作用也有负面作用 ,我们可以根据交际需要对其实行有效控制

Dynamic context is the crucial element in acquiring deeper-level information in language activities. Dynamic context is always covert, while deeper-level information becomes overt only with an awareness of dynamic context. Successful language communication must ensure the complete acquisition of deeper-level information, and thus calls for a more important task in language teaching: sharpening the students' sensitivity to dynamic context.

动态语境是深层信息接收的关键,深层信息隐藏于动态语境之后,因动态语境而明朗清晰。成功的语言交际就是在语言信息解码和语言信息接收时减少信息损耗,确保语言深层信息的全接收。语言教学必须引导学生提高对动态语境的意识。

 

 


 
© 2008 CNKI (China National Knowledge Infrastructure)