Translation results for 基于词 (word-based)
Related example sentences
  based on word
     Bilingual Chunk Alignment Based on Word Alignment
     基于词对齐的双语组块对齐
     An Automatic Text Keyphrase Extraction Method Based on Word Co-occurrence
     一种基于词共现图的文档主题词自动抽取方法
     This dissertation studies the key techniques and typical methods of text categorization and presents a text categorization method based on word vector space model.
     本文对文本分类的关键技术及典型分类方法进行了研究,提出基于词向量空间模型的文本分类方法。
     Selecting Initial Points Based on Word Distribution
     基于词分布的初始点选取方法
     A Novel Chinese Text Subject Extraction Method Based on Word Clustering
     一种基于词聚类的中文文本主题抽取方法
  word-based
     We compare the two methods and conclude that word-based language modeling gives a more accurate estimate of the true entropy than the simple character-based computation, yielding a best result of 5.31 bits.
     在实验中我们比较了这两种方法并得出结论:基于词的语言模型估计方法比基于字的直接计算方法得到了汉字墒的更为精确的估计,其熵值为5.31比特。
     In statistical machine translation, the phrase-based translation model outperforms the word-based translation model.
     在统计机器翻译领域,基于短语的翻译模型的性能优于基于词的翻译模型。
     The aim of this thesis is to construct a word-based context Chinese language model.
     本文研究的目的是建立基于词上下文的汉语统计语言模型。
     (4) Better OOV recognition yields higher overall segmentation accuracy, and statistical segmentation systems using character-based tagging outperform earlier word-based (or dictionary-based) systems.
     (4)实验证明,能够大幅度提高未登录词识别性能的字标注统计学习方法优于以往的基于词(或词典)的方法,并使自动分词系统的精度达到了新高。
     In this paper, we extend word-based trigram modeling to Chinese word segmentation and Chinese named entity recognition by proposing a unified approach to statistical language modeling (SLM).
     在本文中,我们提出了一种统一的统计语言模型方法用来汉语自动分词和中文命名实体识别,这种方法对基于词的三元语言模型进行了很好的扩展。
  based on words
     An algorithm for re-ranking query results based on words' relevance
     基于词间相关性分析的查询结果重排算法
     The speed of Chinese word segmentation is very important for many Chinese NLP systems, such as web search engines based on words.
     对于基于词的搜索引擎等中文处理系统 ,分词速度要求较高。
  word based
     It has special advantages such as very low and almost fixed dimensionality (on the order of 10^2 features), no need for word segmentation, and high performance (DCC-based NB achieves performance similar to a word based SVM). This is a novel and promising feature representation method for Chinese text classification.
     提出了分布字聚类方法,该方法无需分词、具有低达10~2数量级的特征维数和高性能的特点,其与NB结合的性能接近基于词特征的SVM分类器,微平均准确率达到86%。
     A hybrid semantic and word based language model is proposed in this paper. The performance of the model is tested in semantic tagging and Mandarin speech recognition, and compared with traditional N-gram and semantic language models.
     本文提出了一种基于词和词义混合的统计语言模型 ,研究了这个模型在词义标注和汉语普通话语音识别中的性能 ,并且与传统的词义模型和基于词的语言模型进行了对比。
     In Mandarin speech recognition, this model shows better performance and requires less memory than the word based trigram model.
     在汉语普通话连续音识别中 ,这个词义模型的性能优于基于词的三元文法模型 ,并且需要较小的存储空间
     The word sense disambiguation system LM-WSD has been applied in a word based English-Chinese machine translation system for the auto-parts domain, and has effectively improved the translation performance.
     目前 ,该词义自动消歧系统 L M-WSD已经应用于基于词层的英汉机器翻译系统 (汽车配件专业领域 )中 ,有效地提高了翻译性能 .
     Then, word based bigram post-processing is performed on the smaller candidate sets to further improve the RRD.
     然后在较小的候选集上进行基于词 bigram模型的上下文处理 .

 

    To help you better understand how the query term and its translations are used in idiomatic English, the following example sentences are drawn from original English texts.
  based on word
     These approaches are based on word-level descriptions as they are available on the RTL.
     A Completion Procedure for Finitely Presented Groups That Is Based on Word Cycles
     In this article DNA sequences have been analyzed based on word frequencies.
     Furthermore, a complete set of data-path operations is given that can formally be verified based on word-level decision diagrams (WLDDs).
     While recognition based on word models is limited to rather small vocabularies, subunit models open the door to large vocabularies.
  word-based
     As a result, we provide a new optional fast component in the design of modern word-based stream ciphers.
     It is shown in a subset of the George Washington collection that such a word spotting technique can outperform a Hidden Markov Model word-based recognition technique in terms of word error rates.
     This paper describes an approach for word-based on-line and off-line recognition of handwritten cursive script composed of English lower-case letters.
     When utterances are analysed into sequences of word-based forms, however, these prosodic aspects of language disappear.
     Various word-based tools have been used for quantifying the similarities and differences between entire genomes.
  based on words
     Is it to be essentially a narrative constructed from stories or a history of ideas based on words of the ongoing conversation amongst those engaged in it, about science teaching?
     In this paper, using several algorithms, we compare the categorization accuracy of classifiers based on words to that of classifiers based on senses.
     In general, retrieval based on characters has the best recall, whereas retrieval based on words or based on bigrams has the best precision.
     In summary, most information retrieval systems are based on words, and employ stopword removal and/or stemming to reduce the number of index terms.
     Most of the current search engines are based on words, not concepts.
  word based
     In the past, researchers have presented various language models, such as character based language models, word based language models, syntactic rule language models, hybrid models, etc.
     Ninety-five different features per word based upon the speech energy, fundamental frequency F0 and duration measures on words, pauses and voiced/voiceless sections were measured.
     This study examined generalization from reading to spelling and from spelling to reading following whole word based instruction using a delayed prompt procedure.
     A good framework should be able to deal with each word based on its specific semantics in the target domain.
     A word based representation called the BIO representation is often used for that purpose.


    This paper uses a hybrid statistical and rule-based approach to realize Chinese pinyin-to-text conversion. Drawing on Chinese grammar, it proposes two hybrid statistical language models based on words and parts of speech, and experiments show that these models improve the conversion accuracy. The paper also puts forward an approach to correcting pinyin errors in the input; when the pinyin accuracy is above 85%, this error-correcting conversion gives very satisfactory results in the experiments.

    提出了一种基于统计和规则的混合方法来实现汉语音字转换。利用汉语的语法规则,在统计语言模型中采用了两种基于词和词性的混合语言模型。在实验中,将这两种混合语言模型与基于词的语言模型进行了比较。实验证明,在语言模型中引入词性后,提高了音字转换正确率。考虑了出现拼音错误时的音字转换问题,提出了一种拼音纠错方法来纠正错误。实验证明,当拼音正确率高于85%时,这种带纠错的音字转换方法可以提高音字转换正确率。
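    As a rough, illustrative sketch of the kind of hybrid model this abstract describes (not the paper's actual formulation), the Python snippet below scores candidate conversions by interpolating a word bigram with a part-of-speech bigram. The toy lexicon, counts, tag set, and interpolation weight `lam` are invented placeholders.

```python
import math
from collections import defaultdict

# Toy counts standing in for corpus statistics (invented for illustration).
word_bigram = defaultdict(int, {("我", "喜欢"): 8, ("喜欢", "音乐"): 5})
word_unigram = defaultdict(int, {"我": 20, "喜欢": 12, "音乐": 9})
pos_bigram = defaultdict(int, {("r", "v"): 50, ("v", "n"): 70})
pos_unigram = defaultdict(int, {"r": 90, "v": 120, "n": 150})
word_pos = {"我": "r", "喜欢": "v", "音乐": "n"}   # word -> part-of-speech tag

V = len(word_unigram)   # vocabulary size used for add-one smoothing
T = len(pos_unigram)    # tag-set size

def p_word(w, prev):
    """Add-one smoothed word bigram probability P(w | prev)."""
    return (word_bigram[(prev, w)] + 1) / (word_unigram[prev] + V)

def p_pos(w, prev):
    """POS-level bigram probability P(tag(w) | tag(prev)); unknown words default to 'n'."""
    t, tp = word_pos.get(w, "n"), word_pos.get(prev, "n")
    return (pos_bigram[(tp, t)] + 1) / (pos_unigram[tp] + T)

def hybrid_logprob(words, lam=0.7):
    """Log-probability of a word sequence under the interpolated word/POS model."""
    score = 0.0
    for prev, w in zip(words, words[1:]):
        score += math.log(lam * p_word(w, prev) + (1 - lam) * p_pos(w, prev))
    return score

# Rank two candidate conversions of the same pinyin input and keep the best one.
candidates = [["我", "喜欢", "音乐"], ["我", "喜欢", "音月"]]
print(max(candidates, key=hybrid_logprob))
```

    In a real converter the candidates would come from a pinyin lattice and the counts from a large tagged corpus; the interpolation weight would be tuned on held-out data.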

    Chinese Phonetic-Character Conversion (CPCC) is an important issue in speech recognition and Chinese sentence keyboard input systems. Approaches based on Markov models estimated from large corpora have become increasingly popular. CPCC based on a Chinese character N-gram (C-CPCC) has the advantage of a smaller statistics library and a simple algorithm but suffers from lower conversion accuracy, while CPCC based on a Chinese word N-gram (W-CPCC) is the opposite. This paper presents a word-self-made CPCC algorithm based on the Chinese character bigram, which keeps C-CPCC's advantage of a small statistics library while also exploiting the strengths of W-CPCC. Experiments show that it is easy to implement and achieves higher conversion accuracy.

    音字转换在语音识别和汉字语句键盘输入方面都占有很重要的地位.现在比较流行的方法是基于大语料统计的Markov模型的音字转换方法其中基于单字N元文法的音字转换算法具有数据量少、算法简单的优点.但转换准确率却较低;而基于词N元文法的音字转换算法则正好相反本文在基于单字统计Bigram算法的基础上提出了一种自组词的音字转换方法,不仅具有单字Brgram方法的占空间少的优点.而且又可充分利用基于词Bigram算法的优点,实验表明该方法容易实现而且具有较高的转换准确率.
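    To make the character-bigram (C-CPCC) idea concrete, here is a minimal sketch that maps each pinyin syllable to candidate characters and runs a Viterbi search for the most probable character sequence under an add-one-smoothed bigram model. The pinyin table and counts are toy data, and the paper's word-self-made step is not reproduced.

```python
import math
from collections import defaultdict

# Toy pinyin -> candidate-character table and character bigram/unigram counts
# (all invented for illustration; a real system would estimate them from a corpus).
candidates = {
    "yin": ["音", "因", "银"],
    "yue": ["乐", "月", "越"],
}
bigram = defaultdict(int, {("音", "乐"): 30, ("因", "月"): 2})
unigram = defaultdict(int, {"音": 40, "因": 25, "银": 10, "乐": 35, "月": 20, "越": 8})
V = len(unigram)   # character-set size used for add-one smoothing

def p(c, prev):
    """Add-one smoothed character bigram probability P(c | prev)."""
    return (bigram[(prev, c)] + 1) / (unigram[prev] + V)

def convert(pinyins):
    """Viterbi search over the character lattice defined by the pinyin input."""
    first = candidates[pinyins[0]]
    best = {c: (math.log(1.0 / len(first)), [c]) for c in first}   # uniform start
    for syl in pinyins[1:]:
        new_best = {}
        for c in candidates[syl]:
            score, path = max(
                ((s + math.log(p(c, prev)), path) for prev, (s, path) in best.items()),
                key=lambda x: x[0],
            )
            new_best[c] = (score, path + [c])
        best = new_best
    return max(best.values(), key=lambda x: x[0])[1]

print("".join(convert(["yin", "yue"])))   # the bigram counts above favour 音乐
```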

    Abstract This paper presents a new method of automatic indexing and retrieval. The approach takes advantage of the association of terms with documents ("latent semantic structure") in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular value decomposition, in which a large term-document matrix is decomposed into a set of k orthogonal factors. The original matrix can be approximated by a linear combination drawn from this factor set. Documents and queries are represented as vectors formed from weighted combinations of these factors, and relevance prediction is achieved by computing the similarity between query and documents.

    介绍了一种基于词相依性的语义结构,被称为“潜在语义标引”的文献自动标引和检索技术。采用词频统计和奇值分解技术来捕捉文献的语义结构,得到标引词、提问和文献的向量表示,检索系统可以预测文献与提问之间的相关度,达到检索的目的
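    The latent-semantic-indexing procedure outlined in this abstract can be sketched with a plain SVD: decompose a term-document matrix, keep k orthogonal factors, fold the query into the same factor space, and rank documents by cosine similarity. The tiny matrix, the term labels, and k below are illustrative assumptions only.

```python
import numpy as np

# Toy term-document count matrix: rows are terms, columns are documents.
A = np.array([
    [2, 0, 1, 0],   # term "word"
    [1, 1, 0, 0],   # term "segmentation"
    [0, 2, 0, 1],   # term "retrieval"
    [0, 1, 1, 2],   # term "index"
], dtype=float)

k = 2                                                 # number of latent factors kept
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, Sk, Vk = U[:, :k], np.diag(s[:k]), Vt[:k, :].T    # rank-k approximation A ~= Uk @ Sk @ Vk.T

# Each document becomes a k-dimensional vector (rows of Vk @ Sk).
doc_vecs = Vk @ Sk

# Fold a query into the same factor space: q_hat = q @ Uk @ inv(Sk).
q = np.array([1, 0, 0, 1], dtype=float)               # query mentioning "word" and "index"
q_vec = q @ Uk @ np.linalg.inv(Sk)

# Rank documents by cosine similarity between query and document vectors.
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
print(np.argsort(-sims))                              # document indices, most relevant first
```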

     