dynamic sliding window
Related sentences
  动态滑动窗口 (dynamic sliding window)
     Dynamic Sliding Window Protocol for Short-Wave Communication
     一个应用于短波通信的差错控制协议——动态滑动窗口协议
     Compared with the case using stop-and-wait, the file transfer protocol using the dynamic sliding window is markedly more efficient.
     并在实际应用中把采用了动态滑动窗口的文件传输协议与采用了停等协议的文件传输协议的效率进行比较,前者的结果明显地优于后者。
     In principle, the dynamic sliding window enables the NAIDS to perform real-time detection;
     在工作原理上,动态滑动窗口技术的提出,保证了NAIDS可以做到实时监测;
     The thesis presents the techniques used in this file transfer protocol, with emphasis on the concept and model of the dynamic sliding window protocol, and also describes the implementation of the complete file transfer protocol.
     本文阐述了整个文件传输协议中的各种技术,着重论述了动态滑动窗口的概念及模型,并提供了文件传输协议的实现。
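The sentences above contrast a dynamic sliding window protocol with stop-and-wait for file transfer over short-wave links. As a rough illustration of the idea only, the Python sketch below shows a sender-side window that grows while frames are acknowledged and shrinks on timeouts; the adaptation rule, the names, and the go-back-N retransmission choice are assumptions for illustration, not taken from the cited papers.

class DynamicSlidingWindow:
    def __init__(self, min_size=1, max_size=32):
        self.min_size = min_size   # a window of 1 degenerates to stop-and-wait
        self.max_size = max_size
        self.size = min_size       # current number of frames allowed in flight

    def on_ack(self):
        # Channel looks healthy: allow more unacknowledged frames in flight.
        self.size = min(self.size + 1, self.max_size)

    def on_timeout(self):
        # A frame was lost or corrupted: back off toward stop-and-wait.
        self.size = max(self.size // 2, self.min_size)


def send_file(frames, send_frame, window=None):
    # send_frame(frame) is a hypothetical channel interface assumed to return
    # True on acknowledgement and False on timeout or error.
    window = window or DynamicSlidingWindow()
    base = 0                                  # first unacknowledged frame
    while base < len(frames):
        for frame in frames[base:base + window.size]:
            if send_frame(frame):
                base += 1
                window.on_ack()
            else:
                window.on_timeout()
                break                         # go-back-N: resend from base

Because the window widens only while acknowledgements keep arriving, the protocol behaves like stop-and-wait on a noisy channel and approaches full pipelining on a clean one, which is the efficiency gain the sentences above describe.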
  Similar matching sentence pairs
     DYNAMIC
     动态
     Dynamic
     地方科技动态
     DYNAMIC ANALYSIS OF WEDGE SLIDING ON ROCK SLOPES
     岩体边坡楔形体动力学分析
     Dynamic Identification of Stuck Banknotes Using Sliding Windows
     滑动窗口纸币重张动态模糊识别
     Dynamic Reaction Analysis of Sliding Isolated Structures
     滑移隔震结构的动力反应分析
Caching and prefetching are two effective ways to reduce the latency of I/O requests. The latency of physical I/O operations can be avoided by caching frequently referenced data in primary memory rather than fetching it from secondary memory, and latency can be tolerated by using prefetching to overlap long I/O accesses with independent computation. Simple cache and prefetch algorithms, however, are of little use for parallel scientific applications, and prefetching must be carefully balanced against caching. This paper proposes a new prefetch algorithm for parallel scientific applications: the Properly Greedy Cache-Prefetch Integrated algorithm (PGI). PGI exploits the regularity of the I/O access patterns of parallel scientific applications. The existing LRU_SP and TIP algorithms are too greedy when issuing prefetch operations; PGI overcomes this shortcoming by using a properly greedy dynamic sliding window, which adjusts the prefetch size according to the application's consumption speed and the I/O service delay. When replacing cache blocks, PGI uses an integrated cache-and-prefetch loss-estimation algorithm to minimize the replacement loss and the I/O service time. In a parallel file system, the workload of each I/O node differs; PGI takes this fully into account and preferentially replaces cache blocks on lightly loaded I/O nodes, so that the workload across I/O nodes is balanced and the total service time is reduced. The results show that PGI achieves a higher hit ratio, shorter I/O delays, and a balanced workload.

传统文件系统中的Cache和预取技术是两种降低访问延迟的有效方法.在并行科学计算应用的I/O访问模式下,简单的Cache和预取技术已无法提供较高的Cache 命中率.该文在分析该I/O模式的基础上提出了适度贪婪的Cache 和预取一体化算法(PGI).该算法充分利用了并行文件系统环境的特点,采用了适度贪婪的动态滑窗技术,可以有效地消除预取时的抖动,降低系统处理开销;并同时采用了Cache 和预取一体化的淘汰损失估计算法,使淘汰的损失降到最低点,在整体上提供了较短的I/O 服务时间
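The PGI abstract above says the prefetch window adapts to the application's data consumption speed and the observed I/O service delay, but it does not give the adaptation formula. The following Python sketch therefore only illustrates one plausible rule, under the assumption of a simple proportional policy (the function name, parameters, and clamping bounds are all illustrative): keep roughly as many blocks in flight as the application will consume while one I/O request is being serviced.

def prefetch_window_size(consume_rate, io_delay, min_blocks=1, max_blocks=64):
    # consume_rate: blocks the application consumes per second.
    # io_delay:     observed I/O service delay in seconds.
    # Prefetch about as many blocks as will be consumed while one request is
    # in service, clamped so the window is neither starved nor overly greedy.
    need = consume_rate * io_delay
    return max(min_blocks, min(max_blocks, int(need) + 1))

# Example: at 200 blocks/s with a 40 ms service delay, keep about 9 blocks
# in the prefetch window.
print(prefetch_window_size(200, 0.04))   # -> 9

Clamping the window is what makes such a scheme "properly greedy" in the abstract's sense: aggressive enough to hide I/O latency, but bounded so prefetched blocks do not evict useful cached data, which is the trade-off the loss-estimation replacement policy then manages.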

 