177 results in total; showing items 1 to 20. [Due to search restrictions, only the first 5 pages of results can be displayed.]
 

Related sentences
global optimum
However, the current layer optimization approach to power allocation cannot achieve the global optimum of the overall system performance.
      
Based on the framework of multi-level optimization, the intelligent agent technique was adopted to search for the global optimum.
      
After a brief introduction of the subject, the paper focuses on the first step in any optimization procedure: the delineation of the parameter space wherein the global optimum is to be found.
      
Finding the best possible conditions (the global optimum) is very difficult for chromatographers in practice.
      
By performing a set of nine pre-planned experiments conducted over the maximum working range for the system, global optimum separation conditions could be determined.
      
By conducting eleven preplanned experiments, global optimum conditions for the separation within an acceptable analysis time could be obtained.
      
After a pre-assay using the nine proposed solvents, twelve measurements are necessary to obtain the global optimum.
      
Near-global-optimum solutions are obtained, since genetic algorithms are inherently stochastic optimization processes.
      
The proposed algorithm utilizes a real-coded genetic algorithm (GA) in the first stage to direct the search towards the global optimum region and a local search method in the second stage to do fine tuning.
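A minimal sketch of the two-stage idea in the sentence above, assuming a generic box-constrained objective: a real-coded GA directs the search toward the global-optimum region, then a local search does the fine tuning. The test function (Rastrigin), the operators (tournament selection, blend crossover, Gaussian mutation) and the hill-climbing local search are illustrative assumptions, not the cited paper's implementation.

```python
import numpy as np

def rastrigin(x):
    """Assumed multimodal test objective; global minimum f(0) = 0."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def real_coded_ga(f, dim, bounds, pop_size=50, gens=200, seed=None):
    """Stage 1: real-coded GA that steers the search toward the global-optimum region."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(gens):
        # Tournament selection: the fitter of two random individuals becomes a parent.
        idx = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Blend (arithmetic) crossover between parents and a shuffled copy.
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1.0 - alpha) * parents[rng.permutation(pop_size)]
        # Gaussian mutation, clipped back into the search box.
        children += rng.normal(scale=0.1 * (hi - lo), size=children.shape)
        children = np.clip(children, lo, hi)
        child_fit = np.apply_along_axis(f, 1, children)
        # Replace population members wherever the new child is better.
        improved = child_fit < fit
        pop[improved] = children[improved]
        fit[improved] = child_fit[improved]
    best = int(np.argmin(fit))
    return pop[best], fit[best]

def hill_climb(f, x0, step=0.05, iters=500, seed=None):
    """Stage 2: simple stochastic local search that fine-tunes the GA result."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

if __name__ == "__main__":
    x_ga, f_ga = real_coded_ga(rastrigin, dim=5, bounds=(-5.12, 5.12), seed=0)
    x_best, f_best = hill_climb(rastrigin, x_ga, seed=0)
    print(f"after GA stage:     f = {f_ga:.4f}")
    print(f"after local search: f = {f_best:.4f}")
```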
      
Hence an efficient strategy is needed in searching for the global optimum.
      
Hence, with the method presented in this paper, the problem of how to obtain the global optimum solution by the nonlinearized optimum inverse method no longer arises.
      
The average convergence velocity of genetic algorithms is defined as the mathematical expectation of the mean number of absorbing time steps needed for the best-so-far individual to move from any initial solution to the global optimum.
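A hedged symbolic rendering of that verbal definition (the notation is ours, not the paper's): writing X_t for the best-so-far individual at step t and x* for the global optimum, the quantity being defined is the expected absorbing time

\[ \overline{T} \;=\; \mathbb{E}[\tau], \qquad \tau \;=\; \min\{\, t \ge 0 : X_t = x^{*} \,\}, \]

with the expectation taken over runs started from an arbitrary initial solution.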
      
The results show that this improved algorithm converges rapidly to the global optimum within a given range, and the optimization performance is effectively improved.
      
To find the global optimum, we apply an efficient genetic algorithm.
      
Finally, the quasi-Newton algorithm (QN) was used for optimization of the most significant parameters, near the global optimum region, as the initial values were already determined by the RGA global-searching algorithm.
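The pattern in the sentence above (a global-searching stage supplying the starting point, then quasi-Newton refinement) can be sketched with SciPy's BFGS implementation; the objective and the "RGA seed" below are made-up placeholders, not the study's actual model.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Placeholder smooth objective; its global minimum is at x = (1, 1)."""
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 1.0) ** 2

# Stand-in for the point delivered by the RGA global-searching stage,
# i.e. initial values already near the global-optimum region.
x_rga = np.array([1.3, 0.7])

# Quasi-Newton (BFGS) fine tuning of the most significant parameters.
result = minimize(objective, x_rga, method="BFGS")
print("refined parameters:", result.x)
print("objective value:   ", result.fun)
```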
      
Sufficient conditions for a global optimum are established and shown to generalize to the multi-response case.
      
In this case, a specialized algorithm (DRSALG) is shown to locate the global optimum in a finite number of steps.
      
The learning method leads to a linear programming problem, and then: (a) the solution is obtained in a finite number of iterations, and (b) the global optimum is attained.
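That claim rests on the convexity of linear programming: any optimum of an LP is the global optimum, and simplex-type solvers terminate after finitely many pivots. A toy example using SciPy's linprog (the numbers are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Invented toy LP: minimize c @ x subject to A_ub @ x <= b_ub and x >= 0.
# Because an LP is convex, any optimum the solver reaches is the global optimum.
c = np.array([-1.0, -2.0])                 # maximize x1 + 2*x2 via its negative
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("status:   ", res.message)
print("solution: ", res.x)
print("objective:", res.fun)
```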
      
However, differential evolution has not been comprehensively studied in the context of training neural network weights, i.e., how useful differential evolution is in finding the global optimum at the expense of convergence speed.
      
At the same time, this method can search in many directions simultaneously, thus greatly increasing the probability of finding a global optimum.
      
 
