This paper introduces a negotiation model based on Bayesian learning, called NMBL. In every iteration the agent acquires information about its negotiation opponents by means of Bayesian learning, updates its prior knowledge of the opponents, and then puts forward the offer for the next iteration according to negotiation strategies based on the conflicting point and the uncompromising degree.
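The per-round Bayesian update NMBL relies on can be sketched as follows. This is a hypothetical illustration, not the paper's code: the discrete grid over the opponent's reservation price, the Gaussian-shaped offer likelihood, and all parameter values are assumptions made for the example.

```python
import numpy as np

# Assumed setup: the agent holds a discrete belief over the opponent's
# reservation price and refines it from each observed counter-offer.
reservation_grid = np.linspace(50.0, 150.0, 101)   # candidate reservation prices
belief = np.full_like(reservation_grid, 1.0 / len(reservation_grid))  # uniform prior

def likelihood(offer, reservation):
    # Assumed observation model: offers concentrate near the reservation price.
    return np.exp(-0.5 * ((offer - reservation) / 10.0) ** 2)

def bayesian_update(belief, offer):
    posterior = belief * likelihood(offer, reservation_grid)
    return posterior / posterior.sum()

for observed_offer in [140.0, 130.0, 124.0]:       # opponent concedes over rounds
    belief = bayesian_update(belief, observed_offer)

# The MAP estimate of the opponent's reservation price guides the next offer.
estimate = reservation_grid[np.argmax(belief)]
```

After each round the posterior concentrates, so the agent's next offer can be conditioned on an increasingly accurate opponent model.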

Aiming at an important issue in modular neural networks (MNN), the dynamic integration of sub-nets, a novel integration algorithm based on improved Bayesian learning is presented.

Based on Bayesian learning, a sparse probabilistic model termed the relevance vector machine (RVM) is introduced, which has the same functional form as the support vector machine (SVM). The RVM has an excellent ability to handle functional regression with noise. Compared with the SVM, the RVM yields sparser solutions.
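The sparsity mechanism behind the RVM can be illustrated with the usual automatic relevance determination (ARD) re-estimation loop. This is a minimal sketch under assumed settings (RBF design matrix, fixed known noise precision, made-up data), not the implementation evaluated in the abstract above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)  # noisy targets

# One RBF basis function per training point, as in the RVM's SVM-like form.
Phi = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))
alpha = np.ones(Phi.shape[1])   # one precision ("relevance") per basis function
beta = 100.0                    # assumed known noise precision

for _ in range(50):
    # Gaussian posterior over weights given the current relevances.
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ t
    # ARD re-estimation: irrelevant basis functions get alpha -> infinity,
    # driving their weights to zero and pruning them from the model.
    gamma = 1.0 - alpha * np.diag(Sigma)
    alpha = gamma / (mu ** 2 + 1e-12)
    alpha = np.minimum(alpha, 1e12)  # cap to keep the update numerically stable

relevant = alpha < 1e6      # surviving "relevance vectors"
mse = np.mean((Phi @ mu - t) ** 2)
```

Only a small subset of the basis functions survives, which is the sense in which the RVM's solutions are sparser than the SVM's.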

A system-based decision logic predicated on subjective and objective probabilities is developed incorporating the Bayesian learning process.

Besides allowing for exact Bayesian learning, these results permit us to formulate a new class of tractable latent variable models in which the likelihood of a data point is computed through an ensemble average over tree structures.

Tractable Bayesian learning of tree belief networks
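The tractability of averaging over all tree structures rests on the weighted matrix-tree theorem: the sum over all spanning trees of the product of edge weights equals a cofactor of the graph Laplacian, so the ensemble sum costs one determinant instead of an exponential enumeration. The following check (an illustration of the identity, not code from the paper) verifies this against brute force on a small graph.

```python
import numpy as np
from itertools import combinations

n = 4
rng = np.random.default_rng(1)
W = rng.uniform(0.5, 2.0, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)                    # symmetric edge weights, no self-loops

# Weighted matrix-tree theorem: delete one row/column of the Laplacian
# and take the determinant to sum over all spanning trees at once.
L = np.diag(W.sum(axis=1)) - W
via_determinant = np.linalg.det(L[1:, 1:])

# Brute force: enumerate all (n-1)-edge subsets and keep the acyclic ones.
edges = list(combinations(range(n), 2))

def is_spanning_tree(subset):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for u, v in subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # cycle: not a tree
        parent[ru] = rv
    return True

brute_force = sum(
    np.prod([W[u, v] for u, v in subset])
    for subset in combinations(edges, n - 1)
    if is_spanning_tree(subset)
)
```

Both quantities agree, which is why a likelihood defined as an ensemble average over tree structures remains computable in closed form.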

This paper uses a Bayesian learning model to assess the respective influence of different risk measurements on mortality risk perceptions.

Also, the results suggest that the determinants of risk perception are consistent with the predictions of a Bayesian learning framework.
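The Bayesian learning framework invoked here can be sketched with a conjugate beta-binomial update: a mortality-risk perception is a Beta prior over the risk probability that shifts toward newly received risk information. The prior parameters and observation counts below are made-up illustrations, not figures from the study.

```python
# Prior perception: risk believed to be around 2% (Beta(2, 98)).
prior_a, prior_b = 2.0, 98.0

# Assumed new risk information: 5 adverse outcomes in 100 observed cases.
observed_events, observed_safe = 5, 95

# Conjugate update: just add the counts to the Beta parameters.
post_a = prior_a + observed_events
post_b = prior_b + observed_safe

prior_mean = prior_a / (prior_a + prior_b)
posterior_mean = post_a / (post_a + post_b)   # perception shifts toward the data
```

The posterior mean lands between the prior belief (2%) and the observed rate (5%), which is the qualitative pattern a Bayesian learning framework predicts for risk perceptions.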

This research addresses an urban traffic intelligent control system that adopts multi-agent coordination in urban traffic control, coordinating the signals of adjacent intersections to eliminate congestion in the traffic network. An agent represents one signal-controlled intersection, and multiple agents coordinate across intersections to eliminate congestion. The approach is based on the Recursive Modeling Method and Bayesian learning, which enable an agent to select its rational action by reasoning about other agents, modeling their decision making in conjunction with dynamic belief update. Based on this method, a simplified multi-agent traffic control system is established, and the results demonstrate its effectiveness, which is of considerable importance for intelligent transportation systems (ITS).

Multi-agent coordination is addressed in urban traffic control, using the recursive modeling method (RMM), which enables an agent to select its rational action by reasoning about other agents, modeling their decision making in a distributed multi-agent environment. Bayesian learning is used in conjunction with RMM for belief update. Based on this method, a multi-agent traffic control system is established, and the results demonstrate its effectiveness.
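The RMM-plus-Bayesian-learning loop described above can be sketched as follows. This is a hypothetical illustration, not the paper's system: the two candidate decision models, their action probabilities, and the observed actions are all assumptions.

```python
# Candidate models the controller entertains for the neighboring
# intersection's agent, each mapping actions to probabilities.
models = {
    "fixed_cycle":  {"extend_green": 0.2, "switch": 0.8},
    "queue_driven": {"extend_green": 0.9, "switch": 0.1},
}
belief = {"fixed_cycle": 0.5, "queue_driven": 0.5}   # uniform prior over models

def update_belief(belief, observed_action):
    # Bayesian belief update from the signal action actually observed.
    posterior = {m: belief[m] * models[m][observed_action] for m in belief}
    z = sum(posterior.values())
    return {m: p / z for m, p in posterior.items()}

for action in ["extend_green", "extend_green"]:       # neighbor kept the green on
    belief = update_belief(belief, action)

# RMM step: the agent chooses its own signal plan against the likeliest model.
likely_model = max(belief, key=belief.get)
```

After two observations the belief concentrates on the queue-driven model, and the agent's own action selection is conditioned on that model of the other intersection.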

Classification is a hot research area in machine learning, pattern recognition, and data mining. Incremental learning is an effective method for learning classification knowledge from massive data, especially when obtaining labeled training examples is costly. This paper first discusses the difference between Bayesian estimation and classical parameter estimation and states the fundamental principle for incorporating prior knowledge in Bayesian learning. It then presents an incremental Bayesian learning model, which describes the Bayesian learning process of revising beliefs using prior knowledge and information from new examples; by selecting a Dirichlet prior distribution, this process is shown in detail.

The second part mainly discusses the incremental process. New examples for incremental learning come in two kinds: labeled and unlabeled. For labeled examples, it is easy to update the classification parameters with the help of the conjugate Dirichlet distribution, so the key point is learning from unlabeled examples. Different from the method of Kamal Nigam, which learns from unlabeled examples using the EM algorithm, we focus on which example to select next for learning. The paper gives a method that measures the classification loss with 0-1 loss and selects the examples that minimize it. Meanwhile, to improve the algorithm's performance, a pool-based technique is introduced: in each turn, the classification loss is computed only for examples in the pool. Because the basic operations in learning are updating the classification parameters and classifying test instances incrementally, approximate expressions are given for both.

To test the algorithm's efficiency, an experiment is carried out on the mushroom data set from the UCI repository. The initial training set contains 6 labeled examples, and several unlabeled examples are then added. The final experimental results show that the algorithm is feasible and effective.
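The labeled-example half of the scheme above, where Dirichlet conjugacy makes the parameter update a simple count increment, can be sketched as follows. Class names, feature encodings, and the toy data are hypothetical illustrations, not the paper's code or the mushroom data set.

```python
import math
from collections import defaultdict

class IncrementalNaiveBayes:
    """Naive Bayes with a symmetric Dirichlet prior, updated one example at a time."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha                       # Dirichlet smoothing parameter
        self.class_counts = defaultdict(float)
        self.feat_counts = defaultdict(lambda: defaultdict(float))
        self.feat_values = defaultdict(set)

    def update(self, features, label):
        # Conjugacy of the Dirichlet prior reduces the update to incrementing counts.
        self.class_counts[label] += 1
        for i, v in enumerate(features):
            self.feat_counts[label][(i, v)] += 1
            self.feat_values[i].add(v)

    def predict(self, features):
        total = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for c, n_c in self.class_counts.items():
            # Posterior-mean (Dirichlet-smoothed) class and feature probabilities.
            score = math.log((n_c + self.alpha) / (total + self.alpha * len(self.class_counts)))
            for i, v in enumerate(features):
                num = self.feat_counts[c][(i, v)] + self.alpha
                den = n_c + self.alpha * len(self.feat_values[i])
                score += math.log(num / den)
            if score > best_score:
                best, best_score = c, score
        return best

clf = IncrementalNaiveBayes()
for x, y in [(("a", "x"), "edible"), (("a", "y"), "edible"), (("b", "y"), "poisonous")]:
    clf.update(x, y)                             # incremental labeled updates
pred = clf.predict(("a", "x"))
```

Because each update only touches counts, learning from a stream of labeled examples never requires revisiting old data; the unlabeled-example selection step described above would sit on top of this, scoring pool candidates by expected 0-1 loss before choosing which to incorporate.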