The results show that the systematic error is small (fitness 2.6%) with an objective function value of 0.2, five nodes in the hidden layer, and a learning rate of 0.1. A knowledge-based constitutive relation was constructed in this study.
The average precision of the color-notation conversion of the model established in this paper is about 0.60 CIELUV units, with a standard deviation of 0.57; in contrast, the corresponding values for the earlier model with 4 hidden layers were 1.53 and 0.77.
The other three factors mentioned above have some effect on the model. The best preprocessing ranges for the UTS, YS and ELO models are [0.1, 0.8], [0.01, 0.99] and [0.01, 0.99], and the best architectures are single hidden layers with 10, 11 and 12 neurons, respectively.
Moreover, as the eigenvectors of the ARMA model concentrate all the information of the original time-series signals, a non-linear mapping of the ARMA model's eigenvector parameters from a p-dimensional Euclidean space to a two-dimensional one has been performed using a multi-node-input, dual-hidden-layer BP neural network in order to diagnose the status of a turbine rotor vibration fault.
In the feedforward neural network model, the sigmoid function of the hidden-layer neurons is replaced by an activation function with two additional parameters, which improves the network's response properties and reinforces its nonlinear approximation capability.
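The parameterization described above might be sketched as follows; the parameter names `a` (slope) and `b` (shift) are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def parametric_sigmoid(x, a=1.0, b=0.0):
    """Sigmoid with two extra parameters (hypothetical names):
    a scales the steepness, b shifts the center.
    With a=1, b=0 this reduces to the standard sigmoid."""
    return 1.0 / (1.0 + np.exp(-a * (x - b)))
```

Making `a` and `b` trainable per hidden neuron gives the activation an adjustable response curve, which is the kind of modification the sentence describes.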
2. Since the convergence rate of the BP network is slow and the node structure of its hidden layer is difficult to determine, an improvement of the BP network algorithm and a method to determine the node structure of the hidden layer are suggested in this paper.
Principal component analysis can reduce the dimension of the input variables. The RBF neural network can achieve fast convergence of the learning algorithm by adjusting the hidden-layer centers and the connection weights with the K-means clustering algorithm.
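A rough NumPy illustration of that pipeline (PCA to reduce the input dimension, K-means to place the RBF centers, least squares for the output weights); all sizes, the RBF width, and the data are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # 100 samples, 8 input variables

# PCA: project onto the top-2 principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                      # reduced 2-D inputs

# K-means (a few Lloyd iterations) to place the RBF centers
k = 4
centers = Z[rng.choice(len(Z), k, replace=False)]
for _ in range(10):
    labels = np.argmin(((Z[:, None] - centers) ** 2).sum(-1), axis=1)
    for j in range(k):
        pts = Z[labels == j]
        if len(pts):                   # skip empty clusters
            centers[j] = pts.mean(axis=0)

# Gaussian RBF hidden-layer activations; output weights by least squares
width = 1.0
H = np.exp(-((Z[:, None] - centers) ** 2).sum(-1) / (2 * width ** 2))
y = rng.normal(size=100)               # dummy targets
w, *_ = np.linalg.lstsq(H, y, rcond=None)
```

Solving the output weights in closed form after fixing the centers is what makes RBF training fast compared with end-to-end gradient descent.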
Dependency factors and antecedent coverage factors of rules are calculated from sample data. They are employed for constructing and configuring fuzzy neural networks, where the number of hidden-layer neurons equals the number of rules and the initial weights of the networks are set from the above factors. The inputs and outputs of the artificial neural networks are fuzzified, and a genetic algorithm is also utilized to optimize the fuzzy output parameters of the networks.
The relationship between the order of approximation by neural networks based on scattered threshold-value nodes and the number of neurons in a single hidden layer is investigated.
In this paper, we propose a new directional multi-resolution ridgelet network (DMRN) based on the ridgelet frame theory, which uses the ridgelet as the activation function in a hidden layer.
The results obtained earlier are extended to the cases of a nonlinear regression and a feedforward neural network with one hidden layer.
Multilayer Perceptrons with different numbers of neurons in the hidden layer have been trained using different values of the signal-to-noise ratio to minimize the mean square error using the error back-propagation algorithm.
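A minimal version of that setup, a one-hidden-layer perceptron trained by full-batch error back-propagation to minimize the mean square error, can be sketched as follows; the toy sine target, hidden width, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(np.pi * X)                    # toy target function

n_hidden = 8                             # neurons in the hidden layer
W1 = rng.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                        # gradient of the squared error
    # backward pass (error back-propagation)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

h = np.tanh(X @ W1 + b1)
mse = float(((h @ W2 + b2 - y) ** 2).mean())
```

Varying `n_hidden` and the noise level of the targets reproduces the kind of experiment the sentence describes: the achievable mean square error depends on both the hidden-layer size and the signal-to-noise ratio.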
The best predictive power for the classification of soils from the fifteen regions was achieved using a network with seven hidden layer nodes and 2500 training epochs using the online back-propagation randomized training algorithm.
The discernibility-matrix attribute-reduction algorithm is used in the optimization design to reduce the number of nodes in the input and hidden layers.
Each network is a multi-layer perceptron with one or two hidden layers and a different number of hidden neurons.
The fuzzy neural network has six layers, comprising an input layer, an output layer and four hidden layers.
These variables served as inputs for neural-network classification, using 12 input variables, 2 hidden layers and 7 outcome variables: 0 = Movement Time, 1 = Wake, 2 = REM, 3 = S1, 4 = S2, 5 = S3 and 6 = S4.
The design of an NN involves the choice of several parameters which include the network architecture, number of hidden layers, number of neurons in the hidden layers, training, learning and transfer functions.
In this paper, we give a constructive proof that a real, piecewise continuous function can be almost uniformly approximated by single hidden-layer feedforward neural networks (SLFNNs).
The dynamic and stationary properties of the population vector of the hidden-layer neurons, as obtained within the framework of the model in question, show a close similarity to the experimentally observed (Georgopoulos et al.
When the input layer is exposed to a learned pattern, the hidden-layer units show an associative activation pattern.
A Coiflet wavelet decomposition and a pseudo smoothed Wigner-Ville distribution were used to extract features from the S2 sounds and train a one-hidden-layer NN using two-thirds of the data.
Heuristic configuration of single hidden-layer feed-forward neural networks