Chaudhari, Gaurav Uday and V, Manohar and Mohanty, Biswajit (2007) Function approximation using back propagation algorithm in artificial neural networks. BTech thesis.
Abstract
Inspired by biological neural networks, artificial neural networks (ANNs) are massively parallel computing systems consisting of a large number of simple processors with many interconnections. Each neuron sums its weighted inputs and feeds the sum into an activation function to determine the strength of its output. Based on their architecture, ANNs are classified as feed-forward or feedback networks. In the most common family of feed-forward networks, the multilayer perceptron, neurons are organized into layers with unidirectional connections between them; these connections are directed from the input layer to the output layer and carry adjustable weights.

ANNs can be applied to function approximation: the network learns a function by looking at examples of it. The internal weights are slowly adjusted so that the network produces the same outputs as in the examples, and performance improves over time as the weights are updated iteratively. The hope is that when the ANN is shown a new set of input variables, it will give the correct output. To train a neural network to perform some task, the weight of each unit must be adjusted so that the error between the desired output and the actual output is reduced. This requires that the network compute the error derivative of the weights (EW), that is, how the error changes as each weight is increased or decreased slightly. The back-propagation algorithm is the most widely used method for determining EW.

We started with a program for a fixed-structure network: a four-layer network with one input layer, two hidden layers, and one output layer. The input layer has 9 nodes and the output layer has 1; the hidden layers are fixed at 4 and 3 nodes, and the learning rate is taken as 0.07. We wrote this program in MATLAB and obtained the network's output; the graph of mean square error against the number of iterations shows that the error converges well. We then moved to a network with all of its parameters variable, writing a program in Visual C++ in which the number of hidden layers, the number of nodes in each hidden layer, and the learning rate can all be varied. Convergence plots for different structures were obtained by varying these parameters.
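The abstract does not include the thesis code. For illustration, the following is a minimal C++ sketch of back-propagation for the fixed 9-4-3-1 architecture described above, assuming sigmoid activations, no bias terms, and a single placeholder training example (none of these details are specified in the abstract); it is not the authors' MATLAB or Visual C++ program. Because the layer sizes are passed in as a vector, the same sketch also accommodates the variable-structure experiments.

```cpp
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <utility>
#include <vector>

struct Network {
    std::vector<int> sizes;                          // nodes per layer, e.g. {9, 4, 3, 1}
    std::vector<std::vector<std::vector<double>>> w; // w[l][j][i]: weight from node i in layer l to node j in layer l+1
    std::vector<std::vector<double>> a;              // activations per layer
    std::vector<std::vector<double>> d;              // error terms (deltas) per layer

    explicit Network(std::vector<int> s) : sizes(std::move(s)) {
        for (std::size_t l = 0; l + 1 < sizes.size(); ++l) {
            w.emplace_back(sizes[l + 1], std::vector<double>(sizes[l]));
            for (auto& row : w.back())
                for (double& v : row)                // small random initial weights
                    v = std::rand() / (double)RAND_MAX - 0.5;
        }
        for (int n : sizes) { a.emplace_back(n); d.emplace_back(n); }
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // Forward pass: each node sums its weighted inputs and applies the activation.
    double forward(const std::vector<double>& input) {
        a[0] = input;
        for (std::size_t l = 0; l + 1 < sizes.size(); ++l)
            for (int j = 0; j < sizes[l + 1]; ++j) {
                double sum = 0.0;
                for (int i = 0; i < sizes[l]; ++i) sum += w[l][j][i] * a[l][i];
                a[l + 1][j] = sigmoid(sum);
            }
        return a.back()[0];                          // single output node
    }

    // Backward pass: compute the error derivatives (EW) layer by layer,
    // then take a gradient-descent step on every weight.
    void backward(double target, double eta) {
        std::size_t L = sizes.size() - 1;
        d[L][0] = (a[L][0] - target) * a[L][0] * (1.0 - a[L][0]);
        for (std::size_t l = L - 1; l >= 1; --l)
            for (int i = 0; i < sizes[l]; ++i) {
                double sum = 0.0;
                for (int j = 0; j < sizes[l + 1]; ++j) sum += w[l][j][i] * d[l + 1][j];
                d[l][i] = sum * a[l][i] * (1.0 - a[l][i]);
            }
        for (std::size_t l = 0; l < L; ++l)
            for (int j = 0; j < sizes[l + 1]; ++j)
                for (int i = 0; i < sizes[l]; ++i)
                    w[l][j][i] -= eta * d[l + 1][j] * a[l][i];
    }
};

int main() {
    Network net({9, 4, 3, 1});                       // architecture from the abstract
    std::vector<double> x(9, 0.5);                   // placeholder input sample
    double y = 0.25;                                 // placeholder target value
    for (int it = 0; it < 1000; ++it) {
        double out = net.forward(x);
        if (it % 100 == 0)                           // watch the mean square error fall
            std::cout << "iteration " << it << "  mse " << (out - y) * (out - y) << "\n";
        net.backward(y, 0.07);                       // learning rate from the abstract
    }
}
```

Printing the mean square error every few iterations mirrors the error-versus-iteration plots described in the abstract.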
| Item Type: | Thesis (BTech) |
|---|---|
| Uncontrolled Keywords: | Back-propagation algorithm, ANNs, EW, MATLAB, Visual C++ |
| Subjects: | Engineering and Technology > Electrical Engineering |
| Divisions: | Engineering and Technology > Department of Electrical Engineering |
| ID Code: | 4215 |
| Deposited By: | Hemanta Biswal |
| Deposited On: | 26 Jun 2012 09:59 |
| Last Modified: | 28 Jun 2012 10:52 |
| Supervisor(s): | Subhashini, K R |