Neural Network


How Does a Neural Network Work?

Neural network technology mimics the brain's own problem-solving process. Just as humans apply knowledge gained from past experience to new problems or situations, a neural network uses previously solved examples to build a system of "neurons" that makes new decisions, classifications, and forecasts. Neural networks look for patterns in training sets of data, learn these patterns, and develop the ability to correctly classify new patterns or to make forecasts and predictions. Neural networks excel at problem diagnosis, decision making, prediction, classification, and other problems where pattern recognition is important and precise computational answers are not required.

The training data includes many sets of input variables (usually indicators) and a corresponding output variable (such as the percent change in the open price). If you’re familiar with statistics, the inputs are often called independent variables and the output (prediction) is called the dependent variable. Each set of corresponding independent variables and dependent variable is called an observation, example, or case. In more general terms, when the neural network trains, it is using historical examples to "learn" the patterns of the input variables and how they correlate to the output variable (prediction).
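
To make this concrete, here is a minimal sketch in Python (using NumPy) of what such a training set might look like. The indicator names and every value are hypothetical, chosen only to show the shape of the data:

    # A minimal sketch of a training set; all indicator values are hypothetical.
    # Each observation pairs input variables (independent variables) with the
    # output variable (dependent variable) the network should learn to predict.
    import numpy as np

    # Inputs: one row per observation, one column per indicator.
    X = np.array([
        [0.42, -1.3, 55.0],   # e.g. momentum, MACD, RSI on day 1 (hypothetical)
        [0.38, -0.9, 61.2],   # ... day 2
        [0.51,  0.2, 48.7],   # ... day 3
    ])

    # Output: the value to predict, e.g. percent change in the next open.
    y = np.array([0.8, -0.3, 1.1])

    print(X.shape, y.shape)   # (3, 3) (3,) -- 3 observations, 3 inputs each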

The range of historical examples ("training set") used to train the network should include a representative set of problems likely to be encountered in the real world. For example, if you want to predict the selling price of a stock, you need to make sure your training set includes historical examples of when the price went up, when it went down, and when it stayed the same.
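
A quick sketch, again with hypothetical percent-change values, of checking that a training set contains all three kinds of examples:

    # Count rising, falling, and flat examples in the training output.
    import numpy as np

    y = np.array([0.8, -0.3, 1.1, 0.0, -0.6])   # hypothetical percent changes

    up = np.sum(y > 0)
    down = np.sum(y < 0)
    flat = np.sum(y == 0)
    print(f"up: {up}, down: {down}, flat: {flat}")   # up: 2, down: 2, flat: 1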

You will want to provide historical examples that are relevant to predicting the current market and avoid historical examples that do not represent current market behavior. The best way to do this is to limit the amount of past history on which the neural network trains ("learns"). This means limiting the number of rows (bars) to between 300 and 2000 so that the neural network learns data "relevant" to today’s market. On the other hand, you don’t want to use too few rows, or overfitting may occur.
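
A sketch of limiting the training window, assuming a hypothetical history array ordered from oldest row to newest:

    # Keep only the most recent rows for training; all data is hypothetical.
    import numpy as np

    history = np.random.rand(5000, 4)   # 5000 hypothetical rows of 4 indicators

    MAX_ROWS = 2000                     # upper bound suggested above
    train = history[-MAX_ROWS:]         # the most recent 2000 rows
    print(train.shape)                  # (2000, 4)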

Additionally, it is important to understand that every input must be present in an observation for the neural network to use that observation to "learn". For example, if you are predicting a U.S. security using an input from a foreign market, and that market is closed for a holiday on a day you expect an observation, then the observation will be missing data. The neural network will therefore be unable to use that observation for "learning" or for making a prediction.
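
One common way to handle this, sketched below with hypothetical values, is to discard any observation that has a missing input (represented here as NaN):

    # Drop observations with missing inputs, e.g. a foreign-market holiday.
    import numpy as np

    X = np.array([
        [0.42, -1.3],
        [0.38, np.nan],   # foreign-market input missing on this day
        [0.51,  0.2],
    ])
    y = np.array([0.8, -0.3, 1.1])

    complete = ~np.isnan(X).any(axis=1)   # True where every input exists
    X_train, y_train = X[complete], y[complete]
    print(X_train.shape)                  # (2, 2) -- the incomplete row is dropped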

Finally, because of inflation and an overall rise in most markets, normalizing inputs and outputs over time (detrending) is extremely important. Most technical indicators do this automatically.
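
For illustration, here is how a raw price series could be detrended by converting it to percent change, the kind of normalization many technical indicators perform. The closing prices are hypothetical:

    # Detrend a price series by converting it to percent change.
    import numpy as np

    close = np.array([100.0, 103.0, 101.5, 104.0])    # hypothetical closes
    pct_change = 100.0 * np.diff(close) / close[:-1]  # detrended series
    print(pct_change)   # [ 3.     -1.4563  2.4631] (approximately)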


The results that you achieve will only be as good as the training data (inputs and outputs) that you select.

How Does a Neural Network Learn? 

The network begins by finding linear relationships between the inputs and the output. Weight values are assigned to the links between the input and output neurons. After those relationships are found, neurons are added to the hidden layer so that nonlinear relationships can be found. Input values in the first layer are multiplied by the weights and passed to the second (hidden) layer. Neurons in the hidden layer "fire", or produce outputs, based upon the sum of the weighted values passed to them. The hidden layer passes values to the output layer in the same fashion, and the output layer produces the desired results (predictions).
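
The paragraph above can be summarized in a few lines of Python. The weights here are random placeholders, and the sigmoid is just one common choice of "firing" function, not necessarily the one any particular product uses:

    # A minimal sketch of the forward pass: inputs are multiplied by weights,
    # hidden neurons "fire" on the weighted sum, and the output layer produces
    # the prediction. All values are hypothetical.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.42, -1.3, 55.0])        # one observation's inputs

    W_hidden = np.random.randn(3, 4) * 0.1  # weights: input -> hidden (3 in, 4 hidden)
    W_output = np.random.randn(4, 1) * 0.1  # weights: hidden -> output

    hidden = sigmoid(x @ W_hidden)          # hidden neurons fire on weighted sums
    prediction = hidden @ W_output          # output layer: the prediction
    print(prediction)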

The network "learns" by adjusting the interconnection weights between layers. The answers the network is producing are repeatedly compared with the correct answers, and each time the connecting weights are adjusted slightly in the direction of the correct answers. Additional hidden neurons are added as necessary to capture features in the data set. Eventually, if the problem can be learned, a stable set of weights evolves and will produce good answers for all of the sample decisions or predictions. The real power of neural networks is evident when the trained network is able to produce good results for data that the network has never "seen" before.

Network Structure

The basic building block of neural network technology is the simulated neuron (depicted in Figure 1 as a circle). Independent neurons are of little use, however, unless they are interconnected in a network of neurons. The network processes a number of inputs from the outside world to produce an output: the network's predictions. The neurons are connected by weights (depicted as lines).


Neurons are grouped into layers by their connection to the outside world. For example, if a neuron receives data from outside of the network, it is considered to be in the input layer. If a neuron contains the network's predictions, it is in the output layer. Neurons in between the input and output layers are in the hidden layer, which serves as a feature detector.
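
As a sketch of this grouping, with hypothetical layer sizes, the "lines" of Figure 1 correspond to two weight matrices: one connecting the input layer to the hidden layer, and one connecting the hidden layer to the output layer:

    # Layer grouping: input neurons receive outside data, output neurons hold
    # the predictions, and hidden neurons in between act as feature detectors.
    import numpy as np

    n_input, n_hidden, n_output = 3, 5, 1   # hypothetical layer sizes

    # The connecting "lines" are the weight matrices between the layers.
    W_in_to_hidden = np.zeros((n_input, n_hidden))
    W_hidden_to_out = np.zeros((n_hidden, n_output))

    print(W_in_to_hidden.shape, W_hidden_to_out.shape)   # (3, 5) (5, 1)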