After a data-set is loaded, the Neural Network can be configured. Although a recommended Neural Network structure is configured automatically based on the loaded data-set structure, advanced users can still tweak it to achieve the best result. The configuration interface is shown in Figure 3.4.


Figure 3.4: Configuring Neural Network


Neural Network Type


Based on the data-set structure loaded in the data preparation step, the correct neural network type is selected automatically. Users can override the Neural Network Type by selecting Customised Type, as shown in Figure 3.5.


Figure 3.5: Selecting Neural Network Type



Pre/Post Processing


Although the data-set has been processed in the data preparation step, pre-processing and post-processing are still required to normalize the data-set so that its values fall within the correct range (from Min to Max).



Figure 3.6: Pre/Post Processing data normalization methods.


Two normalization methods are supported in ANNHUB: Min Max and Normalisation. If the data-set has already been normalized in the data preparation step, pre/post processing can be disabled by choosing the Linear option. Both methods are sketched below.
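For reference, here is a minimal NumPy sketch of the two methods. The function names and the [0, 1] target range are illustrative assumptions, and Normalisation is assumed here to mean z-score standardization; this is not ANNHUB's internal API.

    import numpy as np

    def min_max(x, new_min=0.0, new_max=1.0):
        # Min Max: rescale each column into [new_min, new_max].
        x_min, x_max = x.min(axis=0), x.max(axis=0)
        return (x - x_min) / (x_max - x_min) * (new_max - new_min) + new_min

    def z_score(x):
        # Normalisation (assumed z-score): zero mean, unit variance per column.
        return (x - x.mean(axis=0)) / x.std(axis=0)

    data = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
    print(min_max(data))   # each column rescaled into [0, 1]
    print(z_score(data))   # each column standardized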


Neural Network Structure


The number of input, hidden, and output nodes of the Neural Network is chosen based on the data-set structure; users only need to tweak the number of hidden nodes via the Hidden Nodes setting, as illustrated below.
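A short sketch of how the layer sizes follow from the data-set shape, assuming a single hidden layer and hypothetical data; the derivation is illustrative rather than ANNHUB's internal rule.

    import numpy as np

    # Hypothetical data-set: 150 samples, 4 input features, 3 one-hot target classes.
    X = np.random.rand(150, 4)
    Y = np.eye(3)[np.random.randint(0, 3, 150)]

    n_input = X.shape[1]    # input nodes follow the feature count
    n_output = Y.shape[1]   # output nodes follow the target count
    n_hidden = 10           # the only free choice, via the Hidden Nodes setting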


The activation functions for the hidden and output layers are also chosen based on the data-set structure; users can tweak the activation function for each layer via the Activation Function drop-down list, as shown in Figure 3.7.


Figure 3.7: Activation Function Type



Five popular activation functions are supported in ANNHUB, as listed below; minimal definitions are sketched after the list.


    • Purelin: Linear transfer function
    • ReLU: Rectified Linear Unit transfer function
    • LogSig: Log-sigmoid transfer function
    • TanSig: Hyperbolic tangent sigmoid transfer function
    • Softmax: Softmax transfer function (often used in final layer of the Neural Network in pattern recognition applications)
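As a point of reference, these are the standard NumPy definitions of the five transfer functions; whether ANNHUB uses exactly these variants is an assumption.

    import numpy as np

    def purelin(x):    # linear: f(x) = x
        return x

    def relu(x):       # rectified linear unit: max(0, x)
        return np.maximum(0.0, x)

    def logsig(x):     # log-sigmoid: 1 / (1 + exp(-x))
        return 1.0 / (1.0 + np.exp(-x))

    def tansig(x):     # hyperbolic tangent sigmoid
        return np.tanh(x)

    def softmax(x):    # softmax over the last axis, numerically stabilized
        e = np.exp(x - np.max(x, axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)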


Cost function


ANNHUB supports three different cost function types, listed below; minimal definitions are sketched after the list.

    • Mean Squared Error
    • Sum Squared Error
    • Cross Entropy
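A minimal sketch of the three cost functions, assuming the standard textbook definitions; ANNHUB may apply scaling factors (for example a factor of 1/2) differently.

    import numpy as np

    def mean_squared_error(y_true, y_pred):
        return np.mean((y_true - y_pred) ** 2)

    def sum_squared_error(y_true, y_pred):
        return np.sum((y_true - y_pred) ** 2)

    def cross_entropy(y_true, y_pred, eps=1e-12):
        # y_true: one-hot targets; y_pred: predicted class probabilities.
        return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)))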


Advanced users can tweak the cost function via the drop-down list shown in Figure 3.8.

Figure 3.8: Cost Function Type


Training algorithms

Six advanced second-order training algorithms are supported in ANNHUB, including Scaled Conjugate Gradient, Quasi-Newton, and Bayesian Regularization. For a given data-set, the recommended training algorithm is selected by ANNHUB; however, advanced users can change it via the training algorithm list shown in Figure 3.9.


Figure 3.9: Training algorithms.


When a training algorithm is selected, its recommended parameters are also configured in the Train Neural Network step.


When the training data ratio is specified, the remaining data is split equally between the validation set and the test set. For example, if the training data ratio is 70%, the validation set ratio is 15% and the test set ratio is 15%.


When the training engine is Bayesian Regularization, a validation set is not required. As a result, the test set ratio is 30% if the training data ratio is 70%. This split logic is sketched below.
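A short sketch of the split arithmetic just described; the function and flag names are illustrative.

    def data_split(train_ratio, bayesian_regularization=False):
        # Return (train, validation, test) ratios per the rules above.
        remainder = 1.0 - train_ratio
        if bayesian_regularization:
            # Bayesian Regularization needs no validation set.
            return train_ratio, 0.0, remainder
        # Otherwise the remainder is shared equally.
        return train_ratio, remainder / 2, remainder / 2

    print(data_split(0.70))        # 70/15/15 split
    print(data_split(0.70, True))  # 70/0/30 split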


Tips:

    • For a small data-set, the Bayesian Regularization training algorithm is the best choice, as it does not require a validation set to avoid over-fitting.
    • For a large data-set, the Scaled Conjugate Gradient training algorithm provides better performance and tends to be the fastest training algorithm while using little memory.
    • For data-sets with an input dimension greater than 50, it is recommended to use the Scaled Conjugate Gradient or Quasi-Newton BFGS/DFP training algorithms.