Youxi Wu, Dong Liu, He Jiang. Length-Changeable Incremental Extreme Learning Machine. Journal of Computer Science and Technology, 2017.
Extreme Learning Machine (ELM) is a learning algorithm for generalized Single-hidden-Layer Feedforward Networks (SLFNs). To obtain a suitable network architecture, Incremental Extreme Learning Machine (I-ELM) is a kind of ELM that constructs SLFNs by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to obtain a minimal training error, they either do not change the construction scheme of I-ELM or face the risk of over-fitting. Making the testing error converge quickly and stably therefore becomes an important issue. In this paper, we propose a new incremental ELM, referred to as Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is regarded as a whole when the output weights are tuned. The output weights of the newly added hidden nodes are determined using a partial error-minimizing method. We prove that an SLFN constructed using LCI-ELM has universal approximation capability on any compact input set as well as on any finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate as well as a lower over-fitting risk than some competitive I-ELM-class algorithms.
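To make the baseline concrete, here is a minimal MATLAB sketch of the one-node-at-a-time I-ELM scheme that LCI-ELM generalizes. This is NOT the LCI-ELM algorithm itself, and the data, activation, and parameter values are illustrative assumptions:

```matlab
% Basic I-ELM sketch: add random hidden nodes one by one; each new
% node's output weight beta minimizes the current residual error.
X = rand(100, 2) * 2 - 1;              % N*d training inputs in [-1, 1] (toy data)
T = sin(X(:, 1)) .* cos(X(:, 2));      % N*1 training targets (toy data)
[N, d] = size(X);
MaxHiddenNeurons = 50; epsilon = 1e-3; % assumed termination parameters
g = @(z) 1 ./ (1 + exp(-z));           % sigmoidal additive activation
e = T;                                 % residual error, initially the targets
Y = zeros(N, 1);                       % network output so far
for n = 1:MaxHiddenNeurons
    a = rand(d, 1) * 2 - 1;            % random input weights of the new node
    b = rand * 2 - 1;                  % random bias of the new node
    H = g(X * a + b);                  % N*1 output of the new hidden node
    beta = (e' * H) / (H' * H);        % error-minimizing output weight
    Y = Y + beta * H;                  % grow the network by one node
    e = T - Y;                         % updated residual
    if sqrt(mean(e .^ 2)) < epsilon, break; end   % training-error stop
end
```

LCI-ELM differs from this sketch by adding several nodes per step and retuning the existing network as a whole, as described in the abstract above.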
Source code of LCI-ELM (MATLAB version) can be found here.
This code can be used to generate a training set with random noise and a noise-free testing set for approximation of the function SinC(x).
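A possible sketch of such a SinC data generator is given below; the input range, set sizes, and noise level are assumptions for illustration and may differ from the released code:

```matlab
% Hypothetical SinC data generator: noisy training set, noise-free
% testing set. Sizes, range, and noise level are assumed values.
N = 5000; M = 5000;
sinc = @(x) (x == 0) + (x ~= 0) .* sin(x) ./ (x + (x == 0));  % SinC(0) = 1
TrainingInput  = rand(N, 1) * 20 - 10;                % uniform in [-10, 10]
TrainingTarget = sinc(TrainingInput) ...
               + rand(N, 1) * 0.4 - 0.2;              % uniform noise in [-0.2, 0.2]
TestingInput   = linspace(-10, 10, M)';
TestingTarget  = sinc(TestingInput);                  % noise-free targets
```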
[TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = LCIELM(TrainingInput, TrainingTarget, TestingInput, TestingTarget, k, lambda, MaxHiddenNeurons, epsilon, ActivationFunType)
The function is suitable for regression problems with only one output (target), but it can easily be modified to apply to multi-output regression problems. All the inputs and outputs of the function are interpreted as follows. For usage, it is suggested that the inputs of the data sets be normalized into [-1,1] and the outputs into [0,1].
Input:

TrainingInput: input of the training data set; an N*d matrix, where N is the size of the training set and d is the number of input attributes
TrainingTarget: target of the training data set; an N*1 vector, where N is the size of the training set
TestingInput: input of the testing data set; an M*d matrix, where M is the size of the testing set and d is the number of input attributes
TestingTarget: target of the testing data set; an M*1 vector, where M is the size of the testing set
k, lambda: parameters of LCI-ELM
MaxHiddenNeurons: maximum number of hidden nodes in the network; a termination parameter of the algorithm
epsilon: expected training error; a termination parameter of the algorithm
ActivationFunType: type of activation function; 0 for the sigmoidal additive function and 1 for the Gaussian radial basis function (more types can be appended if needed)
Output:

TrainingTime: time spent on training (seconds)
TestingTime: time spent on testing (seconds)
TrainingAccuracy: training RMSE for regression
TestingAccuracy: testing RMSE for regression
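An end-to-end usage sketch on the SinC task follows. The call matches the signature documented above, but all parameter values (k, lambda, MaxHiddenNeurons, epsilon) and the data-generation details are illustrative assumptions, not recommendations from the paper:

```matlab
% Hypothetical end-to-end call of LCIELM on the SinC task.
N = 1000; M = 1000;
sinc = @(x) (x == 0) + (x ~= 0) .* sin(x) ./ (x + (x == 0));
TrainingInput  = rand(N, 1) * 2 - 1;                  % inputs normalized into [-1, 1]
TrainingTarget = sinc(TrainingInput * 10) ...
               + rand(N, 1) * 0.2 - 0.1;              % noisy training targets (assumed noise)
TestingInput   = linspace(-1, 1, M)';
TestingTarget  = sinc(TestingInput * 10);             % noise-free testing targets

k = 5; lambda = 0.01;                                 % assumed LCI-ELM parameters
MaxHiddenNeurons = 200; epsilon = 1e-3;               % assumed termination parameters
ActivationFunType = 0;                                % sigmoidal additive function

[TrainTime, TestTime, TrainRMSE, TestRMSE] = LCIELM( ...
    TrainingInput, TrainingTarget, TestingInput, TestingTarget, ...
    k, lambda, MaxHiddenNeurons, epsilon, ActivationFunType);
fprintf('train RMSE = %.4f, test RMSE = %.4f\n', TrainRMSE, TestRMSE);
```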