What is stacking?
Stacking is one of the three widely used ensemble methods in machine learning (alongside bagging and boosting). The overall idea is to train several models, usually of different algorithm types (the base-learners), on the training data, and then, rather than picking the single best model, to aggregate all of them with another model (the meta-learner) that makes the final prediction. The inputs to the meta-learner are the prediction outputs of the base-learners.
[Figure 1: base-learners feeding their predictions into a meta-learner]
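Before walking through the training procedure by hand, here is a minimal sketch of the whole pattern using scikit-learn's built-in `StackingRegressor`. The toy data and the particular choice of base-learners and meta-learner are illustrative assumptions, not prescribed by this article:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Toy data stands in for "the train data".
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Base-learners: deliberately different algorithm types.
base_learners = [
    ("rf", RandomForestRegressor(n_estimators=100, random_state=42)),
    ("svr", SVR()),
    ("ridge", Ridge()),
]

# Meta-learner: takes the base-learners' predictions as its inputs.
stack = StackingRegressor(
    estimators=base_learners,
    final_estimator=LinearRegression(),
    cv=10,  # 10-fold out-of-fold predictions, matching the procedure below
)
stack.fit(X_train, y_train)
print("R^2 on the held-out test set:", stack.score(X_test, y_test))
```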
How to Train?
Training a stacking model is a bit tricky, but not as hard as it sounds; it requires steps similar to k-fold cross-validation. First, divide the original data set into two sets: a train set and a test set. We won't even touch the test set during the training of the stacking model. Now divide the train set into k folds (say k = 10). If the original dataset contains N data points, each fold will contain N/k data points. (It is not mandatory for the folds to be of equal size.)
[Figure 2: train/test split, with the train set divided into k folds]
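A sketch of that split, assuming scikit-learn (the variable names are illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold, train_test_split

X, y = make_regression(n_samples=1000, n_features=10, random_state=42)

# The test set is held out entirely; stacking training never touches it.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

k = 10
kf = KFold(n_splits=k, shuffle=True, random_state=42)
# kf.split(X_train) yields (train_idx, val_idx) index pairs, one per fold.
# Fold sizes are as equal as possible (exactly N/k only when k divides N).
```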
Keep one of the folds aside and train the base models using the remaining folds. The held-aside fold will be treated as the testing data for this step.
[Figure 3: training the base models on the k−1 remaining folds]
Then, predict the values for the held-out fold (the 10th fold) using all M trained models. This yields M predictions for each data point in the 10th fold. We now have N/10 prediction sets, each with M fields (the predictions coming from the M models), i.e. an (N/10) × M matrix.
[Figure 4: the M base models predicting the held-out fold]
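The steps in Figures 3 and 4 fit in one loop. Here is a sketch continuing the variables above (`base_learners` is the list from the first example):

```python
from sklearn.base import clone

M = len(base_learners)
N = len(X_train)
oof_preds = np.zeros((N, M))  # out-of-fold predictions: one column per base-learner

for train_idx, val_idx in kf.split(X_train):
    for m, (name, model) in enumerate(base_learners):
        # Train on the k-1 remaining folds only.
        fitted = clone(model).fit(X_train[train_idx], y_train[train_idx])
        # Predict the held-out fold: each row of oof_preds is filled exactly
        # once, by a model that never saw that data point during training.
        oof_preds[val_idx, m] = fitted.predict(X_train[val_idx])
```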
Now iterate the above process, changing the held-out fold (from 1 to 10). At the end of all the iterations we will have N prediction sets, one for each data point in the original train set, along with the actual value of the field we are predicting:
| Data point # | Prediction from base learner 1 | Prediction from base learner 2 | Prediction from base learner 3 | … | Prediction from base learner M | Actual |
|---|---|---|---|---|---|---|
| 1 | ŷ_11 | ŷ_12 | ŷ_13 | … | ŷ_1M | y_1 |
| 2 | ŷ_21 | ŷ_22 | ŷ_23 | … | ŷ_2M | y_2 |
| … | … | … | … | … | … | … |
| N | ŷ_N1 | ŷ_N2 | ŷ_N3 | … | ŷ_NM | y_N |
This will be the input data set for our meta-learner. Now we can train the meta-learner, using any suitable algorithm, feeding each base-learner's prediction in as an input field and the actual value as the output field.
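Training the meta-learner on that N × M matrix is then an ordinary fit. One common convention, not spelled out above, is to also refit the base-learners on the full train set for use at prediction time:

```python
from sklearn.base import clone
from sklearn.linear_model import LinearRegression

meta_learner = LinearRegression()
# Inputs: the M base-learner predictions; output: the actual value.
meta_learner.fit(oof_preds, y_train)

# Refit the base-learners on the whole train set so the prediction-time
# models have seen all available training data (an assumed convention here).
fitted_base = [clone(model).fit(X_train, y_train) for _, model in base_learners]
```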
Predicting
Once all the base-learners and the meta-learner are trained, prediction follows the same idea as training, minus the k folds. For a given input data point, simply pass it through the M base-learners to get M predictions, then send those M predictions through the meta-learner as inputs, as in Figure 1.
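As a sketch, continuing the hypothetical names from the training examples:

```python
import numpy as np

def stacked_predict(X_new):
    # Pass the input through all M base-learners...
    base_preds = np.column_stack([m.predict(X_new) for m in fitted_base])
    # ...then send those M predictions through the meta-learner, as in Figure 1.
    return meta_learner.predict(base_preds)

y_pred = stacked_predict(X_test)
```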