import numpy as np
from sklearn import datasets, preprocessing
from sklearn.model_selection import train_test_split
from neupy import algorithms
import matplotlib.pyplot as plt
dataset = datasets.load_diabetes()
x_train, x_test, y_train, y_test = train_test_split(
    preprocessing.minmax_scale(dataset.data),
    preprocessing.minmax_scale(dataset.target.reshape(-1, 1)),
    test_size=0.3,
)

# Train the GRNN, then measure the error on the *training* set.
nw = algorithms.GRNN(std=1, verbose=True)
nw.train(x_train, y_train)
y_predicted = nw.predict(x_train)
mse = np.mean((y_predicted - y_train) ** 2)
print(mse)
If you run the code above, you get a non-zero MSE, while in the original GRNN the training error should be zero, since

y_predicted = exp(-distance(input, iw)**2 / (2 * sigma**2)) * wo

During training the input-hidden weights are set to the training inputs, so the distance term evaluates to zero and the exp term to one. The final network output is then just the hidden-output weights, which are set to the training targets during training. Thus, the MSE should be zero...
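For comparison, the GRNN output is usually described as a normalized weighted average over *all* training patterns (the Nadaraya-Watson form), not a lookup of the single matching pattern. A plain-NumPy sketch (assumption: neupy's GRNN computes this same normalized form; `grnn_predict` is an illustrative helper, not its API) shows that the training MSE only approaches zero as std shrinks:

```python
import numpy as np

def grnn_predict(x_train, y_train, x, std):
    """Plain NumPy sketch of GRNN (Nadaraya-Watson) prediction.

    Assumption: this mirrors the normalized form neupy's GRNN uses.
    """
    # Squared Euclidean distance from every query to every training input.
    d2 = ((x[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * std ** 2))
    # The output is a weighted average of ALL training targets,
    # so every pattern contributes, not just the exact match.
    return (w @ y_train) / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.random((50, 3))
y = rng.random((50, 1))

mses = {std: float(np.mean((grnn_predict(x, y, x, std) - y) ** 2))
        for std in (1.0, 0.1, 0.01)}
print(mses)  # training MSE shrinks toward zero only as std -> 0
```

With std=1 the kernel weights are nearly uniform, so each training prediction is pulled toward the mean of the targets and the training MSE stays well above zero, which matches the behaviour reported above.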