Revisiting the ZPTAE loss function, we decided to modify it slightly by adding a "penalty" term for outliers. This leaves the main behaviour unchanged for reasonable inferences, but further penalises extremely large outliers (obviously unrealistic values). The aim is to make outliers more visible in losses/regrets, which should help with inference synthesis and allow the forecasters to better take outliers into account.
import numpy as np

def power_tanh(x, alpha=0.25, beta=2):
    # Smooth, sign-preserving squashing: behaves like x near 0 and
    # like sign(x) * |x|**alpha for large |x|.
    return x / (1 + np.abs(x)**beta)**((1 - alpha) / beta)

def loss_zptae(y_true, y_pred, sigma, mean,
               alpha=0.25, beta=2, gamma=4, penalty_norm=0.01):
    # Z power-tanh absolute error, plus an outlier penalty term.
    z_true = (y_true - mean) / sigma
    z_pred = (y_pred - mean) / sigma
    pt_true = power_tanh(z_true, alpha=alpha, beta=beta)
    pt_pred = power_tanh(z_pred, alpha=alpha, beta=beta)
    main_term = np.abs(pt_pred - pt_true)
    # Negligible for reasonable z-scores; grows as |dz|**gamma for
    # extreme outliers.
    penalty_term = (penalty_norm * np.abs(z_pred - z_true))**gamma
    return main_term + penalty_term
Visualisation of the ZPTAE loss function with (solid lines) and without (dotted lines) the penalty term.
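As a quick sanity check of the behaviour described above, the sketch below (restating the two functions, with example inputs we picked purely for illustration) compares the penalty term's contribution for a reasonable prediction versus an absurd one:

```python
import numpy as np

def power_tanh(x, alpha=0.25, beta=2):
    return x / (1 + np.abs(x)**beta)**((1 - alpha) / beta)

def loss_zptae(y_true, y_pred, sigma, mean,
               alpha=0.25, beta=2, gamma=4, penalty_norm=0.01):
    z_true = (y_true - mean) / sigma
    z_pred = (y_pred - mean) / sigma
    main_term = np.abs(power_tanh(z_pred, alpha, beta)
                       - power_tanh(z_true, alpha, beta))
    penalty_term = (penalty_norm * np.abs(z_pred - z_true))**gamma
    return main_term + penalty_term

# Reasonable prediction (z difference of 0.1): the penalty is
# (0.01 * 0.1)**4 = 1e-12, so the main term dominates.
print(loss_zptae(0.5, 0.6, sigma=1.0, mean=0.0))

# Wildly unrealistic prediction (z difference of ~1000): the penalty
# is (0.01 * 999.5)**4, roughly 1e4, and dwarfs the main term, whose
# growth is damped by power_tanh.
print(loss_zptae(0.5, 1000.0, sigma=1.0, mean=0.0))
```

With the default penalty_norm of 0.01, predictions within a few standard deviations of the truth see essentially the pure ZPTAE loss, while the quartic penalty takes over only once the z-score error reaches the hundreds.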