alpha_regret is a free parameter chosen by topic creators. We need to understand how the network loss depends on alpha_regret so topic creators can choose a sensible, and ideally optimal, value.
The first stage of the study is to take the default simulated network and finely vary alpha_regret to see how the loss changes. Are there any trends or minima, or is there just scatter? The output metrics should focus on the final ~100 epochs to ensure the results are not skewed by the burn-in (discarding the cold-start values of the regrets, etc.). It is important to use log-averages or medians of the loss.
Depending on how the study develops, possible extensions are to see how the results change with different compositions of network participants.
We begin with a preliminary study examining the relationship between network loss and the alpha_regret parameter while keeping other parameters at their default settings. Specifically, we train the network over 1000 epochs and compare the performance over the last 100 epochs. The alpha_regret parameter is varied between 0 and 1 in increments of 0.1.
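A minimal sketch of this sweep, assuming a stand-in `run_network(alpha_regret, n_epochs, seed)` simulator; the synthetic loss generator below is a placeholder (not the actual simulation) so the loop runs end to end:

```python
import numpy as np

def run_network(alpha_regret: float, n_epochs: int, seed: int) -> np.ndarray:
    """Stand-in for the actual simulator: returns per-epoch combinator losses.
    Synthetic placeholder so the sweep below is runnable end to end."""
    rng = np.random.default_rng(seed)
    return np.exp(rng.normal(loc=-1.0, scale=0.1, size=n_epochs))

alphas = np.arange(0.0, 1.01, 0.1)   # alpha_regret from 0 to 1 in steps of 0.1
n_epochs, tail = 1000, 100           # train 1000 epochs, score the final 100

results = {}
for alpha in alphas:
    losses = run_network(alpha_regret=alpha, n_epochs=n_epochs, seed=0)
    tail_log = np.log10(losses[-tail:])        # work in log space
    results[alpha] = {
        "log_mean": tail_log.mean(),           # log-average
        "log_median": np.median(tail_log),     # log-median (robust to spikes)
    }
```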
As a first step, we plot the log-averages and log-medians of the data. We observe that alpha_regret = 0 (when all the regret is historical, i.e., the regret does not change) is an obvious outlier.
On the plot above, the shaded area around the medians represents the 95% CIs, where the margin of error is given by the formula 1.96*std/sqrt(100), with 100 being the number of final epochs used.
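For reference, a sketch of how these margins are computed, reusing `tail_log` (the final-100-epoch log-losses) from the sweep above:

```python
import numpy as np

def median_with_ci(tail_log: np.ndarray) -> tuple[float, float]:
    """Median of the log-losses and the 95% margin of error used in the plot,
    1.96 * std / sqrt(n), with n the number of final epochs (here 100)."""
    n = len(tail_log)
    margin = 1.96 * tail_log.std(ddof=1) / np.sqrt(n)
    return float(np.median(tail_log)), float(margin)
```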
Taking the results at face value, we observe a decrease in loss once current regret is introduced (the step from alpha_regret = 0 to 0.1). Beyond this point, the differences are not statistically significant.
For finer-grained alpha_regret sampling, we train the network over 500 epochs (one way to build such a grid is sketched below):
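A log-spaced grid for the finer sweep could be built like this; the endpoints and resolution are assumptions, chosen to bracket the region of interest:

```python
import numpy as np

# 41 log-spaced alpha_regret values between 1e-4 and 1 (endpoints assumed)
alphas_fine = np.logspace(-4, 0, num=41)
```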
It seems that the optimal choice of alpha_regret lies somewhere between 10^(−2.9) and 10^(−2.8) for the default network parameters.
When varying other network parameters, we observe that the transition typically occurs between 10^(−3.5) and 10^(−3.0), similar to before, except for an outlier network with n_predictors=3:
Nice! Could the transition at alpha_regret = 1e-3 be because we run for 1000 epochs? In other words, for alpha_regret < 1e-3 = 1/n_epochs, the network would not get rid of the cold-start signal before the end of the run. I think that’s what we’re seeing. So basically it does not matter for overall network performance what alpha_regret we use, but it does matter for how quickly we get rid of the initial conditions?
Indeed, our observations demonstrate that the time constant (the amount of time it takes for the exponential moving average to reflect approximately 63.2% of a step change in the input data) is proportional to 1/alpha_regret. Hence, when running the network over 1000 epochs with smaller alpha_regret values, the influence of the cold start remains significant until the end, and the weight assignment stays suboptimal. However, once equilibrium is reached, the impact of the initial conditions diminishes, and overall network performance does not depend on the exact value of alpha_regret.
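A quick numerical check of this reasoning, assuming the regret follows the standard EMA update r ← (1 − alpha_regret)·r + alpha_regret·x (we assume this matches the simulator's update rule):

```python
import numpy as np

def ema_step_response(alpha: float, n_steps: int) -> np.ndarray:
    """Fraction of a unit step change reflected by the EMA after each update."""
    r, out = 0.0, np.empty(n_steps)
    for t in range(n_steps):
        r = (1.0 - alpha) * r + alpha * 1.0   # input steps from 0 to 1 at t = 0
        out[t] = r
    return out

alpha = 1e-3
resp = ema_step_response(alpha, n_steps=1000)

# After ~1/alpha steps the EMA reflects 1 - (1 - alpha)^(1/alpha) ~ 1 - 1/e ~ 63.2%
print(resp[int(1 / alpha) - 1])    # ~0.632, i.e. the time constant is ~1/alpha

# Residual cold-start signal after 1000 epochs: (1 - alpha)^1000 ~ e^-1 ~ 0.368,
# so for alpha_regret <~ 1/n_epochs the initial conditions never wash out.
print((1.0 - alpha) ** 1000)
```

For alpha_regret = 1e-3, roughly a third of the cold-start signal therefore survives a 1000-epoch run, which matches the transition seen above.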
OK great – this shows that the role of alpha_regret is really to handle changing worker performance. Imagine a worker suddenly gets much better or much worse. The network shouldn’t overreact if that is incidental, but should nonetheless catch it early if it is systematic.
So maybe we want to start thinking about an objective criterion for dynamically changing alpha_regret as a function of changing worker performance. This should improve network performance when worker properties are non-static.
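One purely illustrative scheme (none of the names below exist in the actual codebase): raise alpha_regret when a worker's recent residuals show a persistent bias relative to their noise level, and keep it low when fluctuations look incidental.

```python
import numpy as np

def adaptive_alpha(residuals: np.ndarray,
                   alpha_min: float = 1e-3,
                   alpha_max: float = 0.5,
                   sensitivity: float = 1.0) -> float:
    """Map the recent tracking error of one worker to an alpha_regret value.

    residuals: recent (observation - EMA) values for the worker.
    A persistent bias (systematic change) pushes alpha toward alpha_max;
    zero-mean noise (incidental fluctuation) keeps it near alpha_min.
    """
    bias = np.abs(residuals.mean())
    noise = residuals.std(ddof=1) + 1e-12
    z = sensitivity * bias / noise        # signal-to-noise of the drift
    gate = 1.0 - np.exp(-z)               # squashes z into [0, 1)
    return alpha_min + (alpha_max - alpha_min) * gate
```

The bias-to-noise gate captures the trade-off above: a one-off spike inflates the noise term as much as the bias, so alpha stays low, while a sustained shift inflates only the bias, so alpha rises quickly.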
Additionally, we repeat the experiment over multiple seeds to establish the statistical significance of our results. Each box shows 10 median log(combinator_loss) values, one per seed, each from a single network run:
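A sketch of that protocol, reusing the stand-in `run_network` and the `alphas_fine` grid from above (10 seeds per alpha_regret value, 500 epochs each):

```python
import numpy as np
import matplotlib.pyplot as plt

seeds = range(10)
box_data = [
    [np.median(np.log10(run_network(alpha, 500, seed)[-100:]))
     for seed in seeds]                 # one median log-loss per seed
    for alpha in alphas_fine
]

plt.boxplot(box_data)
plt.xticks(range(1, len(alphas_fine) + 1),
           [f"{a:.0e}" for a in alphas_fine], rotation=90)
plt.xlabel("alpha_regret")
plt.ylabel("median log10(combinator_loss)")
plt.show()
```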