Determine guidelines for choosing alpha_regret

alpha_regret is a free parameter chosen by topic creators. We need to understand how the network loss depends on alpha_regret so topic creators can choose a sensible, and ideally optimal, value.

The first stage of the study is to take the default simulated network and finely vary alpha_regret to see how the loss changes. Are there any trends or minima, or is there just scatter? The output metrics should focus on the final ~100 epochs to ensure the results are not skewed by the burn-in (i.e., by the cold-start values of the regrets). It is important to use log-averages or medians of the loss, since occasional loss spikes would otherwise dominate a plain average.
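
As a concrete starting point, here is a minimal sketch (Python/NumPy) of that summary metric, assuming the simulator exposes the per-epoch network loss as an array; the helper name is hypothetical:

```python
import numpy as np

def summarize_loss(loss_per_epoch, tail=100):
    """Log-average and log-median of the loss over the final `tail` epochs.

    Restricting to the tail avoids the burn-in, and working in log space
    keeps occasional loss spikes from dominating the summary.
    """
    log_losses = np.log10(np.asarray(loss_per_epoch[-tail:]))
    return log_losses.mean(), np.median(log_losses)
```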

Depending on how the study develops, possible extensions are to see how the results change with different compositions of network participants.

Preliminary analysis: varying alpha_regret; other parameters at default

Recall that alpha_regret is the smoothing parameter in the exponential moving average used to estimate the workers' regret:

R_il = alpha_regret * current_regret + (1 - alpha_regret) * historical_regret
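
For reference, the same update in code form (a minimal sketch; variable names are illustrative, not the simulator's):

```python
def update_regret(historical_regret, current_regret, alpha_regret):
    """Exponential moving average update of a worker's regret.

    alpha_regret = 1 keeps only the instantaneous regret;
    alpha_regret = 0 freezes the regret at its historical value.
    """
    return alpha_regret * current_regret + (1.0 - alpha_regret) * historical_regret
```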

We begin with a preliminary study examining the relationship between network loss and the alpha_regret parameter while keeping other parameters at their default settings. Specifically, we train the network over 1000 epochs and compare the performance over the last 100 epochs. The alpha_regret parameter is varied between 0 and 1 in increments of 0.1.
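
A sketch of the sweep itself, assuming a hypothetical run_network(alpha_regret, n_epochs) call that returns the per-epoch network loss (this is not the simulator's actual API):

```python
import numpy as np

alphas = np.round(np.arange(0.0, 1.0001, 0.1), 1)   # 0.0, 0.1, ..., 1.0
summary = {}
for alpha in alphas:
    loss = run_network(alpha_regret=alpha, n_epochs=1000)   # hypothetical simulator call
    tail = np.log10(np.asarray(loss[-100:]))                # last 100 epochs only
    summary[alpha] = {"log_mean": tail.mean(), "log_median": np.median(tail)}
```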

As a first step, we plot the log-averages and log-medians of the data. We observe that alpha_regret = 0 (when all the regret is historical, i.e., the regret does not change) is an obvious outlier.


On the plot above, the shaded area around the medians represents the 95% CIs, where the margin of error is given by 1.96 * std / sqrt(100).
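
For completeness, a sketch of how the band is computed (normal approximation over the N = 100 tail epochs):

```python
import numpy as np

def median_with_margin(log_losses, z=1.96):
    """Median of the per-epoch log-losses plus a z*std/sqrt(N) margin of error."""
    log_losses = np.asarray(log_losses)
    margin = z * log_losses.std(ddof=1) / np.sqrt(len(log_losses))
    return np.median(log_losses), margin
```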

Taking the results as they are, we observe a decrease in loss when we introduce current regret (the jump from alpha_regret = 0 to 0.1). Beyond this point, the differences are not statistically significant.

For more finely grained alpha_regret sampling, we train the network over 500 epochs:


The trend is flat for alpha_regret > 0, and there seems to be no dependence of std on alpha_regret.


Since alpha_regret is a parameter in the exponential moving average, let’s vary alpha_regret on a logarithmic scale. We obtain the following:
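
For example, a sketch of the logarithmic sampling (again using the hypothetical run_network call from above):

```python
import numpy as np

# 25 logarithmically spaced alpha_regret values from 1e-4 to 1e0.
alphas = np.logspace(-4, 0, num=25)

log_medians = []
for alpha in alphas:
    loss = run_network(alpha_regret=alpha, n_epochs=1000)   # hypothetical simulator call
    log_medians.append(np.median(np.log10(np.asarray(loss[-100:]))))
```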


Zooming in on the range between 10^(-4) and 10^(-2), the dependence turns out to be piecewise linear:

It seems that the optimal choice of alpha_regret lies somewhere between 10^(-2.9) and 10^(-2.8) at the default network parameters.

When varying other network parameters, we observe that the transition typically occurs between 10^(−3.5) and 10^(−3.0), similar to before, except for an outlier network with n_predictors=3:



Nice! Could the transition at alpha_regret = 1e-3 be because we run for 1000 epochs? In other words, for alpha_regret < 1e-3 = 1/n_epochs, the network would not get rid of the cold-start signal before the end of the run. I think that's what we're seeing. So basically the overall network performance does not depend on which alpha_regret we use, but alpha_regret does matter for how quickly we get rid of the initial conditions?

Indeed, our observations demonstrate that the time constant (the amount of time it takes for the exponential moving average to reflect approximately 63.2% of a step change in the input data) is proportional to 1/alpha_regret. Hence, when running the network over 1000 epochs with smaller alpha_regret values, the influence of the cold start remains significant until the end of the run, and the weight assignment is suboptimal. However, once we reach equilibrium, the impact of the initial conditions diminishes, and overall network performance does not depend on the exact value of alpha_regret.
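
To make the 63.2% figure concrete, here is a small self-contained check showing that the EMA needs roughly 1/alpha_regret updates to absorb that fraction of a step change:

```python
def steps_to_63pct(alpha_regret, target_fraction=0.632):
    """Number of EMA updates needed to absorb ~63.2% of a unit step change."""
    value, steps = 0.0, 0
    while value < target_fraction:
        value = alpha_regret * 1.0 + (1.0 - alpha_regret) * value  # step input of 1.0
        steps += 1
    return steps

for alpha in (0.1, 0.01, 0.001):
    print(alpha, steps_to_63pct(alpha))   # ~10, ~100, ~1000 steps, i.e. ~1/alpha
```

For alpha_regret below 1/n_epochs = 1e-3, this time constant exceeds the 1000-epoch run, which matches the transition seen in the plots above.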


OK great – this shows that the role of alpha_regret is really to handle changing worker performance. Imagine a worker suddenly gets much better or much worse. The network shouldn’t overreact if that is incidental, but should nonetheless catch it early if it is systematic.

So maybe we want to start thinking about some objective standard to dynamically change alpha_regret as a function of changing worker performance. This should improve network performance for non-static worker properties.
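
One possible shape for such a rule, purely as an illustrative sketch (nothing like this exists in the simulator yet; all names and constants are placeholders):

```python
def adaptive_alpha(alpha_base, current_regret, historical_regret,
                   sensitivity=1.0, alpha_min=1e-3, alpha_max=0.5):
    """Hypothetical rule: scale alpha_regret with the normalized 'surprise'.

    A large, sustained gap between current and historical regret hints at a
    systematic change in worker performance, so we track it faster; small
    gaps are treated as noise and smoothed heavily.
    """
    surprise = abs(current_regret - historical_regret) / (abs(historical_regret) + 1e-9)
    alpha = alpha_base * (1.0 + sensitivity * surprise)
    return min(max(alpha, alpha_min), alpha_max)
```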

Very interesting. I agree, dynamically varying alpha_regret seems like a good approach here.

We have now removed the EMA from the reputers in the simulator, so I redid the test. This is what it looks like now; the default alpha_regret = 0.1 looks great:


Additionally, we repeat the test over multiple seeds to show the statistical significance of our results. Each box shows 10 median log(combinator_loss) values, each from a single network run:
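
For reference, the multi-seed comparison follows roughly this pattern (a sketch; run_network and its seed argument are hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

alphas = [0.01, 0.1, 0.5, 1.0]     # example values to compare
seeds = range(10)

data = []
for alpha in alphas:
    medians = [
        np.median(np.log10(np.asarray(
            run_network(alpha_regret=alpha, n_epochs=1000, seed=s))[-100:]))
        for s in seeds                                  # one median per independent run
    ]
    data.append(medians)

plt.boxplot(data)
plt.xticks(range(1, len(alphas) + 1), [str(a) for a in alphas])
plt.xlabel("alpha_regret")
plt.ylabel("median log10(loss) over last 100 epochs")
plt.show()
```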

This is great! Thank you for sharing.
