# Multirate algorithm for updating the coefficients

*08-May-2020 04:59*

You may use a model

$$y_t = \beta_t \cdot x_t + \varepsilon_t, \qquad \beta_t = \beta_{t-1} + \eta_t$$

to allow your regression coefficients to vary slowly with time; a Kalman filter will then re-estimate them on each step (at constant time cost) based on the most recent data. Alternatively, you can hold the coefficients constant, $$\beta_t = \beta_{t-1}$$, and the Kalman filter will still re-estimate them on each step, but this time under the assumption that they are constant, incorporating each new observation to produce better and better estimates of the same coefficient values.

Welford's online algorithm computes the standard deviation and, along the way, the mean. Tony Finch (2009) provides a method for an exponentially weighted moving average and standard deviation:

```
init():
    meanX = 0, meanY = 0, meanXY = 0, n = 0
    varX = 0, varY = 0, covXY = 0       # additional variables for correlation
    desiredAlpha = 0.01

update(x, y):
    n += 1
    alpha = max(desiredAlpha, 1/n)      # to handle initial conditions
    dx  = x - meanX
    dy  = y - meanY
    dxy = (x*y) - meanXY                # needed for corXY
    varX  += ((1-alpha)*dx*dx - varX)  * alpha
    varY  += ((1-alpha)*dy*dy - varY)  * alpha    # needed for corXY
    covXY += ((1-alpha)*dx*dy - covXY) * alpha
    # alternate form: varX  = (1-alpha)*(varX  + dx*dx*alpha)
    # alternate form: varY  = (1-alpha)*(varY  + dy*dy*alpha)   # needed for corXY
    # alternate form: covXY = (1-alpha)*(covXY + dx*dy*alpha)
    meanX  += dx  * alpha
    meanY  += dy  * alpha
    meanXY += dxy * alpha

getA():  return covXY / varX
getB():  return meanY - getA()*meanX
corXY(): return (meanXY - meanX*meanY) / (sqrt(varX)*sqrt(varY))
```

In the pseudocode above, desiredAlpha can be set to 0; alpha then reduces to 1/n and the code operates without exponential weighting, computing plain running statistics.
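The exponentially weighted statistics above can be sketched in Python as follows; the class and method names are my own, chosen to mirror the pseudocode, not anything from the original:

```python
from math import sqrt

class ExpMovingRegression:
    """Exponentially weighted online mean/variance/covariance (after
    Tony Finch's method), yielding a slowly adapting regression line."""

    def __init__(self, desired_alpha=0.01):
        self.desired_alpha = desired_alpha
        self.n = 0
        self.mean_x = self.mean_y = self.mean_xy = 0.0
        self.var_x = self.var_y = self.cov_xy = 0.0

    def update(self, x, y):
        self.n += 1
        # max(desired_alpha, 1/n) handles the initial conditions: early on
        # the statistics behave like plain running averages.
        alpha = max(self.desired_alpha, 1.0 / self.n)
        dx = x - self.mean_x
        dy = y - self.mean_y
        dxy = x * y - self.mean_xy
        self.var_x += ((1 - alpha) * dx * dx - self.var_x) * alpha
        self.var_y += ((1 - alpha) * dy * dy - self.var_y) * alpha
        self.cov_xy += ((1 - alpha) * dx * dy - self.cov_xy) * alpha
        self.mean_x += dx * alpha
        self.mean_y += dy * alpha
        self.mean_xy += dxy * alpha

    def slope(self):       # "getA"
        return self.cov_xy / self.var_x

    def intercept(self):   # "getB"
        return self.mean_y - self.slope() * self.mean_x

    def corr(self):        # "corXY"
        return (self.mean_xy - self.mean_x * self.mean_y) / (
            sqrt(self.var_x) * sqrt(self.var_y))
```

With `desired_alpha=0.0` this computes exact running statistics, so feeding it points from y = 2x + 1 recovers slope 2, intercept 1, and correlation 1.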
Following Modified_moving_average, desiredAlpha can be set to 1/desiredWindowSize to approximate a moving window of that size. Here is the pseudocode you are probably looking for:

```
init():
    meanX = 0, meanY = 0
    varX = 0, covXY = 0, n = 0

update(x, y):
    n += 1
    dx = x - meanX
    dy = y - meanY
    varX  += (((n-1)/n)*dx*dx - varX)  / n
    covXY += (((n-1)/n)*dx*dy - covXY) / n
    meanX += dx / n
    meanY += dy / n

getA(): return covXY / varX
getB(): return meanY - getA()*meanX
```

For your two specific examples:

**Linear regression.** The paper "Online Linear Regression and Its Application to Model-Based Reinforcement Learning" by Alexander Strehl and Michael Littman describes an algorithm called KWIK Linear Regression (see Algorithm 1) which provides an approximation to the linear regression solution using incremental updates.
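As a sketch, the running-statistics pseudocode above translates to Python roughly as follows (class and variable names are illustrative):

```python
class OnlineSimpleRegression:
    """Incremental simple linear regression via running mean, variance,
    and covariance; every point gets equal weight (no forgetting)."""

    def __init__(self):
        self.n = 0
        self.mean_x = self.mean_y = 0.0
        self.var_x = self.cov_xy = 0.0

    def update(self, x, y):
        self.n += 1
        dx = x - self.mean_x
        dy = y - self.mean_y
        w = (self.n - 1) / self.n
        # Running (population) variance and covariance updates.
        self.var_x += (w * dx * dx - self.var_x) / self.n
        self.cov_xy += (w * dx * dy - self.cov_xy) / self.n
        self.mean_x += dx / self.n
        self.mean_y += dy / self.n

    def coefficients(self):
        a = self.cov_xy / self.var_x             # slope, "getA"
        return a, self.mean_y - a * self.mean_x  # intercept, "getB"
```

After feeding it a data set, `coefficients()` matches the closed-form least-squares fit exactly, since the recursions reproduce the batch variance and covariance.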

You may need to experiment a bit to find which approaches make the best tradeoffs for your particular kinds of problem instances.

For this setting it is known that simple algorithms (gradient descent and exponentiated gradient, among others) achieve sublinear regret.
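As an illustration of the kind of simple algorithm meant here, a minimal online gradient descent for squared loss might look like the sketch below. The `eta0 / sqrt(t)` step-size schedule is one standard choice (an assumption on my part, not from the text) that yields O(√T) regret for convex losses:

```python
from math import sqrt

def online_gradient_descent(stream, dim, eta0=0.1):
    """Online gradient descent on squared loss (pred - y)^2.

    `stream` yields (x, y) pairs with x a length-`dim` feature tuple.
    At each round we predict, observe the loss gradient, and take a
    step with decreasing learning rate eta0/sqrt(t)."""
    w = [0.0] * dim
    for t, (x, y) in enumerate(stream, start=1):
        pred = sum(wi * xi for wi, xi in zip(w, x))
        grad = [2.0 * (pred - y) * xi for xi in x]  # d/dw (pred - y)^2
        eta = eta0 / sqrt(t)
        w = [wi - eta * gi for wi, gi in zip(w, grad)]
    return w
```

On a consistent stream (e.g. points from y = 3x) the weights drift toward the true coefficient, though more slowly than a closed-form or Kalman update would.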