Approach 1: Quadratic regression
The quadratic function is defined as

f(x) = w0 + w1 x + w2 x^2
Then the cost function to be minimized is defined as

J(w) = (1/2) Σ_{i=1}^{n} (y_i − f_i)^2

J(w) = (1/2) Σ_{i=1}^{n} (y_i − (w0 + w1 x_i + w2 x_i^2))^2
The optimal weights can be found by differentiating the cost function with respect to each weight and setting the derivatives to zero.
∂J(w)/∂w0 = 0

−Σ_{i=1}^{n} (y_i − (w0 + w1 x_i + w2 x_i^2)) = 0

w0 Σ_{i=1}^{n} 1 + w1 Σ_{i=1}^{n} x_i + w2 Σ_{i=1}^{n} x_i^2 = Σ_{i=1}^{n} y_i
Similarly, differentiating with respect to w1 and w2 gives
w0 Σ_{i=1}^{n} x_i + w1 Σ_{i=1}^{n} x_i^2 + w2 Σ_{i=1}^{n} x_i^3 = Σ_{i=1}^{n} y_i x_i

w0 Σ_{i=1}^{n} x_i^2 + w1 Σ_{i=1}^{n} x_i^3 + w2 Σ_{i=1}^{n} x_i^4 = Σ_{i=1}^{n} y_i x_i^2
Writing these three equations in matrix form (all sums run over i = 1..n):

[ Σ 1      Σ x_i    Σ x_i^2 ] [ w0 ]   [ Σ y_i       ]
[ Σ x_i    Σ x_i^2  Σ x_i^3 ] [ w1 ] = [ Σ y_i x_i   ]
[ Σ x_i^2  Σ x_i^3  Σ x_i^4 ] [ w2 ]   [ Σ y_i x_i^2 ]
A W = B

W = A^(-1) B
Approach 2: Three equations
In our case, we can use just the last three points, which give three equations:
y1 = w0 + w1 x1 + w2 x1^2
y2 = w0 + w1 x2 + w2 x2^2
y3 = w0 + w1 x3 + w2 x3^2

[ 1  x1  x1^2 ] [ w0 ]   [ y1 ]
[ 1  x2  x2^2 ] [ w1 ] = [ y2 ]
[ 1  x3  x3^2 ] [ w2 ]   [ y3 ]
A W = B

W = A^(-1) B
Approach 3: Two equations
If we define x1 = −1, x2 = 0, and x3 = 1, then y2 is equal to w0, and

y1 = y2 − w1 + w2
y3 = y2 + w1 + w2
[ −1  1 ] [ w1 ]   [ y1 − y2 ]
[  1  1 ] [ w2 ] = [ y3 − y2 ]

[ w1 ]   [ −0.5  0.5 ] [ y1 − y2 ]
[ w2 ] = [  0.5  0.5 ] [ y3 − y2 ]
Then we have

w0 = y2
w1 = −0.5 (y1 − y2) + 0.5 (y3 − y2)
w2 = 0.5 (y1 − y2) + 0.5 (y3 − y2)
I have tested the above three approaches in MATLAB; the script is shown below.
%-------------------------------------------------------------------------
clc; close all; clear all;
%-------------------------------------------------------------------------
% y = w0 + w1*x + w2*x^2;
%-------------------------------------------------------------------------
% Generate x and y
x = [-1 0 1]';
Wo = [4 3 2]';
y = Wo(1) + Wo(2)*x + Wo(3).*x.*x;
%-------------------------------------------------------------------------
% Approach 1
% Polynomial regression of order 2
% For n = 3
S1 = 3;
Sx = sum(x);
Sx2 = sum(x.*x);
Sx3 = sum(x.*x.*x);
Sx4 = sum(x.*x.*x.*x);
Sy = sum(y);
Syx = sum(y.*x);
Syx2 = sum(y.*x.*x);
P = [S1 Sx Sx2; Sx Sx2 Sx3; Sx2 Sx3 Sx4];
B = [Sy Syx Syx2]';
%P1 = P^(-1);
W1 = P\B
%-------------------------------------------------------------------------
% Approach 2
% Linear equations
A = [1 x(1) x(1)*x(1); 1 x(2) x(2)*x(2); 1 x(3) x(3)*x(3)];
W2 = A\y
%-------------------------------------------------------------------------
% Approach 3
% Only 2 linear equations
w0 = y(2);
w1 = -0.5*(y(1) - y(2)) + 0.5*(y(3) - y(2));
w2 = 0.5*(y(1) - y(2)) + 0.5*(y(3) - y(2));
W3 = [w0 w1 w2]'
%-------------------------------------------------------------------------

The following figure shows the result of this method (blue plot) compared to an ordinary zero-order hold (black plot). This method gives a much smoother result, but it should be noted that it introduces a one-sample delay. The implementation of this method in LabVIEW using C code is shown in the following figure. The first two approaches involve finding the inverse of a 3x3 matrix, and I have developed a C program for that as shown below.
#include <stdio.h>
#include <conio.h>

int main()
{
    float M[3][3] = {{3,0,2},{0,2,0},{2,0,2}};  // initialize a 3x3 matrix
    float N[3][3] = {{0,0,0},{0,0,0},{0,0,0}};  // allocate for inverse
    int i, j;
    float d;
    //---------------------------------------------------------------------
    // first column of cofactors, then the determinant
    N[0][0] =  (M[1][1]*M[2][2] - M[2][1]*M[1][2]);
    N[1][0] = -(M[1][0]*M[2][2] - M[2][0]*M[1][2]);
    N[2][0] =  (M[1][0]*M[2][1] - M[1][1]*M[2][0]);
    d = M[0][0]*N[0][0] + M[0][1]*N[1][0] + M[0][2]*N[2][0];
    N[0][0] /= d; N[1][0] /= d; N[2][0] /= d;
    N[0][1] = -(M[0][1]*M[2][2] - M[0][2]*M[2][1])/d;
    N[1][1] =  (M[0][0]*M[2][2] - M[0][2]*M[2][0])/d;
    N[2][1] = -(M[0][0]*M[2][1] - M[0][1]*M[2][0])/d;
    N[0][2] =  (M[0][1]*M[1][2] - M[0][2]*M[1][1])/d;
    N[1][2] = -(M[0][0]*M[1][2] - M[0][2]*M[1][0])/d;
    N[2][2] =  (M[0][0]*M[1][1] - M[0][1]*M[1][0])/d;
    //---------------------------------------------------------------------
    // print the 3x3 inverse
    for (i = 0; i < 3; i++) {
        for (j = 0; j < 3; j++)
            printf("%3.4f ", N[i][j]);
        printf("\n");
    }
    getch();
    return 0;
}