Approach 1: Quadratic regression
A quadratic function is defined as
$$f=w_0+w_1 x + w_2 x^2$$
Then the cost function to be minimized is defined as
$$J(w)=\frac{1}{2}\sum_{i=1}^{n}(y_i-f_i)^2$$
$$J(w)=\frac{1}{2}\sum_{i=1}^{n}(y_i-w_0-w_1 x_i - w_2 x_i^2)^2$$
The optimal weights can be found by differentiating the cost function and setting the derivatives to zero.
$$\frac{\partial J(w)}{\partial w_0}=0$$
$$-\sum_{i=1}^{n}(y_i-w_0-w_1 x_i - w_2 x_i^2)=0$$
$$w_0\sum_{i=1}^{n}1+w_1\sum_{i=1}^{n}x_i+w_2\sum_{i=1}^{n}x_i^2=\sum_{i=1}^{n}y_i$$
Similarly, differentiating with respect to $w_1$ and $w_2$ gives
$$w_0\sum_{i=1}^{n}x_i+w_1\sum_{i=1}^{n}x_i^2+w_2\sum_{i=1}^{n}x_i^3=\sum_{i=1}^{n}y_i x_i$$
$$w_0\sum_{i=1}^{n}x_i^2+w_1\sum_{i=1}^{n}x_i^3+w_2\sum_{i=1}^{n}x_i^4=\sum_{i=1}^{n}y_i x_i^2$$
These three equations can be written in matrix form:
$$ \begin{bmatrix} \sum_{i=1}^{n}1 & \sum_{i=1}^{n}x_i & \sum_{i=1}^{n}x_i^2 \\ \sum_{i=1}^{n}x_i & \sum_{i=1}^{n}x_i^2 & \sum_{i=1}^{n}x_i^3 \\ \sum_{i=1}^{n}x_i^2 & \sum_{i=1}^{n}x_i^3 & \sum_{i=1}^{n}x_i^4 \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n}y_i \\ \sum_{i=1}^{n}y_i x_i \\ \sum_{i=1}^{n}y_i x_i^2 \end{bmatrix} $$
$$\mathbf{A}\mathbf{W}=\mathbf{B}$$
$$\mathbf{W}=\mathbf{A}^{-1}\mathbf{B}$$
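As a concrete check, for the three points used in the MATLAB test below ($x_1=-1$, $x_2=0$, $x_3=1$), the sums are $\sum 1=3$, $\sum x_i=0$, $\sum x_i^2=2$, $\sum x_i^3=0$, and $\sum x_i^4=2$, so
$$\mathbf{A}= \begin{bmatrix} 3 & 0 & 2 \\ 0 & 2 & 0 \\ 2 & 0 & 2 \end{bmatrix}$$
This is exactly the matrix inverted by the C program at the end of this post.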
Approach 2: Three equations
In our case, we can use just the last three points, which give three equations:
$$y_1=w_0+w_1 x_1 + w_2 x_1^2$$
$$y_2=w_0+w_1 x_2 + w_2 x_2^2$$
$$y_3=w_0+w_1 x_3 + w_2 x_3^2$$
$$ \begin{bmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ 1 & x_3 & x_3^2 \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} $$
$$\mathbf{A}\mathbf{W}=\mathbf{B}$$
$$\mathbf{W}=\mathbf{A}^{-1}\mathbf{B}$$
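For the same three points $x=(-1,0,1)$, $\mathbf{A}$ is the Vandermonde matrix
$$\mathbf{A}= \begin{bmatrix} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}, \qquad \mathbf{A}^{-1}= \begin{bmatrix} 0 & 1 & 0 \\ -0.5 & 0 & 0.5 \\ 0.5 & -1 & 0.5 \end{bmatrix}$$
Multiplying $\mathbf{A}^{-1}\mathbf{B}$ out row by row already hints at the result of the next approach.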
Approach 3: Two equations
If we define $x_1=-1$, $x_2=0$, and $x_3=1$, then $y_2$ is equal to $w_0$, and
$$y_1=y_2-w_1+ w_2$$
$$y_3=y_2+w_1+ w_2$$
$$ \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} y_1-y_2 \\ y_3-y_2 \end{bmatrix} $$
$$ \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} -0.5 & 0.5 \\ 0.5 & 0.5 \end{bmatrix} \begin{bmatrix} y_1-y_2 \\ y_3-y_2 \end{bmatrix} $$
Then, we have
$$w_0=y_2$$
$$w_1=-0.5(y_1-y_2)+0.5(y_3-y_2)$$
$$w_2=0.5(y_1-y_2)+0.5(y_3-y_2)$$
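Expanding and collecting terms, the $y_2$ contributions cancel in $w_1$, leaving the compact form
$$w_0=y_2,\qquad w_1=0.5(y_3-y_1),\qquad w_2=0.5(y_1+y_3)-y_2$$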
I have tested the above three approaches in MATLAB; the script is shown below.
%-------------------------------------------------------------------------
clc; close all; clear all;
%-------------------------------------------------------------------------
% y = w0 + w1*x + w2*x^2;
%-------------------------------------------------------------------------
% Generate x and y
x=[-1 0 1]';
Wo=[4 3 2]';
y=Wo(1)+Wo(2)*x+Wo(3).*x.*x;
%-------------------------------------------------------------------------
% Approach 1
% Polynomial regression of order 2
% For n=3
S1=3;
Sx=sum(x); Sx2=sum(x.*x); Sx3=sum(x.*x.*x); Sx4=sum(x.*x.*x.*x);
Sy=sum(y); Syx=sum(y.*x); Syx2=sum(y.*x.*x);
P=[S1 Sx Sx2; Sx Sx2 Sx3; Sx2 Sx3 Sx4];
B=[Sy Syx Syx2]';
%P1=P^(-1);
W1=P\B
%-------------------------------------------------------------------------
% Approach 2
% Three linear equations
A=[1 x(1) x(1)*x(1); 1 x(2) x(2)*x(2); 1 x(3) x(3)*x(3)];
W2=A\y
%-------------------------------------------------------------------------
% Approach 3
% Only 2 linear equations
w0=y(2);
w1=-0.5*(y(1)-y(2))+0.5*(y(3)-y(2));
w2=0.5*(y(1)-y(2))+0.5*(y(3)-y(2));
W3=[w0 w1 w2]'
%-------------------------------------------------------------------------

The following figure shows the result of using this method (blue plot) compared to an ordinary zero-order hold (black plot). This method gives a much smoother result, but it should be noted that it introduces a one-sample delay. The implementation of this method in LabVIEW using C code is shown in the following figure. The first two approaches involve finding the inverse of a 3x3 matrix, and I have developed a C program for that as shown below.
#include <stdio.h>
#include <conio.h>

int main()
{
    float M[3][3]={{3,0,2},{0,2,0},{2,0,2}}; //initialize a 3x3 matrix
    float N[3][3]={{0,0,0},{0,0,0},{0,0,0}}; //allocate for inverse
    int i,j;
    float d;
    //-------------------------------------------------------------------------
    //cofactors of the first column, then the determinant
    N[0][0]=(M[1][1]*M[2][2]-M[2][1]*M[1][2]);
    N[1][0]=-(M[1][0]*M[2][2]-M[2][0]*M[1][2]);
    N[2][0]=(M[1][0]*M[2][1]-M[1][1]*M[2][0]);
    d=M[0][0]*N[0][0]+M[0][1]*N[1][0]+M[0][2]*N[2][0];
    N[0][0]/=d;
    N[1][0]/=d;
    N[2][0]/=d;
    //remaining cofactors, each divided by the determinant
    N[0][1]=-(M[0][1]*M[2][2]-M[0][2]*M[2][1])/d;
    N[1][1]=(M[0][0]*M[2][2]-M[0][2]*M[2][0])/d;
    N[2][1]=-(M[0][0]*M[2][1]-M[0][1]*M[2][0])/d;
    N[0][2]=(M[0][1]*M[1][2]-M[0][2]*M[1][1])/d;
    N[1][2]=-(M[0][0]*M[1][2]-M[0][2]*M[1][0])/d;
    N[2][2]=(M[0][0]*M[1][1]-M[0][1]*M[1][0])/d;
    //-------------------------------------------------------------------------
    //print the 3x3 inverse matrix
    for(i=0;i<3;i++)
    {
        for(j=0;j<3;j++) printf("%3.4f ",N[i][j]);
        printf("\n");
    }
    getch();
    return 0;
}
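Approach 3 avoids the matrix inverse entirely. As a minimal sketch (the function name fit_quadratic3 and the test values are mine; the test data reuse the y = 4 + 3x + 2x^2 example from the MATLAB script), it can be implemented in a few lines of C:

#include <stdio.h>

/* Fit y = w0 + w1*x + w2*x^2 through the last three samples,
   assuming they are taken at x = -1, 0, 1 as in Approach 3. */
void fit_quadratic3(float y1, float y2, float y3, float w[3])
{
    w[0] = y2;                   /* w0 = y2             */
    w[1] = 0.5f*(y3 - y1);       /* w1 = 0.5*(y3-y1)    */
    w[2] = 0.5f*(y1 + y3) - y2;  /* w2 = 0.5*(y1+y3)-y2 */
}

int main()
{
    /* y = 4 + 3x + 2x^2 sampled at x = -1, 0, 1 gives y = 3, 4, 9 */
    float w[3];
    fit_quadratic3(3.0f, 4.0f, 9.0f, w);
    printf("w0=%3.4f w1=%3.4f w2=%3.4f\n", w[0], w[1], w[2]);
    return 0;
}

Running this should print w0=4.0000 w1=3.0000 w2=2.0000, matching W3 from the MATLAB test.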