
Chapter 2  The Solution of Least Squares Problems

2.1 Linear Least Squares Estimation
2.2 A Generalized "Pseudo-Inverse" Approach to Solving the Least-Squares Problem

2.1.1 Example: Autoregressive Modelling

An autoregressive (AR) process is a random process which is the output of an all-pole filter when excited by white noise. The reason for this terminology is made apparent later. In this example, we deal in discrete time. An all-pole filter has a transfer function H(z) given by the expression


where z_i are the poles of the filter and h_i are the coefficients of the corresponding polynomial in z. Let W(z) and Y(z) denote the z-transforms of the input and output sequences, respectively. If W(z) = σ² (corresponding to a white noise input) then

or, for this specific case,

Thus equation (2.1.2) may be expressed as

We now wish to transform this expression into the time domain. Each of the time-domain signals of equation (2.1.3) is given by the corresponding inverse z-transform relationship as

and the input sequence corresponding to the z-transform quantity σ² is

where w_n is a white noise sequence with power σ². The left-hand side of equation (2.1.3) is the product of z-transforms. Thus, the time-domain representation of the left-hand side of equation (2.1.3) is the convolution of the respective time-domain representations. Thus, using equations (2.1.3) to (2.1.6), we have

or

Repeating this equation for m different values of the index i, we have

So again, it makes sense to choose the h's in equation (2.1.5) so that the predicting term Yh is as close as possible to y_p in the 2-norm sense. Hence, as before, we choose h to satisfy

Notice that if the parameters h are known, the autoregressive process is completely characterized.
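To make the example concrete, here is a minimal Python sketch (NumPy only) that simulates an AR(2) process by driving an all-pole filter with white noise and then recovers the coefficients h by solving the resulting overdetermined system in the least-squares sense. The model order, coefficient values, and variable names are illustrative assumptions, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative AR(2) model (assumed values): y_n = 1.5*y_{n-1} - 0.7*y_{n-2} + w_n,
# where w_n is white noise with power sigma^2.
h_true = np.array([1.5, -0.7])
sigma, N, p = 1.0, 2000, 2

w = sigma * rng.standard_normal(N)      # white-noise excitation
y = np.zeros(N)
for t in range(N):                      # all-pole filter realized as a recursion
    y[t] = w[t]
    for k in range(1, p + 1):
        if t - k >= 0:
            y[t] += h_true[k - 1] * y[t - k]

# Stack past samples to form the overdetermined system Y h ~ y_p (cf. equation (2.1.11)).
Y = np.column_stack([y[p - k:N - k] for k in range(1, p + 1)])   # columns: y_{n-1}, y_{n-2}
y_p = y[p:]

h_ls, *_ = np.linalg.lstsq(Y, y_p, rcond=None)   # least-squares estimate of the AR coefficients
print("true h:", h_true, " estimated h:", h_ls)
```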

2.1.2 The Least-Squares Solution

We define our regression model corresponding to equation (2.1.11) as

and we wish to determine the value x_LS which solves

where A ∈ R^(m×n), m > n, b ∈ R^m. The matrix A is assumed full rank.

We now discuss a few relevant points concerning the LS problem:

· The system equation (2.1.12) is overdetermined, and hence no solution exists in the general case for which Ax = b exactly.

· Of all commonly used values of p for the norm ‖·‖_p in equation (2.1.12), p = 2 is the only one for which the norm is differentiable for all values of x. Thus, for any other value of p, the optimal solution is not obtainable by differentiation.

· Note that for Q orthonormal, we have (only for p = 2)

This fact is used to advantage later on.

· We define the minimum sum of squares of the residual ‖Ax_LS - b‖₂² as ρ²_LS.

· If r = rank(A) < n, then there is no unique x_LS which minimizes ‖Ax - b‖₂. However, the solution can be made unique by considering only that element of the set {x_LS ∈ R^n | ‖Ax_LS - b‖₂ = min} which has minimum norm.

These points are illustrated in the short numerical sketch below.
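The sketch below uses an arbitrary made-up A and b: it solves an overdetermined system in the 2-norm sense, computes ρ²_LS, and shows that NumPy's `lstsq` returns the minimum-norm solution when A is rank deficient.

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary overdetermined system (m > n): Ax = b has no exact solution in general.
m, n = 8, 3
A = rng.standard_normal((m, n))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(m)

# Full-rank case: the unique minimizer of ||Ax - b||_2.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
rho2_ls = np.sum((A @ x_ls - b) ** 2)            # minimum residual sum of squares
print(x_ls, rho2_ls)

# Rank-deficient case: duplicating a column makes rank(A) < n, so the minimizer is no
# longer unique; lstsq returns the minimum-norm element of the solution set.
A_def = np.column_stack([A[:, 0], A[:, 0], A[:, 1]])
x_min_norm, *_ = np.linalg.lstsq(A_def, b, rcond=None)
print(np.linalg.matrix_rank(A_def), x_min_norm)
```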

2.1.3 Interpretation of the Normal Equations

Equation (2.1.23) can be written in the form

or

where

is the least-squares error vector between Ax_LS and b. r_LS must be orthogonal to R(A) for the LS solution x_LS; hence the name "normal equations". This fact gives an important interpretation to least-squares estimation, which we now illustrate for the 3×2 case. Equation (2.1.11) may be expressed as

This interpretation may be augmented as follows. From the above we see that

Hence the point Ax_LS, which is in R(A), is given by

where P is the projector onto R(A). Thus, we see from another point of view that the least-squares solution is the result of projecting b (the observation) onto R(A).

There is a further point we wish to address in the interpretation of the normal equations. Substituting equation (2.1.26) into (2.1.25), we have

Thus, r_LS is the projection of b onto R(A)⊥. We can now determine the value ρ²_LS, which is the squared 2-norm of the LS residual:
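The projection picture above is easy to verify numerically. A minimal sketch for the 3×2 case (with arbitrary A and b): form the projector P = A(A^T A)^(-1)A^T, then check that Ax_LS = Pb, that r_LS is orthogonal to R(A), and that r_LS = (I - P)b.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 2))   # the 3x2 case discussed above (entries are arbitrary)
b = rng.standard_normal(3)

x_ls = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations: (A^T A) x_LS = A^T b
P = A @ np.linalg.inv(A.T @ A) @ A.T          # projector onto R(A)

r_ls = b - A @ x_ls                           # least-squares error vector
print(np.allclose(A @ x_ls, P @ b))           # Ax_LS is the projection of b onto R(A)
print(np.allclose(A.T @ r_ls, 0))             # r_LS is orthogonal to R(A)
print(np.allclose(r_ls, (np.eye(3) - P) @ b)) # r_LS is the projection of b onto R(A)-perp
print(np.sum(r_ls ** 2))                      # rho^2_LS, the squared 2-norm of the residual
```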

2.1.4 Properties of the LS Estimate

Here we consider the regression equation (2.1.11) again. It is reproduced below for convenience.

In order to discuss useful and interesting properties of the LS estimate, we make the following assumptions:

A1: n is a zero-mean random vector with uncorrelated elements; i.e., E(nn^T) = σ²I.

A2: A is a constant matrix which is known with negligible error. That is, there is no uncertainty in A.

Under A1 and A2, we have the following properties of the LS estimate given by equation (2.1.26).

x_LS is an Unbiased Estimate of x_0, the True Value

To show this, we have from equation (2.1.26)

But from the regression equation (2.1.29), we realize that the observed data b are generated from the true value x_0 of x. Hence, from equation (2.1.29),

Therefore E(x_LS) is given as

which follows because n is zero mean from assumption A1. Therefore the expectation of x_LS is its true value, and x_LS is unbiased.

Covariance Matrix of x_LS

The definition of the covariance matrix cov(x_LS) of the non-zero-mean process x_LS is:

For these purposes we define E(x_LS) as

Substituting equations (2.1.34) and (2.1.26) in (2.1.33), we have

From assumption A2 we can move the expectation operator inside. Therefore,
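The two properties just derived, E(x_LS) = x_0 and cov(x_LS) = σ²(A^T A)^(-1), can be checked with a small Monte Carlo experiment under assumptions A1 and A2. The matrix A, the true x_0, σ, and the trial count below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

m, n, sigma, trials = 20, 3, 0.5, 20000
A = rng.standard_normal((m, n))          # known, error-free regressor matrix (A2)
x0 = np.array([1.0, 2.0, -1.0])

X = np.empty((trials, n))
for t in range(trials):
    b = A @ x0 + sigma * rng.standard_normal(m)   # zero-mean white noise (A1)
    X[t] = np.linalg.solve(A.T @ A, A.T @ b)      # x_LS from the normal equations

print(X.mean(axis=0))                             # ~ x0: the estimate is unbiased
print(np.cov(X, rowvar=False))                    # ~ sigma^2 (A^T A)^{-1}
print(sigma ** 2 * np.linalg.inv(A.T @ A))
```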

x_LS is a BLUE

According to equation (2.1.26), we see that x_LS is a linear estimate, since it is a linear transformation of b, where the transformation matrix is (A^T A)^(-1) A^T. Further, from the discussion above we see that x_LS is unbiased. With the following theorem, we show that x_LS is the best linear unbiased estimator (BLUE).

Probability Density Function of x_LS

It is a fundamental property of Gaussian-distributed random variables that any linear transformation of a Gaussian-distributed quantity is also Gaussian. From equation (2.1.26) we see that x_LS is a linear transformation of b, which is Gaussian by hypothesis. Since the Gaussian pdf is completely specified by the expectation and covariance, given respectively by equations (2.1.32) and (2.1.36), x_LS has the Gaussian pdf given by

We see that the elliptical joint confidence region of x_LS is the set of points ψ defined as

where k is some constant which determines the probability level that an observation will fall within ψ. Note that if the joint confidence region becomes elongated in any direction, then the variance of the associated components of x_LS becomes large. Let us rewrite the quadratic form in equation (2.1.44) as

Theorem 2

The least-squares estimate x_LS will have large variances if at least one of the eigenvalues of A^T A is small, where the associated eigenvectors have significant components along the x-axes.
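A brief numerical illustration of Theorem 2, with all values assumed: when the columns of A are nearly collinear, A^T A has a very small eigenvalue, and the empirical variances of the corresponding components of x_LS become large.

```python
import numpy as np

rng = np.random.default_rng(4)

# Nearly collinear columns give A^T A a tiny eigenvalue; the variance of x_LS
# blows up along the associated eigenvector direction.
m, sigma, trials = 50, 0.1, 5000
u = rng.standard_normal(m)
A = np.column_stack([u, u + 1e-3 * rng.standard_normal(m)])   # almost collinear columns
x0 = np.array([1.0, 1.0])

print("eigenvalues of A^T A:", np.linalg.eigvalsh(A.T @ A))   # one eigenvalue is very small

X = np.empty((trials, 2))
for t in range(trials):
    b = A @ x0 + sigma * rng.standard_normal(m)
    X[t] = np.linalg.solve(A.T @ A, A.T @ b)

print("empirical variances of x_LS:", X.var(axis=0))          # large, as the theorem predicts
```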

Maximum-Likelihood Property

In this vein, the least-squares estimate x_LS is the maximum-likelihood estimate of x_0. To show this property, we first investigate the probability density function of n = Ax - b, given for the more general case where cov(n) = Σ:

2.1.5 Linear Least-Squares Estimation and the Cramer-Rao Lower Bound

In this section we discuss the relationship between the Cramer-Rao lower bound (CRLB) and the linear least-squares estimate. We first discuss the CRLB itself, and then go on to discuss the relationship between the CRLB and linear least-squares estimation in white and coloured noise.

The Cramer-Rao Lower Bound

Here we assume that the observed data b is generated from the model (2.1.29), for the specific case when the noise n is a joint Gaussian zero-mean process. In order to address the CRLB, we consider a matrix J defined by

In our case, J is defined as a matrix of second derivatives related to equation (2.1.45). The constant terms preceding the exponent are not functions of x, and so are not relevant with regard to the differentiation. Thus we need to consider only the exponential term of equation (2.1.45). Because of the ln(·) operation, J reduces to the second-derivative matrix of the quadratic form in the exponent. This second-derivative matrix is referred to as the Hessian. The expectation operator of equation (2.1.46) is redundant in our specific case because all the second-derivative quantities are constant. Thus,

Using the analysis of the preceding sections, it is easy to show that

Least-Squares Estimation and the CRLB for White Noise

Using equation (2.1.45), we now evaluate the CRLB for data generated according to the linear regression model of (2.1.11), for the specific case of white noise where Σ = σ²I. That is, if we observe data which obey the model (2.1.11), what is the lowest possible variance on the estimates given by equation (2.1.26)? From (2.1.48),
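Although the displayed equations are not reproduced above, the standard conclusion for this white-noise case, consistent with the discussion, is that J = (1/σ²)A^T A, so the CRLB J^(-1) coincides with cov(x_LS) = σ²(A^T A)^(-1); i.e., the least-squares estimate attains the bound. A minimal numerical check, with A and σ chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(5)

m, n, sigma = 30, 4, 0.7
A = rng.standard_normal((m, n))

J = (A.T @ A) / sigma ** 2                  # Fisher information matrix for white noise
crlb = np.linalg.inv(J)                     # Cramer-Rao lower bound on cov(x_LS)
cov_xls = sigma ** 2 * np.linalg.inv(A.T @ A)

print(np.allclose(crlb, cov_xls))           # True: the LS estimate attains the CRLB
```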

Least-Squares Estimation and the CRLB for Coloured Noise

In this case, we consider Σ to be an arbitrary covariance matrix, i.e., E(nn^T) = Σ. By substituting into equation (2.1.45) and evaluating, we can easily show that the Fisher information matrix J for this case is given by

We now develop the version of the covariance matrix of the LS estimate corresponding to equation (2.1.36) for the coloured-noise case. Suppose we use the normal equation (2.1.23) to produce the estimate x_LS for this coloured-noise case. Using the same analysis as before, except using E[(b - Ax_0)(b - Ax_0)^T] = Σ instead of σ²I, we get:

Notice that in the coloured-noise case, when the noise is pre-whitened as in equation (2.1.53), the resulting matrix cov(x_LS) is equivalent to J^(-1) in equation (2.1.51), which is the corresponding form of the CRLB; i.e., equality with the bound is now attained, provided the noise is pre-whitened.

Hence, in the presence of coloured noise with known covariance matrix, pre-whitening the noise before applying the linear least-squares estimation procedure also results in an MVUE of x. We have seen that this is not the case when the noise is not pre-whitened.
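A sketch of the pre-whitening procedure described above, assuming a known (arbitrary, made-up) covariance Σ: factor Σ = LL^T, whiten the model with L^(-1), and apply ordinary least squares. The covariance of the resulting estimate equals (A^T Σ^(-1) A)^(-1), the coloured-noise CRLB.

```python
import numpy as np

rng = np.random.default_rng(6)

m, n_dim = 30, 3
A = rng.standard_normal((m, n_dim))
x0 = np.array([0.5, -1.0, 2.0])

C = rng.standard_normal((m, m))
Sigma = C @ C.T + m * np.eye(m)             # an arbitrary valid noise covariance matrix
L = np.linalg.cholesky(Sigma)               # Sigma = L L^T
b = A @ x0 + L @ rng.standard_normal(m)     # one realization of coloured noise

A_w = np.linalg.solve(L, A)                 # pre-whitened model: L^{-1} A
b_w = np.linalg.solve(L, b)                 # pre-whitened data:  L^{-1} b
x_ls = np.linalg.solve(A_w.T @ A_w, A_w.T @ b_w)   # ordinary LS on the whitened problem

cov_whitened = np.linalg.inv(A_w.T @ A_w)               # = (A^T Sigma^{-1} A)^{-1}
crlb = np.linalg.inv(A.T @ np.linalg.inv(Sigma) @ A)    # coloured-noise CRLB, J^{-1}
print(x_ls)
print(np.allclose(cov_whitened, crlb))      # True: pre-whitened LS attains the bound
```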

2.2 A Generalized "Pseudo-Inverse" Approach to Solving the Least-Squares Problem

2.2.1 Least Squares Solution Using the SVD

Previously we have seen that the LS problem may be posed as

where the observation b is generated from the regression model b = Ax_0 + n. For the case where A is full rank, we saw that the solution x_LS which solves

is given by the normal equation

We are given A ∈ R^(m×n), m > n and rank(A) = r ≤ n. If the SVD of A is given as UΣV^T, then we define the pseudo-inverse A+ of A by

The matrix Σ+ is related to Σ in the following way. If

then

Theorem

When A is rank deficient, the unique solution x_LS minimizing ‖Ax - b‖₂ such that ‖x‖₂ is minimum is given by

where A+ is defined by equation (2.2.3). Further, we have
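A minimal sketch of the theorem, using a made-up rank-deficient matrix: build A+ from the SVD by inverting only the nonzero singular values, then confirm that A+b matches both NumPy's `pinv` and the minimum-norm solution returned by `lstsq`.

```python
import numpy as np

rng = np.random.default_rng(7)

# A deliberately rank-deficient matrix: rank(A) = r < n.
m, n, r = 6, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-10 * s.max()
s_plus = np.array([1.0 / si if si > tol else 0.0 for si in s])   # invert nonzero sigma_i only
A_plus = Vt.T @ np.diag(s_plus) @ U.T                            # A+ = V Sigma+ U^T

x_ls = A_plus @ b                                                # minimum-norm LS solution
print(np.allclose(A_plus, np.linalg.pinv(A)))
print(np.allclose(x_ls, np.linalg.lstsq(A, b, rcond=None)[0]))
```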

2.2.2 Interpretation of the Pseudo-Inverse

Geometrical Interpretation

Let us now take another look at the geometry of least squares. The figure shows a simple LS problem for the case A ∈ R^(2×1). We again see that x_LS is the solution which corresponds to projecting b onto R(A). In fact, substituting into the expression Ax_LS, we get

But, for the specific case where m > n, we know from our previous discussion on linear least squares that

where P is the projector onto R(A). Comparing equations (2.2.18) and (2.2.19), and noting that the projector is unique, we have

Thus, the matrix AA+ is a projector onto R(A).

This may also be seen in a different way as follows. Using the definition of A+, we have

where I_r is the r×r identity and U_r = [u_1, …, u_r]. From our discussion on projectors, we know that U_rU_r^T is also a projector onto R(A), which is the same as the column space of A.
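This can be confirmed numerically with an arbitrary rank-deficient example: AA+ is symmetric and idempotent, equals U_rU_r^T, and leaves the columns of A unchanged.

```python
import numpy as np

rng = np.random.default_rng(8)

m, n, r = 6, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))    # rank(A) = r

A_plus = np.linalg.pinv(A)
U, s, Vt = np.linalg.svd(A)
U_r = U[:, :r]                                     # left singular vectors spanning R(A)

P = A @ A_plus                                     # candidate projector onto R(A)
print(np.allclose(P, U_r @ U_r.T))                 # A A+ = U_r U_r^T
print(np.allclose(P @ P, P), np.allclose(P, P.T))  # idempotent and symmetric: a projector
print(np.allclose(P @ A, A))                       # acts as the identity on R(A)
```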

Relationship of Pseudo-Inverse Solution to Normal Equations

Suppose A ∈ R^(m×n), m > n. The normal equations give us

but the pseudo-inverse gives:

In the full-rank case, these two quantities must be equal. We can indeed show this is the case as follows:

We let

be the eigendecomposition (ED) of A^T A, and we let the SVD of A^T be defined as

Using these relations, we have

as desired, where the last line follows from the relations above. Thus, for the full-rank case for m > n, A+ = (A^T A)^(-1) A^T. In a similar way, we can also show that A+ = A^T (AA^T)^(-1) for the case m < n.
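The two full-rank identities above (with the m < n formula written as A+ = A^T(AA^T)^(-1)) are easy to verify numerically for arbitrary full-rank matrices:

```python
import numpy as np

rng = np.random.default_rng(9)

# m > n:  A+ = (A^T A)^{-1} A^T       m < n:  A+ = A^T (A A^T)^{-1}
A_tall = rng.standard_normal((7, 3))   # full column rank with probability 1
A_wide = rng.standard_normal((3, 7))   # full row rank with probability 1

print(np.allclose(np.linalg.pinv(A_tall),
                  np.linalg.inv(A_tall.T @ A_tall) @ A_tall.T))
print(np.allclose(np.linalg.pinv(A_wide),
                  A_wide.T @ np.linalg.inv(A_wide @ A_wide.T)))
```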

The Pseudo-Inverse as a Generalized Linear System Solver

If we are willing to accept the
