This paper has been submitted for publication on November 15, 2016.

Learning from Simulated and Unsupervised Images through Adversarial Training

Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, Russ Webb
Apple Inc.
{a_shrivastava, tpf, otuzel, jsusskind, wenda_wang, rwebb}@apple.com

Abstract

With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.

1. Introduction

Large labeled training datasets are becoming increasingly important with the recent rise in high capacity deep neural networks [4, 18, 44, 44, 1, 15]. However, labeling such large datasets is expensive and time-consuming. Thus the idea of training on synthetic instead of real images has become appealing because the annotations are automatically available. Human pose estimation with Kinect [32] and, more recently, a plethora of other tasks have been tackled using synthetic data [40, 39, 26, 31].

Figure 1. Simulated+Unsupervised (S+U) learning. The task is to learn a model that improves the realism of synthetic images from a simulator using unlabeled real data, while preserving the annotation information.

However, learning from synthetic images can be problematic due to a gap between synthetic and real image distributions: synthetic data is often not realistic enough, leading the network to learn details only present in synthetic images and fail to generalize well on real images. One solution to closing this gap is to improve the simulator. However, increasing the realism is often computationally expensive, the renderer design takes a lot of hard work, and even top renderers may still fail to model all the characteristics of real images. This lack of realism may cause models to overfit to 'unrealistic' details in the synthetic images.

In this paper, we propose Simulated+Unsupervised (S+U) learning, where the goal is to improve the realism of synthetic images from a simulator using unlabeled real data. The improved realism enables the training of better machine learning models on large datasets without any data collection or human annotation effort. In addition to adding realism, S+U learning should preserve annotation information for training of machine learning models; e.g. the gaze direction in Figure 1 should be preserved. Moreover, since machine learning models can be sensitive to artifacts in the synthetic data, S+U learning should generate images without artifacts.

We develop a method for S+U learning, which we term SimGAN, that refines synthetic images from a simulator using a neural network which we call the 'refiner network'. Figure 2 gives an overview of our method: a synthetic image is generated with a black box simulator and is refined using the refiner network. To add realism (the first requirement of an S+U learning algorithm), we train our refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs) [7], such that the refined images are indistinguishable from real ones using a discriminative network. Second, to preserve the annotations of synthetic images, we complement the adversarial loss with a self-regularization loss that penalizes large changes between the synthetic and refined images. Moreover, we propose to use a fully convolutional neural network that operates on a pixel level and preserves the global structure, rather than holistically modifying the image content as in e.g. a fully connected encoder network. Third, the GAN framework requires training two neural networks with competing goals, which is known to be unstable and tends to introduce artifacts [29]. To avoid drifting and introducing spurious artifacts while attempting to fool a single stronger discriminator, we limit the discriminator's receptive field to local regions instead of the whole image, resulting in multiple local adversarial losses per image. Moreover, we introduce a method for improving the stability of training by updating the discriminator using a history of refined images rather than the ones from the current refiner network.

Contributions: We propose S+U learning that uses unlabeled real data to refine the synthetic images generated by a simulator. We train a refiner network to add realism to synthetic images using a combination of an adversarial loss and a self-regularization loss. We make several key modifications to the GAN training framework to stabilize training and prevent the refiner network from producing artifacts. We present qualitative, quantitative, and user study experiments showing that the proposed framework significantly improves the realism of the simulator output. We achieve state-of-the-art results, without any human annotation effort, by training deep neural networks on the refined output images.

1.1. Related Work

Figure 2. Overview of SimGAN. We refine the output of the simulator with a refiner neural network, R, that minimizes the combination of a local adversarial loss and a 'self-regularization' term. The adversarial loss 'fools' a discriminator network, D, that classifies an image as real or refined. The self-regularization term minimizes the image difference between the synthetic and the refined images. This preserves the annotation information (e.g. gaze direction), making the refined images useful for training a machine learning model. The refiner network R and the discriminator network D are updated alternately.

The GAN framework learns two networks (a generator and a discriminator) with competing losses. The goal of the generator network is to map a random vector to a realistic image, whereas the goal of the discriminator is to distinguish the generated and the real images.

The GAN framework was first introduced by Goodfellow et al. [7] to generate visually realistic images and, since then, many improvements and interesting applications have been proposed [29]. Wang and Gupta [38] use a Structured GAN to learn surface normals and then combine it with a Style GAN to generate natural indoor scenes. Im et al. [12] propose a recurrent generative model trained using adversarial training. The recently proposed iGAN [45] enables users to change the image interactively on a natural image manifold. CoGAN by Liu et al. [19] uses coupled GANs to learn a joint distribution over images from multiple modalities without requiring tuples of corresponding images, achieving this by a weight-sharing constraint that favors the joint distribution solution. Chen et al. [2] propose InfoGAN, an information-theoretic extension of GAN, that allows learning of meaningful representations. Tuzel et al. [36] tackled image superresolution for face images with GANs. Li and Wand [17] propose a Markovian GAN for efficient texture synthesis. Lotter et al. [20] use adversarial loss in an LSTM network for visual sequence prediction. Yu et al. [41] propose the SeqGAN framework that uses GANs for reinforcement learning. Many recent works have explored related problems in the domain of generative models, such as PixelRNN [37] that predicts pixels sequentially with an RNN with a softmax loss. The generative networks focus on generating images using a random noise vector; thus, in contrast to our method, the generated images do not have any annotation information that can be used for training a machine learning model.

Many efforts have explored using synthetic data for various prediction tasks, including gaze estimation [40], text detection and classification in RGB images [8, 14], font recognition [39], object detection [9, 24], hand pose estimation in depth images [35, 34], scene recognition in RGB-D [10], semantic segmentation of urban scenes [28], and human pose estimation [23, 3, 16, 13, 25, 27]. Gaidon et al. [5] show that pre-training a deep neural network on synthetic data leads to improved performance. Our work is complementary to these approaches, where we improve the realism of the simulator using unlabeled real data.

Ganin and Lempitsky [6] use synthetic data in a domain adaptation setting where the learned features are invariant to the domain shift between synthetic and real images. Wang et al. [39] train a Stacked Convolutional Auto-Encoder on synthetic and real data to learn the lower-level representations of their font detector ConvNet. Zhang et al. [42] learn a Multichannel Autoencoder to reduce the domain shift between real and synthetic data. In contrast to classical domain adaptation methods that adapt the features with respect to a specific prediction task, we bridge the gap between image distributions through adversarial training. This approach allows us to generate very realistic images which can be used to train any machine learning model, potentially for multiple tasks.

2. S+U Learning with SimGAN

The goal of Simulated+Unsupervised learning is to use a set of unlabeled real images y_i ∈ Y to learn a refiner R_θ(x) that refines a synthetic image x, where θ are the function parameters. Let the refined image be denoted by x̃; then x̃ := R_θ(x). The key requirement for S+U learning is that the refined image x̃ should look like a real image in appearance while preserving the annotation information from the simulator.

To this end, we propose to learn θ by minimizing a combination of two losses:

L_R(θ) = Σ_i ℓ_real(θ; x̃_i, Y) + λ ℓ_reg(θ; x̃_i, x_i),   (1)

where x_i is the i-th synthetic training image, and x̃_i is the corresponding refined image. The first part of the cost, ℓ_real, adds realism to the synthetic images, while the second part, ℓ_reg, preserves the annotation information by minimizing the difference between the synthetic and the refined images. In the following sections, we expand this formulation and provide an algorithm to optimize for θ.
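To make the structure of (1) concrete, the following is a minimal PyTorch-style sketch of how the two-term refiner objective could be assembled. The names R, realism_loss, reg_loss and lam are placeholders introduced for this example, not names from the paper, and the reduction over the mini-batch is left to the two loss callables.

```python
import torch

def refiner_objective(R, synthetic_batch, realism_loss, reg_loss, lam):
    """Combined refiner loss of Eq. (1): realism term plus lambda * regularization.

    R            -- refiner network R_theta, callable on a batch of synthetic images
    realism_loss -- callable scoring how 'real' the refined images look
    reg_loss     -- callable penalizing deviation from the synthetic input
    lam          -- trade-off weight lambda
    """
    refined = R(synthetic_batch)  # x_tilde = R_theta(x)
    return realism_loss(refined) + lam * reg_loss(refined, synthetic_batch)
```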

2.1. Adversarial Loss with Self-Regularization

To add realism to the synthetic image, we need to bridge the gap between the distributions of synthetic and real images. An ideal refiner will make it impossible to classify a given image as real or refined with high confidence. This motivates the use of an adversarial discriminator network, D_φ, that is trained to classify images as real vs refined, where φ are the parameters of the discriminator network. The adversarial loss used in training the refiner network, R, is responsible for 'fooling' the network D into classifying the refined images as real. Following the GAN approach [7], we model this as a two-player minimax game, and update the refiner network, R_θ, and the discriminator network, D_φ, alternately. Next, we describe this intuition more precisely.

The discriminator network updates its parameters by minimizing the following loss:

L_D(φ) = −Σ_i log(D_φ(x̃_i)) − Σ_j log(1 − D_φ(y_j)).   (2)

This is equivalent to the cross-entropy error for a two-class classification problem where D_φ(.) is the probability of the input being a synthetic image, and 1 − D_φ(.) that of a real one. We implement D_φ as a ConvNet whose last layer outputs the probability of the sample being a refined image. For training this network, each mini-batch consists of randomly sampled refined synthetic images x̃_i and real images y_j. The target labels for the cross-entropy loss layer are 0 for every y_j, and 1 for every x̃_i. Then φ for a mini-batch is updated by taking a stochastic gradient descent (SGD) step on the mini-batch loss gradient.
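As an illustration only, here is a minimal PyTorch-style sketch of one discriminator update following (2). The names D, optimizer_D, refined_batch and real_batch are assumptions for the example, and D is assumed to output a probability (after a sigmoid) that its input is a refined image, matching the label convention above (1 for refined, 0 for real).

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, optimizer_D, refined_batch, real_batch):
    """One SGD step on the mini-batch loss L_D(phi) of Eq. (2)."""
    p_refined = D(refined_batch.detach())  # refined images are labelled 1 ('fake')
    p_real = D(real_batch)                 # real images are labelled 0
    loss = F.binary_cross_entropy(p_refined, torch.ones_like(p_refined)) \
         + F.binary_cross_entropy(p_real, torch.zeros_like(p_real))
    optimizer_D.zero_grad()
    loss.backward()
    optimizer_D.step()
    return loss.item()
```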

In our implementation, the realism loss function ℓ_real in (1) uses the trained discriminator D as follows:

ℓ_real(θ; x̃_i, Y) = −log(1 − D_φ(R_θ(x_i))).   (3)

By minimizing this loss function, the refiner forces the discriminator to fail to classify the refined images as synthetic.

In addition to generating realistic images, the refiner network should preserve the annotation information of the simulator. For example, for gaze estimation the learned transformation should not change the gaze direction, and for hand pose estimation the location of the joints should not change. This is an essential ingredient to enable training a machine learning model that uses the refined images with the simulator's annotations. To enforce this, we propose using a self-regularization loss that minimizes the image difference between the synthetic and the refined image.

Algorithm 1: Adversarial training of refiner network R_θ

Input: Sets of synthetic images x_i ∈ X and real images y_j ∈ Y, max number of steps (T), number of discriminator network updates per step (K_d), number of generative network updates per step (K_g).
Output: ConvNet model R_θ.

for t = 1, ..., T do
    for k = 1, ..., K_g do
        1. Sample a mini-batch of synthetic images x_i.
        2. Update θ by taking an SGD step on the mini-batch loss L_R(θ) in (4).
    end
    for k = 1, ..., K_d do
        1. Sample a mini-batch of synthetic images x_i and real images y_j.
        2. Compute x̃_i = R_θ(x_i) with the current θ.
        3. Update φ by taking an SGD step on the mini-batch loss L_D(φ) in (2).
    end
end
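The following is a compact PyTorch-style rendering of Algorithm 1, written only to show the alternating update schedule. The networks, optimizers and data loaders are assumed to exist; refiner_loss is an assumed callable implementing L_R(θ) in (4), and discriminator_step an assumed callable implementing the update for (2), like the one sketched earlier.

```python
import torch

def batches(loader):
    # Cycle forever over a finite loader of mini-batches.
    while True:
        for batch in loader:
            yield batch

def train_simgan(R, D, opt_R, opt_D, synthetic_loader, real_loader,
                 refiner_loss, discriminator_step, lam, T, K_g, K_d):
    """Alternate K_g refiner updates and K_d discriminator updates for T steps."""
    synth, real = batches(synthetic_loader), batches(real_loader)
    for t in range(T):
        for _ in range(K_g):                    # update theta on L_R, Eq. (4)
            x = next(synth)
            loss_R = refiner_loss(R, D, x, lam)
            opt_R.zero_grad()
            loss_R.backward()
            opt_R.step()
        for _ in range(K_d):                    # update phi on L_D, Eq. (2)
            x, y = next(synth), next(real)
            with torch.no_grad():
                x_tilde = R(x)                  # refined images with the current theta
            discriminator_step(D, opt_D, x_tilde, y)
    return R
```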

Figure 3. Illustration of local adversarial loss. The discriminator network outputs a w × h probability map. The adversarial loss function is the sum of the cross-entropy losses over the local patches.

Thus, the overall refiner loss function (1) used in our implementation is:

L_R(θ) = Σ_i [ −log(1 − D_φ(R_θ(x_i))) + λ ‖R_θ(x_i) − x_i‖₁ ],   (4)

where ‖·‖₁ is the L1 norm. We implement R_θ as a fully convolutional neural net without striding or pooling. This modifies the synthetic image on a pixel level, rather than holistically modifying the image content as in e.g. a fully connected encoder network, and preserves the global structure and the annotations. We learn the refiner and discriminator parameters by minimizing L_R(θ) and L_D(φ) alternately. While updating the parameters of R_θ, we keep φ fixed, and while updating D_φ, we fix θ. We summarize this training procedure in Algorithm 1.
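For illustration, a minimal PyTorch-style sketch of the overall refiner loss in (4), assuming the same probability-output discriminator as in the earlier sketch; lam stands for the regularization weight λ, and the mean is used here in place of the sum purely for readability.

```python
import torch

def refiner_loss(R, D, synthetic_batch, lam):
    """Refiner loss of Eq. (4): adversarial term plus L1 self-regularization."""
    refined = R(synthetic_batch)
    # Adversarial term: -log(1 - D(R(x))), which pushes the discriminator
    # towards labelling refined images as real (D outputs P('refined')).
    eps = 1e-8
    adv = -torch.log(1.0 - D(refined) + eps).mean()
    # Self-regularization: per-pixel L1 distance to the synthetic input,
    # which keeps the annotations (e.g. gaze direction) intact.
    reg = torch.abs(refined - synthetic_batch).mean()
    return adv + lam * reg
```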

2.2. Local Adversarial Loss

Another key requirement for the refiner network is that it should learn to model the real image characteristics without introducing any artifacts. When we train a single strong discriminator network, the refiner network tends to over-emphasize certain image features to fool the current discriminator network, leading to drifting and producing artifacts. A key observation is that any local patch we sample from the refined image should have similar statistics to a real image patch. Therefore, rather than defining a global discriminator network, we can define a discriminator network that classifies all local image patches separately. This not only limits the receptive field, and hence the capacity of the discriminator network, but also provides many samples per image for learning the discriminator network. This also improves training of the refiner network because we have multiple 'realism loss' values per image.

In our implementation, we design the discriminator D to be a fully convolutional network that outputs a w × h dimensional probability map of patches belonging to the fake class, where w × h is the number of local patches in the image. While training the refiner network, we sum the cross-entropy loss values over the w × h local patches, as illustrated in Figure 3.
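As a sketch of how such a local adversarial loss can be computed, suppose the discriminator is fully convolutional and returns a w × h map of per-patch probabilities (after a sigmoid); the cross-entropy is then simply summed over all patch locations. The layer sizes below are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Fully convolutional discriminator producing a w x h probability map."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 1, 1),           # one logit per local patch
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))   # probability of 'refined' per patch

def local_adversarial_loss(prob_map, is_refined):
    """Sum of per-patch cross-entropy losses over the w x h probability map."""
    target = torch.ones_like(prob_map) if is_refined else torch.zeros_like(prob_map)
    return F.binary_cross_entropy(prob_map, target, reduction='sum')
```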

Figure 4. Illustration of using a history of refined images. See text for details.

2.3. Updating Discriminator using a History of Refined Images

Another problem of adversarial training is that the discriminator network only focuses on the latest refined images. This may cause (i) divergence of the adversarial training, and (ii) the refiner network re-introducing artifacts that the discriminator has forgotten about. Any refined image generated by the refiner network at any time during the entire training procedure is a 'fake' image for the discriminator. Hence, the discriminator should be able to classify all these images as fake. Based on this observation, we introduce a method to improve the stability of adversarial training by updating the discriminator using a history of refined images, rather than only the ones in the current mini-batch. We slightly modify Algorithm 1 to have a buffer of refined images generated by previous networks. Let B be the size of the buffer and b be the mini-batch size used in Algorithm 1.

Figure 5. Example output of SimGAN for the UnityEyes gaze estimation dataset [40]. (Left) Real images from MPIIGaze [43]; our refiner network does not use any label information from the MPIIGaze dataset at training time. (Right) Refinement results on UnityEyes. The skin texture and the iris region in the refined synthetic images are qualitatively significantly more similar to the real images than to the synthetic images. More examples are included in the supplementary material.

Figure 6. A ResNet block with two n × n convolutional layers, each with f feature maps.

At each iteration of discriminator training, we compute the discriminator loss function by sampling b/2 images from the current refiner network, and sampling an additional b/2 images from the buffer to update the parameters φ. We keep the size of the buffer, B, fixed. After each training iteration, we randomly replace b/2 samples in the buffer with the newly generated refined images. This procedure is illustrated in Figure 4.
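A minimal sketch of such an image-history buffer: it holds up to B previously refined images, contributes b/2 of them to each discriminator mini-batch, and randomly overwrites b/2 of its entries with newly refined images. The class name, tensor layout and fill-up behaviour before the buffer is full are assumptions made for this example.

```python
import torch

class ImageHistoryBuffer:
    """Fixed-size buffer of refined images for building discriminator mini-batches."""
    def __init__(self, capacity):
        self.capacity = capacity   # B in the paper
        self.images = []           # list of single-image tensors

    def sample_and_update(self, new_refined):
        """Return a mini-batch that is half current images and half history,
        then randomly replace b/2 buffer entries with the new refined images."""
        b = new_refined.shape[0]
        half = b // 2
        if len(self.images) < self.capacity:
            # Fill the buffer first; use only current images until it is full.
            self.images.extend(new_refined[i].detach().clone() for i in range(b))
            self.images = self.images[:self.capacity]
            return new_refined
        idx = torch.randperm(self.capacity)[:half]
        old = torch.stack([self.images[i] for i in idx.tolist()])
        batch = torch.cat([new_refined[:half], old], dim=0)
        for j, i in enumerate(idx.tolist()):   # overwrite b/2 buffer slots
            self.images[i] = new_refined[half + j].detach().clone()
        return batch
```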

3. Experiments

We evaluate our method for appearance-based gaze estimation in the wild on the MPIIGaze dataset [40, 43], and hand pose estimation on the NYU hand pose dataset of depth images [35]. We use a fully convolutional refiner network with ResNet blocks (Figure 6) for all our experiments.
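For reference, a PyTorch-style sketch of the ResNet block of Figure 6: two n × n convolutions with f feature maps each, followed by an identity skip connection. The kernel size and feature count are parameters of the sketch; the exact values and the number of blocks used in the experiments are not fixed here.

```python
import torch.nn as nn

class ResnetBlock(nn.Module):
    """Residual block with two n x n convolutions, each with f feature maps (Figure 6)."""
    def __init__(self, f, n=3):
        super().__init__()
        self.conv1 = nn.Conv2d(f, f, kernel_size=n, padding=n // 2)
        self.conv2 = nn.Conv2d(f, f, kernel_size=n, padding=n // 2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))   # Conv f@nxn + ReLU
        out = self.conv2(out)            # Conv f@nxn
        return self.relu(out + x)        # add the input features, then ReLU
```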

3.1. Appearance-based Gaze Estimation

Gaze estimation is a key ingredient for many human computer interaction (HCI) tasks. However, estimating the gaze direction from an eye image is challenging, especially when the image is of low quality, e.g. from a laptop or a mobile phone camera; annotating the eye images with a gaze direction vector is challenging even for humans. Therefore, to generate large amounts of annotated data, several recent approaches [40, 43] train their models on large
