PERGAMON

Computers and Structures 69 (1998) 63-78

Application of artificial neural networks to load identification

X. Cao a,b,*, Y. Sugiyama c, Y. Mitsui d

a College of Eng., Shinshu Univ., 500 Wakasato, Nagano 380, Japan
b China Flight Test Establishment, Xi'an 7100, China
c College of Eng., Univ. of Osaka Prefecture, Osaka 593, Japan
d Dept. of Civil Eng., College of Eng., Shinshu Univ., 500 Wakasato, Nagano 380, Japan

Received 17 December 1996; accepted 5 February 1998

Abstract

The intended aim of the study is to develop an approach to the identification of the loads acting on aircraft wings, which uses an artificial neural network to model the load-strain relationship in structural analysis.

As the first step of the study, this paper describes the application of an artificial neural network to identify the loads distributed across a cantilevered beam. The distributed loads are approximated by a set of concentrated loads. The paper demonstrates that using an artificial neural network to identify loads is feasible and that a well-trained artificial neural network reveals an extremely fast convergence and a high degree of accuracy in the process of load identification for a cantilevered beam model. © 1998 Elsevier Science Ltd. All rights reserved.

Keywords: Artificial neural network; Load identification; Inverse problem

1. Introduction

Accurate and reliable data on aircraft wing loads are highly necessary not only to design and develop an aircraft but also to draw up strength and rigidity specifications for aircraft. However, due to the complexity of wing structure and loading conditions, it is difficult to determine accurate and reliable aircraft wing loads only by means of wind-tunnel tests or theoretical analysis. Therefore, it is desirable to obtain wing loads through flight tests to supplement and confirm the results from wind-tunnel tests and theoretical analysis [1]. Unlike parameters such as flight height or velocity, flight loads cannot be directly observed and measured in flight. Consequently, it raises an inverse problem, i.e. it is required to identify flight loads acting on aircraft wings on the basis of some kind of structural response of the wing, such as the strain response, which is caused by flight loads and can be measured in flight.

* Corresponding author.

Even though the relation between loads and the structural response of the wing only depends on the wing structure itself, due to the complexity of the wing structure the relation cannot easily be formulated. In most cases, while direct problems may be easily formulated, their inverse problems are usually difficult or even impossible to formulate. The solution to the problem is focused on finding a means of establishing a load-strain relationship that represents the mechanical characteristics of the wing structure.

Artificial neural networks have attracted considerable attention and shown promise for modeling complex nonlinear relationships. Artificial Neural Networks are derived through a modeling of the human brain and are composed of a number of interconnected units (artificial neurons) [2].

A significant benefit of using an ANN-based model is its ability to learn relationships between variables with repeated exposure to those variables. Therefore, instead of using an analytical relationship derived from



mechanical principles, an artificial neural network learns to model the relationship of a system through an adaptive training process.

Among various types of architecture of artificial neural networks, the multilayer neural networks have some attractive features [3]: (1) a non-linear mapping function from multiple input data to multiple output data can be constructed automatically through a training process, the trained data being distributed in a manner within the network; (2) the well-trained network has a feature of so-called "generalization", i.e. a kind of interpolation, such that the network estimates appropriate output data even for untrained patterns; (3) the trained network operates quickly in an application process.

ANNs have the ability to consider both discrete and continuous variables. Massively parallel data processing is one of the main features of artificial neural networks. Artificial neural networks also have the ability to abstract: such networks can extract the core concept embedded in a sequence of input patterns, and can therefore be used to construct or identify an idealized model [4].

Taking advantage of the features of ANNs described above, this paper proposes an approach to identify flight loads acting on a wing utilizing ANNs as the framework for constructing a model of the load-strain relationship of the wing structure. The approach is shown in Fig. 1 and contains the following four sub-processes.

1. Ground calibration test

An actual aircraft wing is used as a calibrated object before a flight test. Distributed loads are applied as calibration samples that are selected, on the basis of theoretical analysis and the design load envelope, over the whole region in which the aerodynamic center of the distributed loads varies. Strain responses along the wing are then measured in order to gain data that will be used in the next phase as learning data for the ANNs. The first phase is to prepare vast amounts of learning pattern data for the artificial neural networks. Even though the obtained learning sets are discrete data sets, the mechanical characteristics of the wing structure are contained in the measured data.

2. Training the artificial neural network

Using the strains as input signals given to the input units of the network, and the loads as teaching signals (desired outputs), training of the artificial neural network is performed iteratively until the error between actual and desired outputs reaches an acceptable level. Then, the configuration of the artificial neural network for identifying wing loads acting on the investigated aircraft is decided.

Fig. 1. An approach for the identification of flight loads utilizing artificial neural networks.

In the second phase, the ability of the ANN of "learning" and "generalization" is invoked. The load-strain relation that exists inherently in the wing structure but is difficult to formulate will be automatically constructed and embedded in the well-trained ANN.

3. Flight test

Measure the strain responses caused by aerodynamic loads in the main parts of the wing through flight tests.

4. Identification of flight loads

Input the flight-measured strains of the wing to the well-trained artificial neural network and identify the flight loads based on the outputs of the neural network.

By synthesizing the above description, it can be noted that with this approach a complicated structural analysis module may be replaced by an artificial neural network model. Moreover, for the discussed problem, this proposition has the advantage that it is possible to perform ground calibration tests on an actual aircraft to obtain learning data for the ANNs, and the ANNs can learn accurately and efficiently from the learning patterns even though the actually-measured data are discrete [3]. As the structures, in some cases, are extremely complicated and the mechanical properties or relations between external excitement and structural responses are difficult to formulate, the artificial neural network model becomes a more adequate one, avoiding complicated or even impossible theoretical analysis. In this sense, ANNs are quite useful and present a non-negligible advantage.

As the first step of the study, the feasibility of using an artificial neural network to identify loads is investigated in the paper. An aircraft wing is simplified into a cantilevered beam. Distributed loads are approximated by a set of concentrated loads. The paper demonstrates the applicability of an artificial neural network to the load identification for a cantilevered beam model.

2. Establishment of a Mechanical Model for Load Identification

2.1. Mechanical model for load identification

Because identifying flight loads acting on an aircraft wing utilizing artificial neural networks is a relatively new study and there are many factors that need to be studied before conducting flight load identification according to the proposed approach, the applicability of artificial neural networks to load identification is first studied in the work. Since the study is a simulation conducted at a laboratory, instead of utilizing an actual wing as an object, a cantilevered beam is investigated. In other words, an aircraft wing is simplified to a cantilevered beam (Fig. 2). Nonuniform distributed loads acting on the wing are approximated by a set of concentrated loads that are applied in a vertical direction to the beam axis as shown in Fig. 3. Some sets of concentrated loads acting on the beam will be identified based on its strain responses after accomplishing the training of the artificial neural networks.

In order to verify the applicability of ANNs to load identification, the identified results need to be examined by comparing them to the closed form results from some kind of theoretical calculation. The closed form results for this model can be obtained from theoretical calculations based on static mechanics.

The material of the beam is assumed to be a kind of alloy which is linearly elastic; its modulus of longitudinal elasticity E is assumed to be 2.1 × 10^5 MPa (2.1 × 10^8 kN/m^2) and the bending rigidity EI is 5625 kN m^2.

Fig. 2. Mechanical model for wing loads.


Fig. 3. Model for load identification.

2.2. Preparation of learning data for ANNs

Since the study is conducted at a laboratory instead of a ground calibration test, the learning data to be adopted to train the ANNs are prepared by theoretical analysis. Meanwhile, in order to examine the accuracy of the results identified by a trained ANN, a number of data sets that will be used as check patterns are also prepared.

Thirteen sets of simulation patterns are established. In each pattern, 11 concentrated loads and 11 strain responses caused by the loads are contained. The positions of the acting points of the concentrated loads and the measuring points of strains are located on the cantilevered beam, which has been equally divided into 10 portions (Fig. 3).

Since there is no mechanical means of applying a concentrated load at the beam root, for the continuity and smoothness of the curve shaped by connecting the beginning ends of the load vectors, load P11 is applied at a point where the coordinate on the x-axis is near zero but not zero. Here, it is given that x_P11 = H = 0.002 m, as shown in Fig. 3. The load P11 doesn't influence the results much. In addition, concerning ε1, if any measuring point between points 1 and 2 is added, the strain at that point will influence the results and is not zero. The intermediate values between the strain measuring points are obtainable by interpolation. A series of curves, which contain straight lines, oblique lines with different gradient slopes, cone curved lines, half circular lines and ellipse lines with different radius lengths, parabola lines and so on, can be obtained if the beginning ends of the concentrated load vectors are smoothly connected.
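As an illustration of how such theoretical patterns can be generated, the following sketch computes the strain responses of the cantilevered beam under a set of concentrated loads by elementary static beam theory. The bending rigidity EI and the 11 stations on 10 equal divisions follow the paper; the beam length L and the section half-depth c are assumed values for illustration only, since the paper does not list them.

```python
import numpy as np

# Sketch of preparing one learning pattern by static beam theory (Sect. 2.2).
EI = 5625.0e3        # bending rigidity, N*m^2 (5625 kN*m^2, from the paper)
c = 0.05             # assumed distance from neutral axis to surface, m
L = 2.0              # assumed beam length, m

# Stations 1..11 run from the tip (x = L) to the root (x = 0); the beam is
# fixed at x = 0. Load P11 acts near, not at, the root: x = H = 0.002 m.
x = np.linspace(L, 0.0, 11)
x_load = x.copy()
x_load[-1] = 0.002

def strains(P):
    """Strain at each station for concentrated loads P (N) acting at x_load.

    The bending moment at a section x_s sums P_i * (x_i - x_s) over all
    loads outboard of the section; the surface strain is eps = M * c / EI."""
    eps = np.empty_like(x)
    for s, xs in enumerate(x):
        M = sum(p * (xi - xs) for p, xi in zip(P, x_load) if xi > xs)
        eps[s] = M * c / EI
    return eps

# Example: a triangular (oblique-line) load pattern, one of the curve
# families mentioned above; strains come out of order 1e-6 to 1e-5 here.
P = np.linspace(100.0, 0.0, 11)     # N, largest at the tip station
print(strains(P))
```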

Seven patterns shown in Fig. 4 are used as learning data for the artificial neural networks, in which strains are used as input signals and loads as teaching signals that represent desired outputs. The remaining six patterns (Fig. 13) are used to check the accuracy of the results identified by a trained ANN.

3. Computational Principle of ANNs

The following sections give a brief overview of the computational principle and the learning algorithm.

3.1. Computational principle of multilayer neural networks

To build an artificial neural network to perform some task, one must first decide what kind of ANN is to be chosen, how many units are to be used and what kinds of units are appropriate. Both the structure of the network and the connectivity of its processors have a significant influence on its overall behavior.

Due to the attractive features mentioned previously, a multilayer neural network with input, output and inserted hidden layers of neurons, which is shown in Fig. 5, is adopted in the study.

Multilayer neural networks can represent any function when given enough units [5]. This type of artificial neural network is considered a fully-connected network, in which each input will influence all output elements. The multilayer neural network affords a broad foundation on which the number of layers and the number of neurons in each layer can be optionally altered according to a given problem. The neuron numbers in the input


Fig. 4. Patterns used to train artificial neural networks. (Vertical coordinates - left axis: concentrated load, N; right axis: strain, ×10^-5. Horizontal coordinate - coordinates of acting points of loads and measuring points of strains, m.)

and output layers are usually decided according to the investigated aim. The numbers of hidden layers and neurons in each hidden layer are determined after considering the type of problem and the computational speed and accuracy, which will be studied during the training of the ANN and discussed later.

Each neuron is a fundamental computational element. Fig. 6 shows two typical neurons selected from the input layer and a hidden, or output, layer of a multilayer neural network.


Fig. 5. Schematic diagram of a multilayer neural network.

For the input layer, its neurons output the incoming signal x_i directly:

y_i = x_i    (1)

where the subscript i denotes the ith neuron in the input layer.

For the hidden or output layer, each neuron forms a weighted sum of the n inputs from the previous layer, and a bias θ_j is added:

U_j = Σ_{i=1}^{n} w_ij x_i + θ_j,    j = 1, 2, ..., n.    (2)

Then the sum becomes the input signal of the processing unit. Computational elements (units) process and pass the results through an activation function F to obtain the output y_j as follows:

y_j = F(U_j),    F(U_j) = F(Σ_{i=1}^{n} w_ij x_i + θ_j),    j = 1, 2, ..., n.    (3)

The coefficient w_ij is termed the weighted coefficient and the coefficient θ_j is termed the bias. The subscript ij denotes that w_ij is the weight on the link from unit i in the previous layer to unit j in the Jth (current) layer. Hidden layers do not interact with the external environment and simply output or input information to or from the neurons within the system. The output of an individual neuron may then become the input to other neurons or may simply be one of the outputs of the network. The former belongs to the neurons in hidden layers and the latter belongs to the neurons in the output layer.

Different models are obtained by using different mathematical functions F. Three common choices are the step, sign, and sigmoid functions [5]. The choice of activation function may significantly influence the applicability of a training algorithm. In this research, the sigmoid function given by

F(U_j) = 1 / [1 + EXP(-U_j / T)]    (4)

is adopted as the activation function for the calculation of units. The sigmoid function has the convenient property that the derivative F'(U_j) = F(U_j)[1 - F(U_j)]/T is easy to form, so that little extra calculation is needed to find F'(U_j).
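A minimal sketch of this forward computation, Eqs. (1)-(4), is given below in Python, assuming the 3-layer (11-4-11) geometry used later in the paper; the random weights, biases and input are placeholders, not values from the paper.

```python
import numpy as np

def sigmoid(U, T=1.0):
    # Eq. (4): F(U) = 1 / (1 + exp(-U / T)), with temperature T
    return 1.0 / (1.0 + np.exp(-U / T))

def forward(x, W2, th2, W3, th3, T=1.0):
    y1 = x                              # Eq. (1): input units pass x through
    U2 = W2 @ y1 + th2                  # Eq. (2): weighted sum plus bias
    y2 = sigmoid(U2, T)                 # Eq. (3): hidden-layer outputs
    U3 = W3 @ y2 + th3
    return sigmoid(U3, T)               # output-layer activations, all in [0, 1]

rng = np.random.default_rng(0)          # illustrative random initialization
W2, th2 = rng.uniform(-0.5, 0.5, (4, 11)), rng.uniform(-0.5, 0.5, 4)
W3, th3 = rng.uniform(-0.5, 0.5, (11, 4)), rng.uniform(-0.5, 0.5, 11)
print(forward(rng.random(11), W2, th2, W3, th3, T=1.1))
```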

The activation function is the only source of introducing nonlinearities into the input-output relation. The application of the sigmoid function to the linear summation U_j produces an output that is a nonlinear function of the input variables. It can be shown that, in the absence of the nonlinear activation functions, a multilayer linear network can be reduced to an equivalent single layer network. This again underscores the importance of the nonlinear activation function built into the single neuron.

In the theory of probability, the sigmoid function represents the fire probability of a unit. In statistics, T corresponds to temperature, but here T is a bias parameter used to modulate the unit output. Fig. 7 shows that the output feature of the sigmoid function will vary with the value of T.

The principal advantage of this function is its ability to handle both large and small input signals [4]. The slope of this function is representative of the available gain.

Fig. 6. Two typical neurons.


Fig. 7. Sigmoid function F(U_j) = 1/[1 + EXP(-U_j/T)].

For both large positive and negative values of the input signal, the gain is quite small, while for intermediate values of the input signal the gain is finite. Hence, an appropriate level of gain is obtained for a wide range of input signals.

Another neuron parameter is the bias θ_j, or the offset input into the unit. Although this could be achieved simply by adding a constant input with an appropriate weight, often the bias is considered separately. Biases may be used, for example, to selectively inhibit the activity of certain neurons [2]. The bias variable affects the degree of nonlinearity by shifting the input away from the linear region of the sigmoid function.

3.2. Learning algorithm for multilayer neural networks

The neural network must be trained to recognize the known patterns and to extrapolate exact results from these known patterns when new information is input. For multilayer neural networks, the training is performed by continually updating bias and synapse weights. The training process is directed towards minimizing the mean squared error between the desired and the actual outputs of the neurons in the output layer. The training of the neural network ends and the configuration is confirmed as soon as each error of the units in the output layer reaches an acceptable level.

Although many kinds of algorithms for training neural networks have been proposed and utilized so far [4, 6-9], the Error Back Propagation algorithm, which was proposed by Rumelhart et al. in 1986 [10] and is abbreviated as the EBP algorithm in this paper, is very broadly adopted [11, 12]. The EBP algorithm is an iterative gradient algorithm designed to minimize the mean squared error between the actual and the desired output [6], which focuses on quickly reducing the errors. In Refs. [8, 9] and [13, 14], the Backpropagation Neural Networks (BNNs) are very precisely described.

Based on the EBP algorithm, many other efficient learning methods have been derived [15-18]. In Ref. [15], training is performed by a modified version of the EBP algorithm; a hyperbolic tangent function was used as the node nonlinearity. In Ref. [16], an intuitive and straightforward modification of EBP is introduced; the paper describes that if expected values of source units are used for updating weights, the EBP algorithm converges significantly faster. A modified Quickprop algorithm is suggested in Ref. [17], which requires only one major parameter (the learning rate). In Ref. [18], a new EBP algorithm with coupled neurons is proposed; the neuron is called a saturating linear coupled neuron. Compared with the conventional EBP algorithm, the proposed rule has a high convergence rate in learning.

In this study, an Improved Error Back Propagation algorithm is adopted to train the ANNs. Since Backpropagation Neural Networks (BNNs) are well-known and described precisely in Refs. [8, 9] and [13, 14], and their application in structural mechanics is given in Ref. [4], detailed descriptions of EBP are omitted here. Instead, a brief introduction of the IEBP algorithm is given.

Taking the kth neuron in the output layer as an example (Fig. 5), its error can be expressed as

e_k = D_k - y_k    (5)

where D_k and y_k are the desired and actual outputs of neuron k respectively. The mean square error E_k is defined as an error function and is expressed as

E_k = (1/2) e_k^2,    i.e.    E_k = (1/2)(D_k - y_k)^2.    (6)

It should be noted that the error function can be given only for neurons in the output layer. For the neurons in other layers, such a comparison is impossible. That is why the weighted coefficients and biases must be


adjusted by starting from the nodes in the output layer and working back to those in the hidden layers.

The error function E_k can be considered to be a quadratic function of the connection coefficients w_jk and biases θ_k in the parameter space. Its dimensional number, here expressed by M, is the total number of adjustable weights and node biases. Hence, the error surface is a nonlinear surface in an M-dimensional space. The minimum of E_k, which may be a local minimum as opposed to the desired global minimum, appears at a point where the following equations hold:

∂E_k/∂w_jk = 0    (7)

∂E_k/∂θ_k = 0.    (8)

It is obvious that the learning algorithm of neural networks is a substantial process of gradually adapting w_jk and θ_k according to Eqs. (7) and (8).

Defining (D_k - y_k) y_k (1 - y_k)/T as the error term of the kth neuron in the output layer and denoting it by δ_k yields:

δ_k = (D_k - y_k) y_k (1 - y_k) (1/T).    (9)

By using a gradient descent technique to minimize the mean squared difference between the desired and actual network outputs, the weighted coefficients and biases are adjusted respectively by the following expressions:

w_jk(t+1) = w_jk(t) + η δ_k x_j    (10)

θ_k(t+1) = θ_k(t) + γ δ_k    (11)

where t denotes the learning cycle. The coefficients η and γ are termed the learning rates of the weighted coefficients w_jk and the biases θ_k respectively. The learning algorithm expressed by Eqs. (10) and (11) is termed the EBP algorithm.

The training algorithm may be improved by adding the momentum terms

α Δw_jk(t-1) = α [w_jk(t) - w_jk(t-1)]    (12)

β Δθ_k(t-1) = β [θ_k(t) - θ_k(t-1)]    (13)

into Eqs. (10) and (11), then

w_jk(t+1) = w_jk(t) + η δ_k x_j + α [w_jk(t) - w_jk(t-1)]    (14)

θ_k(t+1) = θ_k(t) + γ δ_k + β [θ_k(t) - θ_k(t-1)]    (15)

The learning algorithm expressed by Eqs. (14) and (15) is called the Improved Error Back Propagation algorithm, which is abbreviated as the IEBP algorithm in this paper. The coefficients α and β are termed the momentum coefficients for the weights and biases respectively. The variances of the weights and biases can be smoothed by means of adding the momentum terms, and the speed of convergence and the stability of the algorithm are increased. In addition, it may help to prevent the learning from converging at a local optimum and guide it towards the global one.

For the neuron j in hidden layer J, its error term δ_j can be determined using the error term δ_k of neuron k in the output layer or neighboring hidden layer, if δ_k is already gained:

δ_j = h_j (1 - h_j) Σ_{k=1}^{K} δ_k w_jk (1/T)    (16)

where h_j denotes the output of the jth neuron.

The computer code used for the NN computations in the research is written by the authors in the FORTRAN programming language. The flow-chart of the computations for ANN learning is briefly illustrated in Fig. 8.

Fig. 8. Flow-chart for artificial neural network learning.


4. Training ANNs and Discussion on Results

4.1. Training artificial neural networks

A multilayer neural network with 11 neurons in each of the input and output layers is adopted in this study. Seven sets of training patterns obtained by theoretical calculation, shown in Fig. 4, are used as training patterns, in which strains are used as input signals and loads as teaching signals. Training of the ANNs is performed by the IEBP algorithm expressed by Eqs. (14) and (15).

As described previously, each unit is a computational element that forms a weighted sum of n inputs and passes the result through a sigmoid function. Because the output range of the sigmoid function is [0, 1], as shown in Fig. 7, the outputs of the training patterns must be normalized into a unit range of [0, 1] before being given to the network as teaching signals. It is difficult to train the network without such normalization [3]. As indicated in Refs. [19, 20], the introduction of dimensionless variables and the scaling of their values should significantly improve the efficiency of BNNs. The strains are originally dimensionless variables, but because the original scale of the input strains is [0 ~ 1.307237 × 10^-5], which is too small, they are extended by multiplying by 10^5 before being given to the neural network; these input values are not normalized into a unit range of [0, 1]. The dimension of the loads is N (Newton); the loads are converted into dimensionless variables when the normalization into a unit range of [0, 1] is done.

The network weights and biases are first initialized with randomly distributed values when starting to train the neural networks, since nothing is known about the optimal set of weights and biases.

In this investigation, an allowance error of 0.05 is employed as a convergence criterion for the training patterns. The relative error EPU of each neuron in the output layer is computed by Eq. (17):

EPU = |actual output - desired output| / desired output.    (17)

When the EPUs of all neurons in the output layer are reduced to 0.05 for all learning patterns, i.e.

EPUs ≤ EA = 0.05    (18)

it is considered that convergence is achieved and the process of ANN learning ends (Fig. 8). In Eq. (18), EA denotes the allowance error.

Since an iterative gradient algorithm is employed in training the ANNs and convergence is achieved step by step, at the initial stage not all EPUs can be reduced to 0.05. Instead of the criterion expressed by Eq. (18), an error function, EP, is adopted as a reference yardstick in the process of training the ANNs, which is expressed by

EP = Σ_{1}^{PN} Σ_{1}^{UT(S)} EPU    (19)

EP is the sum of the errors between the actual and desired outputs (teaching signals) of all neurons in the output layer for all training patterns. Here, UT(S) denotes the neuron number in the output layer of an S-layer network. In this investigation, S is 3 and UT(S) is 11. PN is the number of learning patterns; since only 7 training patterns are adopted, PN is 7.

The training process automatically stops when the learning cycle number, C, reaches 10000 without achieving the allowance error, to avoid wasting CPU time. Then, the parameters are changed and the learning processes are repeated. In the process, the error sum EP is considered as a reference; a gradual drop of EP implies that the training of the ANNs is leading towards convergence. Finally, convergence is considered achieved when the criterion given by Eq. (18) is satisfied, and learning ends.
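A compact sketch of this bookkeeping, under the assumption that net(x) returns the network outputs for one pattern, might look as follows.

```python
import numpy as np

# Convergence bookkeeping of Eqs. (17)-(19); 'patterns' is a list of
# (strain_input, desired_output) pairs, EA = 0.05 as in the paper.
def epu(actual, desired):
    return np.abs(actual - desired) / desired          # Eq. (17), elementwise

def converged(net, patterns, EA=0.05):
    # Eq. (18): every output neuron within EA for every learning pattern
    return all(np.all(epu(net(x), d) <= EA) for x, d in patterns)

def error_sum_EP(net, patterns):
    # Eq. (19): EP sums EPU over all output neurons and all patterns
    return sum(epu(net(x), d).sum() for x, d in patterns)
```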

4.1.1. Effect of neuron numbers in hidden layers on network convergence

No specific guidelines exist on how to choose the neural network topology, the number of layers and the number of nodes [15]. At first, a number of multilayer networks with more than 2 hidden layers were tried, but the results did not show that the more hidden layers, the faster the convergence of training ANNs. Therefore, the number of hidden layers was reduced to 1, i.e. a 3-layer neural network was adopted. Even though the number of units in the input and output layers can be determined according to the identified object, the problem of choosing the right number of hidden units in advance is still not well understood [5].

During the process of training the ANNs, the effect of the number of nodes in the hidden layer on the convergence of the training algorithm was first investigated. The number was changed from 20 to 1. The results are summarized in Table 1. The other parameters are fixed as: momentum coefficients: α = β = 1; learning rates: η = γ = 1; sigmoid temperatures: T(2) = T(3) = 1. This implies that the influences of the other parameters on convergence are excluded at the initial stage of training.

Table 1 shows that the value of the error sum is the lowest when the neuron number of the hidden layer is 4 for the other given parameters. For convenience in the discussions, such a network is described as having a (11-4-11) ANN architecture in the paper.

Then, the effects of the other parameters included in the training algorithm on convergence were investigated for a 3-layer NN with the (11-4-11) architecture.

Table 1
Effect of neuron numbers in hidden layer on convergence

N    EP        N    EP
20   14.462    10   14.748
19   11.624     9    1.673
18   17.701     8    8.194
17   14.525     7    7.801
16    8.021     6    2.207
15   11.926     5    1.629
14    7.874     4    0.843
13    7.876     3    8.933
12    8.239     2    2.767
11   11.310     1   13.287

Table 2
Effect of sigmoid temperatures on convergence

T(2), T(3)   0.6     0.7     0.8     0.9     1.0     1.1     1.2     1.3     1.4     1.5
EP           7.673   4.968   4.469   1.945   0.843   0.726   0.819   0.909   0.973   1.032

4.1.2. Effect of sigmoid temperatures on convergence

The effect of sigmoid temperatures on convergence is shown in Table 2; the other parameters are fixed as: momentum coefficients: α = β = 1; learning rates: η = γ = 1.

Table 2 shows that the convergence is the fastest when T(2) = T(3) = 1.1 for the other given conditions. In the investigation, the sigmoid temperatures for the neurons in the hidden and output layers are changed in the same increment and are given the same values.

4.1.3. Effect of momentum coefficients on convergence

The effect of the momentum coefficients α and β on convergence is shown in Fig. 9; they are changed in the same increment of 0.05. The other parameters are fixed as: allowance error: EA = 0.05; sigmoid temperatures: T(2) = T(3) = 1.1; learning rates: η = γ = 1.

The cycle in Fig. 9 denotes the number of training cycles in which the training converges and the error of each neuron in the output layer reaches the allowance error (EA) for all learning patterns. The training, however, is compulsorily stopped if it does not converge in 10000 cycles.

The result shows that the convergence is the fastest when α = β = -0.20 for the other given conditions.

4.1.4. Effect of learning rates on convergence

The effect of the learning rates η and γ on convergence is shown in Fig. 10; the other parameters are fixed as: allowance error: EA = 0.05; sigmoid temperatures: T(2) = T(3) = 1.1; momentum coefficients: α = β = -0.20.

Fig. 9. Effect of momentum coefficients on convergence.


Fig. 10. Effect of learning rates on convergence.


It is clear that convergence is the fastest when η = γ = 1.7 for the other given conditions.

4.1.5. Relationship between learning cycle and allowance error

Finally, the relation between the number of learning cycles and the allowance error was investigated. It is apparent that the lower the allowance error, the larger the number of learning cycles. The results are shown in Fig. 11 and the other parameters are given as: sigmoid temperatures: T(2) = T(3) = 1.1; momentum coefficients: α = β = -0.20; learning rates: η = γ = 1.7. From Fig. 11, it can be observed that when the allowance error is lower, the number of learning cycles needed for convergence becomes extremely large.

4.2. Results and discussion

In this study, the effects of the neuron number in a hidden layer, the sigmoid temperature, and the learning parameters, including momentum coefficients and learning rates, on the convergence of the learning algorithm are investigated by a local search method. The parameters are changed in certain ranges step by step to try to find a better combination of these parameters to attain convergence faster. In the seeking process, the weights and biases are constantly adjusted. The value of a type of parameter that makes the error sum EP or the number of training cycles the lowest in the searched range, i.e., convergence relatively faster, is fixed and given to that type of parameter. Then, the next type of parameter is investigated.

The learning parameters are gradually determined in the process. A type of parameter investigated earlier may affect the determination of the parameters investigated afterwards, but the latter don't affect the former. The searched range is not so large because the concern is focused on finding a better combination of these parameters to make the training convergence quicker, and then to obtain the desired identified results with a trained ANN.
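The seeking procedure described above amounts to a coordinate-wise local search. A sketch of it, assuming a helper train_and_score that trains the network for a given setting and returns the error sum EP (or the cycle count), is given below.

```python
# One-parameter-at-a-time (coordinate-wise) local search over the training
# parameters; train_and_score is an assumed helper returning EP or cycles.
def local_search(train_and_score, params, grids):
    # 'params' holds the current setting, e.g. {"N_hidden": 4, "T": 1.0,
    # "alpha": 1.0, "beta": 1.0, "eta": 1.0, "gamma": 1.0};
    # 'grids' maps each name to the candidate values scanned step by step.
    for name, grid in grids.items():
        scores = {}
        for value in grid:
            trial = dict(params, **{name: value})
            scores[value] = train_and_score(trial)
        params[name] = min(scores, key=scores.get)   # fix the best value found
    return params
```

Note that, as the text observes, a value fixed early constrains the later scans, so the result is a good combination within the searched ranges rather than a guaranteed global optimum.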

In the investigation into the effect of the momentum coefficients, α and β, on convergence, the learning rates, η and γ, are given 1, which means the latter don't influence the learning at this stage. Then, the momentum coefficients, α and β, are changed in the same increment around the value 1 step by step. However, the farther the values of α and β deviate from -0.20, the more cycles are needed to reach convergence and the larger the sum of error EP becomes (Fig. 9). It should be noted that when α and β are less than -0.15, the reduction of the cycle number and the EP is smoother; then, when given higher values, oscillatory behavior occurs.

In Ref. [21], it is indicated that the value of the momentum coefficient should be taken as 0.9. Even though in Fig. 9 the range in which the momentum coefficients change is [-0.35 ~ 0.10], during the training process they were also changed in a larger range, in bigger increments around the value 1.


Fig. 11. Relation between learning cycle and allowance error.

The values α = β = 0.9 were also investigated, since that value is near 1; but as a result, the training did not converge even in 10000 cycles when α = β = 0.9. In Ref. [3], the value of the momentum factor is given as 0.001.

In the investigation into the effect of the learning rates, η and γ, on convergence, the sigmoid temperatures and momentum coefficients, α and β, are given as 1.1 and -0.20 respectively, i.e. the values which make the training converge faster.

Observing Fig. 10, it is noted that training converges faster and the number of cycles needed for learning decreases when the value of the learning rates is higher, in the range of [1.0-1.7]. However, when the learning rates are further risen, the training cycle number jumps from the minimum to a divergence point where training doesn't converge even in 10000 cycles. This behavior is worth noting.

The values of the learning rates given in the papers [3, 11, 15, 21] are not as high as the ones given in our paper. In Ref. [3], training rates are given as 0.002. In Ref. [11], it is considered that a general asymptotic theory of neural network learning algorithms with constant learning rates probably does not exist and never will; it is indicated that if the learning rates are kept constant, meaningful convergence cannot be guaranteed in general. The results from our investigation support this viewpoint.

In Ref. [15], interesting results are reported on learning rates. The effect of the learning rate η on the total number of iterations and on the speed of convergence was examined. Five values of η were tested: η = 0.005, 0.01, 0.03, 0.04, 0.10. It was observed that a smaller η will give a slower convergence rate, and a smaller η reduces the error very smoothly, whereas a high η reduces the error in an oscillatory manner; the high value of η reduces the error very fast. The same trend can be observed from the results obtained in our investigation.

An explanation for such behavior is given by Ref. [15]. It is considered that the error function is shaped into an error surface in a multi-dimensional space; the dimensional number is equal to the total number of the adjustable weights and node biases. The error surface is a nonlinear surface and contains many local minima, and the EBP algorithm eventually converges to precisely one of those local minima. However, if the value of η is too high, the algorithm bounces from one local minimum to another. We think the explanation is reasonable.

In Ref. [21], it is considered that the value of the learning rates should be taken from the range η = (0.0, 0.7]. The range was not investigated in this study and it is impossible to compare. Also, it cannot be guaranteed that the training will converge when the learning rates are taken in that range. However, it can be estimated that the value of the learning rates which makes the convergence the fastest probably is not in this range for the application reported in the paper.

In our opinion, there are a number of factors that influence ANN training and its convergence, which include the ANN architecture, the training patterns, the training algorithm, the training procedure and so on. In the training algorithm adopted in the research, except for the sigmoid temperature, there are 4 learning parameters, i.e., the momentum coefficients, α and β, and the learning rates, η and γ. The effect of the learning parameters on convergence should be considered synthetically, not independently. The reason is that the higher value of the learning rates, η and γ, selected for this training may be suitable especially because the value of the momentum coefficients, α and β, which were previously determined, is considerably low. Therefore, it may help to pay attention to the combination of the parameters.

Table 3
Errors of the identified results

Relative error    <0.08    ≥0.08
Neuron number     63       3
Ratio of total    95%      5%

The convergence of neural network learning is closely related to the parameters included in the learning algorithm and to the neuron number in the hidden layer. The learning will take a longer time or even diverge if the combination of the parameters is not suitable.

Although it is difficult to find the optimal combination of the parameters by the local search method within the searched scopes of the parameters, it can be supposed that a unique and best combination of the training parameters exists for a given problem, similar to an optimum seeking point. If a process of optimum seeking is adopted, the optimal combination can be found, with which the convergence will be the fastest; however, it takes a longer time to find it. In this study, a better combination, which is good enough for solving the problem given here, is gained by the local search method in a relatively shorter time.

There are no rigorous methods of selecting optimal and robust network sizes, i.e. appropriate numbers of hidden layers and units. Although several studies have been performed on this topic, and there are also some empirical rules, trial and error methods are still the most used to find the hidden layers and nodes in practical cases [3]. The number of layers and the number of nodes in such layers are again problem dependent. However, representative numerical experience suggests that this number be minimized for problems where computational efficiency matters. According to this study, it is not likely that it is better to have more layers and neurons in hidden layers.

In addition, according to our experience from this investigation, it is reasonable to determine the network architecture first, then to select the parameters contained in the learning algorithm. On the contrary, a set of learning parameters that makes training convergence faster for one architecture of ANN probably results in divergence for another architecture of ANN.

Needless to say, both the quality and the quantity of the selected training patterns strongly influence the convergence of a training process as well as the generalization capability.

In the training process, the weights (w_jk) and biases (θ_j) are constantly adjusted to minimize the error between the actual and the desired outputs of the units in the output layer. There are also several studies on finding the best combination of weights and biases and the network topology simultaneously. In the paper, the variations of the weights and biases during the training of the ANN are not shown. It should be reported that an inappropriate set of initial values of weights and biases, given at random at the beginning of the training, always resulted in the training diverging. Before discovering this reason, we suspected that there were some mistakes involved in the computer NN code and wasted a lot of time checking the code again and again. This shows that the combination of weights and biases influences the training process strongly. When the training was over, the weights and biases were fixed to the values obtained by the IEBP algorithm and an ANN was determined.

Fig. 12. Neural network adopted to identify loads.


As a result of the training, the architecture of the ANN with 4 neurons in a hidden layer, T(2) = T(3) = 1.1, α = β = -0.20 and η = γ = 1.7 was formed and the learning of the ANN was accomplished. The relative errors of each neuron in the output layer were reduced to 0.05 in 4151 cycles of training. The final configuration of the artificial neural network used to identify loads is shown in Fig. 12. The process of identifying loads is smoothly and ideally completed, and six sets of outputs are obtained as soon as six sets of strain data are given to the trained

Fig. 13. Comparing the identified results with theoretical values. (Vertical coordinates - left axis: concentrated load, N; right axis: strain, ×10^-5. Horizontal coordinates - coordinates of acting points of loads and measuring points of strains, m.)


neural network in order. The results are shown in Fig. 13.

The theoretical values express the accurate load values which cause the same strain values when applied onto the beam. With the 6 sets of check patterns, in each of which 11 values are contained, 66 outputs are obtained. By comparing each of these outputs with the theoretical values, the results are given in Table 3.
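The comparison behind Table 3 is a simple tally. The sketch below counts the identified outputs whose relative error against the theoretical loads falls below 0.08; net and the pattern arrays are placeholders for the trained (11-4-11) network and the six check patterns.

```python
import numpy as np

# Tabulate check-pattern errors as in Table 3: outputs below/above the
# relative-error threshold of 0.08 across 6 patterns x 11 outputs = 66.
def tabulate_errors(net, check_patterns, threshold=0.08):
    errors = np.concatenate([
        np.abs(net(strains) - loads) / loads
        for strains, loads in check_patterns
    ])
    below = int((errors < threshold).sum())
    return below, errors.size - below    # the paper reports (63, 3), i.e. 95%/5%
```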

5. Summary and Conclusions

An aircraft wing is simplified to a cantilevered beam. Nonuniform distributed loads acting on the wing are approximated by a set of concentrated loads. An Improved Error Back Propagation algorithm is adopted to train multilayer neural networks. Seven sets of learning data and six sets of check data, which are established through theoretical computation, are used to train the ANN and to check the accuracy of the identified results.

The effect of the parameters contained in the learning algorithm is investigated. A better combination of the learning parameters is found by a local search procedure, which makes the training of the ANN converge in 4151 cycles; the relative errors of the neurons in the output layer are reduced to the allowance error.

When the strains from the six check patterns are given to the units in the input layer of the well-trained ANN, the artificial neural network immediately outputs the loads. The process of load identification is smoothly and ideally completed. The identified results meet accuracy demands, as shown by comparing these outputs with the theoretical values. The paper demonstrates that artificial neural networks can be used very effectively in load identification. The well-trained ANN reveals an extremely fast convergence and a high degree of accuracy in the load identification for a cantilevered beam model.

inthefastloadconvergenceidenti®cationandaforhighadegreecantileveredofac-vergenceThereareseveralkeyfactorsthatin¯uencetecture,ofprocedure,trainingANNlearning,thecon-patterns,whichincludeANNarchi-toTheconvergenceandsoon.

trainingalgorithmsandofANNlearninge󰂀ectparametersincludedinthelearningisalgorithm.closelyrelatedTheshouldoftakebetheconsideredlearningsynthetically.parametersTheonlearningconvergencetheparameterslongertimeorevendivergeifthecombinationwillofANNItisreasonableisnottosuitable.

determinethearchitectureoftheameterslearning®rst,thenalgorithm.toselectOnthetheparameterscontainedinarchitecturethatmaketrainingconvergencecontrary,fasterasetforofpar-anotherarchitectureofANNofprobablyANN.

resultsindivergenceoneforgationThelocallearningmaysearchprocedureadoptedintheinvesti-parametershelpto®ndinarelativelyasuitableshortercombinationtime.oftheAcknowledgements

sionsTheopportunitywith®rstauthorDrofthankingR.hasIsida.everbene®tedfromthediscus-him.ShewouldliketotakethisReferences

[1] Tang J, Cao X, Mitsui Y. A Study on Determining the Vertical Tail Loads of an Aircraft by Strain Method. In: Proceedings of Second Pacific International Conference on Aerospace Science & Technology (PICAST'95). Australia, 1995;159-62.
[2] Schalkoff RJ. Artificial Neural Network. The McGraw-Hill Companies, Inc., Singapore, 1997; ISBN 0-07-1155-6.
[3] Yoshimura S, Matsuda A, Yagawa G. New Regularization by Transformation and Its Application to Neural Network Based Inverse Analyses for Structure Identification. International Journal for Numerical Methods in Engineering 1996;39:3953-68.
[4] Hajela P, Berke L. Neurobiological Computational Models in Structural Analysis and Design. Computers & Structures 1991;41(4):657-67.
[5] Russell SJ, Norvig P. Artificial Intelligence, A Modern Approach. Prentice-Hall International, Inc., USA, 1995; ISBN 0-13-360124-2.
[6] Lippmann RP. An Introduction to Computing with Neural Nets. IEEE ASSP Magazine, 1987:4-22.
[7] Asou H. Information Processing with Neural Networks. Industry Books Press, Tokyo, 1992.
[8] Hertz J, Krogh A, Palmer R. Introduction to the Theory of Neural Computation. Addison Wesley, 1991.
[9] Hecht-Nielsen R. Neurocomputing. Addison Wesley, 1991.
[10] Rumelhart DE, Hinton GE, Williams RJ. Learning Representations by Back-propagating Errors. Nature 1986;323(9):533-6.
[11] Kuan C, Hornik K. Convergence of Learning Algorithms with Constant Learning Rates. IEEE Transactions on Neural Networks 1991;2(2):484-8.
[12] Katayama T, Sugiyama Y, Ishida R, Cao X. Application of Neural Network to Preliminary Structural Design. In: Proceedings of JSME Dynamics and Design Conference '92. Japan, 1992;688.
[13] Zurada J. Introduction to Artificial Neural Systems. West Publ. Co., USA, 1992.
[14] Fausset L. Fundamentals of Neural Networks. Prentice Hall, 1994.
[15] Masri SF, Chassiakos AG, Caughey TK. Identification of Nonlinear Dynamic Systems Using Neural Networks. Journal of Applied Mechanics 1993;60:123-33.
[16] Samad T. Back Propagation with Expected Source Values. Neural Networks 1991;4:615-18.
[17] Veitch AC, Holmes G. A Modified Quickprop Algorithm. Neural Computation 1991;3:310-1.
[18] Fukumi M, Omatu S. A New Back-Propagation Algorithm with Coupled Neuron. IEEE Transactions on Neural Networks 1991;2(5):535-8.
[19] Vanluchene RD, Sun R. Microcomputers in Civil Engineering 1990;5:207-15.
[20] Gunaratnam DJ, Gero JS. Microcomputers in Civil Engineering 1994;9:97-108.
[21] Hegazy T, et al. Microcomputers in Civil Engineering 1994;9:145-59.
