Journal of Computational and Applied Mathematics 427 (2023) 115166

Approaching STEP file analysis as a language processing task: A robust and scale-invariant solution for machining feature recognition

Victoria Miles∗, Stefano Giani, Oliver Vogt
Department of Engineering, Durham University, Stockton Road, Durham, DH1 3LE, United Kingdom

Article history: Received 27 October 2022; Received in revised form 9 February 2023.
Keywords: 3D CAD; STEP files; Artificial intelligence; Recursive neural network; Recurrent neural network

Abstract: Machining feature recognition is a key task in the intelligent analysis of 3D CAD models, as it represents a bridge between a part design and the manufacturing processes required for manufacture and can, therefore, increase automation in the manufacturing process. As 3D model files do not naturally conform to the fixed size necessary as the input to most varieties of neural network, most existing solutions for machining feature recognition rely either on transforming CAD models into a fixed shape representation, accepting some loss of information in the process, or on rigid rules-based feature extraction techniques applied prior to any learning-based algorithm, resulting in solutions which may display high performance for specific applications but which lack the flexibility provided by a purely learning-based approach. In this paper, we present a novel machining feature recognition model which is capable of interpreting the data present in a STEP (standard for the exchange of product data) file using purely learning-based algorithms, with no need for human input. Our model builds on the basic framework for feature extraction from STEP file data proposed in Miles et al. (2022), with the design of a decoder network capable of using extracted features to perform the complex task of machining feature recognition.
Model performance is evaluated based on accuracy at the task of identifying 24 classes of machining feature in CAD models containing between two and ten intersecting features. Results demonstrate that our solution achieves performance comparable with existing solutions when given data similar to that used during training, and significantly increased robustness, compared with existing solutions, when presented with CAD models which vary from those seen during training and contain small features.

© 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

1. Introduction

With the increasingly wide availability of high-performing artificial intelligence technology, the manufacturing industry is currently undergoing a massive shift into a new age of production. Industry 4.0 is a term which refers to the envisioned near-future of manufacture, in which the wide-scale implementation of intelligent manufacturing solutions represents the fourth industrial revolution. The general concept of intelligent manufacture as part of Industry 4.0 is of a manufacturing industry in which smart solutions aid in scheduling, monitoring and controlling the operations of smart machines and

∗ Corresponding author. E-mail address: victoria.s.miles@durham.ac.uk (V. Miles).
https://doi.org/10.1016/j.cam.2023.115166
Fig. 1. Segment of a STEP file.

in which artificial intelligence is used to aid in the design process by interfacing between CAD (computer aided design) models and physical manufacturing processes [1].

There are many potential applications for intelligent solutions which can effectively interpret 3D CAD models. Identifying the features present in a model can lead to increased automation between design and manufacturing processes. Smart analysis of databases of CAD models could be utilised in effectively sorting or searching through large databases for easier access to a manufacturing company's list of existing part designs. Smart analysis of features when compared to existing CAD models has the potential to lead to increased standardisation of features, for greater manufacturing efficiency.

Intelligent analysis of 3D data, such as CAD models, has traditionally been dominated by methods which utilise 2D images of 3D shapes, 3D grids of voxels, or point cloud data. Each of these methods necessitates the translation of accurate 3D model data into less accurate visual forms which are easier for neural networks to process, introducing issues such as limited resolution and geometric ambiguity. Solutions for CAD model analysis which do take model data directly as input tend to include rules-based elements, limiting the adaptability of the system. In this paper we present an intelligent solution for the key task of machining feature recognition, which utilises purely learning-based techniques and requires no transformation away from actual CAD model data.

STEP (standard for the exchange of product data) is an ISO standard model format [2] in which 3D geometries are represented using a hierarchical structure where simple components, such as points, are combined to form increasingly complex components, from edges to faces to complete model shells. The wide use of the STEP format in the manufacturing industry makes it suitable as input for a generic intelligent system, and as STEP is a text-based format, it is suitable for direct interpretation using artificial intelligence tools developed for language processing tasks.
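As a minimal illustration of why the format lends itself to such treatment, a STEP entity line can be tokenised in a few lines of code. The sketch below is not the authors' implementation; the example entity, helper name and simplified pattern are illustrative only, and a real ISO 10303-21 file would require a full parser:

```python
import re

# Illustrative sketch: extract the line ID, the category keyword and any
# referenced line IDs from a STEP entity line of the form
#   #12 = CARTESIAN_POINT('', (10.0, 0.0, 5.0));
ENTITY_RE = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\);")

def parse_entity(line):
    m = ENTITY_RE.match(line.strip())
    if m is None:
        return None
    line_id, category, params = m.groups()
    # Any "#n" inside the parameter list is a reference to another line;
    # linking lines through these references yields the data hierarchy.
    refs = [int(r) for r in re.findall(r"#(\d+)", params)]
    return {"id": int(line_id), "category": category, "refs": refs}

sample = "#31 = EDGE_CURVE('', #28, #29, #30, .T.);"
print(parse_entity(sample))
# → {'id': 31, 'category': 'EDGE_CURVE', 'refs': [28, 29, 30]}
```

Connecting each parsed line to the lines it references reproduces the tree structure described above, with leaf nodes such as points at the bottom and the complete model shell at the root.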
A segment of a STEP file is shown in Fig. 1. Each line represents a single element, such as a point or edge, and consists of a line ID, category, and a list of parameters which may be the IDs of other STEP lines, coordinate values or additional flags. A complex hierarchical structure is formed by connecting each line to each other line it depends on, with many nodes at the lowest level and a single node at the highest level which represents the entire model geometry.

A basic framework for intelligent analysis of STEP file data is presented in [3], in which a recursive encoder network is used to intelligently compress the geometric data from a STEP file into a single-vector representation. In this paper we build on this framework to develop an intelligent solution for the detection of common machining features, and demonstrate the competitive performance of our model.

1.1. Machining feature recognition

Meaningful analysis of 3D data is a task which poses significant challenges for artificial intelligence models. 3D data is inherently complex and does not simply conform to the fixed input shape required by many neural network architectures. Therefore, most existing approaches to the analysis of 3D data using neural networks rely on transforming the data into a fixed size, with an inevitable associated loss of 3D data; common approaches include the multi-view approach, the voxel approach and the point cloud approach.

In the multi-view approach [4,5], multiple 2D images of a 3D object are taken as input to a neural network. The outputs of the network across each image are in some way combined to produce predictions regarding the shape of the 3D object. This approach relies on all relevant features of the 3D object being visible in the images selected, with features which are small in size or located in inconvenient positions posing greater challenges for the neural network.
Fig. 2. 24 classes of machining feature, from [3].

The voxel-based approach [6-8] involves converting 3D objects into 3D grids of 'voxels' and using these as input to a 3D neural network, such as a 3D CNN. This method has a severe resolution issue, as the 3D neural networks used to interpret the voxel grids become impractically large unless input dimensions are kept relatively small. The necessary low resolution of input data results in models which can achieve success when recognising general shapes but which struggle with fine details and small features.

The point cloud approach [9,10] improves on this resolution issue by representing a 3D object more efficiently using selected points from surface faces. However, there remains a limitation to resolution based on the total number of points used to represent each model.

Machining features (or manufacturing features) are simple features, such as holes and slots, which can be produced via a single manufacturing process. Through the combination of several machining features, complex 3D part designs can be realised. The recognition of these features within a 3D model is, therefore, a key task for intelligent solutions which analyse CAD models, as identifying the features present in a model represents a direct bridge between a CAD design and the manufacturing processes necessary to produce the part it represents. Fig. 2 shows examples of 24 classes of machining feature, which will be referenced throughout this paper.

There are existing machining feature recognition models based on each of the approaches outlined above. FeatureNet [11] uses a 3D CNN to classify machining features based on 3D voxel grids with dimensions between 16 × 16 × 16 and 64 × 64 × 64. In MsvNet [12], voxelised models are represented using multiple 2D sectional views, created by making random cuts into the model to allow for representation of the inside of the model. The 2D images produced are segmented to isolate individual features using the selective search algorithm, then individual feature representations are used as the input to a 2D CNN which performs feature recognition.
This architecture was improved on in [13], which presented SsdNet, in which segmentation and feature recognition are combined into a single process based on the single shot multibox detector (SSD) [14]. In [15], PointNet++ [10], a hierarchical network which makes use of point cloud data, is used to perform both single-feature classification and multi-feature recognition. In [16], the Associatively Segmenting and Identifying Network (ASIN) is proposed, a network which clusters faces belonging to the same machining feature using point cloud data before performing identification of feature class.

In recent years, work has been done applying learning-based techniques directly to the faces of boundary representation (B-rep) models to address the issue of the loss of 3D model data when converting to a fixed-size input. In [17], information from each B-rep face is encoded and a feature class predicted for each face to effectively segment the CAD model into individual machining features.

In addition to these learning-based models, there is also significant existing research focusing on the use of rules-based techniques for extraction of relevant features from STEP files. As with any rules-based approach, these models are typically limited to specific tasks such as detecting B-spline surface features [18], spot-welding features [19] or identifying features
relevant to the V-bending process [20]. Even more general solutions, such as the feature recognition system for rotational parts presented in [21], rely on limited feature sets which can only be expanded through the production of a new set of definitive rules to govern the new feature class, thus lacking the flexibility of a learning-based approach.

The novelty of our approach lies in treating the interpretation of a STEP file as a language processing task, with the hierarchical information represented in a STEP file maintained as input to the neural network, resulting in a purely learning-based approach which takes model files directly as input.

1.2. Neural networks for language processing

The STEP format is an example of an artificial, or computer, language, with its own fixed vocabulary and rules equivalent to a simple grammatical system. As they are languages designed to be interpreted by computers, artificial languages are inherently less nuanced and complex than natural, or human, languages, with consistent rules and no ambiguity. They therefore provide a significantly more straightforward form of data for artificial intelligence solutions to interpret than natural languages, whilst retaining enough similarity to permit the direct application of neural networks designed for natural language processing.

Early work into applying neural networks to natural language processing tasks primarily focused on the recurrent neural network [22]. A recurrent neural network (or RNN), in contrast to most neural networks, is designed to handle a variable input size. Input data is expected to be a sequence of any length, such as a natural language sentence, with a single cell duplicated as many times as necessary to process the entire sequence. At each step, a new word from the sequence is input to the cell and a new output hidden state is calculated. This output hidden state is then taken as the input hidden state for the next step in the sequence. Thus, memory is maintained of the information added at each step.
A common variant of RNN is the LSTM (long short-term memory) network [23], which makes use of the LSTM cell [24], in which two memory states are maintained, representing both long and short-term memory.

Recursive neural networks have an architecture very similar to recurrent neural networks, with the key difference being the shape of the input data. Whilst recurrent networks assume that input data is in the form of a linear sequence of inputs, recursive neural networks assume that input data has a hierarchical, or tree, structure. First proposed in [25], the recursive neural network was initially designed for applications in both natural language processing and semantic scene segmentation, exploiting the hierarchical structures present both in natural language and in images. This architecture was improved on in [26] with the implementation of a modified LSTM cell, designed to be operated within a tree structure. Although the continued dominance of RNNs [27,28] and transformer networks [29,30] has limited the scale of further research into the potential of recursive architectures for natural language processing, recursive networks have since been adapted to applications as diverse as program translation [31], identification of protein interactions [32] and jet physics [33].

In [3] a recursive neural network is applied to the geometric data from a STEP file in order to compress the complex hierarchical information into a single-vector representation. The viability of this approach is demonstrated through classification of machining features. This work reports high accuracy for the single-feature classification task but does not implement a model applying the encoder to any complex task, such as multi-feature recognition.

2. Methods

In this paper, we propose a novel architecture for machining feature recognition, incorporating the recursive encoder network presented in [3]. The operation of the neural network is shown in Fig. 3(b). The network consists of two sub-networks: the recursive encoder network, which takes the data from a STEP file as input and produces a single-vector encoding representing each CAD model, and the decoder network, which uses these encoded vectors to produce a list of class predictions for each CAD model.
This section contains a brief outline of the encoder network, as first presented in [3], followed by details of the design and training of a novel LSTM-based decoder network, including evaluation of several architecture variations and selection of appropriate model parameters.

2.1. Recursive encoder

The recursive encoder network is based on the Child-Sum Tree-LSTM network first proposed in [26]. The network takes the tree-structured data represented in a STEP file and applies a tree LSTM cell to every node in the input structure, producing vectors which represent increasingly complex information as the network parses recursively through the hierarchical data structure. The final output of the encoder network is a single vector representing the output of the tree LSTM cell applied to the highest-level node of the input data tree. As all other nodes in the tree feed directly into this highest-level node, it can be seen to represent the entire data tree, and so the output vector represents an encoding of all of the geometric information present in a STEP file. Full detail of the implementation of the recursive encoder is presented in [3].
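The bottom-up evaluation order of the encoder can be illustrated with a toy sketch. Note this shows only the recursive traversal, not the gated Child-Sum Tree-LSTM arithmetic of [26] (which operates on learned vector states); here the "cell" is replaced by a scalar sum purely to show how every node feeds into the single root encoding:

```python
# Toy sketch of the recursive, bottom-up evaluation used by the encoder.
# Each node is (value, [child_subtrees]); the stand-in "cell" just adds
# the child results to the node's own value, so the root result
# aggregates the whole tree, as the tree-LSTM's root vector does.
def encode(tree):
    value, children = tree
    child_sum = sum(encode(c) for c in children)  # recurse over children first
    return value + child_sum  # stand-in for the tree LSTM cell update

# A tiny tree: a root depending on two mid-level nodes with leaves below.
tree = (1, [(2, [(4, []), (5, [])]), (3, [(6, [])])])
print(encode(tree))  # → 21: every node contributes to the root encoding
```

In the real network the scalar sum is replaced by the Child-Sum Tree-LSTM cell, which sums the children's hidden states and applies forget, input and output gates with learned weights.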
Fig. 3. Neural network architecture for traditional and adapted decoder networks, showing four prediction steps, where i_i are input vectors, o_i output vectors, h_e the output hidden state from the encoder network and c_0 the initial cell state.

2.2. Decoder architecture

The decoder network makes use of the LSTM cell. An LSTM cell consists of three gates, the forget, input and output gates, which are used to update two memory states: the hidden state (short-term memory) and the cell state (long-term memory). Each cell takes a single new input in the form of a vector. Each gate performs a specific function, with behaviour in a particular LSTM cell dependent on the previous hidden state and the input vector. First, the forget gate controls how much information from the previous cell state is copied into the new cell state and how much is forgotten. Then, the input gate controls which information from the previous hidden state and the new input should be written into the new
Fig. 4. Processes within an LSTM cell, where h_t is the hidden state, c_t is the cell state, x_t the new input and f_t, i_t and o_t are outputs of the forget, input and output gates at time step t.

cell state. Finally, the output gate controls how much of the new cell state should be written to the new hidden state. The hidden state is taken as the output of the cell, whereas the cell state merely represents long-term memory, which will be passed to the next LSTM cell in the chain. The processes within an LSTM cell are visualised in Fig. 4.

Fig. 3(a) shows a decoder network using a traditional LSTM-based architecture. The cells are arranged in a chain, with the output hidden states from each cell taken as the input for the next cell in the chain. The output hidden state from each cell is passed through fully-connected layers, reducing the dimensions of output vectors to equal the number of classes in the dataset. A softmax activation function is then applied to produce a probability score for each feature class as follows:

Softmax(x_i) = exp(x_i) / Σ_j exp(x_j)    (1)

and the class with the highest score is predicted at this step. The first input is a zero vector, with subsequent cells taking the previous output vector as input. Thus, information is passed through the chain of LSTM cells, with a single prediction made at each step. Predictions stop when a designated end token is predicted.

To better adapt the LSTM architecture to fit the desired task, several alterations were made to this traditional architecture:

1. Given that the order of the output predictions is not relevant, an updating input vector was implemented. Instead of feeding the previous output directly into the cell as input, the predictions from each previous step are now summed. This results in input vectors which represent all previous predictions, giving more useful information as input for each prediction step.

2. As the goal of the decoder is to derive as much meaning as possible from the output of the recursive encoder, it was decided to feed the encoder output directly into each LSTM cell as the previous hidden state.
The cell state is still passed through the chain, giving the network an updating long-term memory whilst always working directly on the encoder output as the short-term memory. This, combined with the updating input vector, means that the decoder is always directly analysing the encoder output, with an updating memory of previous predictions.

3. To reduce the number of prediction steps necessary, the output layers have been adapted to produce several predictions at every output step. In order to encourage the existence of multiple high values in the output vector, the softmax activation function is replaced with sigmoid, which is calculated element-wise as:

σ(x) = 1 / (1 + exp(-x))    (2)

To predict multiple labels in one step, all classes with scores higher than a given threshold are predicted. Through the trial of several values, a threshold of 0.7 was deemed to be appropriate for output predictions and so the output
vector is converted into a binary vector as follows:

x = { 1 if σ(x) ≥ 0.7;  0 if σ(x) < 0.7    (3)

In this way, each instance of the LSTM cell predicts a list of feature classes until a step is reached where no further predictions are produced. Thus, for a CAD model which does not contain more than one instance of any machining feature class, it is possible for the model to predict all classes present with a single prediction step.

Fig. 3 shows the alterations made to the decoder network, with a traditional LSTM network displayed in Fig. 3(a) and our adapted version in Fig. 3(b). Feature recognition accuracy achieved after each of these alterations was introduced, as well as for the traditional LSTM architecture, is presented in Table 1.

Table 1. Train-time feature recognition accuracy for a traditional LSTM architecture and after implementing each alteration to the architecture; each model listed also includes all adaptations listed above.

Decoder architecture | Validation accuracy
Traditional LSTM | 0.731
(1) With updating input vector | 0.874
(2) With encoder hidden state used as input for every step | 0.897
(3) With sigmoid activation giving multiple predictions per step | 0.902

2.3. Neural network size

One advantage of our approach is the relative simplicity of the neural network architecture; the encoder network consists of a single tree LSTM layer and the decoder of a single LSTM layer with two fully connected layers at the output. However, in order to ensure optimum performance whilst minimising the number of learnable parameters, a study was carried out to compare the performance of several variations of the neural network. The hidden size of the encoder and decoder LSTM layers was varied between 300 and 600. In addition, decoder networks with a single output fully connected layer and with two fully connected layers, where the output size of the first layer was varied between 50 and 150, were used. Fig. 5 shows the validation accuracy for each of the model variants tested.
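Returning briefly to the decoder's output rule, the element-wise sigmoid and 0.7 threshold of Eqs. (2) and (3) amount to only a few lines of code. The sketch below is illustrative, with made-up logit values rather than real network outputs:

```python
import math

# Minimal sketch of the decoder's multi-label output rule (Eqs. (2)-(3)):
# apply an element-wise sigmoid, then predict every class whose score
# clears the 0.7 threshold.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_classes(logits, threshold=0.7):
    return [i for i, z in enumerate(logits) if sigmoid(z) >= threshold]

# Made-up logits: only indices 0 and 2 have sigmoid scores above 0.7.
print(predict_classes([3.2, -1.0, 1.5, 0.2]))  # → [0, 2]
```

A prediction step that returns an empty list corresponds to the stopping condition described above: no class clears the threshold, so no further predictions are produced.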
The accuracy measure used is f-score, a weighted average of precision and recall, calculated as:

precision = tp / (tp + fp)    (4)
recall = tp / (tp + fn)    (5)
f-score = 2 × precision × recall / (precision + recall)    (6)

where tp is the number of true positives, fp the number of false positives and fn the number of false negatives in the neural network predictions.

It can be clearly seen from Fig. 5 that a hidden size of 500 achieves optimal accuracy, outperforming both larger and smaller networks. This, combined with two fully connected layers with the output size of the first layer equalling 50 or 100, achieves the highest f-scores. Of these two models, either would be an appropriate choice. One model is smaller, as a smaller fully connected layer requires fewer learned parameters. However, the other network converged to a slightly better solution in fewer training steps, indicating a shorter training time. For the remainder of this paper, the model with hidden size 500 and intermediate fully connected size of 100 is used when presenting results.

2.4. Training

The neural network is trained to minimise binary cross-entropy (BCE) loss between each set of predictions and those expected for the corresponding LSTM cell. BCE loss can be calculated as:

L = -(1/n) Σ_{i=1}^{n} [ y_i · log x_i + (1 - y_i) · log(1 - x_i) ]    (7)

where n is the number of values in the model output, x_i is the ith scalar value in the model output and y_i is the ith scalar value in the target vector. During training, the number of output vectors predicted is limited to 10 and cumulative loss is calculated. Ideal behaviour is defined as detecting all features present in a CAD model in as few steps as possible, and good behaviour as detecting all features within the 10 steps allowed. In order to encourage this behaviour, the first
Fig. 5. Validation f-scores plotted against number of training steps required for various model sizes.

target vector is set as a list of all present feature classes. Subsequent target vectors are initialised to represent only feature classes which appear multiple times in the model. At each round of predictions, any classes from the target vector which are not predicted by the network will be passed through into the next target vector. If the model has successfully detected every ground truth class, the loop is broken and the process of loss accumulation halted. Thus, if the model successfully predicts all ground truth classes quickly, loss is minimised, whereas if the model takes longer to make the predictions, there will be more steps taken in which to accumulate loss.

In order to effectively learn weights for both the encoder and decoder networks, the neural network is trained using a two-step process. First, the encoder is pre-trained using a simplified decoder. This simplified model has no LSTM cells and instead feeds the output of the encoder network directly into the fully connected layers, to produce a single binary vector representing the presence or lack thereof of each feature class. In the second step of training, the simplified decoder is replaced with the full decoder architecture. In this second phase of training, the decoder weights are learned and the encoder weights merely fine-tuned.

3. Results and discussion

The neural network has been trained to perform a machining feature recognition task, in which a list of feature class predictions is produced for each STEP file given to the model as input. The network was trained with the learning rate initialised at 1e-3 and the Adam optimiser used [34]. All code is written in PyTorch and training and testing for our network is carried out using a Xeon E5-2609 v3 CPU with two cores and 20 GB RAM.

In order to demonstrate the significance of our results, network performance will be compared to that of two existing machining feature recognition models, MsvNet [12] and SsdNet [14]. In both cases, optimal pre-trained versions of the models are used, and testing is carried out using a Xeon Gold 6134 CPU with an RTX 2080Ti GPU.
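For concreteness, the f-score metric of Eqs. (4)-(6) and the BCE loss of Eq. (7) can be computed directly. The counts and probabilities below are made-up examples for illustration, not results from this paper:

```python
import math

# F-score from prediction counts (Eqs. (4)-(6)).
def f_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Binary cross-entropy (Eq. (7)) between a binary target vector and the
# model's per-class sigmoid outputs, averaged over the n output values.
def bce_loss(targets, outputs):
    n = len(targets)
    return -sum(y * math.log(x) + (1 - y) * math.log(1 - x)
                for y, x in zip(targets, outputs)) / n

print(round(f_score(tp=8, fp=2, fn=2), 3))            # → 0.8
print(round(bce_loss([1, 0, 1], [0.9, 0.1, 0.8]), 3))  # → 0.145
```

Confident, correct outputs (e.g. 0.9 against a target of 1) contribute little loss, while each additional prediction step accumulates further loss, matching the training behaviour described above.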
3.1. Machining features dataset

In this paper, the neural network is trained and tested using a dataset based on 24 classes of simple machining feature, as shown in Fig. 2. Each model in the machining features dataset consists of a standard base cube with side length 10 cm,
with between two and ten machining features added. The classes of feature added to each model are randomised, along with all dimensions of the feature, which are assigned random values within fixed bounds. The list of feature classes included, as well as the scale of the models and dimension limits for features, are based on those included in the benchmark dataset presented in [12]. However, the benchmark itself is not used to assess performance, as the dataset contains models only in STL format and large-scale conversion of a dataset from STL to STEP format is not practical.

The neural network is trained using a dataset consisting of 2000 total models, divided into train and validation sets at a ratio of 8:2, giving a train set of 1600 total models and a validation set of 400.

Performance is assessed using several test datasets. These are generated using the same randomised process as the training datasets, meaning that the models produced are equivalent to those used during training but, due to the randomisation, the recurrence of an identical feature is extremely unlikely. The first test set is a dataset of size 400 in which feature size limits are set to the same values as those used in the training sets. In order to demonstrate the scale invariance of our model, additional test datasets are constructed with the minimum size allowed for feature dimensions reduced. The parameters used to generate each test dataset are shown in Table 2. Each dataset is comprised of 10 subsets, each containing an equal number of models and generated with parameters as shown in Table 3.

Table 2. Parameters used to generate test datasets.

Test set | Number of models | Scale factor (Minimum size)
1 | 400 | 1
2 | 200 | 1/2
3 | 200 | 1/3
4 | 200 | 1/4

Table 3. Parameters used to generate data subsets.

Data subset | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Number of features | 2 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Scale factor (Maximum size) | 1 | 1/2 | 1/2 | 1/2 | 1/3 | 1/3 | 1/3 | 1/4 | 1/4 | 1/4

For direct comparison of the performance of our network with those of existing solutions, our test datasets have been converted into the STL and binvox formats compatible with the comparison networks, allowing for direct comparison using the same datasets.
As our first dataset has been designed to be exactly equivalent to the benchmark set, with the same parameters used for generation, optimal performance can be expected from comparison models trained using the benchmark data. Fig. 6 shows an example CAD model from each data subset in each of the four test sets.

3.2. Feature recognition performance

Performance using data equivalent to that seen during training will first be discussed, followed by performance using the scaled-down datasets.

3.2.1. Performance on data equivalent to training data

Table 4 shows the f-score achieved by our network, compared to existing solutions, when performing a machining feature recognition task using data equivalent to that seen during training. These results are visualised in Fig. 8(a). As can be seen from the data, our network achieves high performance, close to that of SsdNet, and significantly outperforms MsvNet.

A key point of comparison is the performance of our model and that of SsdNet across data subsets 1 and 2. As both subsets consist of models containing two machining features, the only difference between these sets is the scale of the features. The scaled-down features in subset 2 have less overall intersection due to their reduced size. Therefore, the significantly improved performance of our network on subset 2 indicates the high degree to which our model's limitations are caused by intersection of features. The comparatively consistent performance of SsdNet across these two data subsets indicates either that SsdNet's limitations are not as closely connected to the level of intersection, or that the improvements in performance connected to a lower level of intersection are negated by a simultaneous decrease in performance due to the increased number of smaller features.

This disparity can again be seen in the comparative performance across data subsets 4 and 5. While adding an additional feature and decreasing the feature size once more results in a roughly 3% drop in accuracy for SsdNet, our model's accuracy again increases.
Fig. 6. Example CAD models from each test dataset.

Table 4. F-scores for Test Set 1, separated by data subset.

Data subset | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Total
MsvNet [12] | 0.7205 | 0.8481 | 0.7699 | 0.7752 | 0.7399 | 0.6866 | 0.6994 | 0.7219 | 0.6657 | 0.6779 | 0.7156
SsdNet [14] | 0.9554 | 0.9560 | 0.9664 | 0.9500 | 0.9262 | 0.9021 | 0.9114 | 0.9006 | 0.9211 | 0.9168 | 0.9217
Ours | 0.9250 | 0.9875 | 0.9106 | 0.9270 | 0.9415 | 0.8980 | 0.9031 | 0.8783 | 0.8843 | 0.8152 | 0.8906

3.2.2. Scale invariance: Investigating the impact of reducing feature size

A key advantage of our approach is that, by taking STEP files directly as input to the neural network, all information necessary to reproduce the model can be maintained. Unlike image and voxel-based approaches, resolution is not an issue and so there is no limitation to the minimum feature size, relative to the model size, which can be identified. In order
to demonstrate this scale invariance, the network is tested using three additional test datasets in which the minimum dimensions for each feature have been scaled down by factors of 2, 3 and 4. For comparison, the scaled-down datasets have also been converted into STL and binvox formats, allowing SsdNet and MsvNet to also run on each dataset to measure comparative performance. Neither our model nor either of the comparison models has been trained using data containing small features so, for all three networks, these datasets contain models which are substantially different from anything seen during training. Table 5 shows the overall f-score for each test dataset, and these results are plotted in Fig. 7.

Table 5. F-scores for each test dataset.

Model | Test Set 1 | Test Set 2 | Test Set 3 | Test Set 4
MsvNet | 0.7156 | 0.6101 | 0.6023 | 0.5713
SsdNet | 0.9217 | 0.8367 | 0.7657 | 0.7413
Ours | 0.8906 | 0.8976 | 0.8831 | 0.8780

Fig. 7. Overall f-score of each test dataset plotted against minimum feature size scale factor.

As can be seen from Fig. 7, both MsvNet and SsdNet show a clear downwards trend in accuracy when increasingly smaller features are included in the dataset. This is a predictable result; both networks rely on voxelised models, with dimensions in this case set to their highest values of 64 × 64 × 64. As an inevitable result of this process, smaller features are lost or distorted, as very few voxels are used to represent them. Increasing the dimensions of the voxelised model can result in impractically large networks and, regardless of the size of the voxel grid, there will always be a limit to the resolution of the model.

In contrast, our solution shows no strong correlation between minimum feature size and model performance. Small features are represented using the same components in a STEP file as large features, with only the coordinate values differentiating features based on size. Therefore, our model is capable of demonstrating scale-invariance to a degree which would not be possible for a model reliant on voxelised representations.
Moreover, the network demonstrates robustness when faced with CAD models with features which differ significantly in scale from those seen during training, indicating a strong general understanding of feature shape which is not reliant on features conforming to the expected scale.

Table 6 shows the performance results broken down by data subset, and the performance of each model across each test dataset and subset is plotted in Fig. 8.

As can be seen in Fig. 8(a), when using Test Set 1, in which feature sizes are equivalent to those used during training, our model and SsdNet display comparable performance. However, as increasingly small features are allowed, increased separation in detection performance can be observed, with the performance of both comparison networks dropping significantly, whilst ours remains consistent. Fig. 9 shows the drop in performance for SsdNet, compared to the comparatively consistent performance of our solution.

4. Conclusions

This paper presents a novel solution for machining feature recognition in which the interpretation of STEP files is treated as a language processing task. Results show that our network displays high performance independent of minimum
feature size, whilst other existing solutions show reductions in performance as features become smaller. As complex CAD models may contain small features relative to the model size, this result demonstrates that our model is capable of performing consistently across a wider range of CAD models, and therefore displays greater flexibility than the existing solutions used for comparison.

Table 6
Network performance across all test subsets.

Model         Scale   Data subset                                                                             Total
                      1       2       3       4       5       6       7       8       9       10
MsvNet [12]   1       0.7205  0.8481  0.7699  0.7752  0.7399  0.6866  0.6994  0.7219  0.6657  0.6779  0.7156
              2       0.8052  0.6053  0.7664  0.6531  0.7571  0.5951  0.6378  0.5839  0.4984  0.5581  0.6101
              3       0.6579  0.7632  0.7027  0.7183  0.7363  0.6667  0.5815  0.5292  0.5811  0.5122  0.6023
              4       0.6446  0.7818  0.6988  0.6697  0.6347  0.5817  0.6006  0.5089  0.4955  0.5011  0.5713
SsdNet [14]   1       0.9554  0.9560  0.9664  0.9500  0.9262  0.9021  0.9114  0.9006  0.9211  0.9168  0.9217
              2       0.8571  0.9114  0.8689  0.8758  0.9053  0.8547  0.8645  0.8608  0.7588  0.7757  0.8367
              3       0.7848  0.8312  0.8673  0.8742  0.8283  0.8295  0.7344  0.7153  0.7222  0.6997  0.7657
              4       0.8983  0.8727  0.7901  0.8472  0.8073  0.7485  0.7565  0.6746  0.6994  0.6578  0.7413
Ours          1       0.9250  0.9875  0.9106  0.9270  0.9415  0.8980  0.9031  0.8783  0.8843  0.8152  0.8906
              2       0.9639  0.9877  0.9322  0.9114  0.9286  0.9160  0.8561  0.9055  0.8606  0.8740  0.8976
              3       0.8608  0.9877  0.9344  0.9125  0.9490  0.9270  0.8645  0.8636  0.8300  0.8525  0.8831
              4       0.9421  0.9500  0.8962  0.9099  0.8829  0.9209  0.8492  0.8639  0.8527  0.8546  0.8780

Fig. 8. Feature recognition performance across all test subsets.
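For reference, the f-scores reported in Tables 5 and 6 follow the standard precision/recall definition over detected features; a minimal sketch (the counts below are invented for illustration, not taken from the paper's experiments):

```python
# Standard f-score over detected machining features (illustrative only;
# the counts below are invented, not the paper's data).
def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# e.g. 90 features recognised correctly, 10 spurious detections, 10 missed:
print(round(f_score(tp=90, fp=10, fn=10), 4))  # prints 0.9
```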
Fig. 9. Feature recognition performance across all test subsets.

Data availability

A link to the reserved doi for the dataset is included in the acknowledgements section.

Acknowledgements

This work has used Durham University's NCC cluster. NCC has been purchased through Durham University's strategic investment funds, and is installed and maintained by the Department of Computer Science. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), United Kingdom [grant number EP/T518001/1]. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. The dataset used in this paper can be accessed from the Durham Research Data Repository, at http://dx.doi.org/10.15128/r26q182k16s. The authors have no competing interests to declare.

References

[1] R.Y. Zhong, X. Xu, E. Klotz, S.T. Newman, Intelligent manufacturing in the context of industry 4.0: A review, Engineering 3 (2017) 616-630.
[2] ISO, Standards, 2016, https://www.iso.org/standard/63141.html [Accessed March 2022].
[3] V. Miles, S. Giani, O. Vogt, Recursive encoder network for the automatic analysis of STEP files, J. Intell. Manuf. (2022) 1-16.
[4] F.-w. Qin, L.-y. Li, S.-m. Gao, X.-l. Yang, X. Chen, A deep learning approach to the classification of 3D CAD models, J. Zhejiang Univ.-Sci. C 15 (2014) 91-106.
[5] H. Su, S. Maji, E. Kalogerakis, E. Learned-Miller, Multi-view convolutional neural networks for 3D shape recognition, in: Proceedings of the IEEE International Conference on Computer Vision, ICCV, 2015.
[6] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, J. Xiao, 3D ShapeNets: A deep representation for volumetric shapes, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2015.
[7] D. Maturana, S. Scherer, VoxNet: A 3D convolutional neural network for real-time object recognition, in: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2015, pp. 922-928.
[8] G. Riegler, A. Osman Ulusoy, A.
Geiger, OctNet: Learning deep 3D representations at high resolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2017.
[9] C.R. Qi, H. Su, K. Mo, L.J. Guibas, PointNet: Deep learning on point sets for 3D classification and segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2017.
[10] C.R. Qi, L. Yi, H. Su, L.J. Guibas, PointNet++: Deep hierarchical feature learning on point sets in a metric space, Adv. Neural Inf. Process. Syst. 30 (2017).
[11] Z. Zhang, P. Jaiswal, R. Rai, FeatureNet: Machining feature recognition based on 3D convolution neural network, Comput. Aided Des. 101 (2018) 12-22.
[12] P. Shi, Q. Qi, Y. Qin, P.J. Scott, X. Jiang, A novel learning-based feature recognition method using multiple sectional view representation, J. Intell. Manuf. 31 (2020) 1291-1309.
[13] P. Shi, Q. Qi, Y. Qin, P.J. Scott, X. Jiang, Intersecting machining feature localization and recognition via single shot multibox detector, IEEE Trans. Ind. Inform. 17 (5) (2021) 3292-3302.
[14] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, A. Berg, SSD: Single shot MultiBox detector, in: Proceedings of the European Conference on Computer Vision, ECCV, 2016.
[15] X. Yao, D. Wang, T. Yu, C. Luan, J. Fu, A machining feature recognition approach based on hierarchical neural network for multi-feature point cloud models, J. Intell. Manuf. (2022) 1-12.
[16] H. Zhang, S. Zhang, Y. Zhang, J. Liang, Z. Wang, Machining feature recognition based on a novel multi-task deep learning network, Robot. Comput.-Integr. Manuf. 77 (2022) 102369.
[17] C. Yeo, B.C. Kim, S. Cheon, J. Lee, D. Mun, Machining feature recognition based on deep neural networks to support tight integration with 3D CAD systems, Sci. Rep. 11 (1) (2021) 1-20.
[18] B.K. Venu, V. Rao, D. Srivastava, STEP-based feature recognition system for B-spline surface features, Int. J. Autom. Comput. 15 (2018) 500-512.
[19] M.A. Kiani, H.A. Saeed, Automatic spot welding feature recognition from STEP data, in: Proceedings of the International Symposium on Recent Advances in Electrical Engineering, Vol. 4, RAEE, 2019, pp. 1-6.
[20] A.A. Salem, T.F. Abdelmaguid, A.S. Wifi, A. Elmokadem, Towards an efficient process planning of the V-bending process: an enhanced automated feature recognition system, Int. J. Adv. Manuf. Technol. 91 (9) (2017) 4163-4181.
[21] M. Al-wswasi, A. Ivanov, A novel and smart interactive feature recognition system for rotational parts using a STEP file, Int. J. Adv. Manuf. Technol. 104 (2019) 261-284.
[22] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, Y. Bengio, Learning phrase representations using RNN encoder-decoder for statistical machine translation, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP, 2014, pp. 1724-1734.
[23] I. Sutskever, O. Vinyals, Q.V. Le, Sequence to sequence learning with neural networks, 2014, CoRR, arXiv:1409.3215.
[24] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Comput. 9 (1997) 1735-1780.
[25] R. Socher, C.C.-Y. Lin, A.Y. Ng, C.D. Manning, Parsing natural scenes and natural language with recursive neural networks, in: Proceedings of the International Conference on Machine Learning, ICML, 2011, pp. 129-136.
[26] K.S. Tai, R. Socher, C.D. Manning, Improved semantic representations from tree-structured long short-term memory networks, in: Proceedings of the Association for Computational Linguistics, ACL, 2015.
[27] K. Cho, B. van Merrienboer, D. Bahdanau, Y.
Bengio, On the properties of neural machine translation: Encoder-decoder approaches, in: Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, 2014.
[28] J. Chung, C. Gulcehre, K. Cho, Y. Bengio, Gated feedback recurrent neural networks, in: Proceedings of the 32nd International Conference on Machine Learning, 2015, pp. 2067-2075.
[29] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, in: Proceedings of the Conference on Neural Information Processing Systems, NIPS, 2017.
[30] J. Devlin, M. Chang, K. Lee, K. Toutanova, BERT: pre-training of deep bidirectional transformers for language understanding, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019, pp. 4171-4186.
[31] X. Chen, C. Liu, D. Song, Tree-to-tree neural networks for program translation, in: Proceedings of the Conference on Neural Information Processing Systems, NIPS, 2018.
[32] M. Ahmed, J. Islam, M.R. Samee, R.E. Mercer, Identifying protein-protein interaction using tree LSTM and structured attention, in: 2019 IEEE 13th International Conference on Semantic Computing, ICSC, IEEE, 2019, pp. 224-231.
[33] G. Louppe, K. Cho, C. Becot, K. Cranmer, QCD-aware recursive neural networks for jet physics, J. High Energy Phys. 2019 (1) (2019) 1-23.
[34] D. Kingma, J. Ba, Adam: A method for stochastic optimization, in: Proceedings of the 3rd International Conference for Learning Representations, 2015.