人工智能詞匯(Artificial Intelligence Vocabulary)

常用英語詞匯(Andrew Ng 課程 / Common English vocabulary from Andrew Ng's course)

intensity 強(qiáng)度; Regression 回歸; Loss function 損失函數(shù); non-convex 非凸; neural network 神經(jīng)網(wǎng)絡(luò); supervised learning 監(jiān)督學(xué)習(xí); regression problem 回歸問題(處理連續(xù)值); classification problem 分類問題; discrete value 離散值; support vector machines 支持向量機(jī); learning theory 學(xué)習(xí)理論; learning algorithms 學(xué)習(xí)算法; unsupervised learning 無監(jiān)督學(xué)習(xí); gradient descent 梯度下降; linear regression 線性回歸; Neural Network 神經(jīng)網(wǎng)絡(luò); normal equations 正規(guī)方程; linear algebra 線性代數(shù); superscript 上標(biāo); exponentiation 指數(shù); training set 訓(xùn)練集; training example 訓(xùn)練樣本; hypothesis 假設(shè)(用來表示學(xué)習(xí)算法的輸出); LMS algorithm (least mean squares) 最小均方算法; batch gradient descent 批量梯度下降; stochastic gradient descent 隨機(jī)梯度下降; iterative algorithm 迭代算法; partial derivative 偏導(dǎo)數(shù); contour 等高線; quadratic function 二次函數(shù); locally weighted regression 局部加權(quán)回歸; underfitting 欠擬合; overfitting 過擬合; non-parametric learning algorithms 非參數(shù)學(xué)習(xí)算法; parametric learning algorithm 參數(shù)學(xué)習(xí)算法;
activation 激活值; activation function 激活函數(shù); additive noise 加性噪聲; autoencoder 自編碼器; Autoencoders 自編碼算法; average firing rate 平均激活率; average sum-of-squares error 均方誤差; backpropagation 反向傳播; basis 基; basis feature vectors 特征基向量; batch gradient ascent 批量梯度上升法; Bayesian regularization method 貝葉斯規(guī)則化方法; Bernoulli random variable 伯努利隨機(jī)變量; bias term 偏置項(xiàng); binary classification 二元分類; class labels 類別標(biāo)記; concatenation 級(jí)聯(lián); conjugate gradient 共軛梯度; contiguous groups 連通區(qū)域; convex optimization software 凸優(yōu)化軟件; convolution 卷積; cost function 代價(jià)函數(shù); covariance matrix 協(xié)方差矩陣; DC component 直流分量; decorrelation 去相關(guān); degeneracy 退化; dimensionality reduction 降維; derivative 導(dǎo)函數(shù); diagonal 對(duì)角線; diffusion of gradients 梯度的彌散; eigenvalue 特征值; eigenvector 特征向量; error term 殘差; feature matrix 特征矩陣; feature standardization 特征標(biāo)準(zhǔn)化; feedforward architectures 前饋結(jié)構(gòu); feedforward neural network 前饋神經(jīng)網(wǎng)絡(luò); feedforward pass 前饋傳導(dǎo); fine-tuned 微調(diào); first-order feature 一階特征; forward pass 前向傳導(dǎo); forward propagation 前向傳播; Gaussian prior 高斯先驗(yàn)概率; generative model 生成模型; gradient descent 梯度下降; Greedy layer-wise training 逐層貪心訓(xùn)練方法; grouping matrix 分組矩陣; Hadamard product 阿達(dá)馬乘積; Hessian matrix Hessian 矩陣; local mean subtraction 局部均值消減; hidden layer 隱含層; hidden units 隱藏神經(jīng)元; Hierarchical grouping 層次型分組; higher-order features 更高階特征; highly non-convex optimization problem 高度非凸的優(yōu)化問題; histogram 直方圖; hyperbolic tangent 雙曲正切函數(shù); hypothesis 估值、假設(shè); identity activation function 恒等激勵(lì)函數(shù); IID 獨(dú)立同分布; illumination 照明; inactive 抑制; independent component analysis 獨(dú)立成分分析; input domains 輸入域; input layer 輸入層; intensity 亮度/灰度; intercept term 截距項(xiàng); KL divergence KL 散度(相對(duì)熵); k-Means K-均值; learning rate 學(xué)習(xí)速率; least squares 最小二乘法; linear correspondence 線性響應(yīng); linear superposition 線性疊加; line-search algorithm 線搜索算法; local optima 局部最優(yōu)解; logistic regression 邏輯回歸; loss function 損失函數(shù); low-pass filtering 低通濾波; magnitude 幅值; MAP 極大后驗(yàn)估計(jì); maximum likelihood estimation 極大似然估計(jì); mean 平均值; MFCC Mel 倒頻系數(shù); multi-class classification 多元分類; neural networks 神經(jīng)網(wǎng)絡(luò); neuron 神經(jīng)元; Newton's method 牛頓法; non-convex function 非凸函數(shù); non-linear feature 非線性特征; norm 范數(shù); norm bounded 有界范數(shù); norm constrained 范數(shù)約束; normalization 歸一化; numerical roundoff errors 數(shù)值舍入誤差; numerically checking 數(shù)值檢驗(yàn); numerically reliable 數(shù)值計(jì)算上穩(wěn)定; object detection 物體檢測; objective function 目標(biāo)函數(shù); off-by-one error 缺位錯(cuò)誤; orthogonalization 正交化; output layer 輸出層; overall cost function 整體代價(jià)函數(shù); over-complete basis 超完備基; over-fitting 過擬合; parts of objects 目標(biāo)的部件; part-whole decomposition 部分-整體分解; PCA 主元分析; penalty term 懲罰項(xiàng); per-example mean subtraction 逐樣本均值消減; pooling 池化; pretrain 預(yù)訓(xùn)練;
principal components analysis 主成分分析; quadratic constraints 二次約束; RBMs 受限 Boltzmann 機(jī); reconstruction based models 基于重構(gòu)的模型; reconstruction cost 重構(gòu)代價(jià); reconstruction term 重構(gòu)項(xiàng); redundant 冗余; reflection matrix 反射矩陣; regularization 正則化; regularization term 正則化項(xiàng); rescaling 縮放; robust 魯棒性; run 行程; second-order feature 二階特征; sigmoid activation function S 型激勵(lì)函數(shù); significant digits 有效數(shù)字; singular value 奇異值; singular vector 奇異向量; smoothed L1 penalty 平滑的 L1 范數(shù)懲罰; Smoothed topographic L1 sparsity penalty 平滑地形 L1 稀疏懲罰函數(shù); smoothing 平滑; Softmax Regression Softmax 回歸; sorted in decreasing order 降序排列; source features 源特征; sparse autoencoder 稀疏自編碼器; subtractive normalization 消減歸一化; Sparsity 稀疏性; sparsity parameter 稀疏性參數(shù); sparsity penalty 稀疏懲罰; square function 平方函數(shù); squared-error 平方誤差; stationary 平穩(wěn)性(不變性); stationary stochastic process 平穩(wěn)隨機(jī)過程; step-size 步長值; supervised learning 監(jiān)督學(xué)習(xí); symmetric positive semi-definite matrix 對(duì)稱半正定矩陣; symmetry breaking 對(duì)稱失效; tanh function 雙曲正切函數(shù); the average activation 平均活躍度; the derivative checking method 梯度驗(yàn)證方法; the empirical distribution 經(jīng)驗(yàn)分布函數(shù); the energy function 能量函數(shù); the Lagrange dual 拉格朗日對(duì)偶函數(shù); the log likelihood 對(duì)數(shù)似然函數(shù); the pixel intensity value 像素灰度值; the rate of convergence 收斂速度; topographic cost term 拓?fù)浯鷥r(jià)項(xiàng); topographic ordered 拓?fù)浯涡? transformation 變換; translation invariant 平移不變性; trivial answer 平凡解; under-complete basis 不完備基; unrolling 組合擴(kuò)展; unsupervised learning 無監(jiān)督學(xué)習(xí); variance 方差; vectorized implementation 向量化實(shí)現(xiàn); vectorization 矢量化; visual cortex 視覺皮層; weight decay 權(quán)重衰減; weighted average 加權(quán)平均值; whitening 白化; zero-mean 均值為零

Accumulated error backpropagation 累積誤差逆?zhèn)鞑? Activation Function 激活函數(shù); Adaptive Resonance Theory/ART 自適應(yīng)諧振理論; Additive model 加性模型; Adversarial Networks 對(duì)抗網(wǎng)絡(luò); Affine Layer 仿射層; Affinity matrix 親和矩陣; Agent 代理/智能體; Algorithm 算法; Alpha-beta pruning α-β 剪枝; Anomaly detection 異常檢測; Approximation 近似; Area Under ROC Curve/AUC ROC 曲線下面積; Artificial General Intelligence/AGI 通用人工智能; Artificial Intelligence/AI 人工智能; Association analysis 關(guān)聯(lián)分析; Attention mechanism 注意力機(jī)制; Attribute conditional independence assumption 屬性條件獨(dú)立性假設(shè); Attribute space 屬性空間; Attribute value 屬性值; Autoencoder 自編碼器; Automatic speech recognition 自動(dòng)語音識(shí)別; Automatic summarization 自動(dòng)摘要; Average gradient 平均梯度; Average-Pooling 平均池化; Backpropagation Through Time 通過時(shí)間的反向傳播; Backpropagation/BP 反向傳播; Base learner 基學(xué)習(xí)器; Base learning algorithm 基學(xué)習(xí)算法; Batch Normalization/BN 批量歸一化; Bayes decision rule 貝葉斯判定準(zhǔn)則; Bayes Model Averaging/BMA 貝葉斯模型平均; Bayes optimal classifier 貝葉斯最優(yōu)分類器; Bayesian decision theory 貝葉斯決策論; Bayesian network 貝葉斯網(wǎng)絡(luò); Between-class scatter matrix 類間散度矩陣; Bias 偏置/偏差; Bias-variance decomposition 偏差-方差分解; Bias-Variance Dilemma 偏差-方差困境; Bi-directional Long-Short Term Memory/Bi-LSTM 雙向長短期記憶; Binary classification 二分類; Binomial test 二項(xiàng)檢驗(yàn); Bi-partition 二分法; Boltzmann machine 玻爾茲曼機(jī); Bootstrap sampling 自助采樣法/可重復(fù)采樣

Concept Learning System/CLS 概念學(xué)習(xí)系統(tǒng); Conditional mutual information 條件互信息

Conditional Probability Table/CPT 條件概率表

條件隨機(jī)場卷積神經(jīng)網(wǎng)絡(luò)Bootstrapping自助法Break-EventPoint/BEP均衡點(diǎn)Calibration校準(zhǔn)Cascade-Correlation級(jí)聯(lián)有關(guān)Categoricalattribute失散屬性Class-conditionalprobability類條件概率Classificationandregressiontree/CART分類與回歸樹Classifier分類器Class-imbalance類型不均衡Closed-form閉式Cluster簇/類/集群Clusteranalysis聚類剖析Clustering聚類Clusteringensemble聚類集成Co-adapting共適應(yīng)Codingmatrix編碼矩陣COLT國際學(xué)習(xí)理論會(huì)議Committee-basedlearning鑒于委員會(huì)的學(xué)習(xí)Competitivelearning競爭型學(xué)習(xí)Componentlearner組件學(xué)習(xí)器Comprehensibility可解說性ComputationCost計(jì)算成本ComputationalLinguistics計(jì)算語言學(xué)Computervision計(jì)算機(jī)視覺Conceptdrift觀點(diǎn)漂移ConceptLearningSystem/CLSConditionalentropy條件熵ConditionalmutualinformationConditionalProbabilityTableConditionalrandomfield/CRFConditionalrisk條件風(fēng)險(xiǎn)Confidence置信度Confusionmatrix混雜矩陣Connectionweight連結(jié)權(quán)Connectionism連結(jié)主義Consistency一致性/相合性Contingencytable列聯(lián)表Continuousattribute連續(xù)屬性Convergence收斂Conversationalagen會(huì)話智能體Convexquadraticprogramming凸二次規(guī)劃Convexity凸性Convolutionalneuralnetwork/CNNCo-occurrence同現(xiàn)Correlationcoefficient有關(guān)系數(shù)Cosinesimilarity余弦相像度Costcurve成本曲線CostFunction成本函數(shù)Costmatrix成本矩陣Cost-sensitive成本敏感Crossentropy交錯(cuò)熵Crossvalidation交錯(cuò)考證Crowdsourcing眾包Curseofdimensionality維數(shù)災(zāi)害Cutpoint截?cái)帱c(diǎn)Cuttingplanealgorithm割平面法Datamining數(shù)據(jù)發(fā)掘Dataset數(shù)據(jù)集DecisionBoundary決議界限D(zhuǎn)ecisionstump決議樹樁Decisiontree決議樹/判斷樹Deduction演繹DeepBeliefNetwork深度信念網(wǎng)絡(luò)DeepConvolutionalGenerativeAdversarialNetworkDCGAN深度卷積生成抗衡網(wǎng)絡(luò)Deeplearning深度學(xué)習(xí)Deepneuralnetwork/DNN深度神經(jīng)網(wǎng)絡(luò)DeepQ-Learning深度Q學(xué)習(xí)DeepQ-Network深度Q網(wǎng)絡(luò)Densityestimation密度預(yù)計(jì)Density-basedclustering密度聚類Differentiableneuralcomputer可微分神經(jīng)計(jì)算機(jī)Dimensionalityreductionalgorithm降維算法Directededge有向邊Disagreementmeasure不合胸懷Discriminativemodel鑒別模型Discriminator鑒別器Distancemeasure距離胸懷Distancemetriclearning距離胸懷學(xué)習(xí)Distribution散布Divergence散度Diversitymeasure多樣性胸懷/差別性胸懷Domainadaption領(lǐng)域自適應(yīng)Downsampling下采樣D-separation(Directedseparation)有向分別Dualproblem對(duì)偶問題演化計(jì)算希望最大化梯度爆炸問題演化計(jì)算希望最大化梯度爆炸問題指數(shù)損失函數(shù)高斯核函數(shù)高斯混雜模型Dummynode啞結(jié)點(diǎn)DynamicFusion動(dòng)向交融Dynamicprogramming動(dòng)向規(guī)劃Eigenvaluedecompositior特點(diǎn)值分解Embedding嵌入Emotionalanalysis情緒剖析Empiricalconditionalentropy經(jīng)驗(yàn)條件熵Empiricalentropy經(jīng)驗(yàn)熵Empiricalerror經(jīng)驗(yàn)偏差Empiricalrisk經(jīng)驗(yàn)風(fēng)險(xiǎn)End-to-End端到端Energy-basedmodel鑒于能量的模型Ensemblelearning集成學(xué)習(xí)Ensemblepruning集成修剪ErrorCorrectingOutputCodes/ECOC糾錯(cuò)輸出碼Errorrate錯(cuò)誤率Error-ambiguitydecomposition偏差-分歧分解Euclideandistance歐氏距離EvolutionarycomputationExpectation-MaximizationExpectedloss希望損失ExplodingGradientProblemExponentiallossfunctionExtremeLearningMachine/ELM超限學(xué)習(xí)機(jī)Factorization因子分解Falsenegative假負(fù)類Falsepositive假正類FalsePositiveRate/FPR假正例率Featureengineering特點(diǎn)工程Featureselection特點(diǎn)選擇Featurevector特點(diǎn)向量FeaturedLearning特點(diǎn)學(xué)習(xí)FeedforwardNeuralNetworks/FNN前饋神經(jīng)網(wǎng)絡(luò)Fine-tuning微調(diào)Flippingoutput翻轉(zhuǎn)法Fluctuation震蕩Forwardstagewisealgorithm前向分步算法Frequentist頻次主義學(xué)派Full-rankmatrix滿秩矩陣Functionalneuron功能神經(jīng)元Gainratio增益率Gametheory博弈論GaussiankernelfunctionGaussianMixtureModelGeneralProblemSolving通用問題求解Generalizatio泛化Generalizationerror泛化偏差Generalizationerrorbound泛化偏差上界GeneralizedLagrangefunction廣義拉格朗日函數(shù)Generalizedlinearmodel廣義線性模型GeneralizedRayleighquotient廣義瑞利商GenerativeAdversarialNetworks/GAN生成抗衡網(wǎn
g)絡(luò)GenerativeModel生成模型Generator生成器GeneticAlgorithm/GA遺傳算法Gibbssampling吉布斯采樣Giniindex基尼指數(shù)Globalminimum全局最小GlobalOptimization全局優(yōu)化Gradientboosting梯度提高GradientDescent梯度降落Graphtheory圖論Ground-truth實(shí)情/真切Hardmargin硬間隔Hardvoting硬投票Harmonicmean調(diào)解均勻Hessematrix海塞矩陣Hiddendynamicmodel隱動(dòng)向模型Hiddenlayer隱蔽層HiddenMarkovModel/HMM隱馬爾可夫模型Hierarchicalclustering層次聚類Hilbertspace希爾伯特空間Hingelossfunction合頁損失函數(shù)Hold-out留出法Homogeneous同質(zhì)Hybridcomputing混雜計(jì)算Hyperparameter超參數(shù)Hypothesis假定Hypothesistest假定考證ICML國際機(jī)器學(xué)習(xí)會(huì)議Improvediterativescaling/IIS改良的迭代尺度法Incrementallearning增量學(xué)習(xí)Independentandidenticallydistributed/獨(dú)立同散布IndependentComponentAnalysis/ICA獨(dú)立成分剖析Indicatorfunction指示函數(shù)Individuallearner個(gè)體學(xué)習(xí)器考證考證Induction歸納Inductivebias歸納偏好Inductivelearning歸納學(xué)習(xí)InductiveLogicProgramming/ILP歸納邏輯程序設(shè)Informationentropy信息熵Informationgain信息增益Inputlayer輸入層Insensitiveloss不敏感損失Inter-clustersimilarity簇間相像度InternationalConferenceforMachineLearning/ICML國際機(jī)器學(xué)習(xí)大會(huì)Intra-clustersimilarity簇內(nèi)相像度Intrinsicvalue固有值IsometricMapping/Isomap等胸懷映照Isotonicregression平分回歸IterativeDichotomiser迭代二分器Kernelmethod核方法Kerneltrick核技巧KernelizedLinearDiscriminantAnalysis/KLDA核線性鑒別剖析K-foldcrossvalidationk折交錯(cuò)考證/k倍交錯(cuò)K-MeansClusteringK-均值聚類K-NearestNeighboursAlgorithm/KNNK近鄰算法Knowledgebase知識(shí)庫KnowledgeRepresentation知識(shí)表征Labelspace標(biāo)記空間Lagrangeduality拉格朗日對(duì)偶性Lagrangemultiplier拉格朗日乘子Laplacesmoothing拉普拉斯光滑Laplaciancorrection拉普拉斯修正LatentDirichletAllocation隱狄利克雷散布Latentsemanticanalysis潛伏語義剖析Latentvariable隱變量Lazylearning懶散學(xué)習(xí)Learner學(xué)習(xí)器Learningbyanalogy類比學(xué)習(xí)Learningrate學(xué)習(xí)率LearningVectorQuantization/LVQ學(xué)習(xí)向量量化Leastsquaresregressiontree最小二乘回歸樹Leave-One-Out/LOO留一法linearchainconditionalrandomfield線性鏈條件隨機(jī)場

LinearDiscriminantAnalysis/LDA線性鑒別剖析Linearmodel線性模型LinearRegression線性回歸Linkfunction聯(lián)系函數(shù)LocalMarkovproperty局部馬爾可夫性Localminimum局部最小Loglikelihood對(duì)數(shù)似然Logodds/logit對(duì)數(shù)幾率LogisticRegressionLogistic回歸Log-likelihood對(duì)數(shù)似然Log-linearregression對(duì)數(shù)線性回歸Long-ShortTermMemory/LSTM長短期記憶Lossfunction損失函數(shù)Machinetranslation/MT機(jī)器翻譯Macron-P宏查準(zhǔn)率Macron-R宏查全率Majorityvoting絕對(duì)多半投票法Manifoldassumption流形假定Manifoldlearning流形學(xué)習(xí)Margintheory間隔理論Marginaldistribution邊沿散布Marginalindependence邊沿獨(dú)立性Marginalization邊沿化MarkovChainMonteCarlo/MCMC馬爾可夫鏈蒙特卡羅方法MarkovRandomField馬爾可夫隨機(jī)場Maximalclique最大團(tuán)MaximumLikelihoodEstimation/MLE極大似然預(yù)計(jì)/極大似然法Maximummargin最大間隔Maximumweightedspanningtree最大帶權(quán)生成樹Max-Pooling最大池化Meansquarederror均方偏差Meta-learner元學(xué)習(xí)器Metriclearning胸懷學(xué)習(xí)Micro-P微查準(zhǔn)率Micro-R微查全率MinimalDescriptionLength/MDL最小描繪長度Minimaxgame極小極大博弈Misclassificationcost誤分類成本Mixtureofexperts混雜專家Momentum動(dòng)量Moralgraph道德圖/正直圖Multi-classclassification多分類Multi-documentsummarization多文檔綱要Multi-layerfeedforwardneuralnetworks多層前饋神經(jīng)網(wǎng)絡(luò)MultilayerPerceptron/MLP多層感知器Multimodallearning多模態(tài)學(xué)習(xí)MultipleDimensionalScaling多維縮放Multiplelinearregression多元線性回歸Multi-responseLinearRegression/MLR多響應(yīng)線性回歸Mutualinformation互信息Naivebayes樸實(shí)貝葉斯NaiveBayesClassifier樸實(shí)貝葉斯分類器Namedentityrecognition命名實(shí)體辨別Nashequilibrium納什均衡Naturallanguagegeneration/NLG自然語言生成Naturallanguageprocessing自然語言辦理Negativeclass負(fù)類Negativecorrelation負(fù)有關(guān)法NegativeLogLikelihood負(fù)對(duì)數(shù)似然NeighbourhoodComponentAnalysis/NCA近鄰成分剖析NeuralMachineTranslation神經(jīng)機(jī)器翻譯NeuralTuringMachine神經(jīng)圖靈機(jī)Newtonmethod牛頓法NIPS國際神經(jīng)信息辦理系統(tǒng)會(huì)議NoFreeLunchTheorem/NFL沒有免費(fèi)的午飯定理Noise-contrastiveestimation噪音對(duì)照預(yù)計(jì)Nominalattribute列名屬性Non-convexoptimization非凸優(yōu)化Nonlinearmodel非線性模型Non-metricdistance非胸懷距離Non-negativematrixfactorization非負(fù)矩陣分解Non-ordinalattribute無序?qū)傩訬on-SaturatingGame非飽和博弈Norm范數(shù)Normalization歸一化Nuclearnorm核范數(shù)Numericalattribute數(shù)值屬性LetterOObjectivefunction目標(biāo)函數(shù)Obliquedecisiontree斜決議樹Occam'srazor奧卡姆剃刀Odds幾率Off-Polic離策略O(shè)neshotlearning一次性學(xué)習(xí)One-DependentEstimator/ODE獨(dú)依靠預(yù)計(jì)On-Policy在策略O(shè)rdinalattribute有序?qū)傩設(shè)ut-of-bagestimate包外預(yù)計(jì)Outputlayer輸出層Outputsmearing輸出調(diào)制法Overfitting過擬合/過配Oversampling過采樣Pairedt-test成對(duì)t查驗(yàn)Pairwise成對(duì)型PairwiseMarkovproperty成對(duì)馬爾可夫性Parameter參數(shù)Parameterestimation參數(shù)預(yù)計(jì)Parametertuning調(diào)參Parsetree分析樹ParticleSwarmOptimization/PSO粒子群優(yōu)化算法Part-of-speechtagging詞性標(biāo)明Perceptron感知機(jī)Performancemeasure性能胸懷PlugandPlayGenerativeNetwork即插即用生成網(wǎng)絡(luò)Pluralityvoting相對(duì)多半投票法Polaritydetection極性檢測Polynomialkernelfunction多項(xiàng)式核函數(shù)Pooling池化Positiveclass正類Positivedefinitematrix正定矩陣Post-hoctest后續(xù)查驗(yàn)Post-pruning后剪枝potentialfunction勢(shì)函數(shù)Precision查準(zhǔn)率/正確率Prepruning預(yù)剪枝Principalcomponentanalysis/PCA主成分剖析Principleofmultipleexplanations多釋原則Prior先驗(yàn)ProbabilityGraphicalModel概率圖模型ProximalGradientDescent/PGD近端梯度降落Pruning剪枝Pseudo-label偽標(biāo)記QuantizedNeuralNetwork量子化神經(jīng)網(wǎng)絡(luò)Quantumcomputer量子計(jì)算機(jī)QuantumComputing量子計(jì)算QuasiNewtonmethod擬牛頓法線性修正單元循環(huán)神經(jīng)網(wǎng)絡(luò)遞歸神經(jīng)網(wǎng)絡(luò)加強(qiáng)學(xué)習(xí)線性修正單元循環(huán)神經(jīng)網(wǎng)絡(luò)遞歸神經(jīng)網(wǎng)絡(luò)加強(qiáng)學(xué)習(xí)Statusfeaturefunction狀態(tài)特點(diǎn)函RadialBasisFunction/RBF徑向基函數(shù)RandomForestAlgorithm隨機(jī)叢林算法Randomwalk隨機(jī)閑步Recall查全率/召回率Rece
iverOperatingCharacteristic/ROC受試者工作特點(diǎn)RectifiedLinearUnit/ReLURecurrentNeuralNetworkRecursiveneuralnetworkReferencemodel參照模型Regression回歸Regularization正則化Reinforcementlearning/RLRepresentationlearning表征學(xué)習(xí)Representertheorem表示定理reproducingkernelHilbertspace/RKHS重生核希爾伯特空間Re-sampling重采樣法Rescaling再縮放ResidualMapping殘差映照ResidualNetwork殘差網(wǎng)絡(luò)RestrictedBoltzmannMachine/RBM受限玻爾茲曼機(jī)RestrictedIsometryProperty/RIP限制等距性Re-weighting重賦權(quán)法Robustness穩(wěn)重性/魯棒性Rootnode根結(jié)點(diǎn)RuleEngine規(guī)則引擎Rulelearning規(guī)則學(xué)習(xí)Saddlepoint鞍點(diǎn)Samplespace樣本空間Sampling采樣Scorefunction評(píng)分函數(shù)Self-Driving自動(dòng)駕駛Self-OrganizingMap/SOM自組織映照Semi-naiveBayesclassifiers半樸實(shí)貝葉斯分類器Semi-SupervisedLearning半監(jiān)察學(xué)習(xí)semi-SupervisedSupportVectorMachine半監(jiān)察支持向量機(jī)Sentimentanalysis感情剖析Separatinghyperplane分別超平面SigmoidfunctionSigmoid函數(shù)Similaritymeasure相像度胸懷Simulatedannealing模擬退火Simultaneouslocalizationandmapping

同步定位與地圖建立SingularValueDecomposition奇怪值分解Slackvariables廢弛變量Smoothing光滑Softmargin軟間隔Softmarginmaximization軟間隔最大化Softvoting軟投票Sparserepresentation稀少表征Sparsity稀少性Specialization特化SpectralClustering譜聚類SpeechRecognition語音辨別Splittingvariable切分變量Squashingfunction擠壓函數(shù)Stability-plasticitydilemma可塑性-穩(wěn)固性窘境Statisticallearning統(tǒng)計(jì)學(xué)習(xí)Stochasticgradientdescent隨機(jī)梯度降落Stratifiedsampling分層采樣Structuralrisk構(gòu)造風(fēng)險(xiǎn)Structuralriskminimization/SRM構(gòu)造風(fēng)險(xiǎn)最小化Subspace子空間Supervisedlearning監(jiān)察學(xué)習(xí)/有導(dǎo)師學(xué)習(xí)supportvectorexpansion支持向量展式SupportVectorMachine/SVM支持向量機(jī)Surrogatloss代替損失Surrogatefunction代替函數(shù)Symboliclearning符號(hào)學(xué)習(xí)Symbolism符號(hào)主義Synset同義詞集T-DistributionStochasticNeighbourEmbeddingt-SNET-散布隨機(jī)近鄰嵌入Tensor張量TensorProcessingUnits/TPU張量辦理單元Theleastsquaremethod最小二乘法Threshold閾值Thresholdlogicunit閾值邏輯單元Threshold-moving閾值挪動(dòng)TimeStep時(shí)間步驟Tokenization標(biāo)記化Trainingerrc訓(xùn)練偏差Traininginstance訓(xùn)練示例/訓(xùn)練例Transductivelearning直推學(xué)習(xí)Transferlearning遷徙學(xué)習(xí)[邏[邏]指稱algebra線性代數(shù)asymptotically無癥狀的appropriate適合的bias偏差brevity簡潔,簡潔;短暫[800]broader寬泛briefly簡潔的batch批量convergence收斂,集中到一點(diǎn)convex凸的contours輪廓constraint拘束constant常理commercial商務(wù)的complementarity增補(bǔ)coordinateascent同樣級(jí)上漲clipping剪下物;剪報(bào);修剪component重量;零件continuous連續(xù)的covariance協(xié)方差canonical正規(guī)的,正則的concave非凸的corresponds相切合;相當(dāng);通訊corollary推論concrete詳細(xì)的事物,實(shí)在的東西crossvalidation交錯(cuò)考證correlation互相關(guān)系convention商定cluster一簇centroids質(zhì)心,形心converge收斂computationa計(jì)1算(機(jī))的calculus計(jì)算derive獲取,獲得dual二元的duality二元性;二象性;對(duì)偶性derivation求導(dǎo);獲?。话l(fā)源denote預(yù)示,表示,是的標(biāo)記;意味著divergence散度;發(fā)散性dimension尺度,規(guī)格;維數(shù)dot小圓點(diǎn)distortion變形density概率密度函數(shù)discrete失散的Treebank樹庫Tria-by-error試錯(cuò)法Truenegative真負(fù)類Truepositive真切類TruePositiveRate/TPR真切例率TuringMachine圖靈機(jī)Twice-learning二次學(xué)習(xí)Underfitting欠擬合/欠配Undersampling欠采樣Understandability可理解性Unequalcost非均等代價(jià)Unit-stepfunction單位階躍函數(shù)Univariatedecisiontree單變量決議樹Unsupervisedlearning無監(jiān)察學(xué)習(xí)/無導(dǎo)師學(xué)習(xí)Unsupervisedlayer-wisetraining無監(jiān)察逐層訓(xùn)練Upsampling上采樣VanishingGradientProblem梯度消逝問題Variationalinference變分推測VCTheoryVC維理論Versionspace版本空間Viterbialgorithm維特比算法VonNeumannarchitecture馮?諾伊曼架構(gòu)WassersteinGAN/WGANWasserstein生成抗衡網(wǎng)絡(luò)Weaklearner弱學(xué)習(xí)器Weight權(quán)重Weightsharing權(quán)共享Weightedvoting加權(quán)投票法Within-classscattermatrix類內(nèi)散度矩陣Wordembedding詞嵌入Wordsensedisambiguation詞義消歧Zero-datalearning零數(shù)據(jù)學(xué)習(xí)Zero-shotlearning零次學(xué)習(xí)approximations近似值arbitrary任意的affine仿射的arbitrary任意的aminoacid氨基酸amenable經(jīng)得起查驗(yàn)的axiom公義,原則abstract提取architecture架構(gòu),系統(tǒng)構(gòu)造;建筑業(yè)absolute絕對(duì)的arsenal軍械庫assignment分派足)足)discriminative有辨別能力的diagonal對(duì)角dispersion分別,散開determinant決定要素disjoint不訂交的encounter碰到ellipses橢圓equality等式extra額外的empirical經(jīng)驗(yàn);察看ennmerate例舉,計(jì)數(shù)exceed超出,越出expectation希望efficient奏效的endow給予e
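The list above is terminology only; as a rough illustration of how a few of these terms look in code (Hadamard product 阿達(dá)馬乘積, sigmoid activation function, batch gradient descent 批量梯度下降, logistic regression 邏輯回歸), here is a minimal NumPy sketch. It is an illustrative example only, not part of the source vocabulary list, and the variable names in it are hypothetical.

import numpy as np

def sigmoid(z):
    # sigmoid activation function (S 型激勵(lì)函數(shù)): 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

# Hadamard product (阿達(dá)馬乘積): element-wise product of two same-shaped arrays,
# written A * B in NumPy, unlike the matrix product A @ B.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
hadamard = A * B

# Batch gradient descent (批量梯度下降) for logistic regression (邏輯回歸):
# each update uses the whole training set (訓(xùn)練集), unlike stochastic gradient descent.
X = np.array([[0.5, 1.2], [1.5, 0.3], [3.0, 2.2], [2.0, 3.5]])  # training examples (訓(xùn)練樣本)
y = np.array([0.0, 0.0, 1.0, 1.0])                              # class labels (類別標(biāo)記)
w = np.zeros(X.shape[1])                                        # weights
b = 0.0                                                         # bias term (偏置項(xiàng))
learning_rate = 0.1                                             # learning rate (學(xué)習(xí)速率)

for _ in range(1000):                      # iterative algorithm (迭代算法)
    p = sigmoid(X @ w + b)                 # forward pass: hypothesis output
    grad_w = X.T @ (p - y) / len(y)        # partial derivatives (偏導(dǎo)數(shù)) of the loss function
    grad_b = np.mean(p - y)
    w -= learning_rate * grad_w            # gradient descent update (梯度下降)
    b -= learning_rate * grad_b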
