人工智能專(zhuān)業(yè)詞匯（AI Terminology Glossary）

Letter A
Accumulated error backpropagation 累積誤差逆傳播
Activation Function 激活函數(shù)
Adaptive Resonance Theory/ART 自適應(yīng)諧振理論
Additive model 加性模型
Adversarial Networks 對(duì)抗網(wǎng)絡(luò)
Affine Layer 仿射層
Affinity matrix 親和矩陣
Agent 代理 / 智能體
Algorithm 算法
Alpha-beta pruning α-β 剪枝
Anomaly detection 異常檢測(cè)
Approximation 近似
Area Under ROC Curve/AUC ROC 曲線下面積
Artificial General Intelligence/AGI 通用人工智能
Artificial Intelligence/AI 人工智能
Association analysis 關(guān)聯(lián)分析
Attention mechanism 注意力機(jī)制
Attribute conditional independence assumption 屬性條件獨(dú)立性假設(shè)
Attribute space 屬性空間
Attribute value 屬性值
Autoencoder 自編碼器
Automatic speech recognition 自動(dòng)語(yǔ)音識(shí)別
Automatic summarization 自動(dòng)摘要
Average gradient 平均梯度
Average-Pooling 平均池化

Letter B
Backpropagation Through Time 通過(guò)時(shí)間的反向傳播
Backpropagation/BP 反向傳播
Base learner 基學(xué)習(xí)器
Base learning algorithm 基學(xué)習(xí)算法
Batch Normalization/BN 批量歸一化
Bayes decision rule 貝葉斯判定準(zhǔn)則
Bayes Model Averaging/BMA 貝葉斯模型平均
Bayes optimal classifier 貝葉斯最優(yōu)分類(lèi)器
Bayesian decision theory 貝葉斯決策論
Bayesian network 貝葉斯網(wǎng)絡(luò)
Between-class scatter matrix 類(lèi)間散度矩陣
Bias 偏置 / 偏差
Bias-variance decomposition 偏差-方差分解
Bias-Variance Dilemma 偏差-方差困境
Bi-directional Long-Short Term Memory/Bi-LSTM 雙向長(zhǎng)短期記憶
Binary classification 二分類(lèi)
Binomial test 二項(xiàng)檢驗(yàn)
Bi-partition 二分法
Boltzmann machine 玻爾茲曼機(jī)
Bootstrap sampling 自助采樣法（可重復(fù)采樣 / 有放回采樣）
Bootstrapping 自助法
Break-Event Point/BEP 平衡點(diǎn)

Letter C
Calibration 校準(zhǔn)
Cascade-Correlation 級(jí)聯(lián)相關(guān)
Categorical attribute 離散屬性
Class-conditional probability 類(lèi)條件概率
Classification and regression tree/CART 分類(lèi)與回歸樹(shù)
Classifier 分類(lèi)器
Class-imbalance 類(lèi)別不平衡
Closed-form 閉式
Cluster 簇 / 類(lèi) / 集群
Cluster analysis 聚類(lèi)分析
Clustering 聚類(lèi)
Clustering ensemble 聚類(lèi)集成
Co-adapting 共適應(yīng)
Coding matrix 編碼矩陣
COLT 國(guó)際學(xué)習(xí)理論會(huì)議
Committee-based learning 基于委員會(huì)的學(xué)習(xí)
Competitive learning 競(jìng)爭(zhēng)型學(xué)習(xí)
Component learner 組件學(xué)習(xí)器
Comprehensibility 可解釋性
Computation Cost 計(jì)算成本
Computational Linguistics 計(jì)算語(yǔ)言學(xué)
Computer vision 計(jì)算機(jī)視覺(jué)
Concept drift 概念漂移
Concept Learning System/CLS 概念學(xué)習(xí)系統(tǒng)
Conditional entropy 條件熵
Conditional mutual information 條件互信息
Conditional Probability Table/CPT 條件概率表
Conditional random field/CRF 條件隨機(jī)場(chǎng)
Conditional risk 條件風(fēng)險(xiǎn)
Confidence 置信度
Confusion matrix 混淆矩陣
Connection weight 連接權(quán)
Connectionism 連結(jié)主義
Consistency 一致性 / 相合性
Contingency table 列聯(lián)表
Continuous attribute 連續(xù)屬性
Convergence 收斂
Conversational agent 會(huì)話智能體
Convex quadratic programming 凸二次規(guī)劃
Convexity 凸性
Convolutional neural network/CNN 卷積神經(jīng)網(wǎng)絡(luò)
Co-occurrence 同現(xiàn)
Correlation coefficient 相關(guān)系數(shù)
Cosine similarity 余弦相似度
Cost curve 成本曲線
Cost Function 成本函數(shù)
Cost matrix 成本矩陣
Cost-sensitive 成本敏感
Cross entropy 交叉熵
Cross validation 交叉驗(yàn)證
Crowdsourcing 眾包
Curse of dimensionality 維數(shù)災(zāi)難
Cut point 截?cái)帱c(diǎn)
Cutting plane algorithm 割平面法

Letter D
Data mining 數(shù)據(jù)挖掘
Data set 數(shù)據(jù)集
Decision Boundary 決策邊界
Decision stump 決策樹(shù)樁
Decision tree 決策樹(shù) / 判定樹(shù)
Deduction 演繹
Deep Belief Network 深度信念網(wǎng)絡(luò)
Deep Convolutional Generative Adversarial Network/DCGAN 深度卷積生成對(duì)抗網(wǎng)絡(luò)
Deep learning 深度學(xué)習(xí)
Deep neural network/DNN 深度神經(jīng)網(wǎng)絡(luò)
Deep Q-Learning 深度 Q 學(xué)習(xí)
Deep Q-Network 深度 Q 網(wǎng)絡(luò)
Density estimation 密度估計(jì)
Density-based clustering 密度聚類(lèi)
Differentiable neural computer 可微分神經(jīng)計(jì)算機(jī)
Dimensionality reduction algorithm 降維算法
Directed edge 有向邊
Disagreement measure 不合度量
Discriminative model 判別模型
Discriminator 判別器
Distance measure 距離度量
Distance metric learning 距離度量學(xué)習(xí)
Distribution 分布
Divergence 散度
Diversity measure 多樣性度量 / 差異性度量
Domain adaptation 領(lǐng)域自適應(yīng)
Downsampling 下采樣
D-separation (Directed separation) 有向分離
Dual problem 對(duì)偶問(wèn)題
Dummy node 啞結(jié)點(diǎn)
Dynamic Fusion 動(dòng)態(tài)融合
Dynamic programming 動(dòng)態(tài)規(guī)劃

Letter E
Eigenvalue decomposition 特征值分解
Embedding 嵌入
Emotional analysis 情緒分析
Empirical conditional entropy 經(jīng)驗(yàn)條件熵
Empirical entropy 經(jīng)驗(yàn)熵
Empirical error 經(jīng)驗(yàn)誤差
Empirical risk 經(jīng)驗(yàn)風(fēng)險(xiǎn)
End-to-End 端到端
Energy-based model 基于能量的模型
Ensemble learning 集成學(xué)習(xí)
Ensemble pruning 集成修剪
Error Correcting Output Codes/ECOC 糾錯(cuò)輸出碼
Error rate 錯(cuò)誤率
Error-ambiguity decomposition 誤差-分歧分解
Euclidean distance 歐氏距離
Evolutionary computation 演化計(jì)算
Expectation-Maximization 期望最大化
Expected loss 期望損失
Exploding Gradient Problem 梯度爆炸問(wèn)題
Exponential loss function 指數(shù)損失函數(shù)
Extreme Learning Machine/ELM 超限學(xué)習(xí)機(jī)

Letter F
Factorization 因子分解
False negative 假負(fù)類(lèi)
False positive 假正類(lèi)
False Positive Rate/FPR 假正例率
Feature engineering 特征工程
Feature selection 特征選擇
Feature vector 特征向量
Featured Learning 特征學(xué)習(xí)
Feedforward Neural Networks/FNN 前饋神經(jīng)網(wǎng)絡(luò)
Fine-tuning 微調(diào)
Flipping output 翻轉(zhuǎn)法
Fluctuation 震蕩
Forward stagewise algorithm 前向分步算法
Frequentist 頻率主義學(xué)派
Full-rank matrix 滿秩矩陣
Functional neuron 功能神經(jīng)元

Letter G
Gain ratio 增益率
Game theory 博弈論
Gaussian kernel function 高斯核函數(shù)
Gaussian Mixture Model 高斯混合模型
General Problem Solving 通用問(wèn)題求解
Generalization 泛化
Generalization error 泛化誤差
Generalization error bound 泛化誤差上界
Generalized Lagrange function 廣義拉格朗日函數(shù)
Generalized linear model 廣義線性模型
Generalized Rayleigh quotient 廣義瑞利商
Generative Adversarial Networks/GAN 生成對(duì)抗網(wǎng)絡(luò)
Generative Model 生成模型
Generator 生成器
Genetic Algorithm/GA 遺傳算法
Gibbs sampling 吉布斯采樣
Gini index 基尼指數(shù)
Global minimum 全局最小
Global Optimization 全局優(yōu)化
Gradient boosting 梯度提升
Gradient Descent 梯度下降
Graph theory 圖論
Ground-truth 真相 / 真實(shí)

Letter H
Hard margin 硬間隔
Hard voting 硬投票
Harmonic mean 調(diào)和平均
Hesse matrix 海塞矩陣
Hidden dynamic model 隱動(dòng)態(tài)模型
Hidden layer 隱藏層
Hidden Markov Model/HMM 隱馬爾可夫模型
Hierarchical clustering 層次聚類(lèi)
Hilbert space 希爾伯特空間
Hinge loss function 合頁(yè)損失函數(shù)
Hold-out 留出法
Homogeneous 同質(zhì)
Hybrid computing 混合計(jì)算
Hyperparameter 超參數(shù)
Hypothesis 假設(shè)
Hypothesis test 假設(shè)驗(yàn)證

Letter I
Improved iterative scaling/IIS 改進(jìn)的迭代尺度法
Incremental learning 增量學(xué)習(xí)
Independent and identically distributed/i.i.d. 獨(dú)立同分布
Independent Component Analysis/ICA 獨(dú)立成分分析
Indicator function 指示函數(shù)
Individual learner 個(gè)體學(xué)習(xí)器
Induction 歸納
Inductive bias 歸納偏好
Inductive learning 歸納學(xué)習(xí)
Inductive Logic Programming/ILP 歸納邏輯程序設(shè)計(jì)
Information entropy 信息熵
Information gain 信息增益
Input layer 輸入層
Insensitive loss 不敏感損失
Inter-cluster similarity 簇間相似度
International Conference for Machine Learning/ICML 國(guó)際機(jī)器學(xué)習(xí)大會(huì)
Intra-cluster similarity 簇內(nèi)相似度
Intrinsic value 固有值
Isometric Mapping/Isomap 等度量映射
Isotonic regression 等分回歸
Iterative Dichotomiser 迭代二分器

Letter K
Kernel method 核方法
Kernel trick 核技巧
Kernelized Linear Discriminant Analysis/KLDA 核線性判別分析
K-fold cross validation k 折交叉驗(yàn)證 / k 倍交叉驗(yàn)證
K-Means Clustering K-均值聚類(lèi)
K-Nearest Neighbours Algorithm/KNN K 近鄰算法
Knowledge base 知識(shí)庫(kù)
Knowledge Representation 知識(shí)表征

Letter L
Label space 標(biāo)記空間
Lagrange duality 拉格朗日對(duì)偶性
Lagrange multiplier 拉格朗日乘子
Laplace smoothing 拉普拉斯平滑
Laplacian correction 拉普拉斯修正
Latent Dirichlet Allocation 隱狄利克雷分布
Latent semantic analysis 潛在語(yǔ)義分析
Latent variable 隱變量
Lazy learning 懶惰學(xué)習(xí)
Learner 學(xué)習(xí)器
Learning by analogy 類(lèi)比學(xué)習(xí)
Learning rate 學(xué)習(xí)率
Learning Vector Quantization/LVQ 學(xué)習(xí)向量量化
Least squares regression tree 最小二乘回歸樹(shù)
Leave-One-Out/LOO 留一法
Linear chain conditional random field 線性鏈條件隨機(jī)場(chǎng)
Linear Discriminant Analysis/LDA 線性判別分析
Linear model 線性模型
Linear Regression 線性回歸
Link function 聯(lián)系函數(shù)
Local Markov property 局部馬爾可夫性
Local minimum 局部最小
Log-likelihood 對(duì)數(shù)似然
Log odds/logit 對(duì)數(shù)幾率
Logistic Regression Logistic 回歸
Log-linear regression 對(duì)數(shù)線性回歸
Long-Short Term Memory/LSTM 長(zhǎng)短期記憶
Loss function 損失函數(shù)

Letter M
Machine translation/MT 機(jī)器翻譯
Macro-P 宏查準(zhǔn)率
Macro-R 宏查全率
Majority voting 絕對(duì)多數(shù)投票法
Manifold assumption 流形假設(shè)
Manifold learning 流形學(xué)習(xí)
Margin theory 間隔理論
Marginal distribution 邊際分布
Marginal independence 邊際獨(dú)立性
Marginalization 邊際化
Markov Chain Monte Carlo/MCMC 馬爾可夫鏈蒙特卡羅方法
Markov Random Field 馬爾可夫隨機(jī)場(chǎng)
Maximal clique 最大團(tuán)
Maximum Likelihood Estimation/MLE 極大似然估計(jì) / 極大似然法
Maximum margin 最大間隔
Maximum weighted spanning tree 最大帶權(quán)生成樹(shù)
Max-Pooling 最大池化
Mean squared error 均方誤差
Meta-learner 元學(xué)習(xí)器
Metric learning 度量學(xué)習(xí)
Micro-P 微查準(zhǔn)率
Micro-R 微查全率
Minimal Description Length/MDL 最小描述長(zhǎng)度
Minimax game 極小極大博弈
Misclassification cost 誤分類(lèi)成本
Mixture of experts 混合專(zhuān)家
Momentum 動(dòng)量
Moral graph 道德圖 / 端正圖
Multi-class classification 多分類(lèi)
Multi-document summarization 多文檔摘要
Multi-layer feedforward neural networks 多層前饋神經(jīng)網(wǎng)絡(luò)
Multilayer Perceptron/MLP 多層感知器
Multimodal learning 多模態(tài)學(xué)習(xí)
Multiple Dimensional Scaling 多維縮放
Multiple linear regression 多元線性回歸
Multi-response Linear Regression/MLR 多響應(yīng)線性回歸
Mutual information 互信息

Letter N
Naive Bayes 樸素貝葉斯
Naive Bayes Classifier 樸素貝葉斯分類(lèi)器
Named entity recognition 命名實(shí)體識(shí)別
Nash equilibrium 納什均衡
Natural language generation/NLG 自然語(yǔ)言生成
Natural language processing 自然語(yǔ)言處理
Negative class 負(fù)類(lèi)
Negative correlation 負(fù)相關(guān)法
Negative Log Likelihood 負(fù)對(duì)數(shù)似然
Neighbourhood Component Analysis/NCA 近鄰成分分析
Neural Machine Translation 神經(jīng)機(jī)器翻譯
Neural Turing Machine 神經(jīng)圖靈機(jī)
Newton method 牛頓法
NIPS 國(guó)際神經(jīng)信息處理系統(tǒng)會(huì)議
No Free Lunch Theorem/NFL 沒(méi)有免費(fèi)的午餐定理
Noise-contrastive estimation 噪音對(duì)比估計(jì)
Nominal attribute 列名屬性
Non-convex optimization 非凸優(yōu)化
Nonlinear model 非線性模型
Non-metric distance 非度量距離
Non-negative matrix factorization 非負(fù)矩陣分解
Non-ordinal attribute 無(wú)序?qū)傩?
Non-Saturating Game 非飽和博弈
Norm 范數(shù)
Normalization 歸一化
Nuclear norm 核范數(shù)
Numerical attribute 數(shù)值屬性

Letter O
Objective function 目標(biāo)函數(shù)
Oblique decision tree 斜決策樹(shù)
Occam's razor 奧卡姆剃刀
Odds 幾率
Off-Policy 離策略
One shot learning 一次性學(xué)習(xí)
One-Dependent Estimator/ODE 獨(dú)依賴(lài)估計(jì)
On-Policy 在策略
Ordinal attribute 有序?qū)傩?
Out-of-bag estimate 包外估計(jì)
Output layer 輸出層
Output smearing 輸出調(diào)制法
Overfitting 過(guò)擬合 / 過(guò)配
Oversampling 過(guò)采樣

Letter P
Paired t-test 成對(duì) t 檢驗(yàn)
Pairwise 成對(duì)型
Pairwise Markov property 成對(duì)馬爾可夫性
Parameter 參數(shù)
Parameter estimation 參數(shù)估計(jì)
Parameter tuning 調(diào)參
Parse tree 解析樹(shù)
Particle Swarm Optimization/PSO 粒子群優(yōu)化算法
Part-of-speech tagging 詞性標(biāo)注
Perceptron 感知機(jī)
Performance measure 性能度量
Plug and Play Generative Network 即插即用生成網(wǎng)絡(luò)
Plurality voting 相對(duì)多數(shù)投票法
Polarity detection 極性檢測(cè)
Polynomial kernel function 多項(xiàng)式核函數(shù)
Pooling 池化
Positive class 正類(lèi)
Positive definite matrix 正定矩陣
Post-hoc test 後續(xù)檢驗(yàn)
Post-pruning 後剪枝
Potential function 勢(shì)函數(shù)
Precision 查準(zhǔn)率 / 準(zhǔn)確率
Pre-pruning 預(yù)剪枝
Principal component analysis/PCA 主成分分析
Principle of multiple explanations 多釋原則
Prior 先驗(yàn)
Probabilistic Graphical Model 概率圖模型
Proximal Gradient Descent/PGD 近端梯度下降
Pruning 剪枝
Pseudo-label 偽標(biāo)記

Letter Q
Quantized Neural Network 量子化神經(jīng)網(wǎng)絡(luò)
Quantum computer 量子計(jì)算機(jī)
Quantum Computing 量子計(jì)算
Quasi Newton method 擬牛頓法

Letter R
Radial Basis Function/RBF 徑向基函數(shù)
Random Forest Algorithm 隨機(jī)森林算法
Random walk 隨機(jī)漫步
Recall 查全率 / 召回率
Receiver Operating Characteristic/ROC 受試者工作特征
Rectified Linear Unit/ReLU 線性修正單元
Recurrent Neural Network 循環(huán)神經(jīng)網(wǎng)絡(luò)
Recursive neural network 遞歸神經(jīng)網(wǎng)絡(luò)
Reference model 參考模型
Regression 回歸
Regularization 正則化
Reinforcement learning/RL 強(qiáng)化學(xué)習(xí)
Representation learning 表征學(xué)習(xí)
Representer theorem 表示定理
Reproducing kernel Hilbert space/RKHS 再生核希爾伯特空間
Re-sampling 重采樣法
Rescaling 再縮放
Residual Mapping 殘差映射
Residual Network 殘差網(wǎng)絡(luò)
Restricted Boltzmann Machine/RBM 受限玻爾茲曼機(jī)
Restricted Isometry Property/RIP 限定等距性
Re-weighting 重賦權(quán)法
Robustness 穩(wěn)健性 / 魯棒性
Root node 根結(jié)點(diǎn)
Rule Engine 規(guī)則引擎
Rule learning 規(guī)則學(xué)習(xí)

Letter S
Saddle point 鞍點(diǎn)
Sample space 樣本空間
Sampling 采樣
Score function 評(píng)分函數(shù)
Self-Driving 自動(dòng)駕駛
Self-Organizing Map/SOM 自組織映射
Semi-naive Bayes classifiers 半樸素貝葉斯分類(lèi)器
Semi-Supervised Learning 半監(jiān)督學(xué)習(xí)
Semi-Supervised Support Vector Machine 半監(jiān)督支持向量機(jī)
Sentiment analysis 情感分析
Separating hyperplane 分離超平面
Sigmoid function Sigmoid 函數(shù)
Similarity measure 相似度度量
Simulated annealing 模擬退火
Simultaneous localization and mapping 同步定位與地圖構(gòu)建
Singular Value Decomposition 奇異值分解
Slack variables 松弛變量
Smoothing 平滑
Soft margin 軟間隔
Soft margin maximization 軟間隔最大化
Soft voting 軟投票
Sparse representation 稀疏表征
Sparsity 稀疏性
Specialization 特化
Spectral Clustering 譜聚類(lèi)
Speech Recognition 語(yǔ)音識(shí)別
Splitting variable 切分變量
Squashing function 擠壓函數(shù)
Stability-plasticity dilemma 可塑性-穩(wěn)定性困境
Statistical learning 統(tǒng)計(jì)學(xué)習(xí)
Status feature function 狀態(tài)特征函數(shù)
Stochastic gradient descent 隨機(jī)梯度下降
Stratified sampling 分層采樣
Structural risk 結(jié)構(gòu)風(fēng)險(xiǎn)
Structural risk minimization/SRM 結(jié)構(gòu)風(fēng)險(xiǎn)最小化
Subspace 子空間
Supervised learning 監(jiān)督學(xué)習(xí) / 有導(dǎo)師學(xué)習(xí)
Support vector expansion 支持向量展式
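Some entries above are concrete enough to pin down in a few lines of code. For Cosine similarity (余弦相似度), the definition is dot(a, b) / (‖a‖·‖b‖). A minimal sketch using plain Python lists as vectors (the function name `cosine_similarity` is my own, not from the glossary):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 0], [1, 0]))  # same direction -> 1.0
print(cosine_similarity([1, 0], [0, 1]))  # orthogonal -> 0.0
```

The value ranges over [-1, 1] and depends only on direction, not magnitude, which is why it is a common Similarity measure (相似度度量) for text embeddings.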
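Gradient Descent (梯度下降) likewise reduces to a single update rule, x ← x − η·∇f(x), where η is the Learning rate (學(xué)習(xí)率). A hedged one-dimensional sketch (function names, the quadratic objective, and the step size are illustrative assumptions, not part of the glossary):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step against the gradient direction.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)**2, whose gradient is 2 * (x - 3);
# the Global minimum (全局最小) is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Stochastic gradient descent (隨機(jī)梯度下降) differs only in that `grad` is evaluated on a random mini-batch of data rather than the full objective.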
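K-fold cross validation (k 折交叉驗(yàn)證) is a data-splitting protocol rather than a model: the data is partitioned into k folds, and each fold serves once as the validation set. A sketch of the index bookkeeping, assuming interleaved folds (a simplifying choice of mine — contiguous folds are just as common):

```python
def k_fold_indices(n, k):
    # Partition indices 0..n-1 into k nearly equal folds;
    # yield (train, validation) index lists, one pair per fold.
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val

# Example: 10 samples, 5 folds -> 5 splits of 8 train / 2 validation indices.
splits = list(k_fold_indices(10, 5))
```

Leave-One-Out (留一法) is the special case k = n, where each validation set holds exactly one sample.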
