
Pattern Classification
Contents: C-means clustering algorithm; Hierarchical clustering; Linear decision
Features: height, weight

1.1 C-means clustering algorithm
[Figure: flow chart of the C-means clustering algorithm]

1.2 Hierarchical clustering
[Figure: flow chart of hierarchical clustering]
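The flow chart of the C-means algorithm is not reproduced in this extract. As an illustration only, the alternating assign/update iteration it describes can be sketched in Python; this is a hedged sketch assuming NumPy, with simulated (height, weight) data standing in for the original samples:

```python
import numpy as np

def c_means(X, c, n_iter=100, seed=0):
    """Plain C-means (k-means): alternate nearest-centre assignment and mean update."""
    rng = np.random.default_rng(seed)
    # initialise the c cluster centres with randomly chosen samples
    centres = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(n_iter):
        # assign every sample to its nearest centre (squared Euclidean distance)
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(axis=2), axis=1)
        # recompute each centre as the mean of its members (keep it if the cluster is empty)
        new_centres = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                else centres[k] for k in range(c)])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return labels, centres

# toy data: two well-separated (height, weight) blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([160, 50], 2, (20, 2)),
               rng.normal([180, 80], 2, (20, 2))])
labels, centres = c_means(X, c=2)
```

Because the centres are initialised randomly, different initial values can give different clustering results, which is the effect examined in section 1.5.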

1.3 The mean function
M = mean(A): returns the mean of the elements of the array along its first non-singleton dimension. If A is a vector, mean(A) returns the mean of its elements. If A is a matrix, mean(A) treats each column of the matrix as a vector and returns a row vector containing the mean of each column. If A is a multidimensional array, mean(A) treats the values along the first non-singleton dimension as vectors and returns the mean of each such vector.
M = mean(A, dim): returns the mean of the elements of A along the dimension specified by the scalar dim. For a matrix, mean(A, 2) is a column vector containing the mean of each row.
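As a point of reference, the behaviour of mean described above can be mirrored in Python with NumPy (an analogue, not the MATLAB function itself): np.mean(A, axis=0) corresponds to mean(A) on a matrix, and np.mean(A, axis=1) to mean(A, 2).

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

col_means = np.mean(A, axis=0)  # like MATLAB mean(A): mean of each column -> [2.0, 3.0]
row_means = np.mean(A, axis=1)  # like MATLAB mean(A, 2): mean of each row -> [1.5, 3.5]
v_mean = np.mean(np.array([1.0, 2.0, 3.0]))  # a vector gives a single scalar -> 2.0
```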

C-means clustering
1.4 Source Program and Simulation Result
[Figure: simulation results of C-means clustering]

Hierarchical clustering
1.4 Source Program and Simulation Result
[Figure: simulation results of hierarchical clustering]

1.5 Effect of different initial values on the clustering results

2.1 Linear decision
The program first calls randQ to divide the data into training samples and testing samples, obtains the related parameters from the training set, designs the minimum-Euclidean-distance classifier, and uses it to classify all the test samples; finally, every error rate is obtained.

Design a linear classifier with the minimum-Euclidean-distance criterion.
Discriminant function: [equation not reproduced in this extract]

Source Program of randQ
Source Program of Linear decision
[Figure: simulation results of Linear decision]
err_vg (rate of misjudging a car as background); err_gv (rate of misjudging background as a car); err_VG (rate of misjudging a car as background); err_GV (rate of misjudging background as a car).
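The discriminant function itself is not reproduced above; for a minimum-Euclidean-distance classifier the standard form is g_i(x) = ||x − m_i||², with x assigned to the class of smallest g_i (equivalently, the linear discriminant g_i(x) = m_iᵀx − ||m_i||²/2, taking the largest value). A hedged Python sketch follows (NumPy assumed; randQ is not available here, so a simple random-permutation split and simulated car/background features stand in):

```python
import numpy as np

def train_min_dist(X_train, y_train):
    """Estimate the class means m_i from the training samples."""
    classes = np.unique(y_train)
    means = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    return classes, means

def classify_min_dist(X, classes, means):
    """Assign each sample to the class whose mean is nearest in Euclidean distance."""
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d2, axis=1)]

# simulated features: class 0 = 'car', class 1 = 'background'
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(5.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# crude random train/test split in place of randQ (not shown in this extract)
idx = rng.permutation(100)
train, test = idx[:70], idx[70:]

classes, means = train_min_dist(X[train], y[train])
pred = classify_min_dist(X[test], classes, means)
err = np.mean(pred != y[test])  # overall error rate on the test samples
```

In the same spirit as the source program, per-direction error rates (car misjudged as background, and the converse) can be read off by conditioning err on the true label.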

Program description: the pdist function is used to compute the pairwise distances between samples, and the linkage function is then used to build the hierarchical cluster tree. By comparing the classification results, the inner-squared-distance (ward) method is selected as the algorithm with the fewest errors. Finally, the cluster function is called to cluster the structure of the tree and determine the final categories.

2.2 Hierarchical clustering
The functions used for hierarchical clustering:

1. pdist
Y = pdist(X, metric)
Description: computes the distances between the objects in the data matrix X using the method named by metric. X is an m-by-n matrix consisting of m objects of the data set, each of size n. Possible values of metric:
euclidean: Euclidean distance (default);
seuclidean: standardized Euclidean distance;
mahalanobis: Mahalanobis distance;
cityblock: city-block distance;
minkowski: Minkowski distance.

2. linkage
Z = linkage(Y, method)
Description: Y is the row vector of M*(M-1)/2 elements returned by the pdist function; the algorithm named by the method parameter is used to compute the hierarchical cluster tree. Possible values of method:
single: shortest distance (default);
complete: longest distance;
average: unweighted average distance;
weighted: weighted average distance;
centroid: centroid distance;
median: weighted centroid distance;
ward: inner squared distance (minimum variance algorithm).

3. cluster
T = cluster(Z, …)
Description: creates the classification according to the output Z of the linkage function. To display the matrix Z more intuitively, the cluster tree can be drawn with dendrogram(Z): the bottom nodes are the samples, which are merged level by level until they become the single top node, and the vertical axis represents the merge distance. The maximum number of leaf nodes shown defaults to 30 and can be changed with the parameter n in dendrogram(Z, n), where 1 ≤ n ≤ M; dendrogram(Z, 0) is the case n = M, showing all leaf nodes.

Source Program of Hierarchical clustering:
load feature_table;
W = feature_table;
Dist = pdist(W);              % compute the pairwise distances
Tree = linkage(Dist, 'ward'); % build the cluster tree (minimum inner squared distance)
class = cluster(Tree, 3);     % cluster into 3 classes
class_1 = find(class == 1);   % class 1
class_2 = find(class == 2);   % class 2
class_3 = find(class == 3);   % class 3
n1 = size(class_1); n2 = size(class_2); n3 = size(class_3);

Clustering results:
Background error probability (sample is background, misjudged as another class): err_bg = 0.0420;
Car error probability (sample is a car, misjudged as another class): err_car = 0.0400;
Pedestrian error probability (sample is a pedestrian, misjudged as another class): err_hm = 0.002;
[Figure: simulation results of hierarchical clustering]
The arrays class_1, class_2 and class_3 store the index values, in the total sample set, of the samples assigned to each cluster, from which we can see which samples are classified correctly and how they are divided into the different categories.

Summary: hierarchical clustering is unlabeled (unsupervised) learning, so there is no training process; the samples are clustered directly according to their distribution in feature space, so it is important to select a clustering algorithm suited to the distribution characteristics of the samples.
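For comparison, the MATLAB pipeline above (pdist, then linkage with the ward method, then cutting the tree into 3 classes) has a direct SciPy analogue. This is a sketch with simulated three-group features standing in for feature_table, which is not available here:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# simulated stand-in for feature_table: three well-separated groups
rng = np.random.default_rng(0)
W = np.vstack([rng.normal(0, 0.5, (20, 2)),
               rng.normal(5, 0.5, (20, 2)),
               rng.normal(10, 0.5, (20, 2))])

dist = pdist(W)                      # pairwise distances, as in MATLAB pdist(W)
tree = linkage(dist, method='ward')  # ward = inner squared distance (minimum variance)
cls = fcluster(tree, t=3, criterion='maxclust')  # cut into 3 clusters, like cluster(Tree, 3)

class_1 = np.flatnonzero(cls == 1)   # indices of the samples assigned to cluster 1
class_2 = np.flatnonzero(cls == 2)
class_3 = np.flatnonzero(cls == 3)
```

Here fcluster with criterion='maxclust' plays the role of cutting the tree into a fixed number of classes, and the class_k index arrays correspond to the find(class == k) calls in the MATLAB program.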
