{"id":16071,"date":"2022-05-04T20:07:10","date_gmt":"2022-05-04T19:07:10","guid":{"rendered":"https:\/\/complex-systems-ai.com\/?page_id=16071"},"modified":"2022-11-27T21:15:54","modified_gmt":"2022-11-27T20:15:54","slug":"les-techniques-de-reduction-de-dimension","status":"publish","type":"page","link":"https:\/\/complex-systems-ai.com\/en\/data-analysis\/dimension-reduction-techniques\/","title":{"rendered":"Dimension reduction techniques"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"16071\" class=\"elementor elementor-16071\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-0e2220c elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"0e2220c\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-5ccc6f5\" data-id=\"5ccc6f5\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-869d198 elementor-align-justify elementor-widget elementor-widget-button\" data-id=\"869d198\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"button.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-button-wrapper\">\n\t\t\t\t\t<a class=\"elementor-button elementor-button-link elementor-size-sm\" href=\"https:\/\/complex-systems-ai.com\/analyse-des-donnees\/\">\n\t\t\t\t\t\t<span class=\"elementor-button-content-wrapper\">\n\t\t\t\t\t\t\t\t\t<span class=\"elementor-button-text\">Analyse des 
donn\u00e9es<\/span>\n\t\t\t\t\t<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-c9a6c73\" data-id=\"c9a6c73\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-b71adab elementor-align-justify elementor-widget elementor-widget-button\" data-id=\"b71adab\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"button.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-button-wrapper\">\n\t\t\t\t\t<a class=\"elementor-button elementor-button-link elementor-size-sm\" href=\"https:\/\/complex-systems-ai.com\/\">\n\t\t\t\t\t\t<span class=\"elementor-button-content-wrapper\">\n\t\t\t\t\t\t\t\t\t<span class=\"elementor-button-text\">Page d'accueil<\/span>\n\t\t\t\t\t<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-49cb3fb\" data-id=\"49cb3fb\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-86848e3 elementor-align-justify elementor-widget elementor-widget-button\" data-id=\"86848e3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"button.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-button-wrapper\">\n\t\t\t\t\t<a class=\"elementor-button elementor-button-link elementor-size-sm\" href=\"https:\/\/en.wikipedia.org\/wiki\/Data_analysis\" target=\"_blank\" rel=\"noopener\">\n\t\t\t\t\t\t<span 
class=\"elementor-button-content-wrapper\">\n\t\t\t\t\t\t\t\t\t<span class=\"elementor-button-text\">Wiki<\/span>\n\t\t\t\t\t<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-1aab990 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"1aab990\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-579c12e\" data-id=\"579c12e\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-08821de elementor-widget elementor-widget-text-editor\" data-id=\"08821de\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Un jeu de donn\u00e9es de grande dimension est un jeu de donn\u00e9es qui comporte un grand nombre de colonnes (ou de variables). Un tel ensemble de donn\u00e9es pr\u00e9sente de nombreux d\u00e9fis <a href=\"https:\/\/complex-systems-ai.com\/en\/logic-math-27\/\">math\u00e9matiques<\/a> ou informatiques. 
The goal is to reduce the number of dimensions using dimension reduction techniques.<\/p><p><img decoding=\"async\" class=\"aligncenter wp-image-11096 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2020\/09\/cropped-Capture.png\" alt=\"dimension reduction techniques\" width=\"97\" height=\"97\" title=\"\"><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-87bf1d8 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"87bf1d8\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-e153f52\" data-id=\"e153f52\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-02b7eec elementor-widget elementor-widget-heading\" data-id=\"02b7eec\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" 
xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/complex-systems-ai.com\/en\/data-analysis\/dimension-reduction-techniques\/#Techniques-de-reduction-de-dimension-bases-sur-le-PCA\" >Techniques de r\u00e9duction de dimension bas\u00e9s sur le PCA<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/complex-systems-ai.com\/en\/data-analysis\/dimension-reduction-techniques\/#Analyse-en-composantes-principales-ACP-principal-component-analysis-PCA\" >Analyse en composantes principales ACP (principal component analysis PCA)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/complex-systems-ai.com\/en\/data-analysis\/dimension-reduction-techniques\/#Kernel-ACP-KPCA\" >Kernel ACP (KPCA)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/complex-systems-ai.com\/en\/data-analysis\/dimension-reduction-techniques\/#Analyse-Discriminante-Lineaire-ADL-Linear-Discriminant-Analysis-LDA\" >Analyse Discriminante Lin\u00e9aire 
ADL (Linear Discriminant Analysis LDA)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/complex-systems-ai.com\/en\/data-analysis\/dimension-reduction-techniques\/#Decomposition-en-valeurs-singulieres-Singular-Value-Decomposition-SVD\" >D\u00e9composition en valeurs singuli\u00e8res (Singular Value Decomposition SVD)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/complex-systems-ai.com\/en\/data-analysis\/dimension-reduction-techniques\/#Encastrement-du-voisin-stochastique-distribue-en-t-t-SNE\" >Encastrement du voisin stochastique distribu\u00e9 en t (t-SNE)<\/a><\/li><\/ul><\/nav><\/div>\n<h2 class=\"elementor-heading-title elementor-size-default\"><span class=\"ez-toc-section\" id=\"Techniques-de-reduction-de-dimension-bases-sur-le-PCA\"><\/span>Techniques de r\u00e9duction de dimension bas\u00e9s sur le PCA<span class=\"ez-toc-section-end\"><\/span><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-aceda15 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"aceda15\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-054d6f5\" data-id=\"054d6f5\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-a624161 elementor-widget elementor-widget-text-editor\" data-id=\"a624161\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div 
class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>La bonne nouvelle est que les variables (ou appel\u00e9es caract\u00e9ristiques) sont souvent corr\u00e9l\u00e9es &#8211; les donn\u00e9es de grande dimension sont domin\u00e9es \u00ab\u00a0superficiellement\u00a0\u00bb par un petit nombre de variables simples.<\/p><p>Nous pouvons trouver un sous-ensemble de variables pour repr\u00e9senter le m\u00eame niveau d&rsquo;information dans les donn\u00e9es ou transformer les variables en un nouvel ensemble de variables sans perdre beaucoup d&rsquo;informations. Bien que le calcul haute puissance puisse d&rsquo;une mani\u00e8re ou d&rsquo;une autre g\u00e9rer des donn\u00e9es de grande dimension, dans de nombreuses applications, il est toujours n\u00e9cessaire de r\u00e9duire la dimensionnalit\u00e9 des donn\u00e9es d&rsquo;origine.<\/p><p>L&rsquo;analyse en composantes principales (ACP) est probablement la technique la plus populaire lorsque l&rsquo;on pense \u00e0 la r\u00e9duction de dimension. Dans cet article, je commencerai par l&rsquo;ACP, puis je pr\u00e9senterai d&rsquo;autres techniques de r\u00e9duction de dimension. Le code Python sera inclus dans chaque technique.<\/p><p>Les data scientists peuvent utiliser des techniques de r\u00e9duction de dimension pour identifier les anomalies. Pourquoi? Ne voulons-nous pas simplement r\u00e9duire la dimensionnalit\u00e9\u00a0? L&rsquo;intuition r\u00e9side dans les valeurs aberrantes elles-m\u00eames. D.M.Hawkins a d\u00e9clar\u00e9 : \u00ab Une valeur aberrante est une observation qui s&rsquo;\u00e9carte tellement des autres observations qu&rsquo;elle \u00e9veille des soup\u00e7ons qu&rsquo;elle a \u00e9t\u00e9 g\u00e9n\u00e9r\u00e9e par un m\u00e9canisme diff\u00e9rent. 
Une fois que les dimensions sont r\u00e9duites \u00e0 moins de dimensions principales, les mod\u00e8les sont identifi\u00e9s, puis les valeurs aberrantes sont r\u00e9v\u00e9l\u00e9es.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-3af18dd elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"3af18dd\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-456ade6\" data-id=\"456ade6\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-e5aee13 elementor-widget elementor-widget-heading\" data-id=\"e5aee13\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><span class=\"ez-toc-section\" id=\"Analyse-en-composantes-principales-ACP-principal-component-analysis-PCA\"><\/span>Analyse en composantes principales ACP (principal component analysis PCA)<span class=\"ez-toc-section-end\"><\/span><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-f51bbcf elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"f51bbcf\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div 
class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-63e39f7\" data-id=\"63e39f7\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-9957001 elementor-widget elementor-widget-text-editor\" data-id=\"9957001\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>L&rsquo;id\u00e9e de l&rsquo;analyse en composantes principales (ACP) est de r\u00e9duire la dimensionnalit\u00e9 d&rsquo;un ensemble de donn\u00e9es compos\u00e9 d&rsquo;un grand nombre de variables li\u00e9es tout en conservant autant de variance dans les donn\u00e9es que possible. L&rsquo;ACP trouve un ensemble de nouvelles variables dont les variables d&rsquo;origine ne sont que leurs combinaisons lin\u00e9aires. Les nouvelles variables sont appel\u00e9es composantes principales (PC). Ces composantes principales sont orthogonales : dans un cas 3D, les composantes principales sont perpendiculaires entre elles. X ne peut pas \u00eatre repr\u00e9sent\u00e9 par Y ou Y ne peut pas \u00eatre repr\u00e9sent\u00e9 par Z.<\/p><p>La figure suivante montre l&rsquo;intuition de l&rsquo;ACP : elle \u00ab fait pivoter \u00bb les axes pour mieux s&rsquo;aligner avec vos donn\u00e9es. La premi\u00e8re composante principale captera la majeure partie de la variance des donn\u00e9es, suivie de la deuxi\u00e8me, de la troisi\u00e8me, etc. 
Par cons\u00e9quent, les nouvelles donn\u00e9es auront moins de dimensions.<\/p><p><img fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter size-medium wp-image-16077\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_d30YKpg-mAMWI3ekYM1plA-300x213.png\" alt=\"\" width=\"300\" height=\"213\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_d30YKpg-mAMWI3ekYM1plA-300x213.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_d30YKpg-mAMWI3ekYM1plA-18x12.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_d30YKpg-mAMWI3ekYM1plA-120x85.png 120w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_d30YKpg-mAMWI3ekYM1plA.png 371w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p><p id=\"2bf0\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Let\u2019s use the iris dataset to illustrate PCA:<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"cf6c\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\"># Use the iris dataset to illustrate PCA:<br \/>import pandas as pd<br \/>url = \u201c<a class=\"au sh\" href=\"https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/iris\/iris.data\" target=\"_blank\" rel=\"noopener ugc nofollow\">https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/iris\/iris.data<\/a>\"<br \/># load dataset into Pandas DataFrame<br \/>df = pd.read_csv(url, names=[\u2018sepal length\u2019,\u2019sepal width\u2019,\u2019petal length\u2019,\u2019petal width\u2019,\u2019target\u2019])<br \/>df.head()<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go vv\"><img decoding=\"async\" class=\"aligncenter wp-image-16082 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_z-YRY-YbxE2YLRyipzl5aQ.png\" alt=\"\" width=\"420\" 
height=\"171\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_z-YRY-YbxE2YLRyipzl5aQ.png 420w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_z-YRY-YbxE2YLRyipzl5aQ-300x122.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_z-YRY-YbxE2YLRyipzl5aQ-18x7.png 18w\" sizes=\"(max-width: 420px) 100vw, 420px\" \/><\/div><\/figure><p id=\"151c\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Notez que cet ensemble de donn\u00e9es IRIS est fourni avec la variable cible. Dans PCA, vous ne transformez que les variables X sans la variable Y cible.<\/p><p id=\"bb96\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Toutes les variables doivent \u00eatre sur la m\u00eame \u00e9chelle avant d&rsquo;appliquer l&rsquo;ACP, sinon, une caract\u00e9ristique avec de grandes valeurs dominera le r\u00e9sultat.<\/p><p class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Ci-dessous, j&rsquo;utilise StandardScaler dans scikit-learn pour standardiser les caract\u00e9ristiques de l&rsquo;ensemble de donn\u00e9es sur l&rsquo;\u00e9chelle unitaire (moyenne = 0 et variance = 1).<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"47f3\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">from sklearn.preprocessing import StandardScaler<br \/>variables = [\u2018sepal length\u2019, \u2018sepal width\u2019, \u2018petal length\u2019, \u2018petal width\u2019]<br \/>x = df.loc[:, variables].values<br \/>y = df.loc[:,[\u2018target\u2019]].values<br \/>x = StandardScaler().fit_transform(x)<br \/>x = pd.DataFrame(x)<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go 
wb\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16081 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_hExuh028sgLCVCYnb0t2dA.png\" alt=\"\" width=\"321\" height=\"241\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_hExuh028sgLCVCYnb0t2dA.png 321w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_hExuh028sgLCVCYnb0t2dA-300x225.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_hExuh028sgLCVCYnb0t2dA-16x12.png 16w\" sizes=\"(max-width: 321px) 100vw, 321px\" \/><\/div><\/figure><p id=\"a976\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Il y a quatre caract\u00e9ristiques dans les donn\u00e9es d&rsquo;origine. PCA fournira donc le m\u00eame nombre de composants principaux.<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"716f\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">from sklearn.decomposition import PCA<br \/>pca = PCA()<br \/>x_pca = pca.fit_transform(x)<br \/>x_pca = pd.DataFrame(x_pca)<br \/>x_pca.head()<\/span><\/pre><p data-selectable-paragraph=\"\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-16080\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_jeHJde_TqC7n3iW-EDVipQ-300x164.png\" alt=\"\" width=\"300\" height=\"164\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_jeHJde_TqC7n3iW-EDVipQ-300x164.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_jeHJde_TqC7n3iW-EDVipQ-18x10.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_jeHJde_TqC7n3iW-EDVipQ.png 303w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p><p id=\"3c44\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md 
je gc\" data-selectable-paragraph=\"\">Quelles sont les variances expliqu\u00e9es par chacune des composantes principales ? Utilisez\u00a0pca.explained_variance_ratio_\u00a0pour renvoyer un vecteur de la variance\u00a0:<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"7e61\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">explained_variance = pca.explained_variance_ratio_<br \/>explained_variance<\/span><span id=\"5a67\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">array([0.72770452, 0.23030523, 0.03683832, 0.00515193])<\/span><\/pre><p id=\"87a9\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Il montre que la premi\u00e8re composante principale repr\u00e9sente une variance de 72,22\u00a0%, les deuxi\u00e8me, troisi\u00e8me et quatri\u00e8me repr\u00e9sentent respectivement une variance de 23,9\u00a0%, 3,68\u00a0% et 0,51\u00a0%. Nous pouvons dire que 72,22 + 23,9 = 96,21% de l&rsquo;information est captur\u00e9e par les premi\u00e8re et deuxi\u00e8me composantes principales.<\/p><p class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Nous voulons souvent ne garder que les fonctionnalit\u00e9s significatives et supprimer les insignifiantes. Une r\u00e8gle empirique consiste \u00e0 conserver les principales composantes principales qui captent une variance importante et \u00e0 ignorer les petites.<\/p><p id=\"5fb0\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Nous pouvons tracer les r\u00e9sultats en utilisant les deux premi\u00e8res composantes. 
Let's add the target variable y to the new data x_pca:<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"dbec\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">x_pca['target'] = y<br \/>x_pca.columns = ['PC1','PC2','PC3','PC4','target']<br \/>x_pca.head()<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wl\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16079 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_pHwvmVMzCx5oWNN4ImWOCQ.png\" alt=\"\" width=\"368\" height=\"167\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_pHwvmVMzCx5oWNN4ImWOCQ.png 368w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_pHwvmVMzCx5oWNN4ImWOCQ-300x136.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_pHwvmVMzCx5oWNN4ImWOCQ-18x8.png 18w\" sizes=\"(max-width: 368px) 100vw, 368px\" \/><\/div><\/figure><p id=\"34c7\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">The result shows that the data are separable in the new space.<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"efd1\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">import matplotlib.pyplot as plt<br \/>fig = plt.figure()<br \/>ax = fig.add_subplot(1, 1, 1)<br \/>ax.set_xlabel('Principal Component 1')<br \/>ax.set_ylabel('Principal Component 2')<br \/>ax.set_title('2 component PCA')<br \/>targets = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']<br \/>colors = ['r', 'g', 'b']<br \/>for target, color in zip(targets, colors):<br \/>    indicesToKeep = x_pca['target'] == target<br \/>    ax.scatter(x_pca.loc[indicesToKeep, 'PC1'],<br \/>               x_pca.loc[indicesToKeep, 'PC2'],<br \/>               c=color, s=50)<br \/>ax.legend(targets)<br \/>ax.grid()<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wm\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-16078\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_1H8KClqGklqXaAMGp5HT4A-300x211.png\" alt=\"\" width=\"300\" height=\"211\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_1H8KClqGklqXaAMGp5HT4A-300x211.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_1H8KClqGklqXaAMGp5HT4A-18x12.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_1H8KClqGklqXaAMGp5HT4A-120x85.png 120w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_1H8KClqGklqXaAMGp5HT4A.png 396w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/div><\/figure><p id=\"d6a3\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">How do we use PCA to detect outliers? Let me give you the intuition. After the transformation, \u201cnormal\u201d data points align along the eigenvectors (the new axes) with small eigenvalues. Outliers lie far from the eigenvectors with large eigenvalues. The distance between each data point and the eigenvectors therefore becomes a measure of outlyingness. 
A large distance indicates an anomaly.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-f41ba63 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"f41ba63\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-89a0730\" data-id=\"89a0730\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-e827a92 elementor-widget elementor-widget-heading\" data-id=\"e827a92\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><span class=\"ez-toc-section\" id=\"Kernel-ACP-KPCA\"><\/span>Kernel PCA (KPCA)<span class=\"ez-toc-section-end\"><\/span><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-b845b91 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"b845b91\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-01250ac\" data-id=\"01250ac\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap 
elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-04d39d7 elementor-widget elementor-widget-text-editor\" data-id=\"04d39d7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>L&rsquo;ACP applique la transformation lin\u00e9aire, qui n&rsquo;est que sa limitation. KACP\u00a0 \u00e9tend l&rsquo;ACP \u00e0 la non-lin\u00e9arit\u00e9. Il mappe d&rsquo;abord les donn\u00e9es d&rsquo;origine sur un espace d&rsquo;entit\u00e9s non lin\u00e9aires (g\u00e9n\u00e9ralement de dimension sup\u00e9rieure), puis applique l&rsquo;ACP pour extraire les composants principaux de cet espace. Ceci peut \u00eatre compris par la figure (B). Le graphique de gauche montre que les points bleus et rouges ne peuvent pas \u00eatre s\u00e9par\u00e9s \u00e0 l&rsquo;aide d&rsquo;une transformation lin\u00e9aire. Mais si tous les points sont projet\u00e9s sur un espace 3D, le r\u00e9sultat devient lin\u00e9airement s\u00e9parable ! Nous appliquons ensuite PCA pour s\u00e9parer les composants.<\/p><p>D&rsquo;o\u00f9 vient l&rsquo;intuition ? Pourquoi la s\u00e9paration des composants devient-elle plus facile dans un espace de dimension sup\u00e9rieure\u00a0? Cela doit remonter \u00e0 la th\u00e9orie de Vapnik-Chervonenkis (VC). 
Il dit que la cartographie dans un espace de dimension sup\u00e9rieure fournit souvent une plus grande puissance de classification.<\/p><p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16083 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_0iimd45B-RthXBoeugOKkg.png\" alt=\"\" width=\"598\" height=\"293\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_0iimd45B-RthXBoeugOKkg.png 598w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_0iimd45B-RthXBoeugOKkg-300x147.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_0iimd45B-RthXBoeugOKkg-18x9.png 18w\" sizes=\"(max-width: 598px) 100vw, 598px\" \/><\/p><p id=\"8b13\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Le code Python suivant cr\u00e9e un trac\u00e9 circulaire compos\u00e9 de points rouges et bleus. 
Obviously, there is no way to separate the red and blue points with a straight line (linear separation).<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"e0c9\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">import numpy as np<br \/>import matplotlib.pyplot as plt<br \/>from sklearn.decomposition import PCA, KernelPCA<br \/>from sklearn.datasets import make_circles<\/span><span id=\"1318\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">np.random.seed(0)<br \/>X, y = make_circles(n_samples=400, factor=.3, noise=.05)<\/span><span id=\"6253\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">plt.figure(figsize=(10,10))<br \/>plt.subplot(2, 2, 1, aspect='equal')<br \/>plt.title(\"Original space\")<br \/>reds = y == 0<br \/>blues = y == 1<\/span><span id=\"64de\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">plt.scatter(X[reds, 0], X[reds, 1], c=\"red\", s=20, edgecolor='k')<br \/>plt.scatter(X[blues, 0], X[blues, 1], c=\"blue\", s=20, edgecolor='k')<br \/>plt.xlabel(\"$x_1$\")<br \/>plt.ylabel(\"$x_2$\")<\/span><\/pre><p data-selectable-paragraph=\"\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16084 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_9ohZfj3sdcxxqM4DgJKBeA.png\" alt=\"\" width=\"320\" height=\"313\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_9ohZfj3sdcxxqM4DgJKBeA.png 320w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_9ohZfj3sdcxxqM4DgJKBeA-300x293.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_9ohZfj3sdcxxqM4DgJKBeA-12x12.png 12w\" sizes=\"(max-width: 320px) 100vw, 320px\" \/><\/p><p id=\"1018\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" 
data-selectable-paragraph=\"\">Cependant, lorsque nous projetons le cercle dans un espace de dimension sup\u00e9rieure et que nous s\u00e9parons \u00e0 l&rsquo;aide de l&rsquo;ACP, les observations de donn\u00e9es par rapport aux premi\u00e8re et deuxi\u00e8me composantes principales sont s\u00e9parables\u00a0! Vous trouverez ci-dessous le r\u00e9sultat que les points sont trac\u00e9s par rapport aux premi\u00e8re et deuxi\u00e8me composantes principales. Je trace une ligne pour s\u00e9parer les points rouges et bleus.<\/p><p class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Dans KernelPCA, nous sp\u00e9cifions kernel=&rsquo;rbf&rsquo;, qui est\u00a0la fonction de base radiale, ou la distance euclidienne. Les RBF sont couramment utilis\u00e9s comme noyau dans les techniques d&rsquo;apprentissage automatique telles que la machine \u00e0 vecteurs de support (SVM).<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"dbf1\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">kpca = KernelPCA(kernel=\u201drbf\u201d, fit_inverse_transform=True, gamma=10)<br \/>X_kpca = kpca.fit_transform(X)<br \/>pca = PCA()<br \/>X_pca = pca.fit_transform(X)<\/span><span id=\"3249\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">plt.scatter(X_kpca[reds, 0], X_kpca[reds, 1], c=\u201dred\u201d,s=20, edgecolor=\u2019k\u2019)<br \/>plt.scatter(X_kpca[blues, 0], X_kpca[blues, 1], c=\u201dblue\u201d,s=20, edgecolor=\u2019k\u2019)<br \/>x = np.linspace(-1, 1, 1000)<br \/>plt.plot(x, -0.1*x, linestyle=\u2019solid\u2019)<br \/>plt.title(\u201cProjection by KPCA\u201d)<br \/>plt.xlabel(r\u201d1st principal component in space induced by $\\phi$\u201d)<br \/>plt.ylabel(\u201c2nd component\u201d)<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wp\"><img loading=\"lazy\" decoding=\"async\" 
class=\"aligncenter size-medium wp-image-16085\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_95gMmBiWU8CFV12hiDQyJA-300x211.png\" alt=\"\" width=\"300\" height=\"211\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_95gMmBiWU8CFV12hiDQyJA-300x211.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_95gMmBiWU8CFV12hiDQyJA-18x12.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_95gMmBiWU8CFV12hiDQyJA-120x85.png 120w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_95gMmBiWU8CFV12hiDQyJA.png 404w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/div><\/figure><p id=\"2507\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Si nous sp\u00e9cifions que le noyau est \u00ab\u00a0lin\u00e9aire\u00a0\u00bb comme le code ci-dessous (KernelPCA(kernel=&rsquo;linear&rsquo;), il devient le PCA standard avec seulement une transformation lin\u00e9aire, et les points rouges et bleus ne sont pas s\u00e9parables.<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"2139\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">kpca = KernelPCA(kernel=\u201dlinear\u201d, fit_inverse_transform=True, gamma=10)<br \/>X_kpca = kpca.fit_transform(X)<br \/>pca = PCA()<br \/>X_pca = pca.fit_transform(X)<\/span><span id=\"2662\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">plt.scatter(X_kpca[reds, 0], X_kpca[reds, 1], c=\u201dred\u201d,s=20, edgecolor=\u2019k\u2019)<br \/>plt.scatter(X_kpca[blues, 0], X_kpca[blues, 1], c=\u201dblue\u201d,s=20, edgecolor=\u2019k\u2019)<br \/>x = np.linspace(-1, 1, 1000)<br \/>plt.plot(x, -0.1*x, linestyle=\u2019solid\u2019)<br \/>plt.title(\u201cProjection by KPCA\u201d)<br \/>plt.xlabel(r\u201d1st principal component in space induced by $\\phi$\u201d)<br 
\/>plt.ylabel(\u201c2nd component\u201d)<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wq\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16086 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_jf1MI2UskFKLzQ34CJjnHw.png\" alt=\"\" width=\"418\" height=\"284\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_jf1MI2UskFKLzQ34CJjnHw.png 418w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_jf1MI2UskFKLzQ34CJjnHw-300x204.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_jf1MI2UskFKLzQ34CJjnHw-18x12.png 18w\" sizes=\"(max-width: 418px) 100vw, 418px\" \/><\/div><\/figure>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-f97a460 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"f97a460\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-89d9f87\" data-id=\"89d9f87\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-889765f elementor-widget elementor-widget-heading\" data-id=\"889765f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><span class=\"ez-toc-section\" id=\"Analyse-Discriminante-Lineaire-ADL-Linear-Discriminant-Analysis-LDA\"><\/span>Analyse Discriminante Lin\u00e9aire ADL 
(LDA)<span class=\"ez-toc-section-end\"><\/span><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-5a32796 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"5a32796\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-24aa7e3\" data-id=\"24aa7e3\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-7387421 elementor-widget elementor-widget-text-editor\" data-id=\"7387421\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>The origin of LDA is different from that of PCA. PCA is an unsupervised learning method that transforms the original features into a new set of features, without caring whether the new feature set provides the best discriminative power for the target variable. In contrast, Linear Discriminant Analysis (LDA) seeks to preserve as much discriminative power as possible for the dependent variable while projecting the original data matrix onto a lower-dimensional space.<\/p><p>LDA is a supervised learning technique. It uses the classes of the dependent variable to divide the predictor space into regions. 
All regions have linear boundaries; hence the name linear. The model predicts that all observations in a region belong to the same class of the dependent variable.<\/p><p>LDA achieves this goal in three main steps. First, it computes the separability between the different classes of the dependent variable, called the between-class variance, as shown in (1) of the LDA figure. Second, it computes the distance between the mean and the samples of each class, called the within-class variance, as shown in (2). Then it constructs the lower-dimensional space with this criterion: maximize the between-class variance and minimize the within-class variance.<\/p><p>The solution to this criterion is obtained by computing eigenvalues and eigenvectors. The resulting eigenvectors represent the directions of the new space, and the corresponding eigenvalues represent their lengths. 
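The three steps above can be sketched directly with NumPy (a minimal illustration on synthetic two-class data, not the author's code; scikit-learn's LinearDiscriminantAnalysis performs an equivalent computation internally):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical toy data: two classes of 50 points each in 3 dimensions
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)

overall_mean = X.mean(axis=0)
S_W = np.zeros((3, 3))  # within-class scatter: samples around their class mean
S_B = np.zeros((3, 3))  # between-class scatter: class means around the overall mean
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    S_W += (Xc - mc).T @ (Xc - mc)
    d = (mc - overall_mean).reshape(-1, 1)
    S_B += len(Xc) * (d @ d.T)

# Maximize between-class / within-class variance:
# eigenvectors of S_W^-1 S_B give the discriminant axes
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
order = np.argsort(eigvals.real)[::-1]
W = eigvecs[:, order[:1]].real  # with 2 classes there is at most 1 useful axis
X_lda = X @ W                   # projection onto the LDA axis
```

On this data, the projected class means end up far apart along the single discriminant axis, which is exactly the "maximize between, minimize within" criterion at work.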
Thus, each eigenvector represents an axis of the LDA space, and its eigenvalue indicates the importance of that axis.<\/p><p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16087 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_W1xnDANqnLkjRUg9r0Zm8w.png\" alt=\"\" width=\"630\" height=\"277\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_W1xnDANqnLkjRUg9r0Zm8w.png 630w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_W1xnDANqnLkjRUg9r0Zm8w-300x132.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_W1xnDANqnLkjRUg9r0Zm8w-18x8.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_W1xnDANqnLkjRUg9r0Zm8w-600x264.png 600w\" sizes=\"(max-width: 630px) 100vw, 630px\" \/><\/p><p id=\"4ac3\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">I will use the \"Red Wine Quality\" dataset from the Kaggle competition. 
This dataset has 11 input variables and one output variable, \"quality\".<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"3549\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">import numpy as np<br \/>import pandas as pd<br \/>import matplotlib.pyplot as plt<br \/>from sklearn.decomposition import PCA<br \/>from sklearn.discriminant_analysis import LinearDiscriminantAnalysis<br \/>wine = pd.read_csv('winequality-red.csv')<br \/>wine.head()<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"vx vy dq vz cf wa\" tabindex=\"0\" role=\"button\"><div class=\"gn go ws\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16088 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_MFVu1n3dzQ8znn9IwSL-qg.png\" alt=\"\" width=\"700\" height=\"161\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_MFVu1n3dzQ8znn9IwSL-qg.png 700w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_MFVu1n3dzQ8znn9IwSL-qg-300x69.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_MFVu1n3dzQ8znn9IwSL-qg-18x4.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_MFVu1n3dzQ8znn9IwSL-qg-600x138.png 600w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/div><\/div><\/figure><p id=\"bda5\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">For simplicity, I group the output variable into three values: 
wine['quality2'] = np.where(wine['quality']&lt;=4,1, np.where(wine['quality']&lt;=6,2,3)).<\/p><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wt\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16089 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_IMzZOFQE1J8H9jFxuTsJ6w.png\" alt=\"\" width=\"522\" height=\"198\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_IMzZOFQE1J8H9jFxuTsJ6w.png 522w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_IMzZOFQE1J8H9jFxuTsJ6w-300x114.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_IMzZOFQE1J8H9jFxuTsJ6w-18x7.png 18w\" sizes=\"(max-width: 522px) 100vw, 522px\" \/><\/div><\/figure><p id=\"066f\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">The following code runs PCA and LDA.<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"5db4\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">X = wine.drop(columns=['quality','quality2'])<br \/>y = wine['quality2']<br \/>target_names = np.unique(y)<br \/>target_names<\/span><span id=\"a069\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">pca = PCA(n_components=2)<br \/>X_r = pca.fit(X).transform(X)<\/span><span id=\"b364\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">lda = LinearDiscriminantAnalysis(n_components=2)<br \/>X_r2 = lda.fit(X, y).transform(X)<\/span><\/pre><p id=\"c006\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Next, plot the results of PCA and LDA:<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"0802\" class=\"gc vp vq jl 
vr b do vs vt l vu\" data-selectable-paragraph=\"\"># Percentage of variance explained for each component<br \/>print('explained variance ratio (first two components): %s'<br \/> % str(pca.explained_variance_ratio_))<\/span><span id=\"5a0b\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">plt.figure()<br \/>colors = ['navy', 'turquoise', 'darkorange']<br \/>lw = 2<\/span><span id=\"bca1\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">for color, i, target_name in zip(colors, target_names, target_names):<br \/> plt.scatter(X_r[y == i, 0], X_r[y == i, 1], color=color, alpha=.8, lw=lw,<br \/> label=target_name)<br \/>plt.legend(loc='best', shadow=False, scatterpoints=1)<br \/>plt.title('PCA of WINE dataset')<\/span><span id=\"e600\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">plt.figure()<br \/>for color, i, target_name in zip(colors, target_names, target_names):<br \/> plt.scatter(X_r2[y == i, 0], X_r2[y == i, 1], alpha=.8, color=color,<br \/> label=target_name)<br \/>plt.legend(loc='best', shadow=False, scatterpoints=1)<br \/>plt.title('LDA of WINE dataset')<\/span><span id=\"eaa1\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">plt.show()<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wu\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16090 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_nMd65CFiRCHvuyGHIdzZmA.png\" alt=\"\" width=\"639\" height=\"228\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_nMd65CFiRCHvuyGHIdzZmA.png 639w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_nMd65CFiRCHvuyGHIdzZmA-300x107.png 300w, 
https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_nMd65CFiRCHvuyGHIdzZmA-18x6.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_nMd65CFiRCHvuyGHIdzZmA-600x214.png 600w\" sizes=\"(max-width: 639px) 100vw, 639px\" \/><\/div><\/figure><p id=\"903b\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">LDA is well suited to multi-class classification problems.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-c5e524e elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"c5e524e\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-7e14cc9\" data-id=\"7e14cc9\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-40ac1ff elementor-widget elementor-widget-heading\" data-id=\"40ac1ff\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><span class=\"ez-toc-section\" id=\"Decomposition-en-valeurs-singulieres-Singular-Value-Decomposition-SVD\"><\/span>Singular Value Decomposition (SVD)<span class=\"ez-toc-section-end\"><\/span><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section 
elementor-top-section elementor-element elementor-element-b729843 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"b729843\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-b5f80dd\" data-id=\"b5f80dd\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-91324b2 elementor-widget elementor-widget-text-editor\" data-id=\"91324b2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"c896\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">SVD is a data-summarization method similar to PCA. It extracts the important features of the data. But SVD has an additional advantage: it can reconstruct an approximation of the original dataset from a much smaller one, which gives it wide applications such as image compression. For example, if you have an image of 32*32 = 1,024 pixels, SVD can summarize it into 66 numbers. 
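The compression idea can be sketched with NumPy (a minimal illustration on a hypothetical random 32×32 "image"; note that keeping k singular values of an m×n matrix stores k*(m+n+1) numbers, which for k=1 here is close to the count quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 32x32 "image": nearly rank-1, plus a little noise
img = np.outer(rng.random(32), rng.random(32)) + 0.01 * rng.random((32, 32))

U, s, Vt = np.linalg.svd(img)
k = 1  # number of singular values kept
# Rank-k reconstruction from k*(32+32+1) stored numbers instead of 1024
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

stored = k * (32 + 32 + 1)
err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(stored, round(err, 3))
```

For a nearly low-rank image, the relative reconstruction error stays small even though only a few percent of the original values are stored.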
Those 66 numbers can recover the 32*32-pixel image without losing any important information.<\/p><p id=\"18c9\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">SVD has been instrumental in linear algebra, yet it seems \"not nearly as famous as it should be\", as noted in the classic textbook \"Linear Algebra and Its Applications\" by Gilbert Strang. To properly introduce SVD, it is essential to start from matrix operations. If A is a symmetric real n \u00d7 n matrix, there exist an orthogonal matrix V and a diagonal matrix D such that<\/p><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wv\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-16091\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_rdEtBbHkOk4XF9dPoGQX3w.png\" alt=\"\" width=\"124\" height=\"37\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_rdEtBbHkOk4XF9dPoGQX3w.png 124w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_rdEtBbHkOk4XF9dPoGQX3w-18x5.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_rdEtBbHkOk4XF9dPoGQX3w-120x37.png 120w\" sizes=\"(max-width: 124px) 100vw, 124px\" \/><\/div><\/figure><p id=\"a347\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">The columns of V are eigenvectors of A, and the diagonal entries of D are the eigenvalues of A. This process is called the eigenvalue decomposition, or EVD, of the matrix A. 
It tells us how to choose orthonormal bases so that the transformation is represented by a matrix of the simplest possible form, that is, a diagonal one. (For readers who want to review the steps of diagonalizing a matrix, here is a good example.) The term orthonormal means that the vectors are mutually orthogonal (perpendicular) and of unit length.<\/p><p id=\"fabb\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Extending beyond symmetric matrices, SVD works with any real m \u00d7 n matrix A. Given a real m \u00d7 n matrix A, there exist an m \u00d7 m orthogonal matrix U, an n \u00d7 n orthogonal matrix V, and an m \u00d7 n diagonal matrix \u03a3 such that<\/p><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go ww\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-16092\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_eLi5Mt2ewpvLxEcC3FzXlQ.png\" alt=\"\" width=\"113\" height=\"40\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_eLi5Mt2ewpvLxEcC3FzXlQ.png 113w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_eLi5Mt2ewpvLxEcC3FzXlQ-18x6.png 18w\" sizes=\"(max-width: 113px) 100vw, 113px\" \/><\/div><\/figure><p id=\"34de\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Note that an orthogonal matrix is a square matrix whose product with its own transpose is the identity matrix. 
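The decomposition A = UΣVᵀ and the orthogonality of U and V can be checked numerically (a small sketch; note that numpy.linalg.svd returns Vᵀ rather than V):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 3))  # any real m x n matrix

U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)  # the m x n diagonal matrix of singular values

assert np.allclose(U @ Sigma @ Vt, A)    # A = U Sigma V^T
assert np.allclose(U @ U.T, np.eye(4))   # U is m x m orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(3)) # V is n x n orthogonal
```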
A diagonal matrix is a matrix in which all entries other than the diagonal are zero.<\/p><p id=\"f9c0\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Below, I will again use the iris dataset to show you how to apply SVD.<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"39f7\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">import numpy as np<br \/>import pandas as pd<br \/>import matplotlib.pyplot as plt<\/span><span id=\"9e92\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">url = \"<a class=\"au sh\" href=\"https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/iris\/iris.data\" target=\"_blank\" rel=\"noopener ugc nofollow\">https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/iris\/iris.data<\/a>\"<br \/># load dataset into Pandas DataFrame<br \/>df = pd.read_csv(url, names=['sepal length','sepal width','petal length','petal width','target'])<\/span><span id=\"5076\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\"># Only the X variables<br \/>data = df[['sepal length','sepal width','petal length','petal width']]<\/span><span id=\"11ef\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">#calculate SVD<br \/>n = 2 # We will take two Singular Values<br \/>U, s, V = np.linalg.svd(data)<\/span><span id=\"7294\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\"># diag() builds the diagonal matrix of the first n singular values<br \/>Sig = np.diag(s[:n])<br \/>newdata = U[:,:n]<br \/>newdata = pd.DataFrame(newdata)<br 
\/>newdata.columns=['SVD1','SVD2']<br \/>newdata.head()<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wx\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16093 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_r4_EB62AwuXQkF52Bd36gA.png\" alt=\"\" width=\"168\" height=\"168\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_r4_EB62AwuXQkF52Bd36gA.png 168w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_r4_EB62AwuXQkF52Bd36gA-150x150.png 150w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_r4_EB62AwuXQkF52Bd36gA-12x12.png 12w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_r4_EB62AwuXQkF52Bd36gA-100x100.png 100w\" sizes=\"(max-width: 168px) 100vw, 168px\" \/><\/div><\/figure><p id=\"7dc6\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">You can compare the result of SVD to that of PCA. 
The two obtain similar results.<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"bd72\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\"># Add the actual target to the data in order to plot it<br \/>newdata['target']=df['target']<\/span><span id=\"9201\" class=\"gc vp vq jl vr b do wg wh wi wj wk vt l vu\" data-selectable-paragraph=\"\">fig = plt.figure()<br \/>ax = fig.add_subplot(1,1,1) <br \/>ax.set_xlabel('SVD 1') <br \/>ax.set_ylabel('SVD 2') <br \/>ax.set_title('SVD') <br \/>targets = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']<br \/>colors = ['r', 'g', 'b']<br \/>for target, color in zip(targets,colors):<br \/> indicesToKeep = newdata['target'] == target<br \/> ax.scatter(newdata.loc[indicesToKeep, 'SVD1']<br \/> , newdata.loc[indicesToKeep, 'SVD2']<br \/> , c = color<br \/> , s = 50)<br \/>ax.legend(targets)<br \/>ax.grid()<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wy\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16094 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_LHcoeSP9ZQhpgwRF-K4R0w.png\" alt=\"\" width=\"416\" height=\"288\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_LHcoeSP9ZQhpgwRF-K4R0w.png 416w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_LHcoeSP9ZQhpgwRF-K4R0w-300x208.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_LHcoeSP9ZQhpgwRF-K4R0w-18x12.png 18w\" sizes=\"(max-width: 416px) 100vw, 416px\" \/><\/div><\/figure>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-a138c4b 
elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"a138c4b\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-645c2b9\" data-id=\"645c2b9\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-90c2f2a elementor-widget elementor-widget-heading\" data-id=\"90c2f2a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"><span class=\"ez-toc-section\" id=\"Encastrement-du-voisin-stochastique-distribue-en-t-t-SNE\"><\/span>t-Distributed Stochastic Neighbor Embedding (t-SNE)<span class=\"ez-toc-section-end\"><\/span><\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-2ff3532 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"2ff3532\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-9445532\" data-id=\"9445532\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-9dbdf44 elementor-widget elementor-widget-text-editor\" data-id=\"9dbdf44\" data-element_type=\"widget\" 
data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"30f2\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">t-SNE est d\u00e9velopp\u00e9 par\u00a0Laurens van der Maaten et Geoggrey Hinton. Il s&rsquo;agit d&rsquo;un <a href=\"https:\/\/complex-systems-ai.com\/en\/algorithmic\/\">algorithme<\/a> d&rsquo;apprentissage automatique pour la visualisation qui pr\u00e9sente l&rsquo;int\u00e9gration de donn\u00e9es de grande dimension dans un espace de faible dimension \u00e0 deux ou trois dimensions.<\/p><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wm\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-16095\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_0MgoGw_4-lw5Wj85-Ql48Q-300x173.png\" alt=\"\" width=\"300\" height=\"173\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_0MgoGw_4-lw5Wj85-Ql48Q-300x173.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_0MgoGw_4-lw5Wj85-Ql48Q-18x10.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_0MgoGw_4-lw5Wj85-Ql48Q.png 396w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/div><\/figure><p id=\"fed2\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Quelle est la meilleure fa\u00e7on de pr\u00e9senter le rouleau suisse tridimensionnel ci-dessus en bidimensionnel\u00a0? Intuitivement, nous voulons \u00ab\u00a0d\u00e9rouler\u00a0\u00bb le rouleau suisse en un g\u00e2teau plat. 
In mathematical terms, this means that similar points should become nearby points and dissimilar points should become distant points.<\/p><p id=\"a0da\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">The following figure shows another example. It is a three-dimensional tetrahedron with data points clustered at its vertices. If we simply flatten the three-dimensional graph into two dimensions, as panel (A) does, it does not work well because group (A) becomes the central cluster. In contrast, panel (B) is probably a better 2D layout: it preserves the large distances between groups (A)&#8211;(E) while keeping the local distances between the points within each group. t-SNE, a nonlinear dimension-reduction technique, is designed to preserve local neighborhoods. If a set of points cluster together on a t-SNE plot, we can be fairly confident that these points are close to each other.<\/p><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go la\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16096 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_s9oDAdzGHD20C034aOuP7w.png\" alt=\"\" width=\"567\" height=\"427\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_s9oDAdzGHD20C034aOuP7w.png 567w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_s9oDAdzGHD20C034aOuP7w-300x226.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_s9oDAdzGHD20C034aOuP7w-16x12.png 16w\" sizes=\"(max-width: 567px) 100vw, 567px\" \/><\/div><\/figure><p id=\"8234\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">t-SNE models the similarities between points. How does it define similarity? First, it is based on the Euclidean distance between points Xi and Xj. Second, it is expressed as a conditional probability: \u201cthe similarity of data point i to data point j is the conditional probability p that point i would pick point j as its neighbor if neighbors were chosen in proportion to their probability density under a Gaussian distribution.\u201d 
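This Gaussian-neighbor definition can be made concrete with a small NumPy sketch. It is a simplified illustration, not the article's code: real t-SNE tunes a separate bandwidth sigma_i per point via the perplexity parameter, while here a single fixed sigma is assumed:

```python
import numpy as np

def conditional_p(X, sigma=1.0):
    """Row i holds p_{j|i}: the probability that point i picks point j
    as its neighbor under a Gaussian of fixed bandwidth sigma."""
    # Squared Euclidean distances between all pairs of points
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    num = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(num, 0.0)  # a point is never its own neighbor
    return num / num.sum(axis=1, keepdims=True)

# Two nearby points and one far-away point
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
P = conditional_p(X)
# Each row sums to 1; the two nearby points pick each other with
# much higher probability than they pick the distant point.
```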
In the conditional expression below, if point j is closer to point i than other points are, it has a higher probability of being chosen (note the negative sign).<\/p><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go wz\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16097 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_7-Vz3Qh3wdnVtkRR799cXw.png\" alt=\"\" width=\"298\" height=\"74\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_7-Vz3Qh3wdnVtkRR799cXw.png 298w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_7-Vz3Qh3wdnVtkRR799cXw-18x4.png 18w\" sizes=\"(max-width: 298px) 100vw, 298px\" \/><\/div><\/figure><p id=\"9777\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">t-SNE aims to match the conditional probability p between i and j above as closely as possible with a low-dimensional counterpart q between points Yi and Yj, shown below. The probability q follows a heavy-tailed Student t-distribution, which is where the \u201ct\u201d in t-SNE comes from.<\/p><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go xa\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16098 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_ywDzSjc3gaC7ihlDxHKW_Q.png\" alt=\"\" width=\"266\" height=\"79\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_ywDzSjc3gaC7ihlDxHKW_Q.png 266w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_ywDzSjc3gaC7ihlDxHKW_Q-18x5.png 18w\" sizes=\"(max-width: 266px) 100vw, 266px\" \/><\/div><\/figure><p id=\"6133\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">The next step is to find the Yi so that the distribution q matches the distribution p as closely as possible. 
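Concretely, \u201cas closely as possible\u201d is measured in the standard t-SNE formulation by the Kullback&#8211;Leibler divergence between the two distributions, stated here for reference:

```latex
C = \mathrm{KL}(P \,\|\, Q) = \sum_{i} \sum_{j} p_{ij} \log \frac{p_{ij}}{q_{ij}}
```

Minimizing C penalizes placing similar points (large p) far apart (small q), which is exactly the objective that gradient descent optimizes.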
t-SNE uses gradient descent, an optimization technique, to find these values.<\/p><p id=\"67bc\" class=\"pw-post-body-paragraph li lj jl lk b ll lm km ln lo lp kp lq lr ls lt lu lv lw lx ly lz ma mb mc md je gc\" data-selectable-paragraph=\"\">Below, I show how the t-SNE technique is applied to the iris dataset.<\/p><pre class=\"lb lc ld le gz vn bt vo\"><span id=\"8589\" class=\"gc vp vq jl vr b do vs vt l vu\" data-selectable-paragraph=\"\">from sklearn.manifold import TSNE<br \/>from sklearn.datasets import load_iris<br \/>from sklearn.decomposition import PCA<br \/>import matplotlib.pyplot as plt<br \/>iris = load_iris()<br \/>X_tsne = TSNE(learning_rate=100).fit_transform(iris.data)<br \/>X_pca = PCA().fit_transform(iris.data)<br \/>plt.figure(figsize=(10, 5))<br \/>plt.subplot(121)<br \/>plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=iris.target)<br \/>plt.subplot(122)<br \/>plt.scatter(X_pca[:, 0], X_pca[:, 1], c=iris.target)<\/span><\/pre><figure class=\"lb lc ld le gz lf gn go paragraph-image\"><div class=\"gn go xb\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-16099 size-full\" src=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_7FZOWHdYI5L_hThheKlJ7Q.png\" alt=\"\" width=\"617\" height=\"325\" title=\"\" srcset=\"https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_7FZOWHdYI5L_hThheKlJ7Q.png 617w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_7FZOWHdYI5L_hThheKlJ7Q-300x158.png 300w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_7FZOWHdYI5L_hThheKlJ7Q-18x9.png 18w, https:\/\/complex-systems-ai.com\/wp-content\/uploads\/2022\/05\/1_7FZOWHdYI5L_hThheKlJ7Q-600x316.png 600w\" sizes=\"(max-width: 617px) 100vw, 617px\" 
\/><\/div><\/figure>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Data Analysis Wiki Home Page A high-dimensional dataset is a dataset that has a large number of columns \u2026 <\/p>","protected":false},"author":1,"featured_media":0,"parent":15503,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-16071","page","type-page","status-publish","hentry"],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/complex-systems-ai.com\/en\/wp-json\/wp\/v2\/pages\/16071","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/complex-systems-ai.com\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/complex-systems-ai.com\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/complex-systems-ai.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/complex-systems-ai.com\/en\/wp-json\/wp\/v2\/comments?post=16071"}],"version-history":[{"count":4,"href":"https:\/\/complex-systems-ai.com\/en\/wp-json\/wp\/v2\/pages\/16071\/revisions"}],"predecessor-version":[{"id":17888,"href":"https:\/\/complex-systems-ai.com\/en\/wp-json\/wp\/v2\/pages\/16071\/revisions\/17888"}],"up":[{"embeddable":true,"href":"https:\/\/complex-systems-ai.com\/en\/wp-json\/wp\/v2\/pages\/15503"}],"wp:attachment":[{"href":"https:\/\/complex-systems-ai.com\/en\/wp-json\/wp\/v2\/media?parent=16071"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}