1 Overview
Most of this course focuses on supervised learning methods such as regression and classification.
In that setting we observe a set of features \(X_1,X_2,...,X_p\) for each object, as well as a response or outcome variable \(Y\). The goal is then to predict \(Y\) using \(X_1,X_2,...,X_p\).
In this lecture we instead focus on unsupervised learning, where we observe only the features \(X_1,X_2,...,X_p\). We are not interested in prediction, because we do not have an associated response variable \(Y\).
1.1 Goals of unsupervised learning
The goal is to discover interesting things about the measurements: is there an informative way to visualize the data? Can we discover subgroups among the variables or among the observations?
We discuss two methods:
principal components analysis, a tool used for data visualization or data pre-processing before supervised techniques are applied, and
clustering, a broad class of methods for discovering unknown subgroups in data.
1.2 Challenge of unsupervised learning
Unsupervised learning is more subjective than supervised learning, as there is no simple goal for the analysis, such as prediction of a response.
But techniques for unsupervised learning are of growing importance in a number of fields:
subgroups of breast cancer patients grouped by their gene expression measurements,
groups of shoppers characterized by their browsing and purchase histories,
movies grouped by the ratings assigned by movie viewers.
1.3 Another advantage
It is often easier to obtain unlabeled data — from a lab instrument or a computer — than labeled data, which can require human intervention.
For example it is difficult to automatically assess the overall sentiment of a movie review: is it favorable or not?
2 Principal Components Analysis (PCA)
PCA produces a low-dimensional representation of a dataset. It finds a sequence of linear combinations of the variables that have maximal variance, and are mutually uncorrelated.
Apart from producing derived variables for use in supervised learning problems, PCA also serves as a tool for data visualization.
The first principal component of a set of features \(X_1,X_2,...,X_p\) is the normalized linear combination of the features \[
Z_1 = \phi_{11} X_1 + \phi_{21} X_2 + \cdots + \phi_{p1} X_p
\] that has the largest variance. By normalized, we mean that \(\sum_{j=1}^p \phi_{j1}^2 = 1\).
We refer to the elements \(\phi_{11}, \ldots, \phi_{p1}\) as the loadings of the first principal component; together, the loadings make up the principal component loading vector, \(\phi_1 = (\phi_{11}, \ldots, \phi_{p1})\).
We constrain the loadings so that their sum of squares is equal to one, since otherwise setting these elements to be arbitrarily large in absolute value could result in an arbitrarily large variance.
2.1 Computation of PCs
Suppose we have an \(n \times p\) data set \(\boldsymbol{X}\). Since we are only interested in variance, we assume that each of the variables in \(\boldsymbol{X}\) has been centered to have mean zero (that is, the column means of \(\boldsymbol{X}\) are zero).
We then look for the linear combination of the sample feature values of the form \[
z_{i1} = \phi_{11} x_{i1} + \phi_{21} x_{i2} + \cdots + \phi_{p1} x_{ip}
\tag{1}\] for \(i=1,\ldots,n\) that has largest sample variance, subject to the constraint that \(\sum_{j=1}^p \phi_{j1}^2 = 1\).
Since each of the \(x_{ij}\) has mean zero, then so does \(z_{i1}\) (for any values of \(\phi_{j1}\)). Hence the sample variance of the \(z_{i1}\) can be written as \(\frac{1}{n} \sum_{i=1}^n z_{i1}^2\).
Plugging in (Equation 1), the first principal component loading vector solves the optimization problem \[
\max_{\phi_{11},\ldots,\phi_{p1}} \frac{1}{n} \sum_{i=1}^n \left( \sum_{j=1}^p \phi_{j1} x_{ij} \right)^2 \text{ subject to } \sum_{j=1}^p \phi_{j1}^2 = 1.
\]
This problem can be solved via a singular-value decomposition (SVD) of the matrix \(\boldsymbol{X}\), a standard technique in linear algebra.
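As a concrete sketch of this connection (using a small simulated matrix, not data from the course), the first loading vector is the first right singular vector of the centered data matrix, and it matches what `prcomp()` returns up to sign:

```r
# Sketch: first principal component via the SVD of the centered data
# matrix (simulated data; any numeric matrix works the same way).
set.seed(1)
X  <- matrix(rnorm(100 * 3), nrow = 100)
Xc <- scale(X, center = TRUE, scale = FALSE)   # column means are now zero

sv   <- svd(Xc)
phi1 <- sv$v[, 1]       # first right singular vector = loading vector phi_1
z1   <- Xc %*% phi1     # first principal component scores z_{i1}

sum(phi1^2)                            # normalization constraint: equals 1
sum(z1^2) / nrow(Xc)                   # sample variance (1/n) * sum(z_{i1}^2)
```

Because a direction is only determined up to sign, `sv$v[, 1]` and the first column of `prcomp(Xc)$rotation` may differ by a factor of \(-1\).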
We refer to \(Z_1\) as the first principal component, with realized values \(z_{11}, \ldots, z_{n1}\).
Geometry of PCA
The loading vector \(\phi_1\) with elements \(\phi_{11}, \phi_{21}, \ldots, \phi_{p1}\) defines a direction in feature space along which the data vary the most.
If we project the \(n\) data points \(x_1, \ldots, x_n\) onto this direction, the projected values are the principal component scores \(z_{11},\ldots,z_{n1}\) themselves.
2.2 Further PCs
The second principal component is the linear combination of \(X_1, \ldots, X_p\) that has maximal variance among all linear combinations that are uncorrelated with \(Z_1\).
The second principal component scores \(z_{12}, z_{22}, \ldots, z_{n2}\) take the form \[
z_{i2} = \phi_{12} x_{i1} + \phi_{22} x_{i2} + \cdots + \phi_{p2} x_{ip},
\] where \(\phi_2\) is the second principal component loading vector, with elements \(\phi_{12}, \phi_{22}, \ldots , \phi_{p2}\).
It turns out that constraining \(Z_2\) to be uncorrelated with \(Z_1\) is equivalent to constraining the direction \(\phi_2\) to be orthogonal (perpendicular) to the direction \(\phi_1\). And so on.
The principal component directions \(\phi_1, \phi_2, \phi_3, \ldots\) are the ordered sequence of right singular vectors of the matrix \(\boldsymbol{X}\), and the variances of the components are \(\frac{1}{n}\) times the squares of the singular values. There are at most \(\min(n-1, p)\) principal components.
2.3 USArrests data
For each of the fifty states in the United States, the data set contains the number of arrests per 100,000 residents for each of three crimes: Assault, Murder, and Rape. We also record UrbanPop (the percent of the population in each state living in urban areas).
The principal component score vectors have length \(n = 50\), and the principal component loading vectors have length \(p = 4\).
PCA was performed after standardizing each variable to have mean zero and standard deviation one.
PCA loadings:

             PC1         PC2
Murder   0.5358995  -0.4181809
Assault  0.5831836  -0.1879856
UrbanPop 0.2781909   0.8728062
Rape     0.5434321   0.1673186
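These loadings can be reproduced with `prcomp()` on the built-in USArrests data (the signs of entire columns may be flipped relative to the table, since each direction is only determined up to sign):

```r
# Reproduce the loadings table: scale. = TRUE standardizes each variable
# to mean zero and standard deviation one before the PCA.
pr.out <- prcomp(USArrests, scale. = TRUE)
pr.out$rotation[, 1:2]   # loading vectors phi_1 and phi_2 (length p = 4)
head(pr.out$x[, 1:2])    # score vectors z_1 and z_2 (length n = 50)
```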
2.4 PCA finds the hyperplane closest to the observations
The first principal component loading vector has a very special property: it defines the line in \(p\)-dimensional space that is closest to the \(n\) observations (using average squared Euclidean distance as a measure of closeness).
The notion of principal components as the dimensions that are closest to the \(n\) observations extends beyond just the first principal component.
For instance, the first two principal components of a data set span the plane that is closest to the \(n\) observations, in terms of average squared Euclidean distance.
2.5 Scaling
If the variables are in different units, scaling each to have standard deviation equal to one is recommended.
If they are in the same units, you might or might not scale the variables.
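The USArrests variables illustrate why scaling matters: they are measured in different units and have very different variances, so without scaling the first component would be dominated by Assault.

```r
# Sample variances of the four USArrests variables: Assault's variance is
# far larger than the others, so unscaled PCA would essentially track it.
apply(USArrests, 2, var)
# roughly: Murder ~ 19, Assault ~ 6945, UrbanPop ~ 210, Rape ~ 88
```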
2.6 Proportion of variance explained (PVE)
To understand the strength of each component, we are interested in knowing the proportion of variance explained (PVE) by each one.
The total variance present in a data set (assuming that the variables have been centered to have mean zero) is defined as \[
\sum_{j=1}^p \text{Var}(X_j) = \sum_{j=1}^p \frac{1}{n} \sum_{i=1}^n x_{ij}^2,
\] and the variance explained by the \(m\)th principal component is \[
\text{Var}(Z_m) = \frac{1}{n} \sum_{i=1}^n z_{im}^2.
\]
It can be shown that \[
\sum_{j=1}^p \text{Var}(X_j) = \sum_{m=1}^M \text{Var}(Z_m),
\] with \(M = \min(n-1, p)\).
Therefore, the PVE of the \(m\)th principal component is given by the positive quantity between 0 and 1 \[
\frac{\sum_{i=1}^n z_{im}^2}{\sum_{j=1}^p \sum_{i=1}^n x_{ij}^2}.
\]
The PVEs sum to one. We sometimes display the cumulative PVEs.
A scree plot of the PVEs can be used as a guide: we look for an elbow.
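For the USArrests PCA, the PVEs follow directly from the standard deviations returned by `prcomp()`:

```r
# PVE and cumulative PVE for the standardized USArrests data.
pr.out <- prcomp(USArrests, scale. = TRUE)
pve <- pr.out$sdev^2 / sum(pr.out$sdev^2)
pve          # about 0.62, 0.25, 0.09, 0.04
cumsum(pve)  # cumulative PVE; the final entry is 1
```

A scree plot is then just `plot(pve, type = "b")`.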
3 Clustering
PCA looks for a low-dimensional representation of the observations that explains a good fraction of the variance.
Clustering looks for homogeneous subgroups among the observations.
3.1 Two clustering methods
In K-means clustering, we seek to partition the observations into a pre-specified number of clusters.
In hierarchical clustering, we do not know in advance how many clusters we want; in fact, we end up with a tree-like visual representation of the observations, called a dendrogram, that allows us to view at once the clusterings obtained for each possible number of clusters, from 1 to \(n\).
3.2 K-means clustering
Let \(C_1,\ldots,C_K\) denote sets containing the indices of the observations in each cluster. These sets satisfy two properties:
\(C_1 \cup C_2 \cup \ldots \cup C_K = \{1,\ldots,n\}\). In other words, each observation belongs to at least one of the \(K\) clusters.
\(C_k \cap C_{k'} = \emptyset\) for all \(k \ne k'\). In other words, the clusters are non-overlapping: no observation belongs to more than one cluster.
For instance, if the \(i\)th observation is in the \(k\)th cluster, then \(i \in C_k\).
The idea behind \(K\)-means clustering is that a good clustering is one for which the within-cluster variation is as small as possible.
The within-cluster variation for cluster \(C_k\) is a measure WCV(\(C_k\)) of the amount by which the observations within a cluster differ from each other.
Hence we want to solve the problem \[
\min_{C_1,\ldots,C_K} \left\{ \sum_{k=1}^K \text{WCV}(C_k) \right\}.
\] In words, this formula says that we want to partition the observations into \(K\) clusters such that the total within-cluster variation, summed over all \(K\) clusters, is as small as possible.
Typically we use Euclidean distance \[
\text{WCV}(C_k) = \frac{1}{|C_k|} \sum_{i, i' \in C_k} \sum_{j=1}^p (x_{ij} - x_{i'j})^2,
\] where \(|C_k|\) denotes the number of observations in the \(k\)th cluster.
Therefore the optimization problem that defines \(K\)-means clustering is \[
\min_{C_1,\ldots,C_K} \left\{ \sum_{k=1}^K \frac{1}{|C_k|} \sum_{i, i' \in C_k} \sum_{j=1}^p (x_{ij} - x_{i'j})^2 \right\}.
\]
\(K\)-means clustering algorithm:
Randomly assign a number, from 1 to \(K\), to each of the observations. These serve as the initial cluster assignments for the observations.
Iterate until the cluster assignments stop changing:
2.a. For each of the \(K\) clusters, compute the cluster centroid. The \(k\)th cluster centroid is the vector of the \(p\) feature means for the observations in the \(k\)th cluster.
2.b. Assign each observation to the cluster whose centroid is closest (where closest is defined using Euclidean distance).
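The two steps above can be sketched directly in R (a minimal illustration, assuming a numeric matrix `x` and that no cluster becomes empty; `my_kmeans` is a hypothetical helper, and in practice one would call `kmeans()`):

```r
# Minimal sketch of the K-means iteration: centroids, then reassignment.
my_kmeans <- function(x, K, max_iter = 50) {
  assign <- sample(K, nrow(x), replace = TRUE)   # step 1: random labels
  for (it in seq_len(max_iter)) {
    # step 2a: centroid of cluster k = vector of the p feature means
    cen <- sapply(seq_len(K),
                  function(k) colMeans(x[assign == k, , drop = FALSE]))
    # step 2b: reassign each observation to the closest centroid
    # (squared Euclidean distance; d is an n x K matrix)
    d <- sapply(seq_len(K), function(k) colSums((t(x) - cen[, k])^2))
    new_assign <- max.col(-d)                    # argmin over each row
    if (all(new_assign == assign)) break         # stop when stable
    assign <- new_assign
  }
  assign
}
```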
This algorithm is guaranteed to decrease the value of the objective at each step. Why? Note that \[
\frac{1}{|C_k|} \sum_{i, i' \in C_k} \sum_{j=1}^p (x_{ij} - x_{i'j})^2 = 2 \sum_{i \in C_k} \sum_{j=1}^p (x_{ij} - \bar{x}_{kj})^2,
\] where \(\bar{x}_{kj} = \frac{1}{|C_k|} \sum_{i \in C_k} x_{ij}\) is the mean for feature \(j\) in cluster \(C_k\). However, it is not guaranteed to give the global minimum.
For this reason it is important to run the algorithm from several different starting values and keep the solution with the smallest objective.
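In base R this is handled by the `nstart` argument of `kmeans()`, which runs the algorithm from many random initial assignments and keeps the best solution (simulated two-group data for illustration):

```r
# K-means with 20 random starts; kmeans() retains the run with the
# smallest total within-cluster variation.
set.seed(4)
x <- matrix(rnorm(50 * 2), ncol = 2)
x[1:25, 1] <- x[1:25, 1] + 3      # shift half the points: two true groups
km.out <- kmeans(x, centers = 2, nstart = 20)
km.out$cluster                    # cluster assignment of each observation
km.out$tot.withinss               # total within-cluster variation (objective)
```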
3.3 Hierarchical clustering
\(K\)-means clustering requires us to pre-specify the number of clusters \(K\). This can be a disadvantage (later we discuss strategies for choosing \(K\)).
Hierarchical clustering is an alternative approach which does not require that we commit to a particular choice of \(K\).
We describe bottom-up or agglomerative clustering. This is the most common type of hierarchical clustering, and refers to the fact that a dendrogram is built starting from the leaves and combining clusters up to the trunk.
Hierarchical clustering algorithm
Start with each point in its own cluster.
Identify the closest two clusters and merge them.
Repeat.
Ends when all points are in a single cluster.
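The algorithm above is what `hclust()` implements; `cutree()` then cuts the resulting dendrogram to obtain a desired number of clusters (simulated data for illustration):

```r
# Agglomerative clustering with Euclidean distance and complete linkage.
set.seed(2)
x <- matrix(rnorm(45 * 2), ncol = 2)                 # 45 observations in 2D
hc.complete <- hclust(dist(x), method = "complete")
labels3 <- cutree(hc.complete, k = 3)                # cut into three clusters
table(labels3)                                       # cluster sizes
# plot(hc.complete) would draw the dendrogram
```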
An example.
45 observations generated in 2-dimensional space. In reality there are three distinct classes, shown in separate colors. However, we will treat these class labels as unknown and will seek to cluster the observations in order to discover the classes from the data.
Left: Dendrogram obtained from hierarchically clustering the data described above, with complete linkage and Euclidean distance. Center: The dendrogram from the left-hand panel, cut at a height of 9 (indicated by the dashed line). This cut results in two distinct clusters, shown in different colors. Right: The dendrogram from the left-hand panel, now cut at a height of 5. This cut results in three distinct clusters, shown in different colors. Note that the colors were not used in clustering, but are simply used for display purposes in this figure.
Types of linkage.
Complete: Maximal inter-cluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the largest of these dissimilarities.
Single: Minimal inter-cluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the smallest of these dissimilarities.
Average: Mean inter-cluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the average of these dissimilarities.
Centroid: Dissimilarity between the centroid for cluster A (a mean vector of length \(p\)) and the centroid for cluster B. Centroid linkage can result in undesirable inversions.
Choice of dissimilarity measure.
So far we have used Euclidean distance.
An alternative is correlation-based distance which considers two observations to be similar if their features are highly correlated.
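A sketch of correlation-based distance in R (note that `cor()` correlates columns, so the observation matrix is transposed first; simulated data for illustration):

```r
# Correlation-based distance: observations with highly correlated feature
# profiles are treated as similar, regardless of magnitude.
set.seed(3)
x <- matrix(rnorm(20 * 10), ncol = 10)   # 20 observations, 10 features
dd <- as.dist(1 - cor(t(x)))             # distance = 1 - correlation
hc.corr <- hclust(dd, method = "average")
```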
Practical issues.
Scaling of the variables matters!
What dissimilarity measure should be used?
What type of linkage should be used?
How many clusters to choose?
Which features should we use to drive the clustering?
4 Conclusions
Unsupervised learning is important for understanding the variation and grouping structure of a set of unlabeled data, and can be a useful pre-processor for supervised learning.
It is intrinsically more difficult than supervised learning because there is no gold standard (like an outcome variable) and no single objective (like test set accuracy).
It is an active field of research, with many recently developed tools such as self-organizing maps, independent components analysis (ICA) and spectral clustering. See ESL Chapter 14.