The Similarity Measures
When dealing with a matrix $\mathbf{X}$, it is often important to determine whether its columns or rows exhibit any relationships. This can be assessed using various association coefficients, such as the matrix product $\mathbf{X}^{\top}\mathbf{X}$. Other measures include the Pearson correlation, cosine similarity, and covariance. Let's explore these measures in detail for a pair of columns $j$ and $k$.
Pearson Correlation
The Pearson correlation between columns $j$ and $k$ is defined as:

$$r_{jk} = \frac{\sum_i (x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)}{\lVert \mathbf{x}_j - \bar{x}_j \rVert \, \lVert \mathbf{x}_k - \bar{x}_k \rVert}$$

Here, $(j, k)$ represent a pair (e.g., a pair of industries), $\bar{x}_j$ denotes the mean of column $j$, and the square norm is defined as $\lVert \mathbf{x}_j \rVert^2 = \sum_i x_{ij}^2$.
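As a minimal sketch of this definition (using NumPy; the sample vectors are hypothetical), the Pearson correlation of two columns can be computed directly by demeaning and normalizing:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation: dot product of demeaned columns over their norms."""
    xc = x - x.mean()  # column j with its mean removed
    yc = y - y.mean()  # column k with its mean removed
    return (xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
print(pearson(x, y))  # perfectly linear pair -> 1.0
```

The result agrees with `np.corrcoef(x, y)[0, 1]`, which implements the same formula.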
Cosine Similarity
Cosine similarity is given by:

$$c_{jk} = \frac{\sum_i x_{ij} x_{ik}}{\lVert \mathbf{x}_j \rVert \, \lVert \mathbf{x}_k \rVert}$$

It can be noted that $r_{jk}$ is simply the cosine similarity of the demeaned columns, so $r_{jk} = c_{jk}$ whenever the columns have zero mean.
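A short sketch of this formula and of its relation to the Pearson correlation (the data vectors are hypothetical; shifting a column changes its cosine similarity but not its correlation):

```python
import numpy as np

def cosine(x, y):
    """Cosine similarity: dot product over the product of the norms."""
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 5.0  # linear in x, but shifted away from the origin

print(cosine(x, y))                            # less than 1 because of the shift
print(cosine(x - x.mean(), y - y.mean()))      # demeaned columns -> Pearson correlation
```

The second value equals `np.corrcoef(x, y)[0, 1]`, illustrating that Pearson correlation is cosine similarity applied after centering.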
Sample Covariance
The sample covariance is calculated as:

$$s_{jk} = \frac{1}{n-1} \sum_i (x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)$$

where $n$ is the number of observations.
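A sketch of this calculation, assuming the $n-1$ denominator given above (the sample vectors are hypothetical):

```python
import numpy as np

def sample_cov(x, y):
    """Sample covariance: mean cross-product of demeaned columns, n-1 denominator."""
    n = len(x)
    return ((x - x.mean()) @ (y - y.mean())) / (n - 1)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
print(sample_cov(x, y))  # matches np.cov(x, y)[0, 1]
```

NumPy's `np.cov` uses the same $n-1$ denominator by default (`ddof=1`).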
These measures are interrelated, as evident from their formulas. In special cases, the matrix product $\mathbf{X}^{\top}\mathbf{X}$, the covariance matrix, the cosine similarity matrix, or the Pearson correlation matrix may become identical to one another.
If the column variables are centered (mean is zero), the covariance matrix is $\mathbf{S} = \frac{1}{n-1}\mathbf{X}^{\top}\mathbf{X}$, with $s_{jk} = \frac{1}{n-1}\sum_i x_{ij} x_{ik}$. When columns are z-standardized (demeaned and divided by the standard deviation), the Pearson correlation matches the covariance: $\mathbf{R} = \mathbf{S} = \frac{1}{n-1}\mathbf{X}^{\top}\mathbf{X}$, with $r_{jk} = s_{jk} = \frac{1}{n-1}\sum_i x_{ij} x_{ik}$. If columns are unit-scaled (sum of squares is 1), the cosine similarity matrix is $\mathbf{C} = \mathbf{X}^{\top}\mathbf{X}$, with $c_{jk} = \sum_i x_{ij} x_{ik}$. Centering before unit scaling, i.e., $x_{ij} \mapsto (x_{ij} - \bar{x}_j)/\lVert \mathbf{x}_j - \bar{x}_j \rVert$, results in the Pearson correlation matrix equaling the cosine similarity matrix.
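These special-case identities can be checked numerically. The sketch below (with a randomly generated, hypothetical data matrix) centers the columns, then unit-scales them, and verifies that the matrix products recover the covariance and correlation matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))  # hypothetical data: 50 observations, 3 columns
n = X.shape[0]

Xc = X - X.mean(axis=0)               # centered columns (mean zero)
Xu = Xc / np.linalg.norm(Xc, axis=0)  # then unit-scaled (sum of squares = 1)

# Centered columns: (1/(n-1)) X^T X is the sample covariance matrix.
S = Xc.T @ Xc / (n - 1)
print(np.allclose(S, np.cov(X, rowvar=False)))              # True

# Centered + unit-scaled columns: X^T X is the Pearson correlation matrix.
R = Xu.T @ Xu
print(np.allclose(R, np.corrcoef(X, rowvar=False)))         # True
```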
In practice, empirical data ($\mathbf{X}$) often do not meet these special conditions, leading to differences among these measures. For instance, when counting populations or total nominal values of output or trade, the matrix is typically not centered. While normalization or standardization could be applied, it is crucial to evaluate whether such transformations are justified in the specific empirical context.
By understanding these similarity measures, we can better analyze the relationships within data matrices, providing valuable insights into the structure of the data.