Continuing from the previous chapters: we have so far published four articles in the series "Constructing a Powerful Crypto-Asset Portfolio Using Multi-Factor Models": "Theoretical Basics", "Data Preprocessing", "Factor Validity Testing", and "Large Category Factor Analysis: Factor Synthesis".
In the previous article, we explained in detail the problem of factor collinearity (high correlation between factors). Before synthesizing large categories of factors, factor orthogonalization needs to be performed to eliminate collinearity.
Through factor orthogonalization, the directions of the original factors are adjusted so that they are orthogonal to each other ($$[\vec{f_i},\vec{f_j}]=0$$, that is, the two vectors are perpendicular). In essence, this is a rotation of the original factors in the coordinate system. The rotation changes neither the linear relationships between the factors nor the information they contain; the correlation between the new factors is zero (for demeaned factors, a zero inner product is equivalent to zero correlation), and the factors' explanatory power over returns is preserved.
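As a minimal numerical sketch of the last point (synthetic data, not from the article): for demeaned vectors, a zero inner product is the same thing as zero Pearson correlation.

# Minimal sketch: for demeaned vectors, zero inner product <=> zero correlation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 0.8 * x + rng.standard_normal(500)        # two correlated "factors"
x, y = x - x.mean(), y - y.mean()             # demean, as the z-scored factors below are

y_orth = y - (x @ y) / (x @ x) * x            # remove the component of y along x
print(x @ y_orth)                             # ~0: orthogonal
print(np.corrcoef(x, y_orth)[0, 1])           # ~0: uncorrelated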
1. Mathematical derivation of factor orthogonalization
We establish the factor orthogonalization system from the perspective of multi-factor cross-sectional regression.
On each cross-section we observe the values of all tokens in the market on each factor. Let N denote the number of tokens on the cross-section and K the number of factors; $$f^k=[f_1^k,f_2^k,\dots,f_N^k]'$$ denotes the values of all tokens on the k-th factor. Each factor has been z-score normalized, so that $$\bar{f^k}=0,\ ||f^k||=1$$.
$$F_{N\times K}=[f^1,f^2,\dots,f^K]$$ is the matrix formed by the K factor column vectors on the cross-section. The factors are assumed to be linearly independent (no pair is 100% or -100% correlated), which is the theoretical prerequisite for orthogonalization.
$$$ F_{N\times K} =\begin{bmatrix} f_1^1 & f_1^2 & \dots & f_1^K\\ f_2^1 & f_2^2 & \dots & f_2^K\\ \vdots & \vdots & \ddots & \vdots\\ f_N^1 & f_N^2 & \dots & f_N^K \end{bmatrix} (1) $$$
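A minimal sketch of this setup on synthetic data (the DataFrame f, the sizes N and K, and the column names are illustrative assumptions, not from the article): each factor column is demeaned and scaled to unit norm as assumed above.

# Build an N x K factor matrix with mean 0 and unit norm in every column.
import numpy as np
import pandas as pd

N, K = 100, 3                                  # 100 tokens, 3 factors (illustrative)
rng = np.random.default_rng(1)
f = pd.DataFrame(rng.standard_normal((N, K)), columns=["f1", "f2", "f3"])

f = f - f.mean()                               # demean each factor
f = f / np.linalg.norm(f, axis=0)              # unit norm for each factor
F = f.values                                   # the matrix F_{N x K}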
By applying a linear transformation to $$F_{N\times K}$$, a new factor matrix $$\tilde{F}_{N\times K}=[\tilde{f}^1,\tilde{f}^2,\dots,\tilde{f}^K]$$ is obtained whose column vectors are orthogonal to each other, that is, the inner product of any two new factor vectors is zero: $$\forall i,j,\ i\not=j,\ (\tilde{f}^i)'\tilde{f}^j=0$$.
Define a transition matrix $$S_{K\times K}$$ that rotates $$F_{N\times K}$$ into $$\tilde{F}_{N\times K}$$:
$$$ \tilde{F}_{N\times K}=F_{N\times K}\cdot S_{K\times K} (2) $$$
1.1 Transition matrix $$S_{K\times K}$$
We now solve for the transition matrix $$S_{K\times K}$$. First, compute the covariance matrix $$\Sigma_{K\times K}$$ of $$F_{N\times K}$$; the overlap matrix of $$F_{N\times K}$$ is then $$M_{K\times K}=(N-1)\Sigma_{K\times K}$$, that is
$$$ M_{K\times K} =\begin{bmatrix} (f^1)'(f^1) & (f^1)'(f^2) & \dots & (f^1)'(f^K)\\ (f^2)'(f^1) & (f^2)'(f^2) & \dots & (f^2)'(f^K)\\ \vdots & \vdots & \ddots & \vdots\\ (f^K)'(f^1) & (f^K)'(f^2) & \dots & (f^K)'(f^K) \end{bmatrix} (3) $$$
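Continuing the sketch above, the overlap matrix can be formed directly as $$F'F$$ and checked against $$(N-1)\Sigma_{K\times K}$$:

# Overlap matrix M = F'F; equals (N - 1) * covariance matrix since columns are demeaned.
M = F.T @ F
print(np.allclose(M, (N - 1) * np.cov(F, rowvar=False)))   # True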
The rotated $$\tilde{F}_{N\times K}$$ has orthonormal columns. Using the property $$\tilde{F}'\tilde{F}=I$$, we have
$$$ \begin{aligned} (\tilde{F}_{N\times K})'\tilde{F}_{N\times K}&=(F_{N\times K}S_{K\times K})'F_{N\times K}S_{K\times K}\\ &=S_{K\times K}'F_{N\times K}'F_{N\times K}S_{K\times K}\\ &=S_{K\times K}'M_{K\times K}S_{K\times K}\\ &=I_{K\times K} \end{aligned} (4) $$$
Therefore,
$$$ S_{K\times K}S_{K\times K}'=M_{K\times K}^{-1} (5) $$$
Any $$S_{K\times K}$$ satisfying this condition is a valid transition matrix. The general solution of the above formula is:
$$$ S_{K\times K}=M_{K\times K}^{-1/2}C_{K\times K} (6) $$$
where $$C_{K\times K}$$ is an arbitrary orthogonal matrix.
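A quick check of this general solution (continuing the sketch above; scipy's matrix square root is used here only as a black box and is not part of the article, since section 1.2 below derives $$M_{K\times K}^{-1/2}$$ explicitly):

# Check: S = M^(-1/2) C gives (F S)'(F S) = I for an arbitrary orthogonal C.
from scipy.linalg import sqrtm

M_inv_sqrt = np.linalg.inv(sqrtm(M)).real                      # M^(-1/2)
C, _ = np.linalg.qr(rng.standard_normal((K, K)))               # a random orthogonal matrix
S = M_inv_sqrt @ C
F_tilde = F @ S
print(np.allclose(F_tilde.T @ F_tilde, np.eye(K)))             # True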
1.2 Symmetric matrix $$M_{K\times K}^{-1/2}$$
Next, we solve for $$M_{K\times K}^{-1/2}$$. Since $$M_{K\times K}$$ is a symmetric positive definite matrix, there exists an orthogonal matrix $$U_{K\times K}$$ such that:
$$$ U_{K\times K}'M_{K\times K}U_{K\times K}=D_{K\times K} (7) $$$
where
$$$ D_{K\times K}=\begin{bmatrix} \lambda_1 & 0 & \dots & 0\\ 0 & \lambda_2 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & \lambda_K \end{bmatrix} (8) $$$
$$U_{K\times K}$$ and $$D_{K\times K}$$ are, respectively, the eigenvector matrix and the diagonal matrix of eigenvalues of $$M_{K\times K}$$, with $$U_{K\times K}'=U_{K\times K}^{-1}$$ and $$\forall k,\ \lambda_k>0$$. From formula (7) we obtain
$$$ \begin{aligned} M_{K\times K}&=U_{K\times K}D_{K\times K}U_{K\times K}'\\ M_{K\times K}^{-1}&=U_{K\times K}D_{K\times K}^{-1}U_{K\times K}'\\ M_{K\times K}^{-1/2}M_{K\times K}^{-1/2}&=U_{K\times K}D_{K\times K}^{-1/2}I_{K\times K}D_{K\times K}^{-1/2}U_{K\times K}' \end{aligned} (9) $$$
Since $$M_{K\times K}^{-1/2}$$ is a symmetric matrix and $$U_{K\times K}U_{K\times K}'=I_{K\times K}$$, a particular solution for $$M_{K\times K}^{-1/2}$$ follows from the formula above:
$$$ \begin{aligned} M_{K\times K}^{-1/2}M_{K\times K}^{-1/2}&=U_{K\times K}D_{K\times K}^{-1/2}U_{K\times K}'U_{K\times K}D_{K\times K}^{-1/2}U_{K\times K}'\\ M_{K\times K}^{-1/2}&=U_{K\times K}D_{K\times K}^{-1/2}U_{K\times K}' \end{aligned} (10) $$$
where
$$$ D_{K\times K}^{-1/2}=\begin{bmatrix} 1/\sqrt{\lambda_1} & 0 & \dots & 0\\ 0 & 1/\sqrt{\lambda_2} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & 1/\sqrt{\lambda_K} \end{bmatrix} (11) $$$
Substituting this solution for $$M_{K\times K}^{-1/2}$$ into formula (6) gives the transition matrix:
$$$ \begin{aligned} S_{K\times K}&=M_{K\times K}^{-1/2}C_{K\times K}\\ &=U_{K\times K}D_{K\times K}^{-1/2}U_{K\times K}'C_{K\times K} \end{aligned} (12) $$$
where $$C_{K\times K}$$ is an arbitrary orthogonal matrix.
According to formula (12), any factor orthogonalization amounts to choosing a different orthogonal matrix $$C_{K\times K}$$ with which to rotate the original factors.
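Continuing the sketch above, the eigen-decomposition route of formula (10) can be verified numerically (np.linalg.eigh is used here because $$M_{K\times K}$$ is symmetric):

# M^(-1/2) = U D^(-1/2) U' from the eigen-decomposition of M.
eigval, U = np.linalg.eigh(M)
D_inv_sqrt = np.diag(eigval ** -0.5)
M_inv_sqrt = U @ D_inv_sqrt @ U.T
print(np.allclose(M_inv_sqrt @ M @ M_inv_sqrt, np.eye(K)))     # M^(-1/2) M M^(-1/2) = I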
1.3 The three orthogonalization methods commonly used to eliminate collinearity
1.3.1 Schmidt orthogonalization
Here $$S_{K\times K}$$ is an upper triangular matrix, and $$C_{K\times K}=U_{K\times K}D_{K\times K}^{1/2}U_{K\times K}'S_{K\times K}$$.
1.3.2 Canonical orthogonalization
Here $$S_{K\times K}=U_{K\times K}D_{K\times K}^{-1/2}$$ and $$C_{K\times K}=U_{K\times K}$$.
1.3.3 Symmetric orthogonalization
Here $$S_{K\times K}=U_{K\times K}D_{K\times K}^{-1/2}U_{K\times K}'$$ and $$C_{K\times K}=I_{K\times K}$$.
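Using the eigen-decomposition from the sketch above, the canonical and symmetric transition matrices can be written down directly and checked; the Schmidt transition matrix is instead built step by step by Gram-Schmidt in section 2 below.

# Transition matrices for two of the three choices of C.
S_canonical = U @ D_inv_sqrt                   # C = U
S_symmetric = U @ D_inv_sqrt @ U.T             # C = I
for S in (S_canonical, S_symmetric):
    F_tilde = F @ S
    print(np.allclose(F_tilde.T @ F_tilde, np.eye(K)))   # True for both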
2. Implementation of the three orthogonalization methods
1. Schmidt orthogonalization
Given a set of linearly independent factor column vectors $$f^1,f^2,\dots,f^K$$, an orthogonal set $$\tilde{f}^1,\tilde{f}^2,\dots,\tilde{f}^K$$ can be constructed step by step:
$$$ \begin{aligned} \tilde{f}^1 &= f^1\\ \tilde{f}^2 &= f^2-\frac{[f^2,\tilde{f}^1]}{[\tilde{f}^1,\tilde{f}^1]}\tilde{f}^1\\ \tilde{f}^3 &= f^3-\frac{[f^3,\tilde{f}^1]}{[\tilde{f}^1,\tilde{f}^1]}\tilde{f}^1-\frac{[f^3,\tilde{f}^2]}{[\tilde{f}^2,\tilde{f}^2]}\tilde{f}^2\\ \dots &= \dots\\ \tilde{f}^k &= f^k-\frac{[f^k,\tilde{f}^1]}{[\tilde{f}^1,\tilde{f}^1]}\tilde{f}^1-\frac{[f^k,\tilde{f}^2]}{[\tilde{f}^2,\tilde{f}^2]}\tilde{f}^2-\dots-\frac{[f^k,\tilde{f}^{k-1}]}{[\tilde{f}^{k-1},\tilde{f}^{k-1}]}\tilde{f}^{k-1} \end{aligned} (13) $$$
After normalizing $$\tilde{f}^1,\tilde{f}^2,\dots,\tilde{f}^K$$ to unit length:
$$$ e^k=\frac{\tilde{f}^k}{||\tilde{f}^k||},\quad k=1,2,\dots,K \quad (14) $$$
This processing yields an orthonormal basis. Since $$e^1,e^2,\dots,e^K$$ and $$f^1,f^2,\dots,f^K$$ are equivalent (each set can be expressed linearly in terms of the other), $$e^k$$ is a linear combination of $$f^1,f^2,\dots,f^k$$, that is, $$e^k=\beta_1^kf^1+\beta_2^kf^2+\dots+\beta_k^kf^k$$. The transition matrix $$S_{K\times K}$$ corresponding to the original matrix $$F_{N\times K}$$ is therefore an upper triangular matrix of the form:
$$$ S_{K\times K}=\begin{bmatrix} \beta_1^1 & \beta_1^2 & \dots & \beta_1^K\\ 0 & \beta_2^2 & \dots & \beta_2^K\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & \beta_K^K \end{bmatrix} (15) $$$
where $$\beta_k^k=\frac{1}{||\tilde{f}^k||}>0$$. Combining formula (15) with formula (12), the orthogonal matrix corresponding to Schmidt orthogonalization is:
$$$ C_{K\times K}=U_{K\times K}D_{K\times K}^{1/2}U_{K\times K}'S_{K\times K} (16) $$$
Schmidt orthogonalization is a sequential method, so the order in which factors are orthogonalized must be specified. Common choices are a fixed order (the same order on every cross-section) and a dynamic order (the order on each cross-section is determined according to certain rules). The advantage of the Schmidt method is that, once the order is given, there is an explicit correspondence between the orthogonalized factors and the original ones. However, there is no unified standard for choosing the order, and performance after orthogonalization may be affected by the ordering rule and the window-period parameters.
# Schmidt (Gram-Schmidt) orthogonalization
import numpy as np
import pandas as pd
from sympy.matrices import Matrix, GramSchmidt

# f: DataFrame of standardized factor values (rows: tokens, columns: factors)
vectors = [Matrix(f[col].tolist()) for col in f.columns]
Schmidt = GramSchmidt(vectors, orthonormal=True)
f_Schmidt = pd.DataFrame(index=f.index, columns=f.columns)
for i in range(len(f.columns)):
    f_Schmidt.iloc[:, i] = np.array(Schmidt[i], dtype=float).flatten()
res = f_Schmidt.astype(float)
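A quick sanity check on the output (using the hypothetical f from the earlier sketch): the columns of res should be orthonormal.

# Check: with orthonormal=True, the Gram-Schmidt output satisfies res' res = I.
print(np.allclose(res.values.T @ res.values, np.eye(len(f.columns))))   # True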
2. Canonical orthogonalization
Choosing the orthogonal matrix $$C_{K\times K}=U_{K\times K}$$ gives the transition matrix:
$$$ S_{K\times K}=U_{K\times K}D_{K\times K}^{-1/2}U_{K\times K}'U_{K\times K}=U_{K\times K}D_{K\times K}^{-1/2} (17) $$$
where $$U_{K\times K}$$ is the eigenvector matrix, which rotates the factors, and $$D_{K\times K}^{-1/2}$$ is a diagonal matrix, which rescales the rotated factors. This rotation is equivalent to PCA without dimensionality reduction.
# Canonical orthogonalization
def Canonical(self):
    # time_tag_data: factor values on one cross-section (rows: factors, columns: tokens)
    overlapping_matrix = (time_tag_data.shape[1] - 1) * np.cov(time_tag_data.astype(float))
    # eigenvalues and eigenvectors of the overlap matrix
    eigenvalue, eigenvector = np.linalg.eig(overlapping_matrix)
    # convert to a numpy matrix
    eigenvector = np.mat(eigenvector)
    transition_matrix = np.dot(eigenvector, np.mat(np.diag(eigenvalue ** (-0.5))))
    orthogonalization = np.dot(time_tag_data.T.values, transition_matrix)
    orthogonalization_df = pd.DataFrame(
        orthogonalization.T,
        index=pd.MultiIndex.from_product([time_tag_data.index, [time_tag]]),
        columns=time_tag_data.columns)
    self.factor_orthogonalization_data = pd.concat(
        [self.factor_orthogonalization_data, orthogonalization_df])
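The method above is written as a class method and operates on time_tag_data, a cross-section with factors as rows and tokens as columns. For comparison with the other snippets, here is a standalone sketch of the same computation on the hypothetical f (tokens as rows, factors as columns); the function name canonical is ours, not from the article.

# Standalone sketch of canonical orthogonalization on f.
def canonical(factors):
    M = np.dot(factors.T, factors)             # overlap matrix F'F (columns already demeaned)
    eigval, U = np.linalg.eigh(M)              # eigh, since M is symmetric
    S = U @ np.diag(eigval ** -0.5)            # transition matrix S = U D^(-1/2)
    return pd.DataFrame(factors.values @ S, index=factors.index, columns=factors.columns)

res_canonical = canonical(f)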
3. Symmetric orthogonalization
Because Schmidt orthogonalization applies the same factor order on past cross-sections, there is an explicit correspondence between the orthogonalized factors and the original factors. Canonical orthogonalization, by contrast, selects principal components on each cross-section, and their directions may be inconsistent across cross-sections, so there is no stable correspondence between the factors before and after orthogonalization. Evidently, the performance of the orthogonalized factor combination depends largely on whether such a stable correspondence exists.
Symmetric orthogonalization modifies the original factor matrix as little as possible while obtaining an orthogonal basis. This preserves the greatest possible similarity between the orthogonalized factors and the original factors, and avoids favoring factors that come earlier in the orthogonalization order, as the Schmidt method does.
Choosing the orthogonal matrix $$C_{K\times K}=I_{K\times K}$$ gives the transition matrix:
$$$ S_{K\times K}=U_{K\times K}D_{K\times K}^{-1/2}U_{K\times K}' (18) $$$
Properties of symmetric orthogonalization:
Compared with Schmidt orthogonalization, symmetric orthogonalization requires no ordering and treats every factor equally.
Among all orthogonal transition matrices, the symmetric one yields factors most similar to the original ones; that is, the distance between the factor matrices before and after orthogonalization is the smallest.
# Symmetric orthogonalization
def Symmetry(factors):
    # factors: DataFrame of standardized factor values (rows: tokens, columns: factors)
    col_name = factors.columns
    # eigen-decomposition of the overlap matrix F'F
    D, U = np.linalg.eig(np.dot(factors.T, factors))
    U = np.mat(U)
    d = np.diag(D ** (-0.5))
    # transition matrix S = U D^(-1/2) U'
    S = U * d * U.T
    F_hat = np.mat(factors) * S
    factors_orthogonal = pd.DataFrame(F_hat, columns=col_name, index=factors.index)
    return factors_orthogonal

res = Symmetry(f)
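As a rough numerical check of the "smallest distance" property stated above (on the hypothetical f; canonical is the standalone sketch from the canonical-orthogonalization section):

# Compare how far each method moves the factor matrix (Frobenius distance).
res_symmetric = Symmetry(f)
res_canonical = canonical(f)
print("Canonical:", np.linalg.norm(f.values - np.asarray(res_canonical, dtype=float), "fro"))
print("Symmetric:", np.linalg.norm(f.values - np.asarray(res_symmetric, dtype=float), "fro"))
# The symmetric result is expected to be the closest to the original factors.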
About LUCIDA & FALCON
Lucida ( https://www.lucida.fund/ ) is an industry-leading quantitative hedge fund that entered the crypto market in April 2018. It mainly trades CTA, statistical arbitrage, option volatility arbitrage and other strategies, and currently manages US$30 million.
Falcon ( https://falcon.lucida.fund/ ) is a new generation of Web3 investment infrastructure. Based on multi-factor models, it helps users "select", "buy", "manage" and "sell" crypto assets. Falcon was incubated by Lucida in June 2022.
More content can be found at https://linktr.ee/lucida_and_falcon
Previous articles
Use multi-factor strategies to build a powerful crypto asset portfolio: Factor Validity Testing
Use multi-factor strategies to build a powerful crypto asset portfolio: Data Preprocessing
Use multi-factor strategies to build a powerful crypto asset portfolio: Theoretical Basics
From Tech Breakthroughs to Market Boom: Understanding the Link in the Crypto Bull Market
What exactly is driving Crypto's bull market? Is it a technological upgrade?
Development as the Driving Force: Understanding the Impact on Token Price Performance?
Is "the team doing something" really related to the currency price?
5 Million Rows of Data Recap: Investigating The Crypto Market's 3-Year Bull Run @LUCIDA
LUCIDA: Use multi-factor models to select tracks and currencies
LUCIDA × SnapFingers DAO: 21 top public chains in three-year bull market recap
LUCIDA × OKLink: The value of on-chain data to secondary market investment