## Attacks on Rainbow

### UOV

#### Signature

\[ P : \mathbb{F}^n_q \rightarrow \mathbb{F}^m_q\\ \mathrm{Sign}(M) = \vec s, \quad P(\vec s) = H(M \,\|\, \mathrm{salt}) \]

\(P\) is a multivariate quadratic map

\(P = (p_1(x_1, x_2, \dots, x_n), \dots, p_m(x_1, x_2, \dots, x_n))\)

\(p_1(x_1, \dots, x_n) = \sum_{i=1}^{n-m} \sum_{j=i}^{n} \alpha_{i, j} x_i x_j\), i.e., every quadratic term contains at least one vinegar variable (index \(\leq n-m\)), so the oil \(\times\) oil block is zero.

#### Trapdoor

\[ P : \mathbb{F}^n_q \rightarrow \mathbb{F}^m_q\\ \forall \vec o \in O,\ P(\vec o) = 0, \quad \dim(O) = m \]

If we know \(O\), then for any target \(\vec t\) we can find \(\vec s\) satisfying \(P(\vec s) = \vec t\):

- Choose a random vector \(\vec v\in \mathbb{F}_q^n\).
- Set \(\vec s = \vec v + \vec o\) and expand \(P(\vec v + \vec o) = P(\vec v) + P(\vec o) + P'(\vec v, \vec o)\), where \(P'\) is the polar (bilinear) form of \(P\).
- \(P(\vec v)\) is fixed and \(P(\vec o) = 0\), so it remains to solve \(P'(\vec v, \vec o) = \vec t - P(\vec v)\), which is linear in \(\vec o\).
- If the linear system has a solution, we have found \(\vec s = \vec v + \vec o\); otherwise choose another random vector and try again.
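The loop above can be sketched in code. This is a minimal toy, assuming tiny parameters and an oil space \(O\) that is simply the span of the last \(m\) standard basis vectors (a real UOV key hides \(O\) behind a secret change of variables); all names are illustrative.

```python
# Toy UOV trapdoor inversion over GF(7). The oil space O is taken to be the
# span of the last m standard basis vectors for simplicity.
import random

q, n, m = 7, 6, 2          # field size, variables, equations (oil dimension m)
rng = random.Random(1)

# Secret quadratic forms: upper-triangular matrices with a zero oil x oil
# block, so every p_k vanishes on O.
M = [[[rng.randrange(q) if i <= j and not (i >= n - m and j >= n - m) else 0
       for j in range(n)] for i in range(n)] for _ in range(m)]

def P(x):
    """Evaluate the quadratic map P(x) = (x^T M_k x)_k mod q."""
    return [sum(M[k][i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % q
            for k in range(m)]

def polar(k, u, v):
    """Polar form P'_k(u, v) = u^T (M_k + M_k^T) v mod q."""
    return sum((M[k][i][j] + M[k][j][i]) * u[i] * v[j]
               for i in range(n) for j in range(n)) % q

def solve_mod(A, b):
    """Solve the m x m linear system A y = b over GF(q); None if singular."""
    rows = [A[i][:] + [b[i]] for i in range(m)]
    for c in range(m):
        piv = next((r for r in range(c, m) if rows[r][c]), None)
        if piv is None:
            return None
        rows[c], rows[piv] = rows[piv], rows[c]
        inv = pow(rows[c][c], q - 2, q)
        rows[c] = [x * inv % q for x in rows[c]]
        for r in range(m):
            if r != c and rows[r][c]:
                f = rows[r][c]
                rows[r] = [(rows[r][j] - f * rows[c][j]) % q for j in range(m + 1)]
    return [rows[i][m] for i in range(m)]

def sign(t):
    """Fix random vinegar v, then solve the linear system P'(v, o) = t - P(v)."""
    while True:
        v = [rng.randrange(q) for _ in range(n)]
        Pv = P(v)
        e = [[0] * i + [1] + [0] * (n - i - 1) for i in range(n - m, n)]  # basis of O
        A = [[polar(k, v, e[j]) for j in range(m)] for k in range(m)]
        b = [(t[k] - Pv[k]) % q for k in range(m)]
        y = solve_mod(A, b)
        if y is not None:                 # else: retry with a new random v
            return [(v[i] + sum(y[j] * e[j][i] for j in range(m))) % q
                    for i in range(n)]

t = [rng.randrange(q) for _ in range(m)]
s = sign(t)
assert P(s) == t
```

Note how the retry loop mirrors the last bullet: if the \(m \times m\) system happens to be singular, a fresh vinegar vector is drawn.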

#### MQP

Multivariate Quadratic Problem.

For \(P: \mathbb{F}_q^n \rightarrow \mathbb{F}_q^m\) and a known \(\vec t \in \mathbb{F}_q^m\), find \(\vec s \in \mathbb{F}^n_q\) with \(P(\vec s) = \vec t\).

We usually solve such systems with Gröbner-basis-style algorithms such as \(F_4/F_5\), or with XL.
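To make the problem concrete, here is a brute-force sketch over GF(2) with toy parameters of my own choosing; exhaustive search is only feasible because \(n\) is tiny, which is exactly why serious attacks turn to \(F_4/F_5\) or XL.

```python
# Brute-force illustration of the MQ problem over GF(2): given t = P(s) for
# a random quadratic map P, recover some preimage by trying all 2^n inputs.
import itertools
import random

n, m = 8, 8
rng = random.Random(0)

# Random quadratic polynomials p_k(x) = sum_{i<=j} a_ij x_i x_j over GF(2)
# (note x_i^2 = x_i, so the diagonal terms act as linear terms).
coeffs = [[[rng.randrange(2) for j in range(n)] for i in range(n)]
          for _ in range(m)]

def P(x):
    return tuple(sum(coeffs[k][i][j] * x[i] * x[j]
                     for i in range(n) for j in range(i, n)) % 2
                 for k in range(m))

secret = tuple(rng.randrange(2) for _ in range(n))
t = P(secret)

# Exhaustive search: cost 2^n, i.e. exponential in the number of variables.
solution = next(x for x in itertools.product(range(2), repeat=n) if P(x) == t)
assert P(solution) == t
```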

#### MRP

MinRank Problem.

Given \(n\)-by-\(m\) matrices \(L_1, \dots, L_k\) and a target rank \(r\), find a linear combination \(\sum_{i=1}^k y_iL_i\) with rank at most \(r\).
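A tiny planted MinRank instance makes the definition concrete. This sketch (my own construction, not how real MinRank attacks work — those use algebraic modelings) plants a rank-\(\le 1\) combination over GF(2) and finds it by exhaustive search over coefficient vectors:

```python
# Planted MinRank toy over GF(2): L_1 is chosen so that a known combination
# y* has rank <= r; we then recover some low-rank combination by brute force.
import itertools
import random

p, k, dim, r = 2, 3, 4, 1
rng = random.Random(2)

def rank_mod(M):
    """Rank of a matrix over GF(p) by Gaussian elimination."""
    M = [row[:] for row in M]
    nrows, ncols = len(M), len(M[0])
    rank, col = 0, 0
    while rank < nrows and col < ncols:
        piv = next((i for i in range(rank, nrows) if M[i][col]), None)
        if piv is None:
            col += 1
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for i in range(nrows):
            if i != rank and M[i][col]:
                f = M[i][col]
                M[i] = [(M[i][j] - f * M[rank][j]) % p for j in range(ncols)]
        rank += 1
        col += 1
    return rank

# Plant: T = u v^T has rank <= 1; set L1 = T - y2*L2 - y3*L3 so that the
# combination with coefficients y* = (1, y2, y3) has rank <= r.
u = [rng.randrange(p) for _ in range(dim)]
v = [rng.randrange(p) for _ in range(dim)]
T = [[u[i] * v[j] % p for j in range(dim)] for i in range(dim)]
L2 = [[rng.randrange(p) for _ in range(dim)] for _ in range(dim)]
L3 = [[rng.randrange(p) for _ in range(dim)] for _ in range(dim)]
y_star = (1, rng.randrange(p), rng.randrange(p))
L1 = [[(T[i][j] - y_star[1] * L2[i][j] - y_star[2] * L3[i][j]) % p
       for j in range(dim)] for i in range(dim)]
Ls = [L1, L2, L3]

# Exhaustive search over all non-zero coefficient vectors.
found = None
for ys in itertools.product(range(p), repeat=k):
    if not any(ys):
        continue
    C = [[sum(ys[a] * Ls[a][i][j] for a in range(k)) % p
          for j in range(dim)] for i in range(dim)]
    if rank_mod(C) <= r:
        found = ys
        break
assert found is not None
```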

##### Wiedemann’s Algorithm

An algorithm for computing a non-zero kernel vector of a matrix; it is particularly efficient when the matrix is large and sparse.
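A dense, toy-sized sketch of the idea, assuming a small prime field and a hand-picked singular matrix `A`: project the Krylov sequence \(A^i b\) to scalars \(u^T A^i b\), recover its minimal polynomial with Berlekamp-Massey, and read off a kernel vector from the vanishing constant term.

```python
# Wiedemann sketch over GF(101): minimal polynomial of u^T A^i b via
# Berlekamp-Massey, then a kernel vector from m(A) b = 0 when m(0) = 0.
import random

p = 101
A = [[1, 2, 3],
     [2, 4, 6],     # = 2 * row 1, so A is singular
     [5, 1, 0]]
n = len(A)
rng = random.Random(3)

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(n)) % p for i in range(n)]

def berlekamp_massey(s):
    """Minimal polynomial of the linear recurrence s, low-degree coeff first."""
    C, B, L, m, b = [1], [1], 0, 1, 1
    for t in range(len(s)):
        d = s[t]
        for i in range(1, L + 1):
            d = (d + C[i] * s[t - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p
        T = C[:]
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for i, bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= t:
            L, B, b, m = t + 1 - L, T, d, 1
        else:
            m += 1
    C = (C + [0] * (L + 1 - len(C)))[:L + 1]
    return C[::-1]           # m(x) = C[L] + C[L-1] x + ... + x^L

def kernel_vector():
    """Find w != 0 with A w = 0, retrying random projections as needed."""
    while True:
        b_vec = [rng.randrange(p) for _ in range(n)]
        u = [rng.randrange(p) for _ in range(n)]
        seq, x = [], b_vec[:]
        for _ in range(2 * n):                    # s_i = u^T A^i b
            seq.append(sum(ui * xi for ui, xi in zip(u, x)) % p)
            x = matvec(A, x)
        mp = berlekamp_massey(seq)
        if mp[0] != 0:
            continue                              # unlucky projection; retry
        # m(A) b = 0 and m(0) = 0  =>  A * (m_1 b + m_2 A b + ...) = 0
        w, x = [0] * n, b_vec[:]
        for c in mp[1:]:
            w = [(wi + c * xi) % p for wi, xi in zip(w, x)]
            x = matvec(A, x)
        if any(w) and not any(matvec(A, w)):
            return w

w = kernel_vector()
```

The retries are essential: the scalar sequence's minimal polynomial only equals the matrix's with high probability over the random choices of `u` and `b`, so the result is verified before being returned.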

##### XL Algorithm

A more detailed explanation (in Chinese) can be found elsewhere.

The key idea of the XL algorithm is to multiply the original equations by monomials so that the products share the same monomials, treat each monomial as an independent variable, and then run Gaussian elimination, much like a Gröbner-basis computation; this is equivalent to eliminating on the Macaulay matrix.
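Here is a complete mini-XL run on a tiny system of my own choosing, \(f_1 = x^2 - y\), \(f_2 = y^2 - x\) over GF(101): multiply both equations by all monomials up to degree 2, linearize the degree-\(\le 4\) monomials as Macaulay-matrix columns, and eliminate with the \(y\)-monomials ordered first so that a univariate polynomial in \(x\) falls out.

```python
# Mini-XL: Macaulay matrix at degree 4 for {x^2 - y, y^2 - x} over GF(101),
# Gaussian elimination, then root-finding on the resulting univariate row.
p, D = 101, 4

# Polynomials as {(deg_x, deg_y): coeff}
f1 = {(2, 0): 1, (0, 1): p - 1}        # x^2 - y
f2 = {(0, 2): 1, (1, 0): p - 1}        # y^2 - x

def mono_mul(f, m):
    return {(a + m[0], b + m[1]): c for (a, b), c in f.items()}

# Columns: all monomials of degree <= D, with y-containing monomials first,
# so rows whose leading term is y-free are univariate in x.
monos = [(a, b) for a in range(D + 1) for b in range(D + 1 - a)]
monos.sort(key=lambda m: (m[1] == 0, -(m[0] + m[1])))
col = {m: i for i, m in enumerate(monos)}

multipliers = [(a, b) for a in range(3) for b in range(3 - a)]  # degree <= 2
rows = [mono_mul(f, m) for f in (f1, f2) for m in multipliers]
mat = [[r.get(m, 0) % p for m in monos] for r in rows]

# Gaussian elimination over GF(p) (each monomial is one linearized variable).
pivot_row = 0
for c in range(len(monos)):
    piv = next((i for i in range(pivot_row, len(mat)) if mat[i][c]), None)
    if piv is None:
        continue
    mat[pivot_row], mat[piv] = mat[piv], mat[pivot_row]
    inv = pow(mat[pivot_row][c], p - 2, p)
    mat[pivot_row] = [x * inv % p for x in mat[pivot_row]]
    for i in range(len(mat)):
        if i != pivot_row and mat[i][c]:
            f = mat[i][c]
            mat[i] = [(x - f * y) % p for x, y in zip(mat[i], mat[pivot_row])]
    pivot_row += 1

# A non-zero row supported only on y-free monomials is univariate in x.
univar = next(r for r in mat
              if any(r) and all(r[col[m]] == 0 for m in monos if m[1] > 0))
poly = {m[0]: univar[col[m]] for m in monos if m[1] == 0 and univar[col[m]]}
roots = [x for x in range(p)
         if sum(c * pow(x, d, p) for d, c in poly.items()) % p == 0]
sols = [(x, y) for x in roots for y in range(p)
        if (x * x - y) % p == 0 and (y * y - x) % p == 0]
```

For this system the eliminated row is a multiple of \(x^4 - x\) (substituting \(y = x^2\) into \(f_2\)), and back-substitution recovers the solutions \((0, 0)\) and \((1, 1)\).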

#### UOV Attacks

##### Reconciliation attack

The key point is to ‘impose \(m\) affine constraints on the entries of \(\vec{\bf{o}}\)’.

##### Kipnis-Shamir attack

If \(n > 2m\), the probability \(\frac{q-1}{|O|}\) is obtained as follows.

Any vector \(\vec{\bf{x}} \in O\) can occur as an eigenvector with any of the \(q-1\) non-zero eigenvalues \(1, 2, \dots, q-1\).

##### Intersection attack

An extension of the Kipnis-Shamir attack.

### Rainbow

#### Trapdoor

The trapdoor of Rainbow consists of \(O_1, O_2, W\); given \(\vec t\), we want a \(\vec s\) such that \(P(\vec s) = \vec t\).

Choose a random vector \(\vec v \in \mathbb{F}_q^n\), and find \(\vec o_1 \in O_1\) such that for all \(\vec o_2 \in O_2\), \(P(\vec v + \vec o_1 + \vec o_2) + W = \vec t + W\). This is equivalent to \(P(\vec v) + P(\vec o_1) + P'(\vec v, \vec o_1) + W = \vec t+W\), where \(P(\vec v)\) is fixed and \(P(\vec o_1) \in W\).

So we just need to solve the linear system \(P'(\vec v, \vec o_1) = \vec t - P(\vec v)\) over \(\mathbb{F}_q^m/ W\); if it has a solution we go to the second step, otherwise we pick another \(\vec v\) and try again.

Given the solution \(\vec o_1\), we then use the equation \(P(\vec v + \vec o_1 + \vec o_2) = \vec t\), which is equivalent to \(P(\vec v + \vec o_1) - \vec t + P(\vec o_2) + P'(\vec v + \vec o_1, \vec o_2) = 0\). Since \(P(\vec o_2) = 0\) for \(\vec o_2 \in O_2\), this is a linear system in \(\vec o_2\); if it has a solution we obtain \(\vec s = \vec v + \vec o_1 + \vec o_2\), otherwise we go back to the first step.

In the first step of inverting the trapdoor, I was confused about how we could evaluate \(P(\overline o_1)\) for \(\overline o_1 \in O_1/O_2\), where \(\overline o_1 = \{\vec o_1 + \vec o_2 \ | \ \vec o_2 \in O_2 \}\) with \(\vec o_1 \in O_1\).

Connecting this with the context, I think it means that we want \(\vec o_1 \in O_1\) such that for all \(\vec o_2 \in O_2\), \(P(\vec v + \vec o_1 + \vec o_2) + W = \vec t + W\).

The number of constraints is \(m - \dim W\) because this system lives in \(\mathbb{F}^m_q/W\).
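The two-step inversion above can be summarized as a pair of linear solves (my own restatement of the steps, writing \(\vec u = \vec v + \vec o_1\) as shorthand):

```latex
\begin{align*}
\text{Step 1:}\quad & P'(\vec v, \vec o_1) \equiv \vec t - P(\vec v) \pmod{W}
  && \text{solve for } \vec o_1 \in O_1, \text{ using } P(\vec o_1) \in W,\\
\text{Step 2:}\quad & P'(\vec u, \vec o_2) = \vec t - P(\vec u)
  && \text{solve for } \vec o_2 \in O_2, \text{ using } P(\vec o_2) = 0.
\end{align*}
```

The signature is then \(\vec s = \vec u + \vec o_2 = \vec v + \vec o_1 + \vec o_2\).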

#### Previous Rainbow Attacks

##### UOV attack

Use the Kipnis-Shamir attack to recover \(O_2\); then use the map \(P'(x, \cdot): O_2 \rightarrow W\) to recover \(W\); finally, use \(P(O_1) \subseteq W\) together with the Kipnis-Shamir attack to recover \(O_1\).

##### MinRank/HighRank attack

According to the rank-nullity theorem \(\mathrm{rank}(M_v) + \mathrm{nullity}(M_v) = n\), the rank of \(M_v\) must satisfy \(\mathrm{rank}(M_v) = n - \mathrm{nullity}(M_v) \leq n - \dim(O_2)\), because \(O_2\) is contained in the kernel of \(M_v\), so \(\mathrm{nullity}(M_v) = \dim(\ker(M_v)) \geq \dim(O_2)\).

##### Rainbow band separation attack

Similar to the reconciliation attack on UOV.

#### New Attack

##### Simple Attack

Build the following system to find a vector \(\vec o \in O_2\):

\(D_x : \mathbb{F}_q^n \rightarrow \mathbb{F}_q^m,\ y \mapsto P'(x, y)\), with \(B\) a basis matrix of \(\ker(D_x)\), so that \(D_x(By)=0\) for all \(y\).

\(\left\{ \begin{aligned} D_x(\vec o) = 0 \\ P(\vec o) = 0 \end{aligned} \right. \Rightarrow \tilde P(x) = P(Bx)\)

For odd characteristic, one normally uses XL to solve the \(m\) homogeneous equations in \(n-m\) variables.

For even characteristic, we have \(P'(x, x)= 0\): indeed \(P'(x, x) = P(2x) - 2P(x)\), and since every \(p_i(x)\) in \(P(x)\) is homogeneous quadratic, \(P(2x) = 4P(x)\), so \(P'(x, x) = 2P(x)\), which vanishes in characteristic 2. Hence \(D_x(x) = 0\), i.e., \(x \in \ker(D_x)\).
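The characteristic-2 fact is easy to verify exhaustively on a toy quadratic form (parameters are illustrative): the polar form vanishes on the whole diagonal, so \(x\) always lies in \(\ker(D_x)\).

```python
# Check that over GF(2) the polar form P'(x, y) = P(x+y) + P(x) + P(y)
# satisfies P'(x, x) = 0 for every x, as derived above.
from itertools import product
import random

n = 4
rng = random.Random(4)
# Random upper-triangular quadratic form over GF(2)
Q = [[rng.randrange(2) if i <= j else 0 for j in range(n)] for i in range(n)]

def P(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % 2

def polar(u, v):
    s = [(a + b) % 2 for a, b in zip(u, v)]
    return (P(s) + P(u) + P(v)) % 2

# P'(x, x) = P(2x) - 2P(x) = P(0) = 0 in characteristic 2, for every x.
assert all(polar(x, x) == 0 for x in product(range(2), repeat=n))
```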

At first, I was confused about why we could then get \(\exists \tilde x, \forall y, \tilde P(\tilde x + y) = \tilde P(\tilde x) + \tilde P(y)\), which is equivalent to \(P'(B\tilde x, By) = 0\). Since \(B\) is an \(n \times (n-m)\) matrix of full column rank, its kernel is trivial, so there is no non-zero \(\tilde x\) with \(B \tilde x = 0\); how, then, could \(P'(B\tilde x, By)\) always be zero? Then I realized that we can find a \(\tilde x\) with \(B\tilde x = x\); then \(P'(B\tilde x, By) = P'(x, By) = D_x(By) = 0\), and this works precisely because \(x \in \ker(D_x)\).

Then we get \(m-1\) homogeneous equations of the form \(p_1(By)\,a_i - p_i(By)\,a_1 = 0\) in \(n-m-1\) variables; this transformation reduces the complexity.

Use \(\vec o\) to complete the attack.

\(V\) as a change of variables here means: for a vector \(\vec x = \vec w + \vec x'\) with \(\vec w \in W\), we have \(V(\vec x) = V(\vec w) + V(\vec x')\), and the first \(m-o_2\) coordinates of \(V(\vec w)\) are zero.

\(U\) as a change of variables here means: for a vector \((x_1, \dots, x_n) \in \mathbb{F}_q^n\), let \(\hat x_0 = (x_1', \dots, x_{n-o_2}', 0, \dots, 0)\) and \(\hat x_1 = (0, \dots, 0, x_{n-o_2+1}', \dots, x_n')\); then \(U(x) = \hat x_0 + \hat x_1\), and \(\hat x_1\) lies in \(O_2\).

Roughly speaking, \(V^{-1}\) and \(U\) act like basis-change matrices adapted to \(W\) and \(O_2\).

Once \(V\) and \(U\) are clear, the rest follows.

\(P \circ U(x) = P(\hat x_0 + \hat x_1) = P'(\hat x_0, \hat x_1) + P(\hat x_0) + P(\hat x_1) = P'(\hat x_0, \hat x_1) + P(\hat x_0)\), since \(P(\hat x_1) = 0\) for \(\hat x_1 \in O_2\). We know that \(P'(\hat x_0, \hat x_1) \in W\), so under \(V\) the first \(m-o_2\) coordinates, which form \(F_1(x)\), depend only on \(P(\hat x_0)\). For any \(U(y) \in O_2\), \(U(y) + \hat x_1\) can be taken as a new \(\hat x_1'\), so \(F_1(x+y) = F_1(x)\).

This reduces the problem of finding \(x\) for a given \(P(x)\) to finding \(x\) for a given \(F_1(x)\).

##### Combination with rectangular MinRank attack

Roughly speaking, the combined method puts the equation \(D_x(\vec o) = 0\) into the MinRank system.

I recall the MinRank attack here, for matrices of the form: \[ L_i = \begin{pmatrix} P'(e_1, e_i) \\ \cdots \\ P'(e_n, e_i) \end{pmatrix} \] Suppose there is a linear combination with coefficients \(o_i\) such that \(\sum_{i=1}^n o_iL_i\) has rank at most \(\dim(W) = o_2\). If \(\vec o \notin O_2\), the probability of this event is about \(\binom{n}{n-o_2} \left(\frac{1}{n}\right)^{n-o_2}\), which corresponds to the event that \(n\) randomly chosen vectors in \(\mathbb{F}_q^n\) have \(n-o_2\) linear dependencies. So in general, \(\vec o\) most likely lies in \(O_2\).

In the combination, for matrices of the form: \[ \tilde L_i = \begin{pmatrix} P'(e_1, b_i) \\ \cdots \\ P'(e_n, b_i) \end{pmatrix} \] the solution \(x_i\) of this MinRank problem satisfies that \(\sum x_ib_i\) probably lies in \(O_2\); the number of unknowns has thus been reduced to \(n-m\).