1. Introduction
1.1. Classical setting
1.1.1. Multiple zeta values
Multiple zeta values of Euler (MZV's for short) are real positive numbers given by
 $$\begin{align*}\zeta(n_1,\dots,n_r)=\sum_{0<k_1<\dots<k_r} \frac{1}{k_1^{n_1} \dots k_r^{n_r}}, \quad \text{where } n_i \geq 1, n_r \geq 2. \end{align*}$$
Here, r is called the depth and $w=n_1+\dots +n_r$ is called the weight of the presentation $\zeta (n_1,\dots ,n_r)$. These values cover the special values $\zeta (n)$ for $n \geq 2$ of the Riemann zeta function and have been studied intensively, especially in the last three decades, with important and deep connections to different branches of mathematics and physics, for example, arithmetic geometry, knot theory and high-energy physics. We refer the reader to [Reference Burgos Gil and Fresan7, Reference Zagier43] for more details.
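For example, Euler’s classical identity $\zeta (1,2)=\zeta (3)$ (in the convention above) shows that MZV's of higher depth can reduce to depth one. A quick numerical sanity check (a sketch; the truncation level N = 2000 is arbitrary, and the tail of the double sum is of size about $\log N/N$):

```python
# Truncate both series at N; zeta(1,2) = sum over 0 < k1 < k2 of 1/(k1 * k2^2).
N = 2000
zeta_1_2 = sum(1.0 / (k1 * k2**2) for k2 in range(2, N + 1) for k1 in range(1, k2))
zeta_3 = sum(1.0 / k**3 for k in range(1, N + 1))
print(zeta_1_2, zeta_3)  # both close to 1.202...
assert abs(zeta_1_2 - zeta_3) < 0.01
```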
The main goal of this theory is to understand all $\mathbb Q$-linear relations between MZV's. Goncharov [Reference Goncharov19, Conjecture 4.2] conjectures that all $\mathbb Q$-linear relations between MZV's can be derived from those between MZV's of the same weight. As the next step, precise conjectures formulated by Zagier [Reference Zagier43] and Hoffman [Reference Hoffman23] predict the dimension and an explicit basis for the $\mathbb Q$-vector space $\mathcal {Z}_k$ spanned by MZV's of weight k for $k \in \mathbb {N}$.
Conjecture 1.1 (Zagier’s conjecture)
We define a Fibonacci-like sequence of integers $d_k$ as follows. Letting $d_0=1, d_1=0$ and $d_2=1$, we define $d_k=d_{k-2}+d_{k-3}$ for $k \geq 3$. Then for $k \in \mathbb {N}$, we have
$$\begin{align*}\dim_{\mathbb Q} \mathcal Z_k = d_k. \end{align*}$$
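The first terms of the sequence are quickly tabulated from the recurrence (a small sketch; e.g., $d_8 = 4$ predicts $\dim _{\mathbb Q} \mathcal Z_8 = 4$):

```python
# Zagier's sequence: d_0 = 1, d_1 = 0, d_2 = 1 and d_k = d_{k-2} + d_{k-3} for k >= 3.
def zagier_d(kmax):
    d = [1, 0, 1]
    for k in range(3, kmax + 1):
        d.append(d[k - 2] + d[k - 3])
    return d

print(zagier_d(12))  # [1, 0, 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12]
assert zagier_d(12) == [1, 0, 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12]
```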
Conjecture 1.2 (Hoffman’s conjecture)
The $\mathbb Q$-vector space $\mathcal Z_k$ has a basis consisting of the MZV's of weight k of the form $\zeta (n_1,\dots ,n_r)$ with $n_i \in \{2,3\}$.
The algebraic part of these conjectures, which concerns upper bounds for $\dim _{\mathbb Q} \mathcal Z_k$, was solved by Terasoma [Reference Terasoma34], Deligne–Goncharov [Reference Deligne and Goncharov17] and Brown [Reference Brown5] using the theory of mixed Tate motives.
Theorem 1.3 (Deligne–Goncharov, Terasoma)
For $k \in \mathbb {N}$, we have $\dim _{\mathbb Q} \mathcal Z_k \leq d_k$.
Theorem 1.4 (Brown)
The $\mathbb Q$-vector space $\mathcal Z_k$ is generated by MZV's of weight k of the form $\zeta (n_1,\dots ,n_r)$ with $n_i \in \{2,3\}$.
Unfortunately, the transcendental part, which concerns lower bounds for $\dim _{\mathbb Q} \mathcal Z_k$, is completely open. We refer the reader to [Reference Burgos Gil and Fresan7, Reference Deligne16, Reference Zagier43] for more details and more exhaustive references.
1.1.2. Alternating multiple zeta values
There exists a variant of MZV's called the alternating multiple zeta values (AMZV's for short), also known as Euler sums. They are real numbers given by
 $$\begin{align*}\zeta \begin{pmatrix} \epsilon_1 & \dots & \epsilon_r \\ n_1 & \dots & n_r \end{pmatrix}=\sum_{0<k_1<\dots<k_r} \frac{\epsilon_1^{k_1} \dots \epsilon_r^{k_r}}{k_1^{n_1} \dots k_r^{n_r}}, \end{align*}$$
where $\epsilon _i \in \{\pm 1\}$, $n_i \in \mathbb {N}$ and $(n_r,\epsilon _r) \neq (1,1)$. Similar to MZV's, these values have been studied by Broadhurst, Deligne–Goncharov, Hoffman, Kaneko–Tsumura and many others because of the many connections in different contexts. We refer the reader to [Reference Harada21, Reference Hoffman24, Reference Zhao44] for further references.
As before, it is expected that all $\mathbb Q$-linear relations between AMZV's can be derived from those between AMZV's of the same weight. In particular, it is natural to ask whether one could formulate conjectures similar to those of Zagier and Hoffman for AMZV's of fixed weight. By the work of Deligne–Goncharov [Reference Deligne and Goncharov17], the sharp upper bounds are achieved:
Theorem 1.5 (Deligne–Goncharov)
For $k \in \mathbb {N}$, if we denote by $\mathcal A_k$ the $\mathbb Q$-vector space spanned by AMZV's of weight k, then $\dim _{\mathbb Q} \mathcal A_k \leq F_{k+1}$. Here, $F_n$ is the n-th Fibonacci number defined by $F_1=F_2=1$ and $F_{n+2}=F_{n+1}+F_n$ for all $n \geq 1$.
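The first few bounds are easy to tabulate (a quick sketch; note that $F_{k+1}$ grows much faster than Zagier's $d_k$, reflecting the larger supply of AMZV's):

```python
# Fibonacci numbers F_1 = F_2 = 1, F_{n+2} = F_{n+1} + F_n;
# Theorem 1.5 bounds dim A_k by F_{k+1}.
def fibonacci(n):
    a, b = 1, 1  # F_1, F_2
    for _ in range(n - 1):
        a, b = b, a + b
    return a

bounds = [fibonacci(k + 1) for k in range(1, 9)]  # F_2, ..., F_9
print(bounds)  # [1, 2, 3, 5, 8, 13, 21, 34]
assert bounds == [1, 2, 3, 5, 8, 13, 21, 34]
```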
The fact that the previous upper bounds would be sharp was also explained by Deligne in [Reference Deligne15] (see also [Reference Deligne and Goncharov17]) using a variant of a conjecture of Grothendieck. In the direction of extending Brown’s theorem for AMZV's, there are several sets of generators for $\mathcal A_k$ (see, for example, [Reference Charlton12, Reference Deligne15]). However, we mention that these generators are only linear combinations of AMZV's.
Finally, we know nothing about nontrivial lower bounds for $\dim _{\mathbb Q} \mathcal A_k$.
1.2. Function field setting
1.2.1. MZV's of Thakur and analogues of Zagier–Hoffman’s conjectures
By analogy between number fields and function fields, based on the pioneering work of Carlitz [Reference Carlitz8], Thakur [Reference Thakur35] defined analogues of multiple zeta values in positive characteristic. We now need to introduce some notation. Let $A=\mathbb F_q[\theta ]$ be the polynomial ring in the variable $\theta $ over a finite field $\mathbb F_q$ of q elements of characteristic $p>0$. We denote by $A_+$ the set of monic polynomials in A. Let $K=\mathbb F_q(\theta )$ be the fraction field of A equipped with the rational point $\infty $. Let $K_\infty $ be the completion of K at $\infty $ and $\mathbb {C}_\infty $ be the completion of a fixed algebraic closure $\overline K$ of K at $\infty $. We denote by $v_\infty $ the discrete valuation on K corresponding to the place $\infty $, normalized such that $v_\infty (\theta )=-1$, and by $\lvert \cdot \rvert _\infty = q^{-v_\infty }$ the associated absolute value on K. The unique valuation of $\mathbb C_\infty $ which extends $v_\infty $ will still be denoted by $v_\infty $. Finally, we denote by $\overline {\mathbb F}_q$ the algebraic closure of $\mathbb F_q$ in $\overline {K}$.
Let $\mathbb N=\{1,2,\dots \}$ be the set of positive integers and $\mathbb Z^{\geq 0}=\{0,1,2,\dots \}$ be the set of nonnegative integers. In [Reference Carlitz8], Carlitz introduced the Carlitz zeta values $\zeta _A(n)$ for $n \in \mathbb {N}$ given by
$$\begin{align*}\zeta_A(n) := \sum_{a \in A_+} \frac{1}{a^n} \in K_\infty \end{align*}$$
which are analogues of classical special zeta values in the function field setting. For any tuple of positive integers $\mathfrak s=(s_1,\ldots ,s_r) \in \mathbb {N}^r$, Thakur [Reference Thakur35] defined the characteristic p multiple zeta value (MZV for short) $\zeta _A(\mathfrak {s})$ or $\zeta _A(s_1,\ldots ,s_r)$ by
$$ \begin{align*} \zeta_A(\mathfrak{s}):=\sum \frac{1}{a_1^{s_1} \ldots a_r^{s_r}} \in K_\infty, \end{align*} $$
where the sum runs through the set of tuples $(a_1,\ldots ,a_r) \in A_+^r$ with $\deg a_1>\cdots >\deg a_r$. We call r the depth of $\zeta _A(\mathfrak {s})$ and $w(\mathfrak {s})=s_1+\dots +s_r$ the weight of $\zeta _A(\mathfrak {s})$. We note that Carlitz zeta values are exactly the depth one MZV's. Thakur [Reference Thakur36] showed that all MZV's are nonzero. We refer the reader to [Reference Anderson and Thakur3, Reference Anderson and Thakur4, Reference Gezmis and Pellarin18, Reference Rodriguez and Thakur28, Reference Lara Rodriguez and Thakur29, Reference Pellarin33, Reference Thakur35, Reference Thakur37, Reference Thakur38, Reference Thakur39, Reference Thakur, Böckle, Goss, Hartl and Papanikolas40, Reference Yu42] for more details on these objects.
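Underlying the analogy is a classical depth-one evaluation of Carlitz (not stated above): $\sum _{a \in A_+,\, \deg a = d} 1/a = 1/\ell _d$, where $\ell _d = \prod _{i=1}^d (\theta -\theta ^{q^i})$ (this product reappears in §1.3.2). The identity can be verified mechanically for small q and d; the sketch below encodes $\mathbb F_2[\theta ]$ polynomials as bitmask integers and checks the equivalent polynomial identity $\ell _d \cdot \sum _a P/a = P$, with P the product of all monic polynomials of degree d:

```python
# Polynomials over F_2 encoded as Python ints (bit i = coefficient of theta^i).

def pmul(a, b):                      # carry-less (XOR) multiplication
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdiv(a, b):                      # exact polynomial division a / b over F_2
    q = 0
    while a and a.bit_length() >= b.bit_length():
        sh = a.bit_length() - b.bit_length()
        q ^= 1 << sh
        a ^= b << sh
    assert a == 0, "division was not exact"
    return q

theta = 0b10
for d in range(1, 4):
    # ell_d = prod_{i=1}^d (theta - theta^(2^i)); note -1 = 1 in F_2
    ell = 1
    for i in range(1, d + 1):
        ell = pmul(ell, theta ^ (1 << 2**i))
    monics = range(1 << d, 1 << (d + 1))  # monic polynomials of degree d
    P = 1
    for a in monics:
        P = pmul(P, a)
    S = 0                                 # S = sum_a P/a, so sum_a 1/a = S/P
    for a in monics:
        S ^= pdiv(P, a)
    assert pmul(ell, S) == P              # i.e., sum_a 1/a = 1/ell_d
print("Carlitz identity verified for q = 2, d = 1, 2, 3")
```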
As in the classical setting, the main goal of the theory is to understand all linear relations over K between MZV's. We now state analogues of Zagier–Hoffman’s conjectures in positive characteristic formulated by Thakur in [Reference Thakur39, §8] and by Todd in [Reference Todd41]. For $w \in \mathbb {N}$, we denote by $\mathcal Z_w$ the K-vector space spanned by the MZV's of weight w. We denote by $\mathcal T_w$ the set of $\zeta _A(\mathfrak {s})$, where $\mathfrak {s}=(s_1,\ldots ,s_r) \in \mathbb {N}^r$ is of weight w with $1\leq s_i\leq q$ for $1\leq i\leq r-1$ and $s_r<q$.
Conjecture 1.6 (Zagier’s conjecture in positive characteristic)
Letting
$$ \begin{align*} d(w)=\begin{cases} 1 & \text{ if } w=0, \\ 2^{w-1} & \text{ if } 1 \leq w \leq q-1, \\ 2^{w-1}-1 & \text{ if } w=q, \end{cases} \end{align*} $$
we put $d(w)=\sum _{i=1}^q d(w-i)$ for $w>q$. Then for any $w \in \mathbb {N}$, we have
$$\begin{align*}\dim_K \mathcal Z_w = d(w). \end{align*}$$
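For a fixed q, the sequence $d(w)$ is easy to tabulate (a quick sketch; for q = 2 the recurrence becomes $d(w)=d(w-1)+d(w-2)$, so the values are Fibonacci-like):

```python
# Conjectural dimension d(w) for A = F_q[theta]:
# d(0) = 1, d(w) = 2^(w-1) for 1 <= w <= q-1, d(q) = 2^(q-1) - 1,
# and d(w) = d(w-1) + ... + d(w-q) for w > q.
def d_seq(q, wmax):
    d = [1] + [2 ** (w - 1) for w in range(1, q)] + [2 ** (q - 1) - 1]
    for w in range(q + 1, wmax + 1):
        d.append(sum(d[w - i] for i in range(1, q + 1)))
    return d

print(d_seq(3, 6))  # [1, 1, 2, 3, 6, 11, 20]
assert d_seq(3, 6) == [1, 1, 2, 3, 6, 11, 20]
assert d_seq(2, 8) == [1, 1, 1, 2, 3, 5, 8, 13, 21]  # Fibonacci-like for q = 2
```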
Conjecture 1.7 (Hoffman’s conjecture in positive characteristic)
A K-basis for $\mathcal Z_w$ is given by $\mathcal T_w$, consisting of the $\zeta _A(s_1,\ldots ,s_r)$ of weight w with $s_i \leq q$ for $1 \leq i <r$ and $s_r<q$.
In [Reference Ngo Dac31], one of the authors succeeded in proving the algebraic part of these conjectures (see [Reference Ngo Dac31, Theorem A]): For all $w \in \mathbb {N}$, we have
$$\begin{align*}\dim_K \mathcal Z_w \leq d(w). \end{align*}$$
This part is based on shuffle relations for MZV's due to Chen and Thakur and some operations introduced by Todd. For the transcendental part, he used the Anderson–Brownawell–Papanikolas criterion in [Reference Anderson, Brownawell and Papanikolas2] and proved sharp lower bounds for small weights $w \leq 2q-2$ (see [Reference Ngo Dac31, Theorem D]). It has already been noted that it is very difficult to extend his method to general weights (see [Reference Ngo Dac31] for more details).
1.2.2. AMZV's in positive characteristic
Recently, Harada [Reference Harada21] introduced the alternating multiple zeta values in positive characteristic (AMZV's) as follows. Letting $\mathfrak {s}=(s_1,\dots ,s_r) \in \mathbb {N}^r$ and $\boldsymbol {\varepsilon }=(\varepsilon _1,\dots ,\varepsilon _r) \in (\mathbb F_q^\times )^r$, we define
$$ \begin{align*} \zeta_A \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} := \sum \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_r^{\deg a_r }}{a_1^{s_1} \dots a_r^{s_r}} \in K_{\infty}, \end{align*} $$
where the sum runs through the set of tuples $(a_1,\ldots ,a_r) \in A_+^r$ with $\deg a_1>\cdots >\deg a_r$. The numbers r and $w(\mathfrak {s})=s_1+\dots +s_r$ are called the depth and the weight of $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$, respectively. We set $\zeta _A \begin {pmatrix} \emptyset \\ \emptyset \end {pmatrix} = 1$. Harada [Reference Harada21] extended basic properties of MZV's to AMZV's, that is, nonvanishing, shuffle relations, period interpretation and linear independence. Again, the main goal of this theory is to determine all linear relations over K between AMZV's. It is natural to ask whether the previous work on analogues of the Zagier–Hoffman conjectures can be extended to this setting. More precisely, if for $w \in \mathbb {N}$ we denote by $\mathcal {AZ}_w$ the K-vector space spanned by the AMZV's of weight w, then we would like to determine the dimensions of $\mathcal {AZ}_w$ and exhibit nice bases of these vector spaces.
1.3. Main results
1.3.1. Statements of the main results
In this manuscript, we present complete answers to all the previous conjectures and problems raised in §1.2.
First, for all w we calculate the dimension of $\mathcal {AZ}_w$ and give an explicit basis in the spirit of Hoffman.
Theorem A. We define a Fibonacci-like sequence $s(w)$ as follows. We put
$$ \begin{align*} s(w) = \begin{cases} (q - 1) q^{w-1}& \text{if } 1 \leq w < q, \\ (q - 1) (q^{w-1} - 1) &\text{if } w = q, \end{cases} \end{align*} $$
and for $w>q$, $s(w)=(q-1)\sum _{i = 1}^{q-1} s(w-i) + s(w - q)$. Then for all $w \in \mathbb {N}$,
$$\begin{align*}\dim_K \mathcal{AZ}_w = s(w). \end{align*}$$
Further, we can exhibit a Hoffman-like basis of $\mathcal {AZ}_w$.
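As with $d(w)$, the sequence $s(w)$ is easy to tabulate (a quick sketch). For q = 2 the group $\mathbb F_2^\times $ is trivial, so AMZV's coincide with MZV's; consistently, the recurrence then reduces to $s(w)=s(w-1)+s(w-2)$ and the values agree with $d(w)$:

```python
# s(w) = (q-1) q^(w-1) for 1 <= w < q, s(q) = (q-1)(q^(q-1) - 1),
# and s(w) = (q-1)(s(w-1) + ... + s(w-q+1)) + s(w-q) for w > q.
def s_seq(q, wmax):
    s = [None]  # 1-indexed: s[w] for w >= 1
    s += [(q - 1) * q ** (w - 1) for w in range(1, q)]
    s.append((q - 1) * (q ** (q - 1) - 1))  # w = q
    for w in range(q + 1, wmax + 1):
        s.append((q - 1) * sum(s[w - i] for i in range(1, q)) + s[w - q])
    return s[1:]

print(s_seq(3, 5))  # [2, 6, 16, 46, 130]
assert s_seq(3, 5) == [2, 6, 16, 46, 130]
assert s_seq(2, 8) == [1, 1, 2, 3, 5, 8, 13, 21]  # equals d(w) when q = 2
```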
Second, we give a proof of both Conjectures 1.6 and 1.7 which generalizes the previous work of the fourth author [Reference Ngo Dac31].
Theorem B. For all $w \in \mathbb {N}$, $\mathcal T_w$ is a K-basis for $\mathcal Z_w$. In particular,
$$\begin{align*}\dim_K \mathcal Z_w = d(w). \end{align*}$$
We recall that analogues of Goncharov’s conjectures in positive characteristic were proved in [Reference Chang9]. As a consequence, we give a framework for understanding all linear relations over K between MZV's and AMZV's and settle the main goals of these theories.
1.3.2. Ingredients of the proofs
Let us emphasize here that Theorem A is much harder than Theorem B and that it is not enough to work within the setting of AMZV's. On the one hand, although it is straightforward to extend the algebraic part to AMZV's following the same lines as in [Reference Ngo Dac31, §2 and §3], we only obtain a weak version of Brown’s theorem in this setting. More precisely, we get a set of generators for $\mathcal {AZ}_w$, but it is too large to be a basis of this vector space. For small weights, we find ad hoc arguments to produce a smaller set of generators, but these do not work for arbitrary weights (see §5.4). Roughly speaking, in [Reference Ngo Dac31, §2 and §3] we have an algorithm which moves forward so that we can express any AMZV as a linear combination of generators. But we lack precise control on the coefficients in these expressions, so we cannot go backward and change bases. On the other hand, the transcendental part for AMZV's shares the same difficulties with the case of MZV's as noted above.
In this paper, we use a completely new approach which is based on the study of alternating Carlitz multiple polylogarithms (ACMPL's for short) defined as follows. We put $\ell _0:=1$ and $\ell _d:=\prod _{i=1}^d (\theta -\theta ^{q^i})$ for all $d \in \mathbb N$. For any tuple $\mathfrak {s}=(s_1,\ldots ,s_r) \in \mathbb {N}^r$ and $\boldsymbol {\varepsilon }=(\varepsilon _1,\dots ,\varepsilon _r) \in (\mathbb F_q^\times )^r$, we introduce the corresponding alternating Carlitz multiple polylogarithm by
$$ \begin{align*} \operatorname{\mathrm{Li}}\begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} := \sum\limits_{d_1> \dots > d_r\geq 0} \dfrac{\varepsilon_1^{d_1} \dots \varepsilon_r^{d_r} }{\ell_{d_1}^{s_1} \dots \ell_{d_r}^{s_r}} \in K_{\infty}. \end{align*} $$
We also set $\operatorname {\mathrm {Li}} \begin {pmatrix} \emptyset \\ \emptyset \end {pmatrix} = 1$.
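Note that convergence is immediate: each factor $\theta -\theta ^{q^i}$ has degree $q^i$, so $\deg \ell _d = q+q^2+\cdots +q^d$ and the general term of the series tends to 0 in $K_\infty $. A quick sketch verifying the degree formula for q = 2 (polynomials over $\mathbb F_2$ encoded as bitmask integers, an illustrative representation):

```python
# deg(ell_d) = q + q^2 + ... + q^d, which guarantees that the terms
# of a Carlitz multiple polylogarithm tend to 0 in K_infinity.
def pmul(a, b):                      # carry-less (XOR) multiplication over F_2
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

theta = 0b10
for d in range(1, 6):
    ell = 1
    for i in range(1, d + 1):
        ell = pmul(ell, theta ^ (1 << 2**i))  # theta - theta^(2^i); -1 = 1 in F_2
    deg = ell.bit_length() - 1
    assert deg == sum(2**i for i in range(1, d + 1))
print("degree formula verified for d = 1, ..., 5")
```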
The key result is to establish a nontrivial connection between AMZV's and ACMPL's which allows us to go back and forth between these objects (see Theorem 5.9). To do this, following [Reference Ngo Dac31, §2 and §3] we use stuffle relations to develop an algebraic theory for ACMPL's and obtain a weak version of Brown’s theorem, that is, a set of generators for the K-vector space $\mathcal {AL}_w$ spanned by ACMPL's of weight w. We observe that this set of generators is exactly the same as that for AMZV's. Thus, $\mathcal {AL}_w=\mathcal {AZ}_w$, which provides a dictionary between AMZV's and ACMPL's.
We then determine all K-linear relations between ACMPL's (see Theorem 4.6). The proof we give here, while using similar tools as in [Reference Ngo Dac31], differs in some crucial points and requires three new ingredients.
The first new ingredient is the construction of an appropriate Hoffman-like basis $\mathcal {AS}_w$ of $\mathcal {AL}_w$. In fact, our transcendental method dictates that we must find a complete system of bases $\mathcal {AS}_w$ of $\mathcal {AL}_w$ indexed by weights w with strong constraints as given in Theorem 3.4. The failure to find such a system of bases is the main obstacle to generalizing [Reference Ngo Dac31, Theorem D] (see §5.1 and [Reference Ngo Dac31, Remark 6.3] for more details).
The second new ingredient is formulating and proving (a strong version of) Brown’s theorem for ACMPL's (see Theorem 2.11). As mentioned before, the method in [Reference Ngo Dac31] only gives a weak version of Brown’s theorem for ACMPL's, as the set of generators is not a basis. Roughly speaking, given any ACMPL, we can express it as a linear combination of generators. The fact that stuffle relations for ACMPL's are ‘simpler’ than shuffle relations for AMZV's gives more precise information about the coefficients of these expressions. Consequently, we show that a certain transition matrix is invertible and obtain Brown’s theorem for ACMPL's. This completes the algebraic part for ACMPL's.
The last new ingredient is proving the transcendental part for ACMPL's in full generality, that is, that the ACMPL's in $\mathcal {AS}_w$ are linearly independent over K (see Theorem 4.4). We emphasize that we do need the full strength of the algebraic part to prove the transcendental part. The proof follows the same lines as in [Reference Ngo Dac31, §4 and §5], which is formulated in a more general setting in §3. First, we have to consider not only linear relations between ACMPL's in $\mathcal {AS}_w$ but also those between ACMPL's in $\mathcal {AS}_w$ and the suitable power $\widetilde \pi ^w$ of the Carlitz period $\widetilde \pi $. Second, starting from such a nontrivial relation, we apply the Anderson–Brownawell–Papanikolas criterion in [Reference Anderson, Brownawell and Papanikolas2] and reduce to solving a system of $\sigma $-linear equations. While in [Reference Ngo Dac31, §4 and §5] this system has no nontrivial solution, which allows us to conclude, our system has a unique solution for ‘even’ w (i.e., when $q-1$ divides w). This means that for such w, up to a scalar, there is a unique linear relation between the ACMPL's in $\mathcal {AS}_w$ and $\widetilde \pi ^w$. The last step consists of showing that in this unique relation, the coefficient of $\widetilde \pi ^w$ is nonzero. Unexpectedly, this is a consequence of Brown’s theorem for ACMPL's mentioned above.
1.3.3. Plan of the paper
We will briefly explain the organization of the manuscript.
• In §2, we recall the definition and basic properties of ACMPL's. We then develop an algebraic theory for these objects and obtain weak and strong Brown’s theorems (see Proposition 2.10 and Theorem 2.11).
• In §3, we generalize some transcendental results in [Reference Ngo Dac31] and give statements in a more general setting (see Theorem 3.4).
• In §4, we prove transcendental results for ACMPL's and completely determine all linear relations between ACMPL's (see Theorems 4.4 and 4.6).
• Finally, in §5 we present two applications and prove the main results, that is, Theorems A and B. The first application is to prove the above connection between ACMPL's and AMZV's and then to determine all linear relations between AMZV's in positive characteristic (see §5.1). The second application is a proof of Zagier–Hoffman’s conjectures in positive characteristic which generalizes the main results of [Reference Ngo Dac31] (see §5.3).
1.4. Remark
When our work was released in arXiv:2205.07165, Chieh-Yu Chang informed us that Chen, Mishiba and he were working towards a proof of Theorem B (e.g., the MZV version) by using a similar method, and their paper [Reference Chang, Chen and Mishiba10] is now available at arXiv:2205.09929.
2. Weak and strong Brown’s theorems for ACMPL's
In this section, we first extend the work of [Reference Ngo Dac31] and develop an algebraic theory for ACMPL's. Then we prove a weak version of Brown’s theorem for ACMPL's (see Proposition 2.10) which gives a set of generators for the K-vector space spanned by ACMPL's of weight w. The techniques of Sections 2.1–2.3 are similar to those of [Reference Ngo Dac31], and the reader may wish to skip the details.
Contrary to what happens in [Reference Ngo Dac31], it turns out that the previous set of generators is too large to be a basis. Consequently, in §2.4 we introduce another set of generators and prove a strong version of Brown’s theorem for ACMPL's (see Theorem 2.11).
2.1. Analogues of power sums
2.1.1.
 We recall and introduce some notation from [Reference Ngo Dac31]. A tuple $\mathfrak {s}$ is a sequence of the form $\mathfrak {s}=(s_1,\dots ,s_n) \in \mathbb N^n$. We call $\operatorname {\mathrm {depth}}(\mathfrak {s}) = n$ the depth and $w(\mathfrak {s}) = s_1 + \dots + s_n$ the weight of $\mathfrak {s}$. If $\mathfrak {s}$ is nonempty, we put $\mathfrak {s}_- := (s_2, \dotsc , s_n)$.
 Let $\mathfrak {s}$ and $\mathfrak {t}$ be two tuples of positive integers. We set $s_i = 0$ (resp. $t_i = 0$) for all $i> \operatorname {\mathrm {depth}}(\mathfrak {s})$ (resp. $i> \operatorname {\mathrm {depth}}(\mathfrak {t})$). We say that $\mathfrak {s} \leq \mathfrak {t}$ if $s_1 + \cdots + s_i \leq t_1 + \cdots + t_i$ for all $i \in \mathbb {N}$ and $w(\mathfrak {s}) = w(\mathfrak {t})$. This defines a partial order on tuples of positive integers.
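For illustration (this example is ours, not from the source), the definition gives:

```latex
% (1,2) <= (2,1): both tuples have weight 3, and the partial sums of (1,2)
% are dominated by those of (2,1).
(1,2) \leq (2,1): \quad 1 \leq 2, \quad 1+2 \leq 2+1, \quad w(1,2) = w(2,1) = 3.
% By contrast, (2,1) \not\leq (1,2) since 2 > 1, and tuples of different
% weights are never comparable.
```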
 For $i \in \mathbb {N}$, we define $T_i(\mathfrak {s})$ to be the tuple $(s_1+\dots +s_i,s_{i+1},\dots ,s_n)$. Further, for $i \in \mathbb N$, if $T_i(\mathfrak {s}) \leq T_i(\mathfrak t)$, then $T_k(\mathfrak {s}) \leq T_k(\mathfrak t)$ for all $k \geq i$.
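As a worked example (ours, not from the source), for $\mathfrak{s} = (1,2,3)$:

```latex
% T_i collapses the first i entries into their sum:
T_1(1,2,3) = (1,2,3), \quad T_2(1,2,3) = (3,3), \quad T_3(1,2,3) = (6).
% Illustrating the monotonicity claim with s = (1,2), t = (2,1):
% T_1(s) = (1,2) \leq (2,1) = T_1(t), and indeed
% T_2(s) = (3) \leq (3) = T_2(t).
```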
 Let $\mathfrak {s}=(s_1,\dots ,s_n) \in \mathbb N^n$ be a tuple of positive integers. We denote by $0 \leq i \leq n$ the largest integer such that $s_j \leq q$ for all $1 \leq j \leq i$ and define the initial tuple $\operatorname {\mathrm {Init}}(\mathfrak {s})$ of $\mathfrak {s}$ to be the tuple
 $$\begin{align*}\operatorname{\mathrm{Init}}(\mathfrak{s}):=(s_1,\dots,s_i). \end{align*}$$
In particular, if $s_1>q$, then $i=0$ and $\operatorname {\mathrm {Init}}(\mathfrak {s})$ is the empty tuple.
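For instance (our example, not from the source), take $q = 3$:

```latex
% With s = (2,3,5,1): s_1 = 2 <= q and s_2 = 3 <= q, but s_3 = 5 > q,
% so i = 2 and
\operatorname{Init}(2,3,5,1) = (2,3).
% With s = (5,1) we have s_1 = 5 > q, so Init(s) is the empty tuple.
```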
 For two different tuples $\mathfrak {s}$ and $\mathfrak t$, we consider the lexicographic order for initial tuples and write $\operatorname {\mathrm {Init}}(\mathfrak t) \preceq \operatorname {\mathrm {Init}}(\mathfrak {s})$ (resp. $\operatorname {\mathrm {Init}}(\mathfrak t) \prec \operatorname {\mathrm {Init}}(\mathfrak {s})$, $\operatorname {\mathrm {Init}}(\mathfrak t) \succeq \operatorname {\mathrm {Init}}(\mathfrak {s})$ and $\operatorname {\mathrm {Init}}(\mathfrak t) \succ \operatorname {\mathrm {Init}}(\mathfrak {s})$).
2.1.2.
 Letting $\mathfrak {s} = (s_1, \dotsc , s_n) \in \mathbb {N}^{n}$ and $\boldsymbol {\varepsilon } = (\varepsilon _1, \dotsc , \varepsilon _n) \in (\mathbb {F}_q^{\times })^{n}$, we set $\mathfrak {s}_- := (s_2, \dotsc , s_n)$ and $\boldsymbol {\varepsilon }_- := (\varepsilon _2, \dotsc , \varepsilon _n)$. By definition, an array $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ is of the form
 $$ \begin{align*}\begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dotsb & \varepsilon_n \\ s_1 & \dotsb & s_n \end{pmatrix}.\end{align*} $$
We call $\operatorname {\mathrm {depth}}(\mathfrak {s}) = n$ the depth, $w(\mathfrak {s}) = s_1 + \dots + s_n$ the weight and $\chi (\boldsymbol {\varepsilon }) = \varepsilon _1 \dots \varepsilon _n$ the character of $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$.
 We say that $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix}$ if the following conditions are satisfied:
1. $\chi (\boldsymbol {\varepsilon }) = \chi (\boldsymbol {\epsilon })$,
2. $w(\mathfrak {s}) = w(\mathfrak {t})$,
3. $s_1 + \dotsb + s_i \leq t_1 + \dotsb + t_i$ for all $i \in \mathbb {N}$.
We note that this only defines a preorder on arrays.
Remark 2.1. We claim that if $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix}$, then $\operatorname {\mathrm {depth}}(\mathfrak {s}) \geq \operatorname {\mathrm {depth}}(\mathfrak {t})$. Indeed, assume that $\operatorname {\mathrm {depth}}(\mathfrak {s}) < \operatorname {\mathrm {depth}}(\mathfrak {t})$. Then
 $$ \begin{align*} w(\mathfrak{s}) = s_1 + \cdots + s_{\operatorname{\mathrm{depth}}(\mathfrak{s})} \leq t_1 + \cdots + t_{\operatorname{\mathrm{depth}}(\mathfrak{s})} < t_1 + \cdots + t_{\operatorname{\mathrm{depth}}(\mathfrak{t})} = w(\mathfrak{t}), \end{align*} $$
which contradicts the condition $w(\mathfrak {s}) = w(\mathfrak {t})$.
2.1.3.
 We recall the power sums and MZV's studied by Thakur [Reference Thakur38]. For $d \in \mathbb {Z}$ and for $\mathfrak {s}=(s_1,\dots ,s_n) \in \mathbb {N}^n$, we introduce
 $$ \begin{align*} S_d(\mathfrak{s}) := \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d = \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{1}{a_1^{s_1} \dots a_n^{s_n}} \in K \end{align*} $$
and
 $$ \begin{align*} S_{<d}(\mathfrak s) := \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d>\deg a_1> \dots > \deg a_n\geq 0}} \dfrac{1}{a_1^{s_1} \dots a_n^{s_n}} \in K. \end{align*} $$
We define the multiple zeta value (MZV) by
 $$ \begin{align*} \zeta_A(\mathfrak{s}) := \sum \limits_{d \geq 0} S_d(\mathfrak{s}) = \sum \limits_{d \geq 0} \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d = \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{1}{a_1^{s_1} \dots a_n^{s_n}} \in K_{\infty}. \end{align*} $$
We put $\zeta _A(\emptyset ) = 1$. We call $\operatorname {\mathrm {depth}}(\mathfrak {s}) = n$ the depth and $w(\mathfrak {s}) = s_1 + \dots + s_n$ the weight of $\zeta _A(\mathfrak {s})$.
 We also recall that $\ell _0 := 1$ and $\ell _d := \prod ^d_{i=1}(\theta - \theta ^{q^i})$ for all $d \in \mathbb {N}$. Letting $\mathfrak s = (s_1 , \dots , s_n) \in \mathbb {N}^n$, for $d \in \mathbb {Z}$, we define analogues of power sums by
 $$ \begin{align*} \operatorname{\mathrm{Si}}_d(\mathfrak s) := \sum\limits_{d=d_1> \dots > d_n\geq 0} \dfrac{1}{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K, \end{align*} $$
and
 $$ \begin{align*} \operatorname{\mathrm{Si}}_{<d}(\mathfrak s) := \sum\limits_{d>d_1> \dots > d_n\geq 0} \dfrac{1 }{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K. \end{align*} $$
We introduce the Carlitz multiple polylogarithm (CMPL for short) by
 $$ \begin{align*} \operatorname{\mathrm{Li}}(\mathfrak{s}) := \sum \limits_{d \geq 0} \operatorname{\mathrm{Si}}_d(\mathfrak{s}) = \sum \limits_{d \geq 0} \ \sum\limits_{d=d_1> \dots > d_n\geq 0} \dfrac{1}{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K_{\infty}. \end{align*} $$
We set $\operatorname {\mathrm {Li}}(\emptyset ) = 1$. We call $\operatorname {\mathrm {depth}}(\mathfrak {s}) = n$ the depth and $w(\mathfrak {s}) = s_1 + \dots + s_n$ the weight of $\operatorname {\mathrm {Li}}(\mathfrak {s})$.
2.1.4.
 Let $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \dots & \varepsilon _n \\ s_1 & \dots & s_n \end {pmatrix}$ be an array. For $d \in \mathbb {Z}$, we define
 $$ \begin{align*} S_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} := \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d = \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K \end{align*} $$
and
 $$ \begin{align*} S_{<d} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} := \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d>\deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K. \end{align*} $$
We also introduce
 $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} := \sum\limits_{d=d_1> \dots > d_n\geq 0} \dfrac{\varepsilon_1^{d_1} \dots \varepsilon_n^{d_n} }{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K, \end{align*} $$
and
 $$ \begin{align*} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} := \sum\limits_{d>d_1> \dots > d_n\geq 0} \dfrac{\varepsilon_1^{d_1} \dots \varepsilon_n^{d_n} }{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K. \end{align*} $$
One verifies easily the following formulas:
 $$ \begin{align} \operatorname{\mathrm{Si}}_{d} \begin{pmatrix} \varepsilon \\ s \end{pmatrix} &= \varepsilon^d \operatorname{\mathrm{Si}}_d(s), \end{align} $$
 $$ \begin{align} \operatorname{\mathrm{Si}}_d \begin{pmatrix} 1& \dots & 1 \\ s_1 & \dots & s_n \end{pmatrix} &= \operatorname{\mathrm{Si}}_{d}(s_1, \dots, s_n), \end{align} $$
 $$ \begin{align} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} 1& \dots & 1 \\ s_1 & \dots & s_n \end{pmatrix} &= \operatorname{\mathrm{Si}}_{<d}(s_1, \dots, s_n), \end{align} $$
 $$ \begin{align} \operatorname{\mathrm{Si}}_{d} \begin{pmatrix} \boldsymbol{\varepsilon} \\\mathfrak{s} \end{pmatrix} &= \operatorname{\mathrm{Si}}_{d} \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\varepsilon}_{-} \\ \mathfrak{s}_{-} \end{pmatrix}. \end{align} $$
Then we define the alternating Carlitz multiple polylogarithm (ACMPL for short) by
 $$ \begin{align*} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} := \sum \limits_{d \geq 0} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \sum\limits_{d_1> \dots > d_n\geq 0} \dfrac{\varepsilon_1^{d_1} \dots \varepsilon_n^{d_n} }{\ell_{d_1}^{s_1} \dots \ell_{d_n}^{s_n}} \in K_{\infty}. \end{align*} $$
Recall that $\operatorname {\mathrm {Li}} \begin {pmatrix} \emptyset \\ \emptyset \end {pmatrix} = 1$. We call $\operatorname {\mathrm {depth}}(\mathfrak {s}) = n$ the depth, $w(\mathfrak {s}) = s_1 + \dots + s_n$ the weight and $\chi (\boldsymbol {\varepsilon }) = \varepsilon _1 \dots \varepsilon _n$ the character of $\operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$.
Lemma 2.2. For all $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ as above such that $s_i \leq q$ for all i, we have
 $$ \begin{align*} S_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{align*} $$
Therefore,
 $$ \begin{align*} \zeta_A \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix}. \end{align*} $$
Proof. We denote by $\mathcal {J}$ the set of all arrays $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \dots & \varepsilon _n \\ s_1 & \dots & s_n \end {pmatrix}$ for some n such that $s_1, \dots , s_n \leq q$.
 The second statement follows at once from the first statement. We prove the first statement by induction on $\operatorname {\mathrm {depth}}(\mathfrak {s})$. For $\operatorname {\mathrm {depth}}(\mathfrak {s}) = 1$, we let $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon \\ s \end {pmatrix}$ with $s \le q$. It follows from special cases of power sums in [Reference Thakur37, §3.3] that for all $d \in \mathbb {Z}$, $ S_{d} \begin {pmatrix} \varepsilon \\ s \end {pmatrix} = \dfrac {\varepsilon ^d}{\ell ^s_d} = \operatorname {\mathrm {Si}}_{d} \begin {pmatrix} \varepsilon \\ s \end {pmatrix}. $
 Suppose that the first statement holds for all arrays $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} \in \mathcal {J}$ with $\operatorname {\mathrm {depth}}(\mathfrak {s}) = n - 1$ and for all $d \in \mathbb {Z}$. Let $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \dots & \varepsilon _n \\ s_1 & \dots & s_n \end {pmatrix}$ be an element of $\mathcal {J}$. Note that if $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} \in \mathcal {J}$, then $\begin {pmatrix} \boldsymbol {\varepsilon }_{-} \\ \mathfrak {s}_{-} \end {pmatrix} \in \mathcal {J}$. It follows from the induction hypothesis and the fact that $s_1 \leq q$ that for all $d \in \mathbb {Z}$,
 $$ \begin{align*} S_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = S_d \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix} S_{<d} \begin{pmatrix} \boldsymbol{\varepsilon}_{-} \\ \mathfrak{s}_{-} \end{pmatrix} = \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\varepsilon}_{-} \\ \mathfrak{s}_{-} \end{pmatrix} = \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix}. \end{align*} $$
This proves the lemma.
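The base case of the proof rests on the depth-one identity $S_d\begin{pmatrix}\varepsilon\\ s\end{pmatrix} = \varepsilon^d/\ell_d^s$ for $s \leq q$; since the character $\varepsilon^d$ factors out, this amounts to the classical identity $S_d(s) = 1/\ell_d^s$. The following sketch (our illustration, not from the source; it fixes $q = p = 3$ and $d = 1$) verifies this numerically by brute-force summation over monic polynomials in $\mathbb{F}_3[\theta]$:

```python
# Hypothetical illustration (not from the paper): verify the depth-one
# identity S_d(s) = 1/ell_d^s for s <= q, with q = p = 3 and d = 1.
# Polynomials over F_p are coefficient lists (index = degree in theta).
p = 3

def trim(f):
    """Drop trailing zero coefficients so equality tests are canonical."""
    while f and f[-1] % p == 0:
        f.pop()
    return f

def add(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0) +
                  (g[i] if i < len(g) else 0)) % p for i in range(n)])

def mul(f, g):
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return trim(h)

def power(f, e):
    r = [1]
    for _ in range(e):
        r = mul(r, f)
    return r

# Monic polynomials of degree 1: theta + c for c in F_p.
monics = [[c, 1] for c in range(p)]
# ell_1 = theta - theta^q.
ell1 = trim([0, 1, 0, (-1) % p])

def check(s):
    """Check that sum over monic a of degree 1 of 1/a^s equals 1/ell_1^s."""
    denom = [1]   # common denominator: product of a^s over all monic a
    numer = []    # numerator of the sum over that common denominator
    for a in monics:
        denom = mul(denom, power(a, s))
    for a in monics:
        term = [1]
        for b in monics:
            if b is not a:
                term = mul(term, power(b, s))
        numer = add(numer, term)
    # Cross-multiply: numer/denom == 1/ell_1^s iff numer * ell_1^s == denom.
    return mul(numer, power(ell1, s)) == denom

assert all(check(s) for s in range(1, p + 1))  # s = 1, ..., q
```

For $s = 1$ one can check by hand that the sum is $2/(\theta^3 - \theta) = 1/\ell_1$, matching what the code computes.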
2.1.5.
 Let $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$, $\begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix}$ be two arrays. We set $s_i = 0$ and $\varepsilon _i = 1$ for all $i> \operatorname {\mathrm {depth}}(\mathfrak {s})$, and $t_i = 0$ and $\epsilon _i = 1$ for all $i> \operatorname {\mathrm {depth}}(\mathfrak {t})$. We define the following operation:
 $$ \begin{align*} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} + \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} := \begin{pmatrix} \boldsymbol{\varepsilon} \boldsymbol{\epsilon} \\ \mathfrak{s} + \mathfrak{t} \end{pmatrix}, \end{align*} $$
where $\boldsymbol {\varepsilon } \boldsymbol {\epsilon }$ and $\mathfrak {s} + \mathfrak {t}$ are defined by componentwise multiplication and componentwise addition, respectively.
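To illustrate the padding convention (our example, not from the source), adding an array of depth 2 and an array of depth 1 gives:

```latex
% The shorter array is padded with t_2 = 0 and epsilon_2 = 1, so only the
% first column changes:
\begin{pmatrix} \varepsilon_1 & \varepsilon_2 \\ s_1 & s_2 \end{pmatrix}
+
\begin{pmatrix} \epsilon_1 \\ t_1 \end{pmatrix}
=
\begin{pmatrix} \varepsilon_1\epsilon_1 & \varepsilon_2 \\ s_1 + t_1 & s_2 \end{pmatrix}.
```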
We now consider some formulas related to analogues of power sums. It is easily seen that
 $$ \begin{align} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon \\ s \end{pmatrix} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \epsilon \\ t \end{pmatrix} = \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon \epsilon \\ s + t \end{pmatrix} , \end{align} $$
hence, for $\mathfrak {t} = (t_1, \dots , t_n)$,
 $$ \begin{align} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon \\ s \end{pmatrix} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon \epsilon_1 & \boldsymbol{\epsilon}_{-} \\ s + t_1 & \mathfrak{t}_{-} \end{pmatrix}. \end{align} $$
More generally, we deduce the following proposition which will be used frequently later.
Proposition 2.3. Let 
 $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $
,
$ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $
, 
 $ \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $
be two arrays. Then we have the following:
$ \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $
be two arrays. Then we have the following: 
- 
1. There exist  $f_i \in \mathbb {F}_q$
 and arrays $f_i \in \mathbb {F}_q$
 and arrays $ \begin {pmatrix} \boldsymbol {\mu }_i \\ \mathfrak {u}_i \end {pmatrix} $
 with $ \begin {pmatrix} \boldsymbol {\mu }_i \\ \mathfrak {u}_i \end {pmatrix} $
 with $ \begin {pmatrix} \boldsymbol {\mu }_i \\ \mathfrak {u}_i \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $
 and $ \begin {pmatrix} \boldsymbol {\mu }_i \\ \mathfrak {u}_i \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $
 and $\operatorname {\mathrm {depth}}(\mathfrak {u}_i) \leq \operatorname {\mathrm {depth}}(\mathfrak {s}) + \operatorname {\mathrm {depth}}(\mathfrak {t})$
 for all i such that $\operatorname {\mathrm {depth}}(\mathfrak {u}_i) \leq \operatorname {\mathrm {depth}}(\mathfrak {s}) + \operatorname {\mathrm {depth}}(\mathfrak {t})$
 for all i such that $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{align*} $$ $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{align*} $$
- 
2. There exist  $f^{\prime }_i \in \mathbb {F}_q$
 and arrays $f^{\prime }_i \in \mathbb {F}_q$
 and arrays $ \begin {pmatrix} \boldsymbol {\mu }^{\prime }_i \\ \mathfrak {u}^{\prime }_i \end {pmatrix} $
 with $ \begin {pmatrix} \boldsymbol {\mu }^{\prime }_i \\ \mathfrak {u}^{\prime }_i \end {pmatrix} $
 with $ \begin {pmatrix} \boldsymbol {\mu }^{\prime }_i \\ \mathfrak {u}^{\prime }_i \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $
 and $ \begin {pmatrix} \boldsymbol {\mu }^{\prime }_i \\ \mathfrak {u}^{\prime }_i \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $
 and $\operatorname {\mathrm {depth}}(\mathfrak {u}^{\prime }_i) \leq \operatorname {\mathrm {depth}}(\mathfrak {s}) + \operatorname {\mathrm {depth}}(\mathfrak {t})$
 for all i such that $\operatorname {\mathrm {depth}}(\mathfrak {u}^{\prime }_i) \leq \operatorname {\mathrm {depth}}(\mathfrak {s}) + \operatorname {\mathrm {depth}}(\mathfrak {t})$
 for all i such that $$ \begin{align*} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f^{\prime}_i \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\mu}^{\prime}_i \\ \mathfrak{u}^{\prime}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{align*} $$ $$ \begin{align*} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f^{\prime}_i \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\mu}^{\prime}_i \\ \mathfrak{u}^{\prime}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{align*} $$
3. There exist  $f^{\prime \prime }_i \in \mathbb {F}_q$
 and arrays $ \begin {pmatrix} \boldsymbol {\mu }^{\prime \prime }_i \\ \mathfrak {u}^{\prime \prime }_i \end {pmatrix} $
 with $ \begin {pmatrix} \boldsymbol {\mu }^{\prime \prime }_i \\ \mathfrak {u}^{\prime \prime }_i \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $
 and $\operatorname {\mathrm {depth}}(\mathfrak {u}^{\prime \prime }_i) \leq \operatorname {\mathrm {depth}}(\mathfrak {s}) + \operatorname {\mathrm {depth}}(\mathfrak {t})$
 for all i such that
 $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f^{\prime\prime}_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\mu}^{\prime\prime}_i \\ \mathfrak{u}^{\prime\prime}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{align*} $$
Proof. The proof follows the same lines as in [Reference Ngo Dac31, Proposition 2.1]. We omit it here and refer the reader to [Reference Im, Kim, Le, Ngo Dac and Pham25, Proposition 1.3] for more details.
 We denote by 
 $\mathcal {AL}$
 (resp. 
 $\mathcal {L}$
) the K-vector space generated by the ACMPL's (resp. by the CMPL's) and by 
 $\mathcal {AL}_w$
 (resp. 
 $\mathcal {L}_w$
) the K-vector space generated by the ACMPL's of weight w (resp. by the CMPL's of weight w). It follows from Proposition 2.3 that 
 $\mathcal {AL}$
 is a K-algebra. By considering only arrays with trivial characters, Proposition 2.3 implies that 
 $\mathcal {L}$
 is also a K-algebra.
2.2. Operators 
 $\mathcal B^*$
, 
 $\mathcal C$
 and 
 $\mathcal {BC}$
 In this section, we extend the operators 
 $\mathcal B^*$
 and 
 $\mathcal C$
 of Todd [Reference Todd41] and the operator 
 $\mathcal {BC}$
 of Ngo Dac [Reference Ngo Dac31] to the case of ACMPL's.
Definition 2.4. A binary relation is a K-linear combination of the form
 $$ \begin{align*} \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} =0 \quad \text{for all } d \in \mathbb{Z}, \end{align*} $$
where 
 $a_i,b_i \in K$
 and 
 $ \begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix} , \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $
 are arrays of the same weight.
 A binary relation is called a fixed relation if 
 $b_i = 0$
 for all i.
 We denote by 
 $\mathfrak {BR}_{w}$
 the set of all binary relations of weight w. One verifies at once that 
 $\mathfrak {BR}_{w}$
 is a K-vector space. The fundamental relation in [Reference Thakur37, §3.4.6] together with Lemma 2.2 yields an important example of a binary relation:
 $$ \begin{align*} R_{\varepsilon} \colon \quad \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon\\ q \end{pmatrix} + \varepsilon^{-1}D_1 \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} =0, \end{align*} $$
where 
 $D_1 = \theta ^q - \theta \in A$
.
 For later definitions, let 
 $R \in \mathfrak {BR}_w$
 be a binary relation of the form
 $$ \begin{align} R(d) \colon \quad \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} =0, \end{align} $$
where 
 $a_i,b_i \in K$
 and 
 $ \begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix} , \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $
 are arrays of the same weight. We now define some operators on K-vector spaces of binary relations.
2.2.1. Operators 
 $\mathcal B^*$
 Let 
 $ \begin {pmatrix} \sigma \\ v \end {pmatrix} $
 be an array. We define an operator 
 $$ \begin{align*} \mathcal B^*_{\sigma,v} \colon \mathfrak{BR}_{w} \longrightarrow \mathfrak{BR}_{w+v} \end{align*} $$
as follows: For each 
 $R \in \mathfrak {BR}_{w}$
 given as in Equation (2.7), the image 
 $\mathcal B^*_{\sigma ,v}(R) = \operatorname {\mathrm {Si}}_d \begin {pmatrix} \sigma \\ v \end {pmatrix} \sum _{j < d} R(j)$
 is a fixed relation of the form 
 $$ \begin{align*} 0 &= \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \left(\sum \limits_ia_i \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_{<d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} \right) \\ &= \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \operatorname{\mathrm{Si}}_{d} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} \\ &= \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma & \boldsymbol{\varepsilon}_i \\ v& \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma & \boldsymbol{\epsilon}_i \\ v& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma \epsilon_{i1} & \boldsymbol{\epsilon}_{i-} \\ v + t_{i1} & \mathfrak{t}_{i-} \end{pmatrix}. \end{align*} $$
The last equality follows from Equation (2.6).
 Let 
 $ \begin {pmatrix} \Sigma \\ V \end {pmatrix} = \begin {pmatrix} \sigma _1 & \dots & \sigma _n \\ v_1 & \dots & v_n \end {pmatrix} $
 be an array. We define an operator 
 $\mathcal {B}^*_{\Sigma ,V}$
 by 
 $$ \begin{align*} \mathcal B^*_{\Sigma,V}(R) := \mathcal B^*_{\sigma_1,v_1} \circ \dots \circ \mathcal B^*_{\sigma_n,v_n}(R). \end{align*} $$
Lemma 2.5. Let 
 $ \begin {pmatrix} \Sigma \\ V \end {pmatrix} = \begin {pmatrix} \sigma _1 & \dots & \sigma _n \\ v_1 & \dots & v_n \end {pmatrix} $
 be an array. Then 
 $\mathcal B^*_{\Sigma ,V}(R)$
 is of the form 
 $$ \begin{align*} \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \Sigma & \boldsymbol{\varepsilon}_i \\ V& \mathfrak{s}_i \end{pmatrix} & + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \Sigma & \boldsymbol{\epsilon}_i \\ V& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma_1 & \dots & \sigma_{n-1} & \sigma_n \epsilon_{i1} &\boldsymbol{\epsilon}_{i-} \\ v_1 & \dots & v_{n-1} & v_n+ t_{i1} & \mathfrak{t}_{i-} \end{pmatrix} = 0. \end{align*} $$
Proof. From the definition and Equation (2.6), we see that 
 $\mathcal {B}^*_{\sigma _n,v_n}(R)$
 is of the form 
 $$ \begin{align*} \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma_n & \boldsymbol{\varepsilon}_i \\ v_n& \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma_n & \boldsymbol{\epsilon}_i \\ v_n& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma_n \epsilon_{i1} & {\boldsymbol{\epsilon}_i}_- \\ v_n + t_{i1} & {\mathfrak{t}_i}_- \end{pmatrix} = 0. \end{align*} $$
Applying the operator 
 $\mathcal B^*_{\sigma _1,v_1} \circ \dots \circ \mathcal B^*_{\sigma _{n - 1},v_{n - 1}}$
 to 
 $\mathcal {B}^*_{\sigma _n,v_n}(R)$
, the result then follows from the definition.
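 As a minimal illustration of Lemma 2.5 (a direct specialization, not stated in this form in the text), take 
 $n = 1$
 and apply 
 $\mathcal B^*_{\sigma ,v}$
 to the fundamental relation 
 $R_{\varepsilon }$
, for which 
 $a_1 = 1$
 and 
 $b_1 = \varepsilon ^{-1}D_1$
. The lemma then reads
 $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma & \varepsilon \\ v & q \end{pmatrix} + \varepsilon^{-1}D_1 \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma & \varepsilon & 1 \\ v & 1 & q-1 \end{pmatrix} + \varepsilon^{-1}D_1 \operatorname{\mathrm{Si}}_d \begin{pmatrix} \sigma\varepsilon & 1 \\ v+1 & q-1 \end{pmatrix} = 0, \end{align*} $$
so 
 $\mathcal B^*_{\sigma ,v}(R_{\varepsilon })$
 is a fixed relation of weight 
 $v + q$
.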
2.2.2. Operators 
 $\mathcal C$
 Let 
 $ \begin {pmatrix} \Sigma \\ V \end {pmatrix} $
 be an array of weight v. We define an operator 
 $$ \begin{align*} \mathcal C_{\Sigma,V} \colon \mathfrak{BR}_{w} \longrightarrow \mathfrak{BR}_{w+v} \end{align*} $$
as follows: For each 
 $R \in \mathfrak {BR}_{w}$
 given as in Equation (2.7), the image 
 $\mathcal C_{\Sigma ,V}(R) = R(d) \operatorname {\mathrm {Si}}_{<d+1} \begin {pmatrix} \Sigma \\ V \end {pmatrix} $
 is a binary relation of the form 
 $$ \begin{align*} 0 &= \left( \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} \right) \operatorname{\mathrm{Si}}_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} \\ &= \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} \operatorname{\mathrm{Si}}_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} \operatorname{\mathrm{Si}}_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} \\ &= \sum \limits_i c_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} + \sum \limits_i c^{\prime}_i \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \boldsymbol{\mu}^{\prime}_i \\ \mathfrak{u}^{\prime}_i \end{pmatrix}. \end{align*} $$
The last equality follows from Proposition 2.3.
 In particular, the following proposition gives the form of 
 $\mathcal C_{\Sigma ,V}(R_{\varepsilon })$
.
Proposition 2.6. Let 
 $ \begin {pmatrix} \Sigma \\ V \end {pmatrix}$
 be an array with 
 $V = (v_1,V_{-})$
 and 
 $\Sigma = (\sigma _1, \Sigma _{-})$
. Then 
 $\mathcal C_{\Sigma ,V}(R_{\varepsilon })$
 is of the form 
 $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon\sigma_1 & \Sigma_{-} \\ q + v_1 & V_{-} \end{pmatrix} + \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon& \Sigma \\ q & V \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{align*} $$
where 
 $b_i \in A$
 are divisible by 
 $D_1$
 and 
 $ \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $
 are arrays satisfying 
 $ \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} \leq \begin {pmatrix} 1 \\ q - 1 \end {pmatrix} + \begin {pmatrix} \Sigma \\ V \end {pmatrix} $
 for all i.
Proof. We see that 
 $\mathcal C_{\Sigma ,V}(R_{\varepsilon })$
 is of the form 
 $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} \operatorname{\mathrm{Si}}_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \varepsilon^{-1} D_1 \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} \operatorname{\mathrm{Si}}_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} = 0. \end{align*} $$
It follows from Equation (2.6) and Proposition 2.3 that
 $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} \operatorname{\mathrm{Si}}_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} \operatorname{\mathrm{Si}}_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} &= \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon\sigma_1 & \Sigma_{-} \\ q + v_1 & V_{-} \end{pmatrix} + \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon& \Sigma \\ q & V \end{pmatrix}, \\ \varepsilon^{-1}D_1 \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} \operatorname{\mathrm{Si}}_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} &= \sum \limits_i b_i \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_i \\ 1 & \mathfrak{t}_i \end{pmatrix} , \end{align*} $$
where 
 $b_i \in A$
 are divisible by 
 $D_1$
 and 
 $ \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $
 are arrays satisfying 
 $ \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} \leq \begin {pmatrix} 1 \\ q - 1 \end {pmatrix} + \begin {pmatrix} \Sigma \\ V \end {pmatrix} $
 for all i. This proves the proposition.
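 As a minimal check of Proposition 2.6 (a special case, using the notation above), take 
 $\Sigma = (1)$
 and 
 $V = (q-1)$
, so that 
 $\Sigma _{-}$
 and 
 $V_{-}$
 are empty. Then 
 $\mathcal C_{\Sigma ,V}(R_{\varepsilon })$
 takes the form
 $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon \\ 2q-1 \end{pmatrix} + \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon & 1 \\ q & q-1 \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon & \boldsymbol{\epsilon}_i \\ 1 & \mathfrak{t}_i \end{pmatrix} = 0, \end{align*} $$
where the 
 $b_i$
 and the arrays 
 $ \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $
 are as in the proposition.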
2.2.3. Operators 
 $\mathcal {BC}$
 Let 
 $\varepsilon \in \mathbb {F}_q^{\times }$
. We define an operator 
 $$ \begin{align*} \mathcal{BC}_{\varepsilon,q} \colon \mathfrak{BR}_{w} \longrightarrow \mathfrak{BR}_{w+q} \end{align*} $$
as follows: For each 
 $R \in \mathfrak {BR}_{w}$
 given as in Equation (2.7), the image 
 $\mathcal {BC}_{\varepsilon ,q}(R)$
 is a binary relation given by 
 $$ \begin{align*} \mathcal{BC}_{\varepsilon,q}(R) = \mathcal B^*_{\varepsilon,q}(R) - \sum\limits_i b_i \mathcal C_{\boldsymbol{\epsilon}_i,\mathfrak{t}_i} (R_{\varepsilon}). \end{align*} $$
 Let us clarify the definition of 
 $\mathcal {BC}_{\varepsilon ,q}$
. We know that 
 $\mathcal B^*_{\varepsilon ,q}(R)$
 is of the form 
 $$ \begin{align*} \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon& \boldsymbol{\varepsilon}_i \\ q& \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_i \\ q& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon\epsilon_{i1} & \boldsymbol{\epsilon}_{i-} \\ q + t_{i1} & \mathfrak{t}_{i-} \end{pmatrix} = 0. \end{align*} $$
Moreover, 
 $\mathcal C_{\boldsymbol {\epsilon }_i,\mathfrak {t}_i} (R_{\varepsilon })$
 is of the form 
 $$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_i \\ q& \mathfrak{t}_i \end{pmatrix} + \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon\epsilon_{i1} & \boldsymbol{\epsilon}_{i-} \\ q + t_{i1} & \mathfrak{t}_{i-} \end{pmatrix} + \varepsilon^{-1}D_1 \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon \\ 1 \end{pmatrix} \operatorname{\mathrm{Si}}_{<d+1} \begin{pmatrix} 1 \\ q-1 \end{pmatrix} \operatorname{\mathrm{Si}}_{<d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} = 0. \end{align*} $$
Combining these with Proposition 2.3, Part 2, we see that 
 $\mathcal {BC}_{\varepsilon ,q}(R)$
 is of the form 
 $$ \begin{align*} \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon& \boldsymbol{\varepsilon}_i \\ q& \mathfrak{s}_i \end{pmatrix} + \sum \limits_{i,j} b_{ij} \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_{ij} \\ 1& \mathfrak{t}_{ij} \end{pmatrix} =0, \end{align*} $$
where 
 $b_{ij} \in K$
 and 
 $ \begin {pmatrix} \boldsymbol {\epsilon }_{ij} \\ \mathfrak {t}_{ij} \end {pmatrix} $
 are arrays satisfying 
 $ \begin {pmatrix} \boldsymbol {\epsilon }_{ij} \\ \mathfrak {t}_{ij} \end {pmatrix} \leq \begin {pmatrix} 1 \\ q-1 \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon }_{i} \\ \mathfrak {t}_{i} \end {pmatrix} $
 for all j.
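 To see where this form comes from (a short verification based on the two displays above), note that the second and third sums in 
 $\mathcal B^*_{\varepsilon ,q}(R)$
 are exactly the first two terms of 
 $\sum _i b_i \mathcal C_{\boldsymbol {\epsilon }_i,\mathfrak {t}_i}(R_{\varepsilon })$
, so they cancel in the difference, leaving
 $$ \begin{align*} \sum \limits_i a_i \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon & \boldsymbol{\varepsilon}_i \\ q & \mathfrak{s}_i \end{pmatrix} - \sum \limits_i b_i \varepsilon^{-1}D_1 \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon \\ 1 \end{pmatrix} \operatorname{\mathrm{Si}}_{<d+1} \begin{pmatrix} 1 \\ q-1 \end{pmatrix} \operatorname{\mathrm{Si}}_{<d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} = 0; \end{align*} $$
expanding the remaining products by Proposition 2.3 then produces the coefficients 
 $b_{ij}$
 and arrays 
 $ \begin {pmatrix} \boldsymbol {\epsilon }_{ij} \\ \mathfrak {t}_{ij} \end {pmatrix} $
.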
2.3. A weak version of Brown’s theorem for ACMPL's
2.3.1. Preparatory results
Proposition 2.7. 1) Let 
 $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \dots & \varepsilon _n \\ s_1 & \dots & s_n \end {pmatrix} $
 be an array such that 
 $\operatorname {\mathrm {Init}}(\mathfrak {s}) = (s_1, \dots , s_{k-1})$
 for some 
 $1 \leq k \leq n$
, and let 
 $\varepsilon $
 be an element in 
 $\mathbb {F}_q^{\times }$
. Then 
 $\operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $
 can be decomposed as follows: 
 $$ \begin{align*} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \underbrace{ - \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon}' \\ \mathfrak{s}' \end{pmatrix} }_{\text{type 1}} + \underbrace{\sum\limits_i b_i\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\epsilon}_i' \\ \mathfrak{t}^{\prime}_i \end{pmatrix} }_{\text{type 2}} + \underbrace{\sum\limits_i c_i\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} }_{\text{type 3}} , \end{align*} $$
where 
 $ b_i, c_i \in A$
 are divisible by 
 $D_1$
 such that for all i, the following properties are satisfied: 
• For all arrays 
 $ \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $
 appearing on the right-hand side,
 $$ \begin{align*} \operatorname{\mathrm{depth}}(\mathfrak{t}) \geq \operatorname{\mathrm{depth}}(\mathfrak{s}) \quad \text{and} \quad T_k(\mathfrak{t}) \leq T_k(\mathfrak{s}). \end{align*} $$
• For the array $ \begin {pmatrix} \boldsymbol {\varepsilon }' \\ \mathfrak {s}' \end {pmatrix} $ of type $1$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, we have
$$ \begin{align*} \begin{pmatrix} \boldsymbol{\varepsilon}' \\ \mathfrak{s}' \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ s_1 & \dots & s_{k-1} & q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix}. \end{align*} $$
Moreover, for all $k \leq \ell \leq n$,
$$ \begin{align*} s^{\prime}_{1} + \dots + s^{\prime}_\ell < s_1 + \dots + s_\ell. \end{align*} $$
• For the array $ \begin {pmatrix} \boldsymbol {\epsilon }' \\ \mathfrak {t}' \end {pmatrix} $ of type $2$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, for all $k \leq \ell \leq n$,
$$ \begin{align*} t^{\prime}_{1} + \dots + t^{\prime}_\ell < s_1 + \dots + s_\ell. \end{align*} $$
• For the array $ \begin {pmatrix} \boldsymbol {\mu } \\ \mathfrak {u} \end {pmatrix} $ of type $3$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, we have $\operatorname {\mathrm {Init}}(\mathfrak {s}) \prec \operatorname {\mathrm {Init}}(\mathfrak {u})$.
2) Let $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \dots & \varepsilon _k \\ s_1 & \dots & s_k \end {pmatrix} $ be an array such that $\operatorname {\mathrm {Init}}(\mathfrak {s}) = \mathfrak {s}$ and $s_k = q$. Then $\operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $ can be decomposed as follows:
$$ \begin{align*} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \underbrace{\sum\limits_i b_i\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\epsilon}^{\prime}_i \\ \mathfrak{t}^{\prime}_i \end{pmatrix} }_{\text{type 2}} + \underbrace{\sum\limits_i c_i\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} }_{\text{type 3}} , \end{align*} $$
where $ b_i, c_i \in A$ are divisible by $D_1$, and for all i the following properties are satisfied:
• For all arrays $ \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $ appearing on the right-hand side,
$$ \begin{align*} \operatorname{\mathrm{depth}}(\mathfrak{t}) \geq \operatorname{\mathrm{depth}}(\mathfrak{s}) \quad \text{and} \quad T_k(\mathfrak{t}) \leq T_k(\mathfrak{s}). \end{align*} $$
• For the array $ \begin {pmatrix} \boldsymbol {\epsilon }' \\ \mathfrak {t}' \end {pmatrix} $ of type $2$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $,
$$ \begin{align*} t^{\prime}_{1} + \dots + t^{\prime}_k < s_1 + \dots + s_k. \end{align*} $$
• For the array $ \begin {pmatrix} \boldsymbol {\mu } \\ \mathfrak {u} \end {pmatrix} $ of type $3$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, we have $\operatorname {\mathrm {Init}}(\mathfrak {s}) \prec \operatorname {\mathrm {Init}}(\mathfrak {u})$.
Proof. The proof follows the same lines as in [Reference Ngo Dac31, Propositions 2.12 and 2.13]. We outline it here and refer the reader to [Reference Im, Kim, Le, Ngo Dac and Pham25] for more details. For Part 1, since $\operatorname {\mathrm {Init}}(\mathfrak {s}) = (s_1, \dots , s_{k-1})$, we get $s_k> q$. Set $ \begin {pmatrix} \Sigma \\ V \end {pmatrix} = \begin {pmatrix} \varepsilon ^{-1} \varepsilon _{k} & \varepsilon _{k+1} &\dots & \varepsilon _n \\ s_k - q & s_{k+1} &\dots & s_n \end {pmatrix} $. By Proposition 2.6, $\mathcal {C}_{\Sigma ,V}(R_{\varepsilon })$ is of the form
$$ \begin{align} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon_{k} & \dots & \varepsilon_n \\ s_{k} & \dots & s_n \end{pmatrix} + \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix} + \sum \limits_i b_i \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon & \boldsymbol{\epsilon}_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{align} $$
where $b_i \in A$ are divisible by $D_1$ and the $ \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $ are arrays satisfying, for all i,
$$ \begin{align*} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} \leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \varepsilon^{-1} \varepsilon_{k} & \varepsilon_{k+1} &\dots & \varepsilon_n \\ s_k - 1 & s_{k+1} &\dots & s_n \end{pmatrix}. \end{align*} $$
For $m \in \mathbb {N}$, we denote by $q^{\{m\}}$ the sequence of length m with all terms equal to q. We agree by convention that $q^{\{0\}}$ is the empty sequence. Setting $s_0 = 0$, we may assume that there exists a maximal index j with $0 \leq j \leq k-1$ such that $s_j < q$; hence $\operatorname {\mathrm {Init}}(\mathfrak {s}) = (s_1, \dots , s_j, q^{\{k-j-1\}})$. Then the operator $ \mathcal {BC}_{\varepsilon _{j+1},q} \circ \dots \circ \mathcal {BC}_{\varepsilon _{k-1},q}$ applied to the relation (2.8) gives
$$ \begin{align*} \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \varepsilon_{k} & \dots & \varepsilon_n \\ q & \dots & q & s_{k} & \dots & s_n \end{pmatrix} & + \operatorname{\mathrm{Si}}_d \begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ q & \dots & q & q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix} \\ & + \sum \limits_i b_{i_1 \dots i_{k-j}} \operatorname{\mathrm{Si}}_{d+1} \begin{pmatrix} \varepsilon_{j+1} & \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0, \end{align*} $$
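The bookkeeping behind the maximal index j can be made explicit. The following Python sketch is illustrative only: the names `q_pow` and `init_via_j` are ours, and it assumes the formula $\operatorname {\mathrm {Init}}(\mathfrak {s}) = (s_1, \dots , s_j, q^{\{k-j-1\}})$ together with the convention $s_0 = 0$ stated above.

```python
def q_pow(q, m):
    """The sequence q^{m}: m copies of q; empty for m = 0."""
    return (q,) * m

def init_via_j(s, k, q):
    """Find the maximal index j in 0..k-1 with s_j < q (convention
    s_0 = 0) and return (s_1, ..., s_j, q^{k-j-1}), the value of
    Init(s) used at this point of the proof."""
    j = max(i for i in range(k) if (0 if i == 0 else s[i - 1]) < q)
    return tuple(s[:j]) + q_pow(q, k - 1 - j)

# With q = 3, s = (2, 3, 3, 5) and k = 4: j = 1, so Init(s) = (2, 3, 3),
# which agrees with Init(s) = (s_1, ..., s_{k-1}).
```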
where $b_{i_1 \dots i_{k-j}} \in A$ are divisible by $D_1$ and
$$ \begin{align} \begin{pmatrix} \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \leq \begin{pmatrix} \varepsilon_{j+2}& \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1}& \dots & \varepsilon_n \\ q& \dots & q & q & s_{k} - 1 & s_{k+1} &\dots & s_n \end{pmatrix}. \end{align} $$
Set $ \begin {pmatrix} \Sigma ' \\ V' \end {pmatrix} = \begin {pmatrix} \varepsilon _{1} &\dots & \varepsilon _j \\ s_1 &\dots & s_j \end {pmatrix} $. Applying $\mathcal {B}^*_{\Sigma ',V'}$ to the above relation and using Lemma 2.5, we can deduce that
$$ \begin{align} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} =& - \operatorname{\mathrm{Li}} \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{k-1} & \varepsilon & \varepsilon^{-1}\varepsilon_{k} & \varepsilon_{k+1} & \dots & \varepsilon_n \\ s_1 & \dots & s_{k-1} & q & s_k- q & s_{k+1} & \dots & s_n \end{pmatrix} \\ \notag & - \sum \limits_i b_{i_1 \dots i_{k-j}} \operatorname{\mathrm{Li}} \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\\notag & - \sum \limits_i b_{i_1 \dots i_{k-j}} \operatorname{\mathrm{Li}} \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}. \end{align} $$
The first term, the second term and the third term on the right-hand side of Equation (2.10) are referred to as type 1, type 2 and type 3, respectively. From Equation (2.9) and Remark 2.1, one verifies that the arrays of type 1, type 2 and type 3 satisfy the desired conditions. We have proved Part 1.
The proof of Part $2$ follows the same arguments as that of Part 1. We begin with the relation $R_{\varepsilon _k}$. Next, we apply $ \mathcal {BC}_{\varepsilon _{j+1},q} \circ \dots \circ \mathcal {BC}_{\varepsilon _{k-1},q}$ to $R_{\varepsilon _k}$ and then apply $\mathcal {B}^*_{\Sigma ',V'}$. We can deduce that
$$ \begin{align} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} =& - \sum \limits_i b_{i_1 \dots i_{k-j}} \operatorname{\mathrm{Li}} \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \\ \notag & - \sum \limits_i b_{i_1 \dots i_{k-j}} \operatorname{\mathrm{Li}} \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}, \end{align} $$
where $b_{i_1 \dots i_{k-j}} \in A$ are divisible by $D_1$ and
$$ \begin{align} \begin{pmatrix} \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \leq \begin{pmatrix} \varepsilon_{j+2}& \dots & \varepsilon_{k} & 1 \\ q& \dots & q & q - 1 \end{pmatrix}. \end{align} $$
The first term and the second term on the right-hand side of Equation (2.11) are referred to as type 2 and type 3, respectively. From Equation (2.12) and Remark 2.1, one verifies that the arrays of type 2 and type 3 satisfy the desired conditions. We finish the proof.
We recall the following definition of [Reference Ngo Dac31] (see [Reference Ngo Dac31, Definition 3.1]):
Definition 2.8. Let $k \in \mathbb N$ and $\mathfrak {s}$ be a tuple of positive integers. We say that $\mathfrak {s}$ is k-admissible if it satisfies the following two conditions:
1) $s_1,\dots ,s_k \leq q$.
2) $\mathfrak {s}$ is not of the form $(s_1,\dots ,s_r)$ with $r \leq k$, $s_1,\dots ,s_{r-1} \leq q$, and $s_r=q$.
Here, we recall that $s_i=0$ for $i> \operatorname {\mathrm {depth}}(\mathfrak {s})$. By convention, the empty array $\begin {pmatrix} \emptyset \\ \emptyset \end {pmatrix}$ is always k-admissible. An array is k-admissible if the corresponding tuple is k-admissible.
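As an illustration of Definition 2.8 (our own sketch, not part of the original text), the two conditions can be checked mechanically; the function name `is_k_admissible` is ours, and the code uses the convention $s_i=0$ for $i> \operatorname {\mathrm {depth}}(\mathfrak {s})$. For instance, with $q = 3$, the tuple $(3,2)$ is $2$-admissible, while $(2,3)$ fails condition 2) and $(2,4)$ fails condition 1).

```python
def is_k_admissible(s, k, q):
    """Check k-admissibility of a tuple s of positive integers
    (Definition 2.8), with the convention s_i = 0 for i > depth(s)."""
    # Condition 1): s_1, ..., s_k <= q, entries beyond depth(s) being 0.
    padded = list(s) + [0] * max(0, k - len(s))
    if any(x > q for x in padded[:k]):
        return False
    # Condition 2): s must not be (s_1, ..., s_r) with r <= k,
    # s_1, ..., s_{r-1} <= q and s_r = q.
    r = len(s)
    if 1 <= r <= k and all(x <= q for x in s[:-1]) and s[-1] == q:
        return False
    return True
```

The empty tuple passes both conditions, matching the convention that the empty array is always k-admissible; likewise $(q)$ is not $1$-admissible, which is why the case $\begin {pmatrix} \varepsilon \\ q \end {pmatrix}$ needs separate treatment in the proof of Proposition 2.9 below.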
Proposition 2.9. For all $k \in \mathbb {N}$ and for all arrays $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, $\operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $ can be expressed as a K-linear combination of $\operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $’s of the same weight such that $\mathfrak {t}$ is k-admissible.
Proof. The proof follows the same lines as that of [Reference Ngo Dac31, Proposition 3.2]. We outline it here and refer the reader to [Reference Im, Kim, Le, Ngo Dac and Pham25] for more details. We consider the following statement:
$(H_k) \quad $ For all arrays $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, we can express $\operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $ as a K-linear combination of $\operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $’s of the same weight such that $\mathfrak {t}$ is k-admissible.
We will show that $(H_k)$ holds for all $k \in \mathbb {N}$ by induction on k. For $k = 1$, we consider all the cases for the first component $s_1$ of $\mathfrak {s}$. If $s_1 \leq q$, then either $\mathfrak {s}$ is $1$-admissible, or $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon \\ q \end {pmatrix}$. We deduce from the relation $R_{\varepsilon }$ that $(H_1)$ holds in the case $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon \\ q \end {pmatrix}$. If $s_1> q$, we assume that $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \cdots & \varepsilon _n \\ s_1 & \cdots & s_n \end {pmatrix}$. Set $\begin {pmatrix} \Sigma \\ V \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \varepsilon _2 & \cdots & \varepsilon _n \\ s_1 - q & s_2 & \cdots & s_n \end {pmatrix}$. Applying $\mathcal {C}_{\Sigma ,V}$ to the relation $R_{1}$ and using Proposition 2.6, we can deduce that
$$ \begin{align*} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = - \operatorname{\mathrm{Li}} \begin{pmatrix} 1& \varepsilon_1 & \varepsilon_2 & \cdots & \varepsilon_n \\ q & s_1 - q & s_2 & \cdots & s_n \end{pmatrix} - \sum\limits_i b_i \operatorname{\mathrm{Li}} \begin{pmatrix} 1 & \boldsymbol{\epsilon}_i \\ 1 & \mathfrak{t}_i \end{pmatrix}, \end{align*} $$
where $b_i \in K$ for all i. This proves that $(H_1)$ holds.
We next assume that $(H_{k - 1})$ holds. We need to show that $(H_k)$ holds. By the induction hypothesis $(H_{k - 1})$, we can restrict our attention to the array $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \cdots & \varepsilon _n \\ s_1 & \cdots & s_n \end {pmatrix}$, where $\mathfrak {s}$ is not k-admissible and $\operatorname {\mathrm {depth}}(\mathfrak {s}) \geq k$. We will prove that $(H_k)$ holds for the array $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ by induction on $s_1 + \cdots + s_k$. The case $s_1 + \cdots + s_k = 1$ is a simple check. Assume that $(H_k)$ holds when $s_1 + \cdots + s_k < s$. We need to show that $(H_k)$ holds when $s_1 + \cdots + s_k = s$. To do so, we give the following algorithm:
 
Algorithm: We begin with an array $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ where $\mathfrak {s}$ is not k-admissible, $\operatorname {\mathrm {depth}}(\mathfrak {s}) \geq k$ and $s_1 + \cdots + s_k = s$.
 
Step 1: From Proposition 2.7, we can decompose $\operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $ as follows:
$$ \begin{align} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \underbrace{ - \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon}' \\ \mathfrak{s}' \end{pmatrix} }_{\text{type 1}} + \underbrace{\sum\limits_i b_i\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\epsilon}_i' \\ \mathfrak{t}^{\prime}_i \end{pmatrix} }_{\text{type 2}} + \underbrace{\sum\limits_i c_i\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} }_{\text{type 3}} , \end{align} $$
where $ b_i, c_i \in A$. The term of type $1$ disappears when $\operatorname {\mathrm {Init}}(\mathfrak {s}) = \mathfrak {s}$ and $s_n = q$.
 
Step 2: For all arrays $ \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $ appearing on the right-hand side of Equation (2.13), if $\mathfrak {t}$ is either k-admissible or satisfies the condition $t_1 + \cdots + t_k < s$, then we deduce from the induction hypothesis that $(H_k)$ holds for the array $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$, and hence we stop the algorithm. Otherwise, there exists an array $\begin {pmatrix} \boldsymbol {\varepsilon }_1 \\ \mathfrak {s}_1 \end {pmatrix}$ where $\mathfrak {s}_1$ is not k-admissible, $\operatorname {\mathrm {depth}}(\mathfrak {s}_1) \geq k$ and $s_{11} + \cdots + s_{1k} = s$. For such an array, we repeat the algorithm for $\begin {pmatrix} \boldsymbol {\varepsilon }_1 \\ \mathfrak {s}_1 \end {pmatrix}$.
It remains to show that the above algorithm stops after a finite number of steps. Indeed, assume that the algorithm does not stop. Then there exists a sequence of arrays $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \boldsymbol {\varepsilon }_0 \\ \mathfrak {s}_0 \end {pmatrix}, \begin {pmatrix} \boldsymbol {\varepsilon }_1 \\ \mathfrak {s}_1 \end {pmatrix}, \begin {pmatrix} \boldsymbol {\varepsilon }_2 \\ \mathfrak {s}_2 \end {pmatrix}, \dotsc $ such that $\mathfrak {s}_i$ is not k-admissible and $\operatorname {\mathrm {depth}}(\mathfrak {s}_i) \geq k$ for all $i \geq 0$. Using Proposition 2.7, one verifies that $\begin {pmatrix} \boldsymbol {\varepsilon }_{i + 1} \\ \mathfrak {s}_{i + 1} \end {pmatrix}$ is of type $3$ with respect to $\begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix}$ for all $i \geq 0$; hence we obtain an infinite sequence $\operatorname {\mathrm {Init}}(\mathfrak {s}_0) \prec \operatorname {\mathrm {Init}}(\mathfrak {s}_1) \prec \operatorname {\mathrm {Init}}(\mathfrak {s}_2) \prec \cdots $. For all $i \geq 0$, since $\mathfrak {s}_i$ is not k-admissible and $\operatorname {\mathrm {depth}}(\mathfrak {s}_i) \geq k$, we have $\operatorname {\mathrm {depth}}(\operatorname {\mathrm {Init}}(\mathfrak {s}_i)) \leq k$; hence $\operatorname {\mathrm {Init}}(\mathfrak {s}_i) \preceq q^{\{k\}}$. This shows that $\operatorname {\mathrm {Init}}(\mathfrak {s}_i) = \operatorname {\mathrm {Init}}(\mathfrak {s}_{i + 1})$ for all i sufficiently large, which is a contradiction.
2.3.2. A set of generators $\mathcal{AT}_w$ for ACMPL's
We recall that $\mathcal{AL}_w$ is the K-vector space generated by ACMPL's of weight w. We denote by $\mathcal{AT}_w$ the set of all ACMPL's $\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \operatorname{\mathrm{Li}} \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix}$ of weight w such that $s_1, \dots, s_{n-1} \leq q$ and $s_n < q$.
We put $t(w)=|\mathcal{AT}_w|$. Then one verifies that
$$ \begin{align*} t(w) = \begin{cases} (q - 1) q^{w-1}& \text{if } 1 \leq w < q, \\ (q - 1) (q^{w-1} - 1) &\text{if } w = q, \end{cases} \end{align*} $$
and for $w>q$, $t(w)=(q-1)\sum \limits_{i = 1}^q t(w-i)$.
We are ready to state a weak version of Brown’s theorem for ACMPL's.
Proposition 2.10. The set $\mathcal{AT}_w$ forms a set of generators for $\mathcal{AL}_w$.
Proof. The result follows immediately from Proposition 2.9 in the case $k = w$.
2.4. A strong version of Brown’s theorem for ACMPL's
2.4.1. Another set of generators $\mathcal{AS}_w$ for ACMPL's
We consider the set $\mathcal{J}_w$ consisting of positive tuples $\mathfrak{s} = (s_1, \dots, s_n)$ of weight w such that $s_1, \dots, s_{n-1} \leq q$ and $s_n < q$, together with the set $\mathcal{J}^{\prime}_w$ consisting of positive tuples $\mathfrak{s} = (s_1, \dots, s_n)$ of weight w such that $q \nmid s_i$ for all i. Then there is a bijection
$$ \begin{align*} \iota \colon \mathcal{J}^{\prime}_w \longrightarrow \mathcal{J}_w \end{align*} $$
given as follows: For each tuple $\mathfrak{s} = (s_1, \dots, s_n) \in \mathcal{J}^{\prime}_w$, since $q \nmid s_i$, we can write $s_i = h_i q + r_i$, where $0 < r_i < q$ and $h_i \in \mathbb{Z}^{\ge 0}$. The image $\iota(\mathfrak s)$ is the tuple
$$ \begin{align*} \iota(\mathfrak s) = (\underbrace{q, \dots, q}_{h_1\ \text{times}}, r_1, \dots, \underbrace{q, \dots, q}_{h_n\ \text{times}}, r_n). \end{align*} $$
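As a sanity check, $\iota$ and its inverse are easy to implement; the following sketch (illustrative code, not part of the paper) confirms on small examples that $\iota$ maps $\mathcal{J}^{\prime}_w$ bijectively onto $\mathcal{J}_w$.

```python
# Sketch: the bijection iota between J'_w (no entry divisible by q) and
# J_w (all but the last entry <= q, last entry < q), checked by enumeration.
def iota(s, q):
    out = []
    for si in s:
        h, r = divmod(si, q)       # s_i = h*q + r with 0 < r < q
        out.extend([q] * h)        # h copies of q ...
        out.append(r)              # ... followed by r
    return tuple(out)

def iota_inv(t, q):
    out, h = [], 0
    for ti in t:
        if ti == q:
            h += 1                 # extend the current run of q's
        else:                      # an entry < q closes the block
            out.append(h * q + ti)
            h = 0
    return tuple(out)

def compositions(w):
    if w == 0:
        yield ()
        return
    for first in range(1, w + 1):
        for rest in compositions(w - first):
            yield (first,) + rest

q, w = 3, 7
Jp = [c for c in compositions(w) if all(x % q for x in c)]
J = [c for c in compositions(w) if all(x <= q for x in c[:-1]) and c[-1] < q]
assert {iota(c, q) for c in Jp} == set(J)
assert all(iota_inv(iota(c, q), q) == c for c in Jp)
```

For example, with q = 3 the tuple (4, 1) maps to (3, 1, 1): the entry 4 = 1·3 + 1 expands to one copy of q followed by the remainder.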
Let $\mathcal{AS}_w$ denote the set of ACMPL's $\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix}$ such that $\mathfrak s \in \mathcal{J}^{\prime}_w$. We note that in general, $\mathcal{AS}_w$ is strictly smaller than $\mathcal{AT}_w$. The only exceptions are when $q=2$ or $w \leq q$.
2.4.2. Cardinality of $\mathcal{AS}_w$
We now compute $s(w)=|\mathcal{AS}_w|$. To do so, we denote by $\mathcal{AJ}_w$ the set consisting of arrays $\begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix}$ of weight w such that $q \nmid s_i$ for all i, and by $\mathcal{AJ}^1_w$ the set consisting of arrays $\begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix}$ of weight w such that $s_1, \dots, s_{n-1} \leq q$, $s_n < q$ and $\varepsilon_i = 1$ whenever $s_i = q$ for $1 \leq i \leq n$. We construct a map
$$ \begin{align*} \varphi \colon \mathcal{AJ}_w \longrightarrow \mathcal{AJ}^1_w \end{align*} $$
as follows: For each array $\begin{pmatrix} \boldsymbol{\varepsilon}\\ \mathfrak{s} \end{pmatrix} = \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} \in \mathcal{AJ}_w$, since $q \nmid s_i$, we can write $s_i = (h_i-1) q + r_i$, where $0 < r_i < q$ and $h_i \in \mathbb N$. The image $\varphi \begin{pmatrix} \boldsymbol{\varepsilon}\\ \mathfrak{s} \end{pmatrix}$ is the array
$$ \begin{align*} \varphi \begin{pmatrix} \boldsymbol{\varepsilon}\\ \mathfrak{s} \end{pmatrix} = \bigg( \underbrace{\begin{pmatrix} 1 & \dots & 1\\ q & \dots & q \end{pmatrix}}_{h_1-1\ \text{times}} \begin{pmatrix} \varepsilon_1 \\ r_1 \end{pmatrix} \dots \underbrace{\begin{pmatrix} 1 & \dots & 1\\ q & \dots & q \end{pmatrix}}_{h_n-1\ \text{times}} \begin{pmatrix} \varepsilon_n \\ r_n \end{pmatrix} \bigg). \end{align*} $$
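The map $\varphi$ can likewise be checked to be a bijection on small examples. In the sketch below (illustrative code, not part of the paper), the characters in $\mathbb F_q^\times$ are represented by the labels 1, ..., q-1, with 1 standing for the trivial character.

```python
# Sketch: the map phi from AJ_w (q divides no s_i) to AJ^1_w, checked to be
# a bijection by enumeration.  Characters in F_q^* are encoded as the labels
# 1, ..., q - 1, with 1 standing for the trivial character.
from itertools import product

def phi(eps, s, q):
    out_e, out_s = [], []
    for e, si in zip(eps, s):
        h_minus_1, r = divmod(si, q)   # s_i = (h_i - 1) q + r_i, 0 < r_i < q
        out_e.extend([1] * h_minus_1)  # pairs (1; q), repeated h_i - 1 times
        out_s.extend([q] * h_minus_1)
        out_e.append(e)                # then the pair (eps_i; r_i)
        out_s.append(r)
    return tuple(out_e), tuple(out_s)

def compositions(w):
    if w == 0:
        yield ()
        return
    for first in range(1, w + 1):
        for rest in compositions(w - first):
            yield (first,) + rest

q, w = 3, 5
units = range(1, q)
AJ = [(e, c) for c in compositions(w) if all(x % q for x in c)
      for e in product(units, repeat=len(c))]
AJ1 = [(e, c) for c in compositions(w)
       if all(x <= q for x in c[:-1]) and c[-1] < q
       for e in product(units, repeat=len(c))
       if all(ei == 1 for ei, si in zip(e, c) if si == q)]
images = {phi(e, c, q) for e, c in AJ}
assert len(images) == len(AJ)     # phi is injective
assert images == set(AJ1)         # phi is onto AJ^1_w
```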
It is easily seen that $\varphi$ is a bijection; hence, $|\mathcal{AS}_w| =|\mathcal{AJ}_w| = |\mathcal{AJ}^1_w|$. Thus, $s(w) = |\mathcal{AJ}^1_w|$. One verifies that
$$ \begin{align*} s(w) = \begin{cases} (q - 1) q^{w-1}& \text{if } 1 \leq w < q, \\ (q - 1) (q^{w-1} - 1) &\text{if } w = q, \end{cases} \end{align*} $$
and for $w>q$,
$$\begin{align*}s(w)=(q-1)\sum \limits_{i = 1}^{q-1} s(w-i) + s(w - q). \end{align*}$$
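This recurrence can also be verified numerically. The sketch below (illustrative code, not part of the paper) counts arrays with $q \nmid s_i$ directly, giving each entry $q-1$ character choices, and compares with the recurrence.

```python
# Sketch: verify the recurrence for s(w) = |AS_w| against a direct count of
# arrays (eps; s) with q dividing no s_i and eps_i in F_q^* (q - 1 choices each).
from functools import lru_cache

def s_enum(q, w):
    @lru_cache(maxsize=None)
    def f(rem):
        if rem == 0:
            return 1
        # choose the first entry p with q not dividing p
        return sum((q - 1) * f(rem - p)
                   for p in range(1, rem + 1) if p % q != 0)
    return f(w)

def s_rec(q, w):
    # same closed form as t(w) for w <= q, then
    # s(w) = (q - 1) * sum_{i=1}^{q-1} s(w - i) + s(w - q)
    vals = {}
    for v in range(1, w + 1):
        if v < q:
            vals[v] = (q - 1) * q ** (v - 1)
        elif v == q:
            vals[v] = (q - 1) * (q ** (v - 1) - 1)
        else:
            vals[v] = (q - 1) * sum(vals[v - i] for i in range(1, q)) + vals[v - q]
    return vals[w]
```

For instance, with q = 3 one finds s(1), ..., s(4) = 2, 6, 16, 46, and the enumeration agrees.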
2.4.3.
We state a strong version of Brown’s theorem for ACMPL's.
Theorem 2.11. The set $\mathcal{AS}_w$ forms a set of generators for $\mathcal{AL}_w$. In particular,
$$\begin{align*}\dim_K \mathcal{AL}_w \leq s(w). \end{align*}$$
Proof. We recall that $\mathcal{AT}_w$ is the set of all ACMPL's $\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix}$ of weight w with $\mathfrak{s} \in \mathcal{J}_w$.
Let $\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \operatorname{\mathrm{Li}}\begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_n \\ s_1 & \dots & s_n \end{pmatrix} \in \mathcal{AT}_w$. Then $\mathfrak{s} \in \mathcal{J}_w$, that is, $s_1, \dotsc, s_{n - 1} \leq q$ and $s_n < q$. We express $\begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix}$ in the following form:
$$ \begin{align*} \bigg( \underbrace{\begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{h_1 - 1}\\ q & \dots & q \end{pmatrix}}_{h_1 - 1\ \text{times}} \begin{pmatrix} \varepsilon_{h_1} \\ r_1 \end{pmatrix} \dots \underbrace{\begin{pmatrix} \varepsilon_{h_1 + \cdots + h_{\ell - 1} + 1} & \dots & \varepsilon_{h_1 + \cdots + h_{\ell - 1} + (h_\ell - 1)}\\ q & \dots & q \end{pmatrix}}_{h_\ell - 1\ \text{times}} \begin{pmatrix} \varepsilon_{h_1 + \cdots + h_{\ell - 1} + h_\ell} \\ r_\ell \end{pmatrix} \bigg), \end{align*} $$
where $h_1, \dotsc, h_\ell \geq 1$, $h_1 + \cdots + h_\ell = n$ and $0 < r_1, \dotsc, r_\ell < q$. Then we set
$$ \begin{align*} \begin{pmatrix} \boldsymbol{\varepsilon}' \\ \mathfrak{s}' \end{pmatrix} = \begin{pmatrix} \varepsilon^{\prime}_1 & \dots & \varepsilon^{\prime}_\ell \\ s^{\prime}_1 & \dots & s^{\prime}_\ell \end{pmatrix}, \end{align*} $$
where $\varepsilon^{\prime}_i = \varepsilon_{h_1 + \cdots + h_{i - 1} + 1} \cdots \varepsilon_{h_1 + \cdots + h_{i - 1} + h_i}$ and $s^{\prime}_i = (h_i - 1)q + r_i$ for $1 \leq i \leq \ell$. We note that $\iota(\mathfrak{s}')=\mathfrak{s}$.
From Proposition 2.7 and Proposition 2.10, we can decompose $\operatorname{\mathrm{Li}}\begin{pmatrix} \boldsymbol{\varepsilon}' \\ \mathfrak{s}' \end{pmatrix}$ as follows:
$$ \begin{align*} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon}' \\ \mathfrak{s}' \end{pmatrix} = \sum a_{\boldsymbol{\epsilon},\mathfrak t}^{\boldsymbol{\varepsilon}',\mathfrak{s}'} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix}, \end{align*} $$
where $\begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix}$ ranges over all arrays of weight w with $\mathfrak{t} \in \mathcal{J}_w$, and $a_{\boldsymbol{\epsilon},\mathfrak t}^{\boldsymbol{\varepsilon}',\mathfrak{s}'} \in A$ satisfies
$$ \begin{align*} a_{\boldsymbol{\epsilon},\mathfrak t}^{\boldsymbol{\varepsilon}',\mathfrak{s}'} \equiv \begin{cases} \pm 1 \ (\text{mod } D_1) & \text{if } \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix},\\ 0 \ \ (\text{mod } D_1) & \text{otherwise}. \end{cases} \end{align*} $$
Note that $\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon}' \\ \mathfrak{s}' \end{pmatrix} \in \mathcal{AS}_w$. Thus, the transition matrix from the set consisting of such $\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon}' \\ \mathfrak{s}' \end{pmatrix}$ as above (we allow repeated elements) to the set consisting of $\operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix}$ with $\mathfrak{s} \in \mathcal{J}_w$ is invertible. It then follows again from Proposition 2.10 that $\mathcal{AS}_w$ is a set of generators for $\mathcal{AL}_w$, as desired.
3. Dual t-motives and linear independence
We continue with the notation given in the Introduction. Further, letting t be another independent variable, we denote by $\mathbb T$ the Tate algebra in the variable t with coefficients in $\mathbb{C}_\infty$ equipped with the Gauss norm $\lVert \cdot \rVert_\infty$, and by $\mathbb L$ the fraction field of $\mathbb T$.
We denote by $\mathcal E$ the ring of series $\sum_{n \geq 0} a_n t^n \in \overline K[[t]]$ such that $\lim_{n \to +\infty} \sqrt[n]{|a_n|_\infty}=0$ and $[K_\infty(a_0,a_1,\ldots):K_\infty]<\infty$. Then any $f \in \mathcal E$ is an entire function.
For $a \in A=\mathbb F_q[\theta]$, we set $a(t):=a \rvert_{\theta=t} \in \mathbb F_q[t]$.
3.1. Dual t-motives
We recall the notion of dual t-motives due to Anderson (see [Reference Brownawell, Papanikolas, Böckle, Goss, Hartl and Papanikolas6, §4] and [Reference Hartl, Juschka, Böckle, Goss, Hartl and Papanikolas22, §5] for more details). We refer the reader to [Reference Anderson1] for the related notion of t-motives.
For $i \in \mathbb Z$, we consider the i-fold twisting of $\mathbb{C}_\infty((t))$ defined by
$$ \begin{align*} \mathbb{C}_\infty((t)) & \rightarrow \mathbb{C}_\infty((t)) \\ f=\sum_j a_j t^j & \mapsto f^{(i)}:=\sum_j a_j^{q^i} t^j. \end{align*} $$
We extend i-fold twisting to matrices with entries in $\mathbb{C}_\infty((t))$ by twisting entrywise.
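Since twisting acts on coefficients through the Frobenius, it is a ring homomorphism. The following sketch illustrates this in a very special case, under the assumptions that q = p is prime, coefficients lie in $\mathbb F_p[\theta]$ rather than all of $\mathbb{C}_\infty$, and $i \geq 0$; it is illustrative code only, not part of the paper.

```python
# Sketch of i-fold twisting (i >= 0) on Fp[theta][t], assuming q = p prime.
# A polynomial is encoded as {t_exponent: {theta_exponent: coefficient mod p}}.
# Since c^p = c for c in Fp, raising a coefficient a_j in Fp[theta] to the
# power p^i just multiplies its theta-exponents by p^i ("freshman's dream").
p = 3

def twist(f, i):
    return {j: {k * p ** i: c for k, c in aj.items()} for j, aj in f.items()}

def mul(f, g):
    out = {}
    for j1, a in f.items():
        for j2, b in g.items():
            tgt = out.setdefault(j1 + j2, {})
            for k1, c1 in a.items():
                for k2, c2 in b.items():
                    tgt[k1 + k2] = (tgt.get(k1 + k2, 0) + c1 * c2) % p
    # normalize: drop zero coefficients and empty t-coefficients
    out = {j: {k: c for k, c in a.items() if c} for j, a in out.items()}
    return {j: a for j, a in out.items() if a}

f = {1: {0: 1}, 0: {1: 2}}   # t - theta   (-1 = 2 mod 3)
g = {1: {0: 1}, 0: {1: 1}}   # t + theta
# twisting is a ring homomorphism: (f g)^(1) = f^(1) g^(1)
assert mul(twist(f, 1), twist(g, 1)) == twist(mul(f, g), 1)
```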
Let $\overline K[t,\sigma]$ be the noncommutative $\overline K[t]$-algebra generated by the new variable $\sigma$ subject to the relation $\sigma f=f^{(-1)}\sigma$ for all $f \in \overline K[t]$.
Definition 3.1. An effective dual t-motive is a $\overline K[t,\sigma]$-module $\mathcal M'$ which is free and finitely generated over $\overline K[t]$ such that for $\ell \gg 0$ we have
$$\begin{align*}(t-\theta)^\ell(\mathcal M'/\sigma \mathcal M') = \{0\}.\end{align*}$$
We mention that effective dual t-motives are called Frobenius modules in [Reference Chang, Papanikolas and Yu11, Reference Chen and Harada14, Reference Harada21, Reference Kuan and Lin27]. Note that Hartl and Juschka [Reference Hartl, Juschka, Böckle, Goss, Hartl and Papanikolas22, §4] introduced a more general notion of dual t-motives. In particular, effective dual t-motives are always dual t-motives.
Throughout this paper, we will always work with effective dual t-motives. Therefore, we will sometimes drop the word ‘effective’ where there is no confusion.
Let $\mathcal M$ and $\mathcal M'$ be two effective dual t-motives. Then a morphism of effective dual t-motives $\mathcal M \to \mathcal M'$ is just a homomorphism of left $\overline K[t,\sigma]$-modules. We denote by $\mathcal F$ the category of effective dual t-motives equipped with the trivial object $\mathbf{1}$.
We say that an object $\mathcal M$ of $\mathcal F$ is given by a matrix $\Phi \in \operatorname{\mathrm{Mat}}_r(\overline K[t])$ if $\mathcal M$ is a $\overline K[t]$-module free of rank r and the action of $\sigma$ is represented by the matrix $\Phi$ on a given $\overline K[t]$-basis for $\mathcal M$. We say that an object $\mathcal M$ of $\mathcal F$ is uniformizable or rigid analytically trivial if there exists a matrix $\Psi \in \text{GL}_r(\mathbb T)$ satisfying $\Psi^{(-1)}=\Phi \Psi$. The matrix $\Psi$ is called a rigid analytic trivialization of $\mathcal M$.
We now recall the Anderson–Brownawell–Papanikolas criterion which is crucial in the sequel (see [Reference Anderson, Brownawell and Papanikolas2, Theorem 3.1.1]).
Theorem 3.2 (Anderson–Brownawell–Papanikolas)
Let $\Phi \in \operatorname{\mathrm{Mat}}_\ell(\overline K[t])$ be a matrix such that $\det \Phi=c(t-\theta)^s$ for some $c \in \overline K^\times$ and $s \in \mathbb Z^{\geq 0}$. Let $\psi \in \operatorname{\mathrm{Mat}}_{\ell \times 1}(\mathcal E)$ be a vector satisfying $\psi^{(-1)}=\Phi \psi$ and $\rho \in \operatorname{\mathrm{Mat}}_{1 \times \ell}(\overline K)$ be a vector such that $\rho \psi(\theta)=0$. Then there exists a vector $P \in \operatorname{\mathrm{Mat}}_{1 \times \ell}(\overline K[t])$ such that
$$\begin{align*}P \psi=0 \quad \text{and} \quad P(\theta)=\rho. \end{align*}$$
3.2. Some constructions of dual t-motives
3.2.1. General case
We briefly review some constructions of dual t-motives introduced in [Reference Chang, Papanikolas and Yu11] (see also [Reference Chang9, Reference Chen and Harada14, Reference Harada21]). Let $\mathfrak{s}=(s_1,\ldots,s_r) \in \mathbb N^r$ be a tuple of positive integers and $\mathfrak{Q}=(Q_1,\dots,Q_r) \in \overline K[t]^r$ satisfying the condition
$$ \begin{align} (\lVert Q_1 \rVert_\infty / |\theta|_\infty^{\frac{qs_1}{q-1}})^{q^{i_1}} \ldots (\lVert Q_r \rVert_\infty / |\theta|_\infty^{\frac{qs_r}{q-1}})^{q^{i_r}} \to 0 \end{align} $$
as $0 \leq i_r < \dots < i_1$ and $i_1 \to \infty$.
We consider the dual t-motives $\mathcal M_{\mathfrak{s},\mathfrak{Q}}$ and $\mathcal M_{\mathfrak{s},\mathfrak{Q}}'$ attached to $(\mathfrak{s},\mathfrak{Q})$ given by the matrices
$$ \begin{align*} \Phi_{\mathfrak{s},\mathfrak{Q}} &= \begin{pmatrix} (t-\theta)^{s_1+\dots+s_r} & 0 & 0 & \dots & 0 \\ Q_1^{(-1)} (t-\theta)^{s_1+\dots+s_r} & (t-\theta)^{s_2+\dots+s_r} & 0 & \dots & 0 \\ 0 & Q_2^{(-1)} (t-\theta)^{s_2+\dots+s_r} & \ddots & & \vdots \\ \vdots & & \ddots & (t-\theta)^{s_r} & 0 \\ 0 & \dots & 0 & Q_r^{(-1)} (t-\theta)^{s_r} & 1 \end{pmatrix} \\ &\in \operatorname{\mathrm{Mat}}_{r+1}(\overline K[t]), \end{align*} $$
and $\Phi^{\prime}_{\mathfrak{s},\mathfrak{Q}} \in \operatorname{\mathrm{Mat}}_r(\overline K[t])$, the upper left $r \times r$ submatrix of $\Phi_{\mathfrak{s},\mathfrak{Q}}$.
Throughout this paper, we work with the Carlitz period $\widetilde \pi$, which is a fundamental period of the Carlitz module (see [Reference Goss20, Reference Thakur35]). We fix a choice of $(q-1)$st root of $(-\theta)$ and set
$$\begin{align*}\Omega(t):=(-\theta)^{-q/(q-1)} \prod_{i \geq 1} \left(1-\frac{t}{\theta^{q^i}} \right) \in \mathbb T^\times \end{align*}$$
so that
$$\begin{align*}\Omega^{(-1)}=(t-\theta)\Omega \quad \text{ and } \quad \frac{1}{\Omega(\theta)}=\widetilde \pi. \end{align*}$$
Given $(\mathfrak{s},\mathfrak{Q})$ as above, Chang introduced the following series (see [Reference Chang9, Lemma 5.3.1] and also [Reference Chang, Papanikolas and Yu11, Eq. (2.3.2)]):
$$ \begin{align} \mathfrak{L}(\mathfrak{s};\mathfrak{Q})=\mathfrak{L}(s_1,\dots,s_r;Q_1,\dots,Q_r):=\sum_{i_1> \dots > i_r \geq 0} (\Omega^{s_r} Q_r)^{(i_r)} \dots (\Omega^{s_1} Q_1)^{(i_1)}. \end{align} $$
It is proved that $\mathfrak{L}(\mathfrak{s};\mathfrak{Q}) \in \mathcal E$ (see [Reference Chang9, Lemma 5.3.1]). Here, we recall that $\mathcal E$ denotes the ring of series $\sum_{n \geq 0} a_n t^n \in \overline K[[t]]$ such that $\lim_{n \to +\infty} \sqrt[n]{|a_n|_\infty}=0$ and $[K_\infty(a_0,a_1,\ldots):K_\infty]<\infty$. In the sequel, we will use the following crucial property of this series (see [Reference Chang9, Lemma 5.3.5] and [Reference Chang, Papanikolas and Yu11, Proposition 2.3.3]): For all $j \in \mathbb Z^{\geq 0}$, we have
$$ \begin{align} \mathfrak{L}(\mathfrak{s};\mathfrak{Q}) \left(\theta^{q^j} \right)=\left(\mathfrak{L}(\mathfrak{s};\mathfrak{Q})(\theta)\right)^{q^j}. \end{align} $$
Then the matrix given by
 $$ \begin{align*} \Psi_{\mathfrak{s},\mathfrak{Q}} &= \begin{pmatrix} \Omega^{s_1+\dots+s_r} & 0 & 0 & \dots & 0 \\ \mathfrak{L}(s_1;Q_1) \Omega^{s_2+\dots+s_r} & \Omega^{s_2+\dots+s_r} & 0 & \dots & 0 \\ \vdots & \mathfrak{L}(s_2;Q_2) \Omega^{s_3+\dots+s_r} & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & \vdots \\ \mathfrak{L}(s_1,\dots,s_{r-1};Q_1,\dots,Q_{r-1}) \Omega^{s_r} & \mathfrak{L}(s_2,\dots,s_{r-1};Q_2,\dots,Q_{r-1}) \Omega^{s_r} & \dots & \Omega^{s_r}& 0 \\ \mathfrak{L}(s_1,\dots,s_r;Q_1,\dots,Q_r) & \mathfrak{L}(s_2,\dots,s_r;Q_2,\dots,Q_r) & \dots & \mathfrak{L}(s_r;Q_r) & 1 \end{pmatrix} \\ &\in \text{GL}_{r+1}(\mathbb T) \end{align*} $$
satisfies
 $$\begin{align*}\Psi_{\mathfrak{s},\mathfrak{Q}}^{(-1)}=\Phi_{\mathfrak{s},\mathfrak{Q}} \Psi_{\mathfrak{s},\mathfrak{Q}}. \end{align*}$$
Thus, $\Psi _{\mathfrak {s},\mathfrak {Q}}$ is a rigid analytic trivialization associated to the dual t-motive $\mathcal M_{\mathfrak {s},\mathfrak {Q}}$.
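To make the relation concrete, here is the depth $r=1$ case written out. This is only a sketch: it assumes the standard twist identity $\Omega ^{(-1)}=(t-\theta )\Omega $ and the series $\mathfrak {L}(s;Q)=\sum _{i \geq 0}(Q\Omega ^s)^{(i)}$ (the one-index specialization of the series used below), so the displayed $2 \times 2$ matrices are the specialization of the general construction to a single index.

```latex
% Depth r = 1: \Phi_{s,Q} and \Psi_{s,Q} reduce to 2x2 matrices, and
% \Psi^{(-1)} = \Phi \Psi amounts to a single twist identity for \mathfrak{L}(s;Q).
\Psi_{s,Q} = \begin{pmatrix} \Omega^{s} & 0 \\ \mathfrak{L}(s;Q) & 1 \end{pmatrix},
\qquad
\Phi_{s,Q} = \begin{pmatrix} (t-\theta)^{s} & 0 \\ Q^{(-1)}(t-\theta)^{s} & 1 \end{pmatrix}.
% Bottom-left entry of \Psi^{(-1)} = \Phi \Psi: shifting the summation index,
\mathfrak{L}(s;Q)^{(-1)}
  = \sum_{i \geq -1} (Q\Omega^{s})^{(i)}
  = (Q\Omega^{s})^{(-1)} + \mathfrak{L}(s;Q)
  = Q^{(-1)}(t-\theta)^{s}\Omega^{s} + \mathfrak{L}(s;Q),
% which is exactly Q^{(-1)}(t-\theta)^{s}\,\Omega^{s}\cdot 1 + 1\cdot\mathfrak{L}(s;Q)
% from the product \Phi_{s,Q}\Psi_{s,Q}, using (\Omega^{s})^{(-1)}=(t-\theta)^{s}\Omega^{s}.
```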
 We also denote by $\Psi _{\mathfrak {s},\mathfrak {Q}}'$ the upper $r \times r$ submatrix of $\Psi _{\mathfrak {s},\mathfrak {Q}}$. It is clear that $\Psi _{\mathfrak {s},\mathfrak {Q}}'$ is a rigid analytic trivialization associated to the dual t-motive $\mathcal M_{\mathfrak {s},\mathfrak {Q}}'$.
 Further, combined with Equation (3.3), the above construction of dual t-motives implies that $\widetilde \pi ^w \frak L(\mathfrak {s};\mathfrak {Q})(\theta )$, where $w=s_1+\dots +s_r$, has the MZ (multizeta) property in the sense of [Reference Chang9, Definition 3.4.1]. By [Reference Chang9, Proposition 4.3.1], we get
Proposition 3.3. Let $(\mathfrak {s}_i;\mathfrak {Q}_i)$ be as before for $1 \leq i \leq m$. We suppose that all the tuples of positive integers $\mathfrak {s}_i$ have the same weight, say w. Then the following assertions are equivalent:
- 
i) $\frak L(\mathfrak {s}_1;\mathfrak {Q}_1)(\theta ),\dots ,\frak L(\mathfrak {s}_m;\mathfrak {Q}_m)(\theta )$ are K-linearly independent.
- 
ii) $\frak L(\mathfrak {s}_1;\mathfrak {Q}_1)(\theta ),\dots ,\frak L(\mathfrak {s}_m;\mathfrak {Q}_m)(\theta )$ are $\overline K$-linearly independent.
We end this section by mentioning that Chang [Reference Chang9] also proved an analogue of Goncharov’s conjecture in this setting.
3.2.2. Dual t-motives connected to MZV's and AMZV's
 Following Anderson and Thakur [Reference Anderson and Thakur4], we introduce dual t-motives connected to MZV's and AMZV's. We briefly review Anderson–Thakur polynomials introduced in [Reference Anderson and Thakur3]. For 
$k \geq 0$, we set $[k]:=\theta ^{q^k}-\theta $ and $D_k:= \prod ^k_{\ell =1} [\ell ]^{q^{k-\ell }}$. For $n \in \mathbb {N}$, we write $n-1 = \sum _{j \geq 0} n_j q^j$ with $0 \leq n_j \leq q-1$ and define
 $$\begin{align*}\Gamma_n:=\prod_{j \geq 0} D_j^{n_j}. \end{align*}$$
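As a quick sanity check on these definitions, the quantities $[k]$, $D_k$ and $\Gamma _n$ can be computed symbolically. The following sketch uses the illustrative choice $q=3$ (not fixed by the text; any prime power behaves the same way) together with the well-known fact that $D_k$ is the product of all monic polynomials of degree k in $\theta $, so that $\deg _\theta D_k = kq^k$:

```python
import sympy as sp

# Sanity check of [k], D_k and Gamma_n, with the illustrative choice q = 3.
q = 3
theta = sp.symbols('theta')

def bracket(k):
    """[k] = theta^{q^k} - theta."""
    return theta**(q**k) - theta

def D(k):
    """D_k = prod_{l=1}^{k} [l]^{q^{k-l}}, with D_0 = 1 (empty product)."""
    out = sp.Integer(1)
    for l in range(1, k + 1):
        out *= bracket(l)**(q**(k - l))
    return out

def Gamma(n):
    """Gamma_n = prod_j D_j^{n_j}, where n - 1 = sum_j n_j q^j in base q."""
    m, j, out = n - 1, 0, sp.Integer(1)
    while m > 0:
        m, n_j = divmod(m, q)
        out *= D(j)**n_j
        j += 1
    return out

# D_k has theta-degree k * q^k, and Gamma_n = 1 whenever 1 <= n <= q.
print(sp.degree(D(2), theta))  # -> 18, i.e. 2 * 3^2
print(Gamma(2))                # -> 1
```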
 We set $\gamma _0(t) :=1$ and $\gamma _j(t) :=\prod ^j_{\ell =1} (\theta ^{q^j}-t^{q^\ell })$ for $j\geq 1$. Then the Anderson–Thakur polynomials $\alpha _n(t) \in A[t]$ are given by the generating series
 $$\begin{align*}\sum_{n \geq 1} \frac{\alpha_n(t)}{\Gamma_n} x^n:=x\left( 1-\sum_{j \geq 0} \frac{\gamma_j(t)}{D_j} x^{q^j} \right)^{-1}. \end{align*}$$
Finally, we define $H_n(t)$ by switching $\theta $ and t:
 $$ \begin{align*} H_n(t)=\alpha_n(t) \big|_{t=\theta, \, \theta=t}. \end{align*} $$
By [Reference Anderson and Thakur3, Eq. (3.7.3)], we get
 $$ \begin{align} \deg_\theta H_n \leq \frac{(n-1) q}{q-1}<\frac{nq}{q-1}. \end{align} $$
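These formulas can be checked by machine: the sketch below (again with the illustrative choice $q=3$, our own choice) inverts the generating series as a truncated power series in x, recovers $\alpha _n$, forms $H_n$ by swapping $\theta $ and t, and verifies the degree bound (3.4) for small n:

```python
import sympy as sp

# Compute alpha_n from the generating series (illustrative choice q = 3),
# then H_n by swapping theta and t, and check the degree bound (3.4).
q, N = 3, 5
theta, t = sp.symbols('theta t')

def D(k):
    """D_k = prod_{l=1}^{k} [l]^{q^{k-l}}, with [l] = theta^{q^l} - theta."""
    out = sp.Integer(1)
    for l in range(1, k + 1):
        out *= (theta**(q**l) - theta)**(q**(k - l))
    return out

def gamma(j):
    """gamma_j(t) = prod_{l=1}^{j} (theta^{q^j} - t^{q^l}); gamma_0 = 1."""
    out = sp.Integer(1)
    for l in range(1, j + 1):
        out *= theta**(q**j) - t**(q**l)
    return out

def Gamma(n):
    """Gamma_n = prod_j D_j^{n_j}, where n - 1 = sum_j n_j q^j in base q."""
    m, j, out = n - 1, 0, sp.Integer(1)
    while m > 0:
        m, n_j = divmod(m, q)
        out *= D(j)**n_j
        j += 1
    return out

# u = 1 - sum_j (gamma_j / D_j) x^{q^j}, truncated at degree N in x
u = [sp.Integer(1)] + [sp.Integer(0)] * N
j = 0
while q**j <= N:
    u[q**j] -= gamma(j) / D(j)
    j += 1
# v = u^{-1} as a power series; alpha_n / Gamma_n is the coefficient v[n-1]
v = [sp.Integer(1)] + [sp.Integer(0)] * N
for n in range(1, N + 1):
    v[n] = -sum(u[k] * v[n - k] for k in range(1, n + 1))

alpha = {n: sp.expand(sp.cancel(Gamma(n) * v[n - 1])) for n in range(1, N + 1)}
H = {n: a.subs({theta: t, t: theta}, simultaneous=True) for n, a in alpha.items()}
for n in range(1, N + 1):
    # deg_theta H_n <= (n-1) q / (q-1), Equation (3.4)
    assert sp.degree(H[n], theta) <= sp.Rational((n - 1) * q, q - 1)
```

In particular, the computation recovers the classical fact that $\alpha _n=1$ for $1 \leq n \leq q$.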
 Let $\mathfrak {s}=(s_1,\ldots ,s_r) \in \mathbb N^r$ be a tuple and $\boldsymbol {\epsilon }=(\epsilon _1,\ldots ,\epsilon _r) \in (\mathbb {F}_q^\times )^r$. Recall that $\overline {\mathbb F}_q$ denotes the algebraic closure of $\mathbb F_q$ in $\overline {K}$. For all $1 \leq i \leq r$, we fix a $(q-1)$-th root $\gamma _i \in \overline {\mathbb F}_q$ of $\epsilon _i \in \mathbb F_q^\times $ and set $Q_{s_i,\epsilon _i}:=\gamma _i H_{s_i}$. Then we set $\mathfrak {Q}_{\mathfrak {s},\boldsymbol {\epsilon }}:=(Q_{s_1,\epsilon _1},\dots ,Q_{s_r,\epsilon _r})$ and put $\frak L(\mathfrak {s};\boldsymbol {\epsilon }):=\frak L(\mathfrak {s};\mathfrak {Q}_{\mathfrak {s},\boldsymbol {\epsilon }})$. By Equation (3.4), we know that $\lVert H_n \rVert _\infty < |\theta |_\infty ^{\tfrac {n q}{q-1}}$ for all $n \in \mathbb {N}$; thus, $\mathfrak {Q}_{\mathfrak {s},\boldsymbol {\epsilon }}$ satisfies condition (3.1). We can therefore define the dual t-motives $\mathcal M_{\mathfrak {s},\boldsymbol {\epsilon }}=\mathcal M_{\mathfrak {s},\mathfrak {Q}_{\mathfrak {s},\boldsymbol {\epsilon }}}$ and $\mathcal M_{\mathfrak {s},\boldsymbol {\epsilon }}'=\mathcal M_{\mathfrak {s},\mathfrak {Q}_{\mathfrak {s},\boldsymbol {\epsilon }}}'$ attached to $\mathfrak {s}$ whose matrices and rigid analytic trivializations will be denoted by $(\Phi _{\mathfrak {s},\boldsymbol {\epsilon }},\Psi _{\mathfrak {s},\boldsymbol {\epsilon }})$ and $(\Phi _{\mathfrak {s},\boldsymbol {\epsilon }}',\Psi _{\mathfrak {s},\boldsymbol {\epsilon }}')$, respectively. These dual t-motives are connected to MZV's and AMZV's by the following result (see [Reference Chen and Harada14, Proposition 2.12] for more details):
 $$ \begin{align} \frak L(\mathfrak{s};\boldsymbol{\epsilon})(\theta)=\frac{\gamma_1 \dots \gamma_r \Gamma_{s_1} \dots \Gamma_{s_r} \zeta_A \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{s} \end{pmatrix}}{\widetilde \pi^{w(\mathfrak{s})}}. \end{align} $$
By a result of Thakur [Reference Thakur37], one can show (see [Reference Harada21, Theorem 2.1]) that $\zeta _A \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {s} \end {pmatrix} \neq 0$. Thus, $\frak L(\mathfrak {s};\boldsymbol {\epsilon })(\theta ) \neq 0$.
3.2.3. Dual t-motives connected to CMPL's and ACMPL's
 We keep the notation as above. Let $\mathfrak {s}=(s_1,\ldots ,s_r) \in \mathbb N^r$ be a tuple and $\boldsymbol {\epsilon }=(\epsilon _1,\ldots ,\epsilon _r) \in (\mathbb {F}_q^\times )^r$. For all $1 \leq i \leq r$, we have a fixed $(q-1)$-th root $\gamma _i$ of $\epsilon _i \in \mathbb F_q^\times $ and set $Q_{s_i,\epsilon _i}':=\gamma _i$. Then we set $\mathfrak {Q}_{\mathfrak {s},\boldsymbol {\epsilon }}':=(Q_{s_1,\epsilon _1}',\dots ,Q_{s_r,\epsilon _r}')$ and put
 $$ \begin{align} \mathfrak{Li}(\mathfrak{s};\boldsymbol{\epsilon})=\frak L(\mathfrak{s};\mathfrak{Q}_{\mathfrak{s},\boldsymbol{\epsilon}}')=\sum_{i_1> \dots > i_r \geq 0} (\gamma_{r} \Omega^{s_r})^{(i_r)} \dots (\gamma_{1} \Omega^{s_1})^{(i_1)}. \end{align} $$
 Thus, we can define the dual t-motives $\mathcal N_{\mathfrak {s},\boldsymbol {\epsilon }}=\mathcal N_{\mathfrak {s},\mathfrak {Q}_{\mathfrak {s},\boldsymbol {\epsilon }}'}$ and $\mathcal N_{\mathfrak {s},\boldsymbol {\epsilon }}'=\mathcal N_{\mathfrak {s},\mathfrak {Q}_{\mathfrak {s},\boldsymbol {\epsilon }}'}'$ attached to $(\mathfrak {s},\boldsymbol {\epsilon })$. These dual t-motives are connected to CMPL's and ACMPL's by the following result (see [Reference Chang9, Lemma 5.3.5] and [Reference Chang, Papanikolas and Yu11, Prop. 2.3.3]):
 $$ \begin{align} \mathfrak{Li}(\mathfrak{s};\boldsymbol{\epsilon})(\theta)=\frac{\gamma_1 \dots \gamma_r \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{s} \end{pmatrix}}{\widetilde \pi^{w(\mathfrak{s})}}. \end{align} $$
3.3. A result for linear independence
3.3.1. Setup
 Let $w \in \mathbb {N}$ be a positive integer. Let $\{(\mathfrak {s}_i;\mathfrak {Q}_i)\}_{1 \leq i \leq n}$ be a collection of pairs satisfying condition (3.1) such that $\mathfrak {s}_i$ always has weight w. We write $\mathfrak {s}_i=(s_{i1},\dots ,s_{i \ell _i}) \in \mathbb N^{\ell _i}$ and $\mathfrak {Q}_i=(Q_{i1},\dots ,Q_{i\ell _i})$ so that $s_{i1}+\dots +s_{i \ell _i}=w$. We introduce the set of tuples
 $$\begin{align*}I(\mathfrak{s}_i;\mathfrak{Q}_i):=\{\emptyset,(s_{i1};Q_{i1}),\dots,(s_{i1},\dots,s_{i (\ell_i-1)};Q_{i1},\dots,Q_{i(\ell_i-1)})\},\end{align*}$$
and set
 $$\begin{align*}I:=\cup_i I(\mathfrak{s}_i;\mathfrak{Q}_i). \end{align*}$$
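In code, $I(\mathfrak {s}_i;\mathfrak {Q}_i)$ is simply the set of proper prefixes of the pair, including the empty pair. A minimal sketch with two hypothetical weight-3 tuples (the labels Q11, Q12, Q21, Q22 are placeholders, not objects from the text):

```python
# Build I(s_i; Q_i), the proper prefixes of (s_i; Q_i) including the empty
# pair, and their union I, for two hypothetical weight-3 tuples.
def prefix_set(s, Q):
    """I(s; Q) = {empty, (s_1;Q_1), ..., (s_1..s_{l-1}; Q_1..Q_{l-1})}."""
    assert len(s) == len(Q)
    return {(s[:k], Q[:k]) for k in range(len(s))}

I1 = prefix_set((1, 2), ('Q11', 'Q12'))  # {((), ()), ((1,), ('Q11',))}
I2 = prefix_set((2, 1), ('Q21', 'Q22'))  # {((), ()), ((2,), ('Q21',))}
I = I1 | I2                              # union over i, as in the text
```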
3.3.2. Linear independence
We are now ready to state the main result of this section.
Theorem 3.4. We keep the above notation. We suppose further that $\{(\mathfrak {s}_i;\mathfrak {Q}_i)\}_{1 \leq i \leq n}$ satisfies the following conditions:
- 
(LW) For any weight $w'<w$, the values $\frak L(\frak t;\mathfrak {Q})(\theta )$ with $(\frak t;\mathfrak {Q}) \in I$ and $w(\frak t)=w'$ are all K-linearly independent. In particular, $\frak L(\frak t;\mathfrak {Q})(\theta )$ is always nonzero.
- 
(LD) There exist $a \in A$ and $a_i \in A$ for $1 \leq i \leq n$ which are not all zero such that $$ \begin{align*} a+\sum_{i=1}^n a_i \frak L(\mathfrak{s}_i;\mathfrak{Q}_i)(\theta)=0. \end{align*} $$
 For all $(\frak t;\mathfrak {Q}) \in I$, we set the following series in t:
 $$ \begin{align} f_{\mathfrak t;\mathfrak{Q}}:= \sum_i a_i(t) \mathfrak L(s_{i(k+1)},\dots,s_{i \ell_i};Q_{i(k+1)},\dots,Q_{i \ell_i}), \end{align} $$
where the sum runs through the set of indices i such that $(\mathfrak t;\mathfrak {Q})=(s_{i1},\dots ,s_{i k};Q_{i1},\dots ,Q_{i k})$ for some $0 \leq k \leq \ell _i-1$.
 Then for all $(\mathfrak t;\mathfrak {Q}) \in I$, $f_{\mathfrak t;\mathfrak {Q}}(\theta )$ belongs to K.
Remark 3.5. 1) Here, we note that LW stands for Lower Weights and LD for Linear Dependence.
2) With the above notation, we have
 $$\begin{align*}f_{\emptyset}= \sum_i a_i(t) \mathfrak L(\mathfrak{s}_i;\mathfrak{Q}_i). \end{align*}$$
 3) In fact, we improve [Reference Ngo Dac31, Theorem B] in two directions. First, we remove the restriction to Anderson–Thakur polynomials and tuples $\mathfrak {s}_i$. Second, and more importantly, we allow an additional term a, which is crucial in the sequel. More precisely, in the case of MZV's, while [Reference Ngo Dac31, Theorem B] investigates linear relations between MZV's of weight w, Theorem 3.4 investigates linear relations between MZV's of weight w and suitable powers $\widetilde \pi ^w$ of the Carlitz period.
Proof. The proof will be divided into two steps.
 
Step 1. We first construct a dual t-motive to which we will apply the Anderson–Brownawell–Papanikolas criterion. We recall $a_i(t):=a_i \rvert _{\theta =t} \in \mathbb F_q[t]$.
 For each pair $(\mathfrak {s}_i;\mathfrak {Q}_i)$, we have attached to it a matrix $\Phi _{\mathfrak {s}_i,\mathfrak {Q}_i}$. For $\mathfrak {s}_i=(s_{i1},\dots ,s_{i \ell _i}) \in \mathbb N^{\ell _i}$ and $\mathfrak {Q}_i=(Q_{i1},\dots ,Q_{i\ell _i})$, we recall
 $$\begin{align*}I(\mathfrak{s}_i;\mathfrak{Q}_i)=\{\emptyset,(s_{i1};Q_{i1}),\dots,(s_{i1},\dots,s_{i (\ell_i-1)};Q_{i1},\dots,Q_{i(\ell_i-1)})\}, \end{align*}$$
and $I:=\cup _i I(\mathfrak {s}_i;\mathfrak {Q}_i)$.
 We now construct a new matrix $\Phi '$ indexed by elements of I, say
 $$\begin{align*}\Phi'=\left(\Phi^{\prime}_{(\mathfrak t;\mathfrak{Q}),(\mathfrak t';\mathfrak{Q}')}\right)_{(\mathfrak t;\mathfrak{Q}),(\mathfrak t';\mathfrak{Q}') \in I} \in \operatorname{\mathrm{Mat}}_{|I|}(\overline K[t]). \end{align*}$$
For the row which corresponds to the empty pair $\emptyset $, we put
 $$ \begin{align*} \Phi^{\prime}_{\emptyset,(\mathfrak t';\mathfrak{Q}')}= \begin{cases} (t-\theta)^w & \text{if } (\mathfrak t';\mathfrak{Q}')=\emptyset, \\ 0 & \text{otherwise}. \end{cases} \end{align*} $$
For the row indexed by $(\mathfrak t;\mathfrak {Q})=(s_{i1},\dots ,s_{i j};Q_{i1},\dots ,Q_{ij})$ for some i and $1 \leq j \leq \ell _i-1$, we put
 $$ \begin{align*} \Phi^{\prime}_{(\mathfrak t;\mathfrak{Q}),(\mathfrak t';\mathfrak{Q}')}= \begin{cases} (t-\theta)^{w-w(\mathfrak t')} & \text{if } (\mathfrak t';\mathfrak{Q}')=(\mathfrak t;\mathfrak{Q}), \\ Q_{ij}^{(-1)} (t-\theta)^{w-w(\mathfrak t')} & \text{if } (\mathfrak t';\mathfrak{Q}')=(s_{i1},\dots,s_{i (j-1)};Q_{i1},\dots,Q_{i (j-1)}), \\ 0 & \text{otherwise}. \end{cases} \end{align*} $$
Note that $\Phi _{\mathfrak {s}_i,\mathfrak {Q}_i}'=\left (\Phi ^{\prime }_{(\mathfrak t;\mathfrak {Q}),(\mathfrak t';\mathfrak {Q}')}\right )_{(\mathfrak t;\mathfrak {Q}),(\mathfrak t';\mathfrak {Q}') \in I(\mathfrak {s}_i;\mathfrak {Q}_i)}$ for all i.
 We define $\Phi \in \operatorname {\mathrm {Mat}}_{|I|+1}(\overline K[t])$ by
 $$ \begin{align*} \Phi=\begin{pmatrix} \Phi' & 0 \\ \mathbf{v} & 1 \end{pmatrix} \in \operatorname{\mathrm{Mat}}_{|I|+1}(\overline K[t]), \quad \mathbf{v}=(v_{\mathfrak t,\mathfrak{Q}})_{(\mathfrak t;\mathfrak{Q}) \in I} \in \operatorname{\mathrm{Mat}}_{1 \times|I|}(\overline K[t]), \end{align*} $$
where
 $$\begin{align*}v_{\mathfrak t;\mathfrak{Q}}=\sum_i a_i(t) Q_{i \ell_i}^{(-1)} (t-\theta)^{w-w(\mathfrak t)}. \end{align*}$$
Here, the sum runs through the set of indices i such that $(\mathfrak t;\mathfrak {Q})=(s_{i1},\dots ,s_{i (\ell _i-1)};Q_{i1},\dots ,Q_{i (\ell _i-1)})$, and the empty sum is defined to be zero.
 We now introduce a rigid analytic trivialization matrix $\Psi $ for $\Phi $. We define $\Psi '=\left (\Psi ^{\prime }_{(\mathfrak t;\mathfrak {Q}),(\mathfrak t';\mathfrak {Q}')}\right )_{(\mathfrak t;\mathfrak {Q}),(\mathfrak t';\mathfrak {Q}') \in I} \in \text {GL}_{|I|}(\mathbb T)$ as follows. For the row which corresponds to the empty pair $\emptyset $, we define
 $$ \begin{align*} \Psi^{\prime}_{\emptyset,(\mathfrak t';\mathfrak{Q}')}= \begin{cases} \Omega^w & \text{if } (\mathfrak t';\mathfrak{Q}')=\emptyset, \\ 0 & \text{otherwise}. \end{cases} \end{align*} $$
For the row indexed by $(\mathfrak t;\mathfrak {Q})=(s_{i1},\dots ,s_{i j};Q_{i1},\dots ,Q_{i j})$ for some i and $1 \leq j \leq \ell _i-1$, we put
 $$ \begin{align*} &\Psi^{\prime}_{(\mathfrak t;\mathfrak{Q}),(\mathfrak t';\mathfrak{Q}')}= \\ & \begin{cases} \mathfrak L(\mathfrak t;\mathfrak{Q}) \Omega^{w-w(\mathfrak t)} & \text{if } (\mathfrak t';\mathfrak{Q}')=\emptyset, \\ \mathfrak L(s_{i(k+1)},\dots,s_{ij}; & \\ \quad Q_{i(k+1)},\dots,Q_{ij}) \Omega^{w-w(\mathfrak t)} & \text{if } (\mathfrak t';\mathfrak{Q}')=(s_{i1},\dots,s_{i k};Q_{i1},\dots,Q_{i k}) \text{ for some } 1 \leq k \leq j, \\ 0 & \text{otherwise}. \end{cases} \end{align*} $$
Note that $\Psi _{\mathfrak {s}_i,\mathfrak {Q}_i}'=\left (\Psi ^{\prime }_{(\mathfrak t;\mathfrak {Q}),(\mathfrak t';\mathfrak {Q}')}\right )_{(\mathfrak t;\mathfrak {Q}),(\mathfrak t';\mathfrak {Q}') \in I(\mathfrak {s}_i;\mathfrak {Q}_i)}$ for all i.
 We define $\Psi \in \text {GL}_{|I|+1}(\mathbb T)$ by
 $$ \begin{align*} \Psi=\begin{pmatrix} \Psi' & 0 \\ \mathbf{f} & 1 \end{pmatrix} \in \text{GL}_{|I|+1}(\mathbb T), \quad \mathbf{f}=(f_{\mathfrak t;\mathfrak{Q}})_{(\mathfrak t;\mathfrak{Q}) \in I} \in \operatorname{\mathrm{Mat}}_{1 \times|I|}(\mathbb T). \end{align*} $$
Here, we recall (see Equation (3.8))
 $$ \begin{align*} f_{\mathfrak t;\mathfrak{Q}}= \sum_i a_i(t) \mathfrak L(s_{i(k+1)},\dots,s_{i \ell_i};Q_{i(k+1)},\dots,Q_{i \ell_i}), \end{align*} $$
where the sum runs through the set of indices i such that $(\mathfrak t;\mathfrak {Q})=(s_{i1},\dots ,s_{i k};Q_{i1},\dots ,Q_{i k})$ for some $0 \leq k \leq \ell _i-1$. In particular, $f_{\emptyset }= \sum _i a_i(t) \mathfrak L(\mathfrak {s}_i;\mathfrak {Q}_i)$.
 By construction and by §3.2, we get $\Psi ^{(-1)}=\Phi \Psi $, which means that $\Psi $ is a rigid analytic trivialization for $\Phi $.
Step 2. Next, we apply the Anderson–Brownawell–Papanikolas criterion (see Theorem 3.2) to prove Theorem 3.4.
In fact, we define
 $$ \begin{align*} \widetilde \Phi=\begin{pmatrix} 1 & 0 \\ 0 & \Phi \end{pmatrix} \in \operatorname{\mathrm{Mat}}_{|I|+2}(\overline K[t]) \end{align*} $$
and consider the vector constructed from the first column vector of $\Psi $:
 $$ \begin{align*} \widetilde \psi=\begin{pmatrix} 1 \\ \Psi_{(\mathfrak t;\mathfrak{Q}),\emptyset}' \\ f_\emptyset \end{pmatrix}_{(\mathfrak t;\mathfrak{Q}) \in I}. \end{align*} $$
Then we have $\widetilde \psi ^{(-1)}=\widetilde \Phi \widetilde \psi $.
 We also observe that for all $(\mathfrak t;\mathfrak {Q}) \in I$ we have $\Psi _{(\mathfrak t;\mathfrak {Q}),\emptyset }'=\mathfrak L(\mathfrak t;\mathfrak {Q}) \Omega ^{w-w(\mathfrak t)}$. Further,
 $$ \begin{align*} a+f_\emptyset(\theta)=a+\sum_i a_i \mathfrak L(\mathfrak{s}_i;\mathfrak{Q}_i)(\theta)=0. \end{align*} $$
By Theorem 3.2 with $\rho =(a,0,\dots ,0,1)$, we deduce that there exists $\mathbf {h}=(g_0,g_{\mathfrak t,\mathfrak {Q}},g) \in \operatorname {\mathrm {Mat}}_{1 \times (|I|+2)}(\overline K[t])$ such that $\mathbf {h} \widetilde \psi =0$ and that $g_{\mathfrak t,\mathfrak {Q}}(\theta )=0$ for $(\mathfrak t,\mathfrak {Q}) \in I$, $g_0(\theta )=a$ and $g(\theta )=1 \neq 0$. If we put $ \mathbf {g}:=(1/g)\mathbf {h} \in \operatorname {\mathrm {Mat}}_{1 \times (|I|+2)}(\overline K(t))$, then all the entries of $ \mathbf {g}$ are regular at $t=\theta $.
Now, we have
 $$ \begin{align} ( \mathbf{g}- \mathbf{g}^{(-1)} \widetilde \Phi) \widetilde \psi= \mathbf{g} \widetilde \psi-( \mathbf{g} \widetilde \psi)^{(-1)}=0. \end{align} $$
We write $ \mathbf {g}- \mathbf {g}^{(-1)} \widetilde \Phi =(B_0,B_{\mathfrak t,\mathfrak {Q}},0)_{(\mathfrak t;\mathfrak {Q}) \in I}$. We claim that $B_0=0$ and $B_{\mathfrak t,\mathfrak {Q}}=0$ for all $(\mathfrak t;\mathfrak {Q}) \in I$. In fact, expanding Equation (3.9) we obtain
 $$ \begin{align} B_0+\sum_{(\mathfrak t;\mathfrak{Q}) \in I} B_{\mathfrak t,\mathfrak{Q}} \mathfrak L(\mathfrak t;\mathfrak{Q}) \Omega^{w-w(\mathfrak t)}=0. \end{align} $$
 By Equation (3.3), we see that for $(\mathfrak t;\mathfrak {Q}) \in I$ and $j \in \mathbb {N}$,
 $$ \begin{align*} \mathfrak{L}(\mathfrak t;\mathfrak{Q})(\theta^{q^j})=(\mathfrak{L}(\mathfrak t;\mathfrak{Q})(\theta))^{q^j} \end{align*} $$
which is nonzero by Condition 
 $(LW)$
.
$(LW)$
.
First, as the function $\Omega $ has a simple zero at $t=\theta ^{q^k}$ for $k \in \mathbb {N}$, specializing Equation (3.10) at $t=\theta ^{q^j}$ yields $B_0(\theta ^{q^j})=0$ for $j \geq 1$. Since $B_0$ belongs to $\overline K(t)$, it follows that $B_0=0$.
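In more detail, assuming (as the construction of $I$ gives) that $w(\mathfrak t)<w$ for every $(\mathfrak t;\mathfrak {Q}) \in I$, each exponent $w-w(\mathfrak t)$ is at least $1$, so for $j \gg 1$ (so that every $B_{\mathfrak t,\mathfrak {Q}}$ is defined at $\theta ^{q^j}$), specializing Equation (3.10) reads
 $$ \begin{align*} 0=B_0(\theta^{q^j})+\sum_{(\mathfrak t;\mathfrak{Q}) \in I} B_{\mathfrak t,\mathfrak{Q}}(\theta^{q^j})\, \mathfrak L(\mathfrak t;\mathfrak{Q})(\theta^{q^j})\, \Omega(\theta^{q^j})^{w-w(\mathfrak t)}=B_0(\theta^{q^j}), \end{align*} $$
since $\Omega (\theta ^{q^j})=0$; a nonzero rational function cannot vanish at infinitely many points, whence $B_0=0$.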
Next, we put $w_0:=\max _{(\mathfrak t;\mathfrak {Q}) \in I} w(\mathfrak t)$ and denote by $I(w_0)$ the set of $(\mathfrak t;\mathfrak {Q}) \in I$ such that $w(\mathfrak t)=w_0$. Then dividing Equation (3.10) by $\Omega ^{w-w_0}$ yields
 $$ \begin{align} \sum_{(\mathfrak t;\mathfrak{Q}) \in I} B_{\mathfrak t,\mathfrak{Q}} \mathfrak L(\mathfrak t;\mathfrak{Q}) \Omega^{w_0-w(\mathfrak t)}=\sum_{(\mathfrak t;\mathfrak{Q}) \in I(w_0)} B_{\mathfrak t,\mathfrak{Q}} \mathfrak L(\mathfrak t;\mathfrak{Q})+\sum_{(\mathfrak t;\mathfrak{Q}) \in I \setminus I(w_0)} B_{\mathfrak t,\mathfrak{Q}} \mathfrak L(\mathfrak t;\mathfrak{Q}) \Omega^{w_0-w(\mathfrak t)}=0. \end{align} $$
Since each $B_{\mathfrak t,\mathfrak {Q}}$ belongs to $\overline K(t)$, it is defined at $t=\theta ^{q^j}$ for $j \gg 1$. Note that the function $\Omega $ has a simple zero at $t=\theta ^{q^k}$ for $k \in \mathbb {N}$. Specializing Equation (3.11) at $t=\theta ^{q^j}$ and using Equation (3.3) yields
 $$\begin{align*}\sum_{(\mathfrak t;\mathfrak{Q}) \in I(w_0)} B_{\mathfrak t,\mathfrak{Q}}(\theta^{q^j}) (\mathfrak L(\mathfrak t;\mathfrak{Q})(\theta))^{q^j}=0 \end{align*}$$
for $j \gg 1$.
We claim that $B_{\mathfrak t,\mathfrak {Q}}(\theta ^{q^j})=0$ for $j \gg 1$ and for all $(\mathfrak t;\mathfrak {Q}) \in I(w_0)$. Otherwise, we would get a nontrivial $\overline K$-linear relation between the $\mathfrak L(\mathfrak t;\mathfrak {Q})(\theta )$ with $(\mathfrak t;\mathfrak {Q}) \in I$ of weight $w_0$. By Proposition 3.3, we would deduce a nontrivial K-linear relation between the $\mathfrak L(\mathfrak t;\mathfrak {Q})(\theta )$ with $(\mathfrak t;\mathfrak {Q}) \in I(w_0)$, which contradicts Condition $(LW)$. Since each $B_{\mathfrak t,\mathfrak {Q}}$ belongs to $\overline K(t)$, it follows that $B_{\mathfrak t,\mathfrak {Q}}=0$ for all $(\mathfrak t;\mathfrak {Q}) \in I(w_0)$.
Next, we put $w_1:=\max _{(\mathfrak t;\mathfrak {Q}) \in I \setminus I(w_0)} w(\mathfrak t)$ and denote by $I(w_1)$ the set of $(\mathfrak t;\mathfrak {Q}) \in I$ such that $w(\mathfrak t)=w_1$. Dividing Equation (3.10) by $\Omega ^{w-w_1}$ and specializing at $t=\theta ^{q^j}$ yields
 $$\begin{align*}\sum_{(\mathfrak t;\mathfrak{Q}) \in I(w_1)} B_{\mathfrak t,\mathfrak{Q}}(\theta^{q^j}) (\mathfrak L(\mathfrak t;\mathfrak{Q})(\theta))^{q^j}=0 \end{align*}$$
for $j \gg 1$. Since $w_1<w$, by Proposition 3.3 and Condition $(LW)$ again we deduce that $B_{\mathfrak t,\mathfrak {Q}}(\theta ^{q^j})=0$ for $j \gg 1$ and for all $(\mathfrak t;\mathfrak {Q}) \in I(w_1)$. Since each $B_{\mathfrak t,\mathfrak {Q}}$ belongs to $\overline K(t)$, it follows that $B_{\mathfrak t,\mathfrak {Q}}=0$ for all $(\mathfrak t;\mathfrak {Q}) \in I(w_1)$. Repeating the previous arguments, we deduce that $B_{\mathfrak t,\mathfrak {Q}}=0$ for all $(\mathfrak t;\mathfrak {Q}) \in I$, as required.
We have proved that $ \mathbf {g}- \mathbf {g}^{(-1)} \widetilde \Phi =0$. Thus,
 $$ \begin{align*} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \text{Id} & 0 \\ g_0/g & (g_{\mathfrak t,\mathfrak{Q}}/g)_{(\mathfrak t;\mathfrak{Q}) \in I} & 1 \end{pmatrix}^{(-1)} \begin{pmatrix} 1 & 0 \\ 0 & \Phi \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \Phi' & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \text{Id} & 0 \\ g_0/g & (g_{\mathfrak t,\mathfrak{Q}}/g)_{(\mathfrak t;\mathfrak{Q}) \in I} & 1 \end{pmatrix}. \end{align*} $$
By [Reference Chang, Papanikolas and Yu11, Prop. 2.2.1], we see that the common denominator b of $g_0/g$ and $g_{\mathfrak t,\mathfrak {Q}}/g$ for $(\mathfrak t,\mathfrak {Q}) \in I$ belongs to $\mathbb F_q[t] \setminus \{0\}$. If we put $\delta _0=bg_0/g$ and $\delta _{\mathfrak t,\mathfrak {Q}}=b g_{\mathfrak t,\mathfrak {Q}}/g$ for $(\mathfrak t,\mathfrak {Q}) \in I$, which belong to $\overline K[t]$, and $\delta :=(\delta _{\mathfrak t,\mathfrak {Q}})_{(\mathfrak t;\mathfrak {Q}) \in I} \in \operatorname {\mathrm {Mat}}_{1 \times |I|}(\overline K[t])$, then $\delta _0^{(-1)}=\delta _0$ and
 $$ \begin{align} \begin{pmatrix} \text{Id} & 0 \\ \delta & 1 \end{pmatrix}^{(-1)} \begin{pmatrix} \Phi' & 0 \\ b \mathbf{v} & 1 \end{pmatrix} = \begin{pmatrix} \Phi' & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \text{Id} & 0 \\ \delta & 1 \end{pmatrix}. \end{align} $$
If we put $X:=\begin {pmatrix} \text {Id} & 0 \\ \delta & 1 \end {pmatrix} \begin {pmatrix} \Psi ' & 0 \\ b \mathbf {f} & 1 \end {pmatrix}$, then $X^{(-1)}=\begin {pmatrix} \Phi ' & 0 \\ 0 & 1 \end {pmatrix} X$. By [Reference Papanikolas32, §4.1.6], there exist $\nu _{\mathfrak t,\mathfrak {Q}} \in \mathbb F_q(t)$ for $(\mathfrak t,\mathfrak {Q}) \in I$ such that, if we set $\nu =(\nu _{\mathfrak t,\mathfrak {Q}})_{(\mathfrak t,\mathfrak {Q}) \in I} \in \operatorname {\mathrm {Mat}}_{1 \times |I|}(\mathbb F_q(t))$, then
 $$ \begin{align*} X=\begin{pmatrix} \Psi' & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \text{Id} & 0 \\ \nu & 1 \end{pmatrix}. \end{align*} $$
Thus, the equation $\begin {pmatrix} \text {Id} & 0 \\ \delta & 1 \end {pmatrix} \begin {pmatrix} \Psi ' & 0 \\ b \mathbf {f} & 1 \end {pmatrix}=\begin {pmatrix} \Psi ' & 0 \\ 0 & 1 \end {pmatrix} \begin {pmatrix} \text {Id} & 0 \\ \nu & 1 \end {pmatrix}$ implies
 $$ \begin{align} \delta \Psi'+b \mathbf{f}=\nu. \end{align} $$
The left-hand side belongs to $\mathbb T$; hence, so does the right-hand side. Thus, $\nu =(\nu _{\mathfrak t,\mathfrak {Q}})_{(\mathfrak t,\mathfrak {Q}) \in I} \in \operatorname {\mathrm {Mat}}_{1 \times |I|}(\mathbb F_q[t])$. For any $j \in \mathbb {N}$, specializing Equation (3.13) at $t=\theta ^{q^j}$ and using Equation (3.3) and the fact that $\Omega $ has a simple zero at $t=\theta ^{q^j}$, we deduce that
 $$\begin{align*}\mathbf{f}(\theta)=\nu(\theta)/b(\theta). \end{align*}$$
Thus, for all $(\mathfrak t,\mathfrak {Q}) \in I$, the value $f_{\mathfrak t;\mathfrak {Q}}(\theta )$ given as in Equation (3.8) belongs to K.
4. Linear relations between ACMPL's
In this section, we use freely the notation of §2 and §3.2.3.
4.1. Preliminaries
We begin this section by proving several auxiliary lemmas which will be useful in the sequel. We recall that $\overline {\mathbb F}_q$ denotes the algebraic closure of $\mathbb F_q$ in $\overline {K}$.
Lemma 4.1. Let $\epsilon _i \in \mathbb F_q^\times $ be distinct elements. We denote by $\gamma _i \in \overline {\mathbb {F}}_q$ a $(q-1)$-th root of $\epsilon _i$. Then the $\gamma _i$ are $\mathbb F_q$-linearly independent.
Proof. We know that $\mathbb F_q^\times $ is cyclic as a multiplicative group. Let $\epsilon $ be a generator of $\mathbb F_q^\times $ so that $\mathbb F_q^\times =\langle \epsilon \rangle $, and let $\gamma $ be the associated $(q-1)$-th root of $\epsilon $. Then for all $1 \leq i \leq q-1$, $\gamma ^i$ is a $(q-1)$-th root of $\epsilon ^i$. Thus, it suffices to show that the polynomial $P(X)=X^{q-1}-\epsilon $ is irreducible in $\mathbb F_q[X]$. Suppose that this is not the case, and write $P(X)=P_1(X)P_2(X)$ with $1 \leq \deg P_1<q-1$. Since the roots of $P(X)$ are of the form $\alpha \gamma $ with $\alpha \in \mathbb F_q^\times $, those of $P_1(X)$ are also of this form. Looking at the constant term of $P_1(X)$, we deduce that $\gamma ^{\deg P_1} \in \mathbb F_q^\times $. If we put $m=\gcd (\deg P_1,q-1)$, then $1 \leq m<q-1$ and $\gamma ^m \in \mathbb F_q^\times $. Letting $\beta :=\gamma ^m \in \mathbb F_q^\times $, we get $\beta ^{\frac {q-1}{m}}=\gamma ^{q-1}=\epsilon $. Since $1 \leq m<q-1$, this contradicts the fact that $\mathbb F_q^\times =\langle \epsilon \rangle $. The proof is finished.
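The irreducibility step underlying this proof can be checked by brute force over a small prime field. The sketch below is an illustration, not part of the paper: the helper names and the choice $q=5$ are ours. It verifies that $X^{q-1}-\epsilon$ is irreducible over $\mathbb F_5$ exactly for the generators $\epsilon$ of $\mathbb F_5^\times$.

```python
# Brute-force check of the key step in the proof of Lemma 4.1: X^{q-1} - eps
# is irreducible over F_q precisely when eps generates the cyclic group
# F_q^x.  Illustrative choice q = 5 (a prime, so F_q = Z/5Z); all helper
# names are ours, not from the paper.
from itertools import product

P = 5  # the field F_5

def poly_mod(num, den):
    """Remainder of num modulo the monic polynomial den.
    Polynomials are coefficient lists over F_P, lowest degree first."""
    num = [c % P for c in num]
    while len(num) >= len(den):
        c, shift = num[-1], len(num) - len(den)
        for i, d in enumerate(den):
            num[shift + i] = (num[shift + i] - c * d) % P
        num.pop()
    while num and num[-1] == 0:
        num.pop()
    return num

def monic_polys(deg):
    """All monic polynomials of the given degree over F_P."""
    for lower in product(range(P), repeat=deg):
        yield list(lower) + [1]

def is_irreducible(f):
    """Trial division by every monic polynomial of degree <= deg(f)/2."""
    deg = len(f) - 1
    return all(poly_mod(f, g)
               for d in range(1, deg // 2 + 1) for g in monic_polys(d))

def is_generator(eps):
    """Does eps generate the multiplicative group F_P^x?"""
    x, seen = 1, set()
    for _ in range(P - 1):
        x = (x * eps) % P
        seen.add(x)
    return len(seen) == P - 1

# The generators of F_5^x are 2 and 3; only for those is X^4 - eps irreducible.
for eps in range(1, P):
    f = [(-eps) % P] + [0] * (P - 2) + [1]  # X^{P-1} - eps
    assert is_irreducible(f) == is_generator(eps)
```

Trial division suffices here because $q-1=4$ is tiny; the loop confirms the dichotomy used in the proof (for instance, $X^4-1$ and $X^4-4$ factor over $\mathbb F_5$, while $X^4-2$ and $X^4-3$ do not).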
Lemma 4.2. Let $ \operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {s}_i \end {pmatrix} \in \mathcal {AL}_w$ and $a_i \in K$ satisfy
 $$ \begin{align*} \sum_i a_i \mathfrak{Li}(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)(\theta)=0. \end{align*} $$
For $\epsilon \in \mathbb F_q^\times $, we denote by $I(\epsilon )=\{i:\, \chi (\boldsymbol {\epsilon }_i)=\epsilon \}$ the set of indices whose corresponding character equals $\epsilon $. Then for all $\epsilon \in \mathbb F_q^\times $,
 $$\begin{align*}\sum_{i \in I(\epsilon)} a_i \mathfrak{Li}(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)(\theta)=0. \end{align*}$$
Proof. We keep the notation of Lemma 4.1. Suppose that we have a relation
 $$\begin{align*}\sum_i \gamma_i a_i=0\end{align*}$$
with $a_i \in K_\infty $. By Lemma 4.1 and the fact that $K_\infty =\mathbb F_q((1/\theta ))$, we deduce that $a_i=0$ for all i.
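This deduction can be spelled out coefficientwise: writing each $a_i \in K_\infty =\mathbb F_q((1/\theta ))$ as a Laurent series and using that the $\gamma _i$ lie in $\overline {\mathbb F}_q$,
 $$\begin{align*} a_i=\sum_{n} a_{i,n}\theta^{-n},\ a_{i,n} \in \mathbb F_q, \qquad 0=\sum_i \gamma_i a_i=\sum_{n}\Big(\sum_i a_{i,n}\,\gamma_i\Big)\theta^{-n}, \end{align*}$$
so each coefficient $\sum _i a_{i,n}\gamma _i$ vanishes in $\overline {\mathbb F}_q$, and Lemma 4.1 forces $a_{i,n}=0$ for all i and n.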
By Equation (3.7), the relation $\sum _i a_i \mathfrak {Li}(\mathfrak {s}_i;\boldsymbol {\epsilon }_i)(\theta )=0$ is equivalent to
 $$\begin{align*}\sum_i a_i \gamma_{i1} \dots \gamma_{i\ell_i} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{s}_i \end{pmatrix}=0. \end{align*}$$
By the previous discussion, for all $\epsilon \in \mathbb F_q^\times $,
 $$\begin{align*}\sum_{i \in I(\epsilon)} a_i \gamma_{i1} \dots \gamma_{i\ell_i} \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{s}_i \end{pmatrix}=0. \end{align*}$$
By Equation (3.7) again, we deduce the desired relation
 $$\begin{align*}\sum_{i \in I(\epsilon)} a_i \mathfrak{Li}(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)(\theta)=0. \end{align*}$$
Lemma 4.3. Let $m \in \mathbb N$, $\varepsilon \in \mathbb F_q^\times $, $\delta \in \overline K[t]$ and $F(t,\theta ) \in \overline {\mathbb {F}}_q[t,\theta ]$ (resp. $F(t,\theta ) \in \mathbb {F}_q[t,\theta ]$) satisfy
 $$\begin{align*}\varepsilon\delta = \delta^{(-1)}(t-\theta)^m+F^{(-1)}(t,\theta). \end{align*}$$
Then $\delta \in \overline {\mathbb {F}}_q[t,\theta ]$ (resp. $\delta \in \mathbb {F}_q[t,\theta ]$) and
 $$\begin{align*}\deg_\theta \delta \leq \max\left\{\frac{qm}{q-1},\frac{\deg_\theta F(t,\theta)}{q}\right\}. \end{align*}$$
Proof. The proof follows the same lines as that of [Reference Kuan and Lin27, Theorem 2], where it is shown that if $F(t,\theta ) \in \mathbb {F}_q[t,\theta ]$ and $\varepsilon =1$, then $\delta \in \mathbb {F}_q[t,\theta ]$. We write down the proof for the case $F(t,\theta ) \in \overline {\mathbb {F}}_q[t,\theta ]$ for the convenience of the reader.
By twisting once the equality $\varepsilon \delta = \delta ^{(-1)}(t-\theta )^m+F^{(-1)}(t,\theta )$ and using the fact that $\varepsilon ^q=\varepsilon $, we get
 $$\begin{align*}\varepsilon\delta^{(1)} = \delta(t-\theta^q)^m+F(t,\theta). \end{align*}$$
We put $n=\deg _t \delta $ and express
 $$\begin{align*}\delta=a_n t^n + \dots +a_1t+a_0 \in \overline K[t] \end{align*}$$
with $a_0,\dots ,a_n \in \overline K$. For $i<0$, we put $a_i=0$.
Since $\deg _t \delta ^{(1)}=\deg _t \delta =n < \deg _t\big (\delta (t-\theta ^q)^m\big )=n+m$, it follows that $\deg _t F(t,\theta )=n+m$. Thus, we write $F(t,\theta )=b_{n+m} t^{n+m}+\dots +b_1 t+b_0$ with $b_0,\dots ,b_{n+m} \in \overline {\mathbb {F}}_q[\theta ]$. Plugging into the previous equation, we obtain
 $$\begin{align*}\varepsilon(a_n^q t^n + \dots +a_0^q) = (a_n t^n + \dots +a_0)(t-\theta^q)^m+b_{n+m} t^{n+m}+\dots+b_0. \end{align*}$$
Comparing the coefficients of $t^j$ for $n+1 \leq j \leq n+m$ yields
 $$\begin{align*}a_{j-m}+\sum_{i=j-m+1}^{n} {m \choose j-i} (-\theta^q)^{m-j+i} a_i+b_j=0. \end{align*}$$
Since $b_j \in \overline {\mathbb {F}}_q[\theta ]$ for all $n+1 \leq j \leq n+m$, we can show by descending induction that $a_j \in \overline {\mathbb {F}}_q[\theta ]$ for all $n+1-m \leq j \leq n$.
If $n+1-m \leq 0$, then we are done. Otherwise, comparing the coefficients of $t^j$ for $m \leq j \leq n$ yields
 $$\begin{align*}a_{j-m}+\sum_{i=j-m+1}^{n} {m \choose j-i} (-\theta^q)^{m-j+i} a_i+b_j-\varepsilon a_j^q=0. \end{align*}$$
Since $b_j \in \overline {\mathbb {F}}_q[\theta ]$ for all $m \leq j \leq n$ and $a_j \in \overline {\mathbb {F}}_q[\theta ]$ for all $n+1-m \leq j \leq n$, we can show by descending induction that $a_j \in \overline {\mathbb {F}}_q[\theta ]$ for all $0 \leq j \leq n-m$. We conclude that $\delta \in \overline {\mathbb {F}}_q[t,\theta ]$.
We now show that $\deg _\theta \delta \leq \max \{\frac {qm}{q-1},\frac {\deg _\theta F(t,\theta )}{q}\}$. Suppose on the contrary that $\deg _\theta \delta> \max \{\frac {qm}{q-1},\frac {\deg _\theta F(t,\theta )}{q}\}$. Then $\deg _\theta \delta ^{(1)}=q \deg _\theta \delta $, which implies that $\deg _\theta \delta ^{(1)}> \deg _\theta (\delta (t-\theta ^q)^m)=\deg _\theta \delta +qm$ and $\deg _\theta \delta ^{(1)}>\deg _\theta F(t,\theta )$. Hence, we get
 $$\begin{align*}\deg_\theta (\varepsilon \delta^{(1)})= \deg_\theta \delta^{(1)}> \deg_\theta(\delta(t-\theta^q)^m+F(t,\theta)), \end{align*}$$
which is a contradiction.
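As a quick sanity check of the degree bound (an illustration only, not part of the proof), one can verify the twisted functional equation $\varepsilon \delta ^{(1)}=\delta (t-\theta ^q)^m+F(t,\theta )$ on a concrete example over $\mathbb F_3$; the polynomial representation and the choice $\delta =\theta t+\theta ^2$ below are ours.

```python
# Sanity check of the degree bound in Lemma 4.3 on a concrete example over
# F_3 with m = 1.  Elements of F_q[t, theta] are dicts
# {(deg_t, deg_theta): coeff}; the twist delta -> delta^(1) sends theta to
# theta^q and fixes F_q.  The choice delta = theta*t + theta^2 is ours.
Q, M, EPS = 3, 1, 2

def clean(p):
    return {k: c % Q for k, c in p.items() if c % Q}

def add(p, r):
    out = dict(p)
    for k, c in r.items():
        out[k] = out.get(k, 0) + c
    return clean(out)

def scale(p, c):
    return clean({k: c * v for k, v in p.items()})

def mul(p, r):
    out = {}
    for (i, j), c in p.items():
        for (k, l), d in r.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + c * d
    return clean(out)

def twist(p):
    # theta -> theta^Q; coefficients lie in F_Q, hence fixed by Frobenius
    return {(i, Q * j): c for (i, j), c in p.items()}

def deg_theta(p):
    return max(j for (_, j) in p) if p else -1

delta = {(1, 1): 1, (0, 2): 1}            # delta = theta*t + theta^2
t_minus_thq = {(1, 0): 1, (0, Q): Q - 1}  # t - theta^q

# Define F so that the twisted functional equation
#   eps * delta^(1) = delta * (t - theta^q)^m + F(t, theta)
# holds (here m = 1, so no power of (t - theta^q) is needed).
F = add(scale(twist(delta), EPS), scale(mul(delta, t_minus_thq), Q - 1))

bound = max(Q * M / (Q - 1), deg_theta(F) / Q)
assert deg_theta(delta) <= bound
```

In this example the bound is sharp: $\deg _\theta \delta =2$ while $\max \{qm/(q-1),\deg _\theta F/q\}=\max \{3/2,6/3\}=2$.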
4.2. Linear relations: statement of the main result
Theorem 4.4. Let $w \in \mathbb {N}$. We recall that the set $\mathcal {J}^{\prime }_w$ consists of positive tuples $\mathfrak {s} = (s_1, \dots , s_n)$ of weight w such that $ q \nmid s_i$ for all i. Suppose that we have a nontrivial relation
 $$ \begin{align*} a+\sum_{\mathfrak{s}_i \in \mathcal{J}^{\prime}_w} a_i \mathfrak{Li}(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)(\theta)=0, \quad \text{for}\ a, a_i \in K. \end{align*} $$
Then $q-1 \mid w$ and $a \neq 0$.
 Further, if $q-1 \mid w$, then there is a unique relation
 $$ \begin{align*} 1+\sum_{\mathfrak{s}_i \in \mathcal{J}^{\prime}_w} a_i \mathfrak{Li}(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)(\theta)=0, \quad \text{for}\ a_i \in K. \end{align*} $$
Also, for indices $(\mathfrak {s}_i; \boldsymbol {\epsilon }_i)$ with nontrivial coefficient $a_i$, we have $\boldsymbol {\epsilon }_i = (1, \dots , 1)$.
 In particular, the ACMPL's in $\mathcal {AS}_w$ are linearly independent over K.
Remark 4.5. We emphasize that although Theorem 4.4 is a purely transcendental result, it is crucial that we use the full strength of the algebraic theory of ACMPL's (i.e., Theorem 2.11) to conclude (see the last step of the proof).
As a direct consequence of Theorem 4.4, we obtain:
Theorem 4.6. Let $w \in \mathbb {N}$. Then the ACMPL's in $\mathcal {AS}_w$ form a basis for $\mathcal {AL}_w$. In particular,
 $$\begin{align*}\dim_K \mathcal{AL}_w=s(w). \end{align*}$$
4.3. Proof of Theorem 4.4
 
We outline the ideas of the proof. Starting from such a nontrivial relation, we apply the Anderson–Brownawell–Papanikolas criterion in [Reference Anderson, Brownawell and Papanikolas2] and reduce to the solution of a system of $\sigma $-linear equations. In contrast to [Reference Ngo Dac31, §4 and §5], this system has a unique solution when $q-1$ divides w. We first show that for such a weight w, up to a scalar in $K^\times $, there is at most one linear relation between the ACMPL's in $\mathcal {AS}_w$ and $\widetilde \pi ^w$. Second, we exhibit a linear relation between the ACMPL's in $\mathcal {AS}_w$ and $\widetilde \pi ^w$ in which the coefficient of $\widetilde \pi ^w$ is nonzero. For this, we use Brown’s theorem for ACMPL's, that is, Theorem 2.11.
We are back to the proof of Theorem 4.4. We claim that if $q-1 \nmid w$, then any linear relation
 $$\begin{align*}a+\sum_{\mathfrak{s}_i \in \mathcal{J}^{\prime}_w} a_i \mathfrak{Li}(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)(\theta)=0 \end{align*}$$
with $a,a_i \in K$ implies that $a=0$. In fact, if we recall that $\overline {\mathbb F}_q$ denotes the algebraic closure of $\mathbb F_q$ in $\overline {K}$, then the claim follows from Equation (3.7) and the fact that $\widetilde {\pi }^w\not \in \overline{\mathbb{F}}_q \left(\left( \frac{1}{\theta}\right)\right)$ since $q-1 \nmid w$.
The proof is by induction on the weight $w \in \mathbb {N}$. For $w=1$, we distinguish two cases:
- If $q>2$, then by the previous claim it suffices to show that if
 $$\begin{align*}a+\sum_i a_i \mathfrak{Li}(1;\epsilon_i)(\theta)=0, \end{align*}$$
then $a_i=0$ for all i. This follows immediately from Lemma 4.2.
- If $q=2$, then $w=q-1=1$, and the theorem follows from the facts that there is only one index $(\mathfrak {s}_1;\boldsymbol {\epsilon }_1)=(1,1)$ and that $\operatorname {\mathrm {Li}}(1)=\zeta _A(1)=-D^{-1}_1 \widetilde \pi $.
 Suppose that Theorem 4.4 holds for all $w'<w$. We now prove that it holds for w. Suppose that we have a linear relation
 $$ \begin{align} a+\sum_i a_i \mathfrak{Li}(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)(\theta)=0. \end{align} $$
By Lemma 4.2 and its proof, we can suppose further that each $\boldsymbol{\epsilon}_i$ has the same character, that is, there exists $\epsilon \in \mathbb F_q^\times$ such that for all i,
 $$ \begin{align} \chi(\boldsymbol{\epsilon}_i)=\epsilon_{i1} \dots \epsilon_{i\ell_i}=\epsilon. \end{align} $$
We now apply Theorem 3.4 to our setting of ACMPL's. We recall that by Equation (3.6),
 $$\begin{align*}\mathfrak{Li}(\mathfrak{s};\boldsymbol{\epsilon})=\mathfrak L(\mathfrak{s};\mathfrak{Q}_{\mathfrak{s},\boldsymbol{\epsilon}}'), \end{align*}$$
and also
 $$ \begin{align} I(\mathfrak{s}_i; \boldsymbol{\epsilon}_i) &= \{\emptyset, (s_{i1}; \epsilon_{i1}), \dots, (s_{i1}, \dots, s_{i(\ell_{i}-1)}; \epsilon_{i1}, \dots, \epsilon_{i(\ell_i-1)})\}, \notag \\ I &= \bigcup_i I(\mathfrak{s}_i;\boldsymbol{\epsilon}_i). \end{align} $$
We verify that the hypotheses are satisfied:
- (LW) By the induction hypothesis, for any weight $w'<w$, the values $\mathfrak{Li}(\mathfrak t;\boldsymbol{\epsilon})(\theta)$ with $(\mathfrak t;\boldsymbol{\epsilon}) \in I$ and $w(\mathfrak t)=w'$ are all K-linearly independent.
- (LD) By Equation (4.1), there exist $a \in A$ and $a_i \in A$ for $1 \leq i \leq n$ which are not all zero such that
 $$ \begin{align*} a+\sum_{i=1}^n a_i \mathfrak{Li}(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)(\theta)=0. \end{align*} $$
Thus, Theorem 3.4 implies that for all $(\mathfrak t;\boldsymbol{\epsilon}) \in I$, $f_{\mathfrak t;\boldsymbol{\epsilon}}(\theta)$ belongs to K, where $f_{\mathfrak t;\boldsymbol{\epsilon}}$ is given by
 $$ \begin{align*} f_{\mathfrak t;\boldsymbol{\epsilon}}:= \sum_i a_i(t) \mathfrak{Li}(s_{i(k+1)},\dots,s_{i \ell_i};\epsilon_{i(k+1)},\dots,\epsilon_{i \ell_i}). \end{align*} $$
Here, the sum runs through the set of indices i such that $(\mathfrak t;\boldsymbol{\epsilon})=(s_{i1},\dots,s_{i k};\epsilon_{i1},\dots,\epsilon_{i k})$ for some $0 \leq k \leq \ell_i-1$.
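To illustrate the shape of these functions, consider as a hypothetical instance a relation involving a single depth-two index $(\mathfrak{s}_1;\boldsymbol{\epsilon}_1)=(s_1,s_2;\epsilon_1,\epsilon_2)$. Then $I=\{\emptyset,(s_1;\epsilon_1)\}$, and the two associated functions are
 $$\begin{align*} f_{\emptyset}=a_1(t)\, \mathfrak{Li}(s_1,s_2;\epsilon_1,\epsilon_2), \qquad f_{s_1;\epsilon_1}=a_1(t)\, \mathfrak{Li}(s_2;\epsilon_2), \end{align*}$$
corresponding to the prefixes with $k=0$ and $k=1$, respectively.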
 We derive a direct consequence of the previous rationality result. Let $(\mathfrak t;\boldsymbol{\epsilon}) \in I$ with $\mathfrak t \neq \emptyset$. Then $(\mathfrak t;\boldsymbol{\epsilon})=(s_{i1},\dots,s_{i k};\epsilon_{i1},\dots,\epsilon_{i k})$ for some i and $1 \leq k \leq \ell_i-1$. We denote by $J(\mathfrak t;\boldsymbol{\epsilon})$ the set of all such i. We know that there exists $a_{\mathfrak t;\boldsymbol{\epsilon}} \in K$ such that
 $$ \begin{align*} a_{\mathfrak t;\boldsymbol{\epsilon}}+f_{\mathfrak t;\boldsymbol{\epsilon}}(\theta)=0, \end{align*} $$
or equivalently,
 $$ \begin{align*} a_{\mathfrak t;\boldsymbol{\epsilon}}+\sum_{i \in J(\mathfrak t;\boldsymbol{\epsilon})} a_i \mathfrak{Li}(s_{i(k+1)},\dots,s_{i \ell_i};\epsilon_{i(k+1)},\dots,\epsilon_{i \ell_i})(\theta)=0. \end{align*} $$
The ACMPL's appearing in the above equality belong to $\mathcal{AS}_{w-w(\mathfrak t)}$. By the induction hypothesis, we can suppose that $\epsilon_{i(k+1)}=\dots=\epsilon_{i \ell_i}=1$. Further, if $q-1 \nmid w-w(\mathfrak t)$, then $a_i(t)=0$ for all $i \in J(\mathfrak t;\boldsymbol{\epsilon})$. Therefore, letting $(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)=(s_{i1},\dots,s_{i \ell_i};\epsilon_{i1},\dots,\epsilon_{i \ell_i})$, we can suppose that $s_{i2},\dots,s_{i \ell_i}$ are all divisible by $q-1$ and $\epsilon_{i2}=\dots=\epsilon_{i \ell_i}=1$. In particular, for all i, $\epsilon_{i1}=\chi(\boldsymbol{\epsilon}_i)=\epsilon$.
 Now, we want to solve Equation (3.12). Further, in this system we can assume that the corresponding element $b \in \mathbb F_q[t] \setminus \{0\}$ equals $1$. We define
 $$\begin{align*}J:=I \cup \{(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)\}, \end{align*}$$
where I is given as in Equation (4.3). For $(\mathfrak t;\boldsymbol{\epsilon}) \in J$, we denote by $J_0(\mathfrak t;\boldsymbol{\epsilon})$ the set consisting of $(\mathfrak t';\boldsymbol{\epsilon}') \in I$ such that there exist i and $0 \leq j <\ell_i$ so that $(\mathfrak t;\boldsymbol{\epsilon})=(s_{i1},s_{i2},\dots,s_{ij};\epsilon,1,\dots,1)$ and $(\mathfrak t';\boldsymbol{\epsilon}')=(s_{i1},s_{i2},\dots,s_{i(j+1)};\epsilon,1,\dots,1)$. In particular, for $(\mathfrak t;\boldsymbol{\epsilon})=(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)$, the set $J_0(\mathfrak t;\boldsymbol{\epsilon})$ is empty. For $(\mathfrak t;\boldsymbol{\epsilon}) \in J \setminus \{\emptyset\}$, we also put
 $$\begin{align*}m_{\mathfrak t}:=\frac{w-w(\mathfrak t)}{q-1} \in \mathbb Z^{\geq 0}. \end{align*}$$
Then it is clear that Equation (3.12) is equivalent to finding $(\delta_{\mathfrak t;\boldsymbol{\epsilon}})_{(\mathfrak t;\boldsymbol{\epsilon}) \in J} \in \operatorname{\mathrm{Mat}}_{1 \times |J|}(\overline K[t])$ such that
 $$ \begin{align} \delta_{\mathfrak t;\boldsymbol{\epsilon}}=\delta_{\mathfrak t;\boldsymbol{\epsilon}}^{(-1)} (t-\theta)^{w-w(\mathfrak t)}+\sum_{(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\mathfrak t;\boldsymbol{\epsilon})} \delta_{\mathfrak t';\boldsymbol{\epsilon}'}^{(-1)} (t-\theta)^{w-w(\mathfrak t)}, \quad \text{for all } (\mathfrak t;\boldsymbol{\epsilon}) \in J \setminus \{\emptyset\}, \end{align} $$
and
 $$ \begin{align} \delta_{\mathfrak t;\boldsymbol{\epsilon}}=\delta_{\mathfrak t;\boldsymbol{\epsilon}}^{(-1)} (t-\theta)^{w-w(\mathfrak t)}+\sum_{(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\mathfrak t;\boldsymbol{\epsilon})} \delta_{\mathfrak t';\boldsymbol{\epsilon}'}^{(-1)} \gamma^{(-1)} (t-\theta)^{w-w(\mathfrak t)}, \quad \text{for } (\mathfrak t;\boldsymbol{\epsilon})=\emptyset. \end{align} $$
Here, $\gamma^{q-1}=\epsilon$. In fact, for $(\mathfrak t;\boldsymbol{\epsilon})=(\mathfrak{s}_i;\boldsymbol{\epsilon}_i)$, the corresponding equation becomes $\delta_{\mathfrak{s}_i;\boldsymbol{\epsilon}_i}=\delta_{\mathfrak{s}_i;\boldsymbol{\epsilon}_i}^{(-1)}$. Thus, $\delta_{\mathfrak{s}_i;\boldsymbol{\epsilon}_i}=a_i(t) \in \mathbb F_q[t]$.
 Letting y be a variable, we denote by $v_y$ the valuation associated to the place y of the field $\mathbb F_q(y)$. We put
 $$\begin{align*}T:=t-t^q, \quad X:=t^q-\theta^q. \end{align*}$$
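With these definitions, two elementary identities, both consequences of the Frobenius in characteristic p, are used repeatedly in what follows:
 $$\begin{align*}(t-\theta)^q=t^q-\theta^q=X, \qquad t-\theta^q=(t-t^q)+(t^q-\theta^q)=T+X. \end{align*}$$
They explain why powers of $t-\theta$ and of its twist $t-\theta^q$ below can be rewritten as polynomials in X whose coefficients are polynomials in T.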
We claim that:
- 1) For all $(\mathfrak t;\boldsymbol{\epsilon}) \in J \setminus \{\emptyset\}$, the polynomial $\delta_{\mathfrak t;\boldsymbol{\epsilon}}$ is of the form
 $$\begin{align*}\delta_{\mathfrak t;\boldsymbol{\epsilon}}=f_{\mathfrak t} \left(X^{m_{\mathfrak t}}+\sum_{i=0}^{m_{\mathfrak t}-1} P_{\mathfrak t,i}(T) X^i \right),\end{align*}$$
 where
 - $f_{\mathfrak t} \in \mathbb F_q[t]$,
 - for all $0 \leq i \leq m_{\mathfrak t}-1$, $P_{\mathfrak t,i}(y)$ belongs to $\mathbb F_q(y)$ with $v_y(P_{\mathfrak t,i}) \geq 1$.
- 2) For all $\mathfrak t \in J \setminus \{\emptyset\}$ and all $\mathfrak t' \in J_0(\mathfrak t)$, there exists $P_{\mathfrak t,\mathfrak t'} \in \mathbb F_q(y)$ such that
 $$\begin{align*}f_{\mathfrak t'}=f_{\mathfrak t} P_{\mathfrak t,\mathfrak t'}(T). \end{align*}$$
 In particular, if $f_{\mathfrak t}=0$, then $f_{\mathfrak t'}=0$.
 The proof is by induction on $m_{\mathfrak t}$. We start with $m_{\mathfrak t}=0$. Then $\mathfrak t=\mathfrak{s}_i$ and $\boldsymbol{\epsilon}=\boldsymbol{\epsilon}_i$ for some i. We have observed that $\delta_{\mathfrak{s}_i;\boldsymbol{\epsilon}_i}=a_i(t) \in \mathbb F_q[t]$, and the assertion follows.
 Suppose that the claim holds for all $(\mathfrak t;\boldsymbol{\epsilon}) \in J \setminus \{\emptyset\}$ with $m_{\mathfrak t}<m$. We now prove the claim for all $(\mathfrak t;\boldsymbol{\epsilon}) \in J \setminus \{\emptyset\}$ with $m_{\mathfrak t}=m$. In fact, we fix such a $\mathfrak t$ and want to find $\delta_{\mathfrak t;\boldsymbol{\epsilon}} \in \overline K[t]$ such that
 $$ \begin{align} \delta_{\mathfrak t;\boldsymbol{\epsilon}}=\delta_{\mathfrak t;\boldsymbol{\epsilon}}^{(-1)} (t-\theta)^{(q-1)m}+\sum_{(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\mathfrak t;\boldsymbol{\epsilon})} \delta_{\mathfrak t';\boldsymbol{\epsilon}'}^{(-1)} (t-\theta)^{(q-1)m}. \end{align} $$
By the induction hypothesis, for all $(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\mathfrak t;\boldsymbol{\epsilon})$, we know that
 $$\begin{align*}\delta_{\mathfrak t';\boldsymbol{\epsilon}'}=f_{\mathfrak t'} \left(X^{m_{\mathfrak t'}}+\sum_{i=0}^{m_{\mathfrak t'}-1} P_{\mathfrak t',i}(T) X^i \right),\end{align*}$$
where
- $f_{\mathfrak t'} \in \mathbb F_q[t]$,
- for all $0 \leq i \leq m_{\mathfrak t'}-1$, $P_{\mathfrak t',i}(y) \in \mathbb F_q(y)$ with $v_y(P_{\mathfrak t',i}) \geq 1$.
For $(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\mathfrak t;\boldsymbol{\epsilon})$, we write $\mathfrak t'=(\mathfrak t,(m-k)(q-1))$ with $0 \leq k < m$ and $k \not\equiv m \pmod{q}$; in particular, $m_{\mathfrak t'}=k$. We put $f_k=f_{\mathfrak t'}$ and $P_{k,i}=P_{\mathfrak t',i}$ so that
 $$ \begin{align} \delta_{\mathfrak t';\boldsymbol{\epsilon}'}=f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right) \in \mathbb F_q[t,\theta^q]. \end{align} $$
 By Lemma 4.3, $\delta_{\mathfrak t;\boldsymbol{\epsilon}}$ belongs to $K[t]$, and $\deg_\theta \delta_{\mathfrak t;\boldsymbol{\epsilon}} \leq mq$. Further, since $\delta_{\mathfrak t;\boldsymbol{\epsilon}}$ is divisible by $(t-\theta)^{(q-1)m}$, we write $\delta_{\mathfrak t;\boldsymbol{\epsilon}}=F (t-\theta)^{(q-1)m}$ with $F \in K[t]$ and $\deg_\theta F \leq m$. Dividing Equation (4.6) by $(t-\theta)^{(q-1)m}$ and twisting once yields
 $$ \begin{align} F^{(1)}=F (t-\theta)^{(q-1)m}+\sum_{(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\mathfrak t;\boldsymbol{\epsilon})} \delta_{\mathfrak t';\boldsymbol{\epsilon}'}. \end{align} $$
As $\delta_{\mathfrak t';\boldsymbol{\epsilon}'} \in \mathbb F_q[t,\theta^q]$ for all $(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\mathfrak t;\boldsymbol{\epsilon})$, it follows that $F (t-\theta)^{(q-1)m} \in \mathbb F_q[t,\theta^q]$. As $\deg_\theta F \leq m$, we get
 $$\begin{align*}F=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{m-iq}, \quad \text{for}\ f_{m-iq} \in \mathbb F_q[t]. \end{align*}$$
Thus,
 $$ \begin{align*} & F (t-\theta)^{(q-1)m}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{mq-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}, \\ & F^{(1)}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta^q)^{m-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}. \end{align*} $$
Substituting these and Equation (4.7) into Equation (4.8) gives
 $$ \begin{align*} & \sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}+\sum_{\substack{0 \leq k<m \\ k \not\equiv m \pmod{q}}} f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right). \end{align*} $$
Comparing the coefficients of powers of X yields the following linear system in the variables $f_0,\dots,f_{m-1}$:
 $$ \begin{align*} B_{\big| y=T} \begin{pmatrix} f_{m-1} \\ \vdots \\ f_0 \end{pmatrix}=f_m \begin{pmatrix} Q_{m-1} \\ \vdots \\ Q_0 \end{pmatrix}_{\big| y=T}. \end{align*} $$
Here, for $0 \leq i \leq m-1$, $Q_i={m \choose i} y^{m-i} \in y\mathbb F_q[y]$ and $B=(B_{ij})_{0 \leq i,j \leq m-1} \in \operatorname{\mathrm{Mat}}_m(\mathbb F_q(y))$ satisfies
- $v_y(B_{ij}) \geq 1$ if $i>j$,
- $v_y(B_{ij}) \geq 0$ if $i<j$,
- $v_y(B_{ii})=0$ as $B_{ii}=\pm 1$.
The above properties follow from the fact that $P_{k,i} \in \mathbb F_q(y)$ and $v_y(P_{k,i}) \geq 1$. Thus, $v_y(\det B)=0$, so that $\det B \neq 0$. It follows that for all $0 \leq i \leq m-1$, $f_i=f_m P_i(T)$ with $P_i \in \mathbb F_q(y)$ and $v_y(P_i) \geq 1$, and we are done.
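To see why these three properties force $v_y(\det B)=0$, one may expand the determinant over permutations; we sketch this standard argument for the reader's convenience:
 $$\begin{align*}\det B=\prod_{i} B_{ii}+\sum_{\sigma \neq \mathrm{id}} \operatorname{sgn}(\sigma) \prod_i B_{i\sigma(i)}. \end{align*}$$
The diagonal term has valuation $0$ since each $B_{ii}$ is a unit, while any permutation $\sigma \neq \mathrm{id}$ must satisfy $\sigma(i)<i$ for some i, so the corresponding product contains a factor of valuation $\geq 1$ and all its other factors have valuation $\geq 0$. Hence, every non-identity term has valuation $\geq 1$, and $v_y(\det B)=0$.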
 To conclude, we have to solve Equation (4.4) for $(\mathfrak t;\boldsymbol{\epsilon})=\emptyset$. This requires some extra work, as there is a factor $\gamma^{(-1)}$ on the right-hand side of Equation (4.5). We use $\gamma^{(-1)}=\gamma/\epsilon$ and put $\delta:=\delta_{\emptyset;\emptyset}/\gamma \in \overline K[t]$. Then we have to solve
 $$ \begin{align} \epsilon \delta=\delta^{(-1)} (t-\theta)^w+\sum_{(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\emptyset)} \delta_{\mathfrak t';\boldsymbol{\epsilon}'}^{(-1)} (t-\theta)^w. \end{align} $$
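As a quick check on the identity $\gamma^{(-1)}=\gamma/\epsilon$ used above: twisting $\gamma/\epsilon$ once and using $\gamma^{q-1}=\epsilon$ together with $\epsilon^q=\epsilon$ (as $\epsilon \in \mathbb F_q^\times$) gives
 $$\begin{align*}\left(\frac{\gamma}{\epsilon}\right)^{(1)}=\frac{\gamma^q}{\epsilon^q}=\frac{\gamma \cdot \gamma^{q-1}}{\epsilon}=\frac{\gamma \epsilon}{\epsilon}=\gamma, \end{align*}$$
so $\gamma/\epsilon$ is indeed the inverse twist of $\gamma$.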
We distinguish two cases.
4.3.1. Case 1: $q-1 \nmid w$; say $w=m(q-1)+r$ with $0<r<q-1$
 We know that for all $(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\emptyset)$, say $\mathfrak t'=((m-k)(q-1)+r)$ with $0 \leq k \leq m$ and $k \not\equiv m-r \pmod{q}$,
 $$ \begin{align} \delta_{\mathfrak t';\boldsymbol{\epsilon}'}=f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right) \in \mathbb F_q[t,\theta^q], \end{align} $$
where
- $f_k \in \mathbb F_q[t]$,
- for all $0 \leq i \leq k-1$, $P_{k,i}(y)$ belongs to $\mathbb F_q(y)$ with $v_y(P_{k,i}) \geq 1$.
 By Lemma 4.3, $\delta$ belongs to $K[t]$. We claim that $\deg_\theta \delta \leq mq$. Otherwise, we have $\deg_\theta \delta> mq$. Twisting Equation (4.9) once gives
 $$ \begin{align*} \epsilon \delta^{(1)}=\delta (t-\theta^q)^w+\sum_{(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\emptyset)} \delta_{\mathfrak t';\boldsymbol{\epsilon}'} (t-\theta^q)^w. \end{align*} $$
As $\deg_\theta \delta> mq$, we compare the degrees of $\theta$ on both sides and obtain
 $$\begin{align*}q\deg_\theta \delta=\deg_\theta \delta+wq. \end{align*}$$
Thus, $(q-1)\deg_\theta \delta=wq$, and since $q-1$ and q are coprime, $q-1 \mid w$, which is a contradiction. We conclude that $\deg_\theta \delta \leq mq$.
 From Equation (4.9), we see that $\delta$ is divisible by $(t-\theta)^w$. Thus, we write $\delta=F (t-\theta)^w$ with $F \in K[t]$ and $\deg_\theta F \leq mq-w=m-r$. Dividing Equation (4.9) by $(t-\theta)^w$ and twisting once yields
 $$ \begin{align} \epsilon F^{(1)}=F (t-\theta)^w+\sum_{(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\emptyset)} \delta_{\mathfrak t';\boldsymbol{\epsilon}'}. \end{align} $$
Since $\delta_{\mathfrak t';\boldsymbol{\epsilon}'} \in \mathbb F_q[t,\theta^q]$ for all $(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\emptyset)$, it follows that $F (t-\theta)^w \in \mathbb F_q[t,\theta^q]$. As $\deg_\theta F \leq m-r$, we write
 $$\begin{align*}F=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (t-\theta)^{m-r-iq}, \quad \text{for}\ f_{m-r-iq} \in \mathbb F_q[t]. \end{align*}$$
It follows that
 $$ \begin{align*} & F (t-\theta)^w=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (t-\theta)^{mq-iq}=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} X^{m-i}, \\ & F^{(1)}=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (t-\theta^q)^{m-r-iq}=\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (T+X)^{m-r-iq}. \end{align*} $$
Substituting these and Equation (4.10) into Equation (4.11) yields
 $$ \begin{align*} & \epsilon \sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} (T+X)^{m-r-iq} \\ &\quad =\sum_{0 \leq i \leq (m-r)/q} f_{m-r-iq} X^{m-i}+\sum_{\substack{0 \leq k \leq m \\ k \not\equiv m-r \pmod{q}}} f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right). \end{align*} $$
Comparing the coefficients of powers of X yields the following linear system in the variables $f_0,\dots,f_m$:
 $$ \begin{align*} B_{\big| y=T} \begin{pmatrix} f_m \\ \vdots \\ f_0 \end{pmatrix}=0. \end{align*} $$
Here, $B=(B_{ij})_{0 \leq i,j \leq m} \in \operatorname{\mathrm{Mat}}_{m+1}(\mathbb F_q(y))$ satisfies
- $v_y(B_{ij}) \geq 1$ if $i>j$,
- $v_y(B_{ij}) \geq 0$ if $i<j$,
- $v_y(B_{ii})=0$ as $B_{ii} \in \mathbb F_q^\times$.
The above properties follow from the fact that $P_{k,i} \in \mathbb F_q(y)$ and $v_y(P_{k,i}) \geq 1$. Thus, $v_y(\det B)=0$. Hence, $f_0=\dots=f_m=0$. It follows that $\delta_\emptyset=0$, as $\delta=0$ and $\delta_{\mathfrak t';\boldsymbol{\epsilon}'}=0$ for all $(\mathfrak t';\boldsymbol{\epsilon}') \in J_0(\emptyset)$. We conclude that $\delta_{\mathfrak t;\boldsymbol{\epsilon}}=0$ for all $(\mathfrak t;\boldsymbol{\epsilon}) \in J$. In particular, for all i, $a_i(t)=\delta_{\mathfrak{s}_i;\boldsymbol{\epsilon}_i}=0$, which is a contradiction. Thus, this case can never happen.
4.3.2. Case 2: 
 $q-1 \mid w$
, says
$q-1 \mid w$
, says 
 $w=m(q-1)$
$w=m(q-1)$
 
 ${}$
${}$
 By similar arguments as above, we show that 
 $\delta =F (t-\theta )^{(q-1)m}$
 with
$\delta =F (t-\theta )^{(q-1)m}$
 with 
 $F \in K[t]$
 of the form
$F \in K[t]$
 of the form 
 $$\begin{align*}F=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{m-iq}, \quad \text{for}\ f_{m-iq} \in \mathbb F_q[t]. \end{align*}$$
$$\begin{align*}F=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{m-iq}, \quad \text{for}\ f_{m-iq} \in \mathbb F_q[t]. \end{align*}$$
Thus,
 $$ \begin{align*} & F (t-\theta)^{(q-1)m}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{mq-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}, \\ & F^{(1)}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta^q)^{m-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}. \end{align*} $$
$$ \begin{align*} & F (t-\theta)^{(q-1)m}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta)^{mq-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}, \\ & F^{(1)}=\sum_{0 \leq i \leq m/q} f_{m-iq} (t-\theta^q)^{m-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}. \end{align*} $$
Putting these and Equation (4.7) into Equation (4.9) gets
 $$ \begin{align*} & \epsilon \sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}+\sum_{\substack{0 \leq k<m \\ k \not\equiv m \pmod{q}}} f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right). \end{align*} $$
$$ \begin{align*} & \epsilon \sum_{0 \leq i \leq m/q} f_{m-iq} (T+X)^{m-iq}=\sum_{0 \leq i \leq m/q} f_{m-iq} X^{m-i}+\sum_{\substack{0 \leq k<m \\ k \not\equiv m \pmod{q}}} f_k \left(X^k+\sum_{i=0}^{k-1} P_{k,i}(T) X^i \right). \end{align*} $$
Comparing the coefficients of powers of X yields
 $$\begin{align*}\epsilon f_m=f_m \end{align*}$$
$$\begin{align*}\epsilon f_m=f_m \end{align*}$$
and the following linear system in the variables $f_0,\dots ,f_{m-1}$:
 $$ \begin{align*} B_{\big| y=T} \begin{pmatrix} f_{m-1} \\ \vdots \\ f_0 \end{pmatrix}=f_m \begin{pmatrix} Q_{m-1} \\ \vdots \\ Q_0 \end{pmatrix}_{\big| y=T}. \end{align*} $$
Here, for $0 \leq i \leq m-1$, $Q_i={m \choose i} y^{m-i} \in y\mathbb F_q[y]$ and $B=(B_{ij})_{0 \leq i,j \leq m-1} \in \operatorname {\mathrm {Mat}}_m(\mathbb F_q(y))$ such that
- $v_y(B_{ij}) \geq 1$ if $i>j$,
- $v_y(B_{ij}) \geq 0$ if $i<j$,
- $v_y(B_{ii})=0$ as $B_{ii} \in \mathbb F_q^\times $.
The above properties follow from the fact that $P_{k,i} \in \mathbb F_q(y)$ and $v_y(P_{k,i}) \geq 1$. Thus, $v_y(\det B)=0$ so that $\det B \neq 0$.
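The valuation argument for $\det B$ admits a quick finite sanity check (ours, not from the paper): replacing $\mathbb F_q[y]$ with its $y$-adic valuation by $\mathbb Z$ with the $p$-adic valuation, a matrix with valuation $\geq 1$ below the diagonal, units on the diagonal and valuation $\geq 0$ above reduces modulo the uniformizer to an upper-triangular matrix, so its determinant is a unit:

```python
import random

def valuation_det_check(n, p, seed):
    # Build B with v_p >= 1 below the diagonal, v_p = 0 on it, v_p >= 0 above,
    # and return (det B mod p, product of the diagonal mod p).
    rng = random.Random(seed)
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i > j:
                B[i][j] = p * rng.randint(-3, 3)   # below: divisible by the uniformizer
            elif i < j:
                B[i][j] = rng.randint(-3, 3)       # above: any integral entry
            else:
                B[i][j] = rng.randint(1, p - 1)    # diagonal: a unit mod p

    def det(m):  # cofactor expansion, exact integer arithmetic
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
                   for j in range(len(m)))

    diag = 1
    for i in range(n):
        diag = diag * B[i][i] % p
    return det(B) % p, diag

# mod p, B is upper triangular, so det B = product of the diagonal, a unit:
for seed in range(20):
    d, diag = valuation_det_check(4, 5, seed)
    assert d == diag != 0
```

The same reduction, applied with $v_y$ in place of $v_p$, is exactly why $v_y(\det B)=0$ above.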
We distinguish two subcases.
 
Subcase 1: $\epsilon \neq 1$.
It follows that $f_m=0$. Then $f_0=\dots =f_{m-1}=0$. Thus, $\delta _{\frak t;\boldsymbol {\epsilon }}=0$ for all $(\frak t;\boldsymbol {\epsilon }) \in J$. In particular, for all i, $a_i(t)=\delta _{\mathfrak {s}_i;\boldsymbol {\epsilon }_i}=0$. This is a contradiction, and we conclude that this case can never happen.
 
Subcase 2: $\epsilon =1$.
It follows that $\gamma \in \mathbb F_q^\times $ and thus:

1) The polynomial $\delta _\emptyset =\delta \gamma $ is of the form
$$\begin{align*}\delta_\emptyset=f_\emptyset \left(X^m+\sum_{i=0}^{m-1} P_{\emptyset,i}(T) X^i \right)\end{align*}$$
with
– $f_\emptyset \in \mathbb F_q[t]$,
– for all $0 \leq i \leq m-1$, $P_{\emptyset ,i}(y) \in \mathbb F_q(y)$ with $v_y(P_{\emptyset ,i}) \geq 1$.

2) For all $(\frak t';\boldsymbol {\epsilon }') \in J_0(\emptyset )$, there exists $P_{\emptyset ,\frak t'} \in \mathbb F_q(y)$ such that
$$\begin{align*}f_{\frak t'}=f_\emptyset P_{\emptyset,\frak t'}(T). \end{align*}$$
Hence, there exists a unique solution $(\delta _{\mathfrak t;\boldsymbol {\epsilon }})_{(\mathfrak t;\boldsymbol {\epsilon }) \in J} \in \operatorname {\mathrm {Mat}}_{1 \times |J|}(K[t])$ of Equation (4.4) up to a factor in $\mathbb F_q(t)$. Recall that for all i, $a_i(t)=\delta _{\mathfrak {s}_i;\boldsymbol {\epsilon }_i}$. Therefore, up to a scalar in $K^\times $, there exists at most one nontrivial relation
$$\begin{align*}a \widetilde \pi^w+\sum_i a_i \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix}=0 \end{align*}$$
with $a_i \in K$ and $\operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix} \in \mathcal {AS}_w$. Further, we must have $\boldsymbol {\varepsilon }_i=(1,\dots ,1)$ for all i.
To conclude, it suffices to exhibit such a relation with $a \neq 0$. In fact, we recall $w=(q-1)m$ and then express $\operatorname {\mathrm {Li}}(q-1)^m=\operatorname {\mathrm {Li}} \begin {pmatrix} 1 \\ q-1 \end {pmatrix}^m$ as a K-linear combination of ACMPL's of weight w. By Theorem 2.11, we can write
$$\begin{align*}\operatorname{\mathrm{Li}}(q-1)^m=\operatorname{\mathrm{Li}} \begin{pmatrix} 1 \\ q-1 \end{pmatrix}^m=\sum_i a_i \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix}, \quad \text{where}\ a_i \in K, \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} \in \mathcal{AS}_w. \end{align*}$$
We note that $\operatorname {\mathrm {Li}}(q-1)=\zeta _A(q-1)=-D^{-1}_1 \widetilde \pi ^{q-1}$. Thus,
$$\begin{align*}(-D_1)^{-m} \widetilde \pi^w-\sum_i a_i \operatorname{\mathrm{Li}} \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix}=0, \end{align*}$$
which is the desired relation.
5. Applications on AMZV's and Zagier–Hoffman’s conjectures in positive characteristic
In this section, we give two applications of the study of ACMPL's.
First, we use Theorem 4.6 to prove Theorem A, which calculates the dimensions of the vector space $\mathcal {AZ}_w$ of alternating multiple zeta values in positive characteristic (AMZV's) of fixed weight introduced by Harada [Reference Harada21]. Consequently, we determine all linear relations for AMZV's. To do so, we develop an algebraic theory to obtain a weak version of Brown’s theorem for AMZV's. Then we deduce that $\mathcal {AZ}_w$ and $\mathcal {AL}_w$ are equal and conclude. In contrast to the setting of MZV's, although the results are clean, we are unable to obtain either sharp upper bounds or sharp lower bounds for $\mathcal {AZ}_w$ for general w without the theory of ACMPL's.
Second, we restrict our attention to MZV's and determine all linear relations between MZV's. In particular, we obtain a proof of Zagier–Hoffman’s conjectures in positive characteristic in full generality (i.e., Theorem B) and generalize the work of one of the authors [Reference Ngo Dac31].
5.1. Linear relations between AMZV's
5.1.1. Preliminaries
For $d \in \mathbb {Z}$ and for $\mathfrak {s}=(s_1,\dots ,s_n) \in \mathbb {N}^n$, recalling $S_d(\mathfrak {s})$ and $S_{<d}(\mathfrak {s})$ given in §2.1.3, and further letting $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \dots & \varepsilon _n \\ s_1 & \dots & s_n \end {pmatrix} $ be an array, we recall (see §2.1.3)
$$ \begin{align*} S_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d = \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K \end{align*} $$
and
 $$ \begin{align*} S_{<d} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ d>\deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K. \end{align*} $$
One verifies easily the following formulas:
 $$ \begin{align*} & S_{<d} \begin{pmatrix} 1& \dots & 1 \\ s_1 & \dots & s_n \end{pmatrix} = S_{<d}(s_1, \dots, s_n),\quad S_d \begin{pmatrix} 1 &\dots & 1 \\ s_1 & \dots & s_n \end{pmatrix} = S_{d}(s_1, \dots, s_n),\\ & S_{d} \begin{pmatrix} \varepsilon \\ s \end{pmatrix} = \varepsilon^d S_d(s),\quad S_{d} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = S_{d} \begin{pmatrix} \varepsilon_1 \\ s_1 \end{pmatrix} S_{<d} \begin{pmatrix} \boldsymbol{\varepsilon}_{-} \\ \mathfrak{s}_{-} \end{pmatrix}. \end{align*} $$
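These formulas can be illustrated concretely. The sketch below is our own finite model (not part of the paper): we take $q=3$ and evaluate $\theta$ at a generator $g$ of $\mathbb F_{27}=\mathbb F_3[g]/(g^3-g-1)$; since the minimal polynomial of $g$ has degree $3$, no monic polynomial of degree $\leq 2$ vanishes at $g$, so the finite sums $S_d$, $S_{<d}$ with $d\leq 2$ can be evaluated termwise, and the last displayed factorization can be checked directly:

```python
from itertools import product

p = 3  # q = p = 3; characters take values in F_3^x = {1, 2}

def fmul(x, y):  # multiplication in F_27 = F_3[g]/(g^3 = g + 1), coefficient triples
    h = [0] * 5
    for i in range(3):
        for j in range(3):
            h[i + j] = (h[i + j] + x[i] * y[j]) % p
    for k in (4, 3):  # reduce g^k via g^3 = g + 1
        h[k - 3] = (h[k - 3] + h[k]) % p
        h[k - 2] = (h[k - 2] + h[k]) % p
    return tuple(h[:3])

def fpow(x, n):
    r = (1, 0, 0)
    for _ in range(n):
        r = fmul(r, x)
    return r

g = (0, 1, 0)  # theta is evaluated at g

def aval(coeffs):  # value at g of a polynomial given by coefficients (low to high)
    r = (0, 0, 0)
    for c in reversed(coeffs):
        r = fmul(r, g)
        r = ((r[0] + c) % p, r[1], r[2])
    return r

def inv_pow(tail, s):  # 1 / a(g)^s for the monic polynomial with the given tail
    return fpow(fpow(aval(list(tail) + [1]), 25), s)  # x^25 = x^(-1) in F_27^x

def S_d(d, s, eps=1):  # S_d(eps; s) = eps^d S_d(s)
    tot = (0, 0, 0)
    for tail in product(range(p), repeat=d):
        tot = tuple((a + b) % p for a, b in zip(tot, inv_pow(tail, s)))
    return tuple(eps ** d * a % p for a in tot)

def S_less(d, s, eps=1):  # S_{<d}(eps; s)
    tot = (0, 0, 0)
    for e in range(d):
        tot = tuple((a + b) % p for a, b in zip(tot, S_d(e, s, eps)))
    return tot

def S_depth2(d, eps, ss):  # the depth-2 array sum, directly from the definition
    tot = (0, 0, 0)
    for e in range(d):  # d = deg a_1 > deg a_2 = e >= 0
        for t1 in product(range(p), repeat=d):
            for t2 in product(range(p), repeat=e):
                c = eps[0] ** d * eps[1] ** e % p
                term = fmul(inv_pow(t1, ss[0]), inv_pow(t2, ss[1]))
                tot = tuple((a + c * b) % p for a, b in zip(tot, term))
    return tot

# S_d(array) = S_d(first column) * S_{<d}(remaining columns):
for d in (1, 2):
    for s1, s2, e1, e2 in product((1, 2), repeat=4):
        assert S_depth2(d, (e1, e2), (s1, s2)) == fmul(S_d(d, s1, e1), S_less(d, s2, e2))
```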
Harada [Reference Harada21] introduced the AMZV's as follows:
 $$ \begin{align*} \zeta_A \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \sum \limits_{d \geq 0} S_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \sum\limits_{\substack{a_1, \dots, a_n \in A_{+} \\ \deg a_1> \dots > \deg a_n\geq 0}} \dfrac{\varepsilon_1^{\deg a_1} \dots \varepsilon_n^{\deg a_n }}{a_1^{s_1} \dots a_n^{s_n}} \in K_{\infty}. \end{align*} $$
Using Chen’s formula (see [Reference Chen13]), Harada proved that for $s, t \in \mathbb {N}$ and $\varepsilon , \epsilon \in \mathbb {F}_q^{\times }$, we have
$$ \begin{align} S_d \begin{pmatrix} \varepsilon \\ s \end{pmatrix} S_d \begin{pmatrix} \epsilon \\ t \end{pmatrix} = S_d \begin{pmatrix} \varepsilon\epsilon \\ s+t \end{pmatrix} + \sum \limits_i \Delta^i_{s,t} S_d \begin{pmatrix} \varepsilon\epsilon & 1 \\ s+t-i & i \end{pmatrix} , \end{align} $$
where
 $$ \begin{align} \Delta^i_{s,t} = \begin{cases} (-1)^{s-1} {i - 1 \choose s - 1} + (-1)^{t-1} {i-1 \choose t-1} & \quad \text{if } q - 1 \mid i \text{ and } 0 < i < s + t, \\ 0 & \quad \text{otherwise.} \end{cases} \end{align} $$
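Formula (5.1) can be spot-checked numerically. The following sketch is our own finite model (not from the paper): take $q=3$ and $\varepsilon=\epsilon=1$, so the character entries drop out, and evaluate $\theta$ at a generator $g$ of $\mathbb F_{27}=\mathbb F_3[g]/(g^3-g-1)$, where no monic polynomial of degree $\leq 2$ vanishes; the depth-two term is computed via the factorization $S_d \begin{pmatrix} 1 & 1 \\ u_1 & u_2\end{pmatrix} = S_d(u_1) S_{<d}(u_2)$ recalled above:

```python
from itertools import product
from math import comb

p = 3  # q = p = 3

def fmul(x, y):  # multiplication in F_27 = F_3[g]/(g^3 = g + 1)
    h = [0] * 5
    for i in range(3):
        for j in range(3):
            h[i + j] = (h[i + j] + x[i] * y[j]) % p
    for k in (4, 3):  # reduce g^k via g^3 = g + 1
        h[k - 3] = (h[k - 3] + h[k]) % p
        h[k - 2] = (h[k - 2] + h[k]) % p
    return tuple(h[:3])

def fpow(x, n):
    r = (1, 0, 0)
    for _ in range(n):
        r = fmul(r, x)
    return r

g = (0, 1, 0)  # theta is evaluated at g

def aval(coeffs):  # value at g of a polynomial given by coefficients (low to high)
    r = (0, 0, 0)
    for c in reversed(coeffs):
        r = fmul(r, g)
        r = ((r[0] + c) % p, r[1], r[2])
    return r

def S(d, s):  # S_d(s), summed over the p^d monic polynomials of degree d
    tot = (0, 0, 0)
    for tail in product(range(p), repeat=d):
        term = fpow(fpow(aval(list(tail) + [1]), 25), s)  # x^25 = x^(-1) in F_27^x
        tot = tuple((a + b) % p for a, b in zip(tot, term))
    return tot

def S_less(d, s):  # S_{<d}(s)
    tot = (0, 0, 0)
    for e in range(d):
        tot = tuple((a + b) % p for a, b in zip(tot, S(e, s)))
    return tot

def delta(i, s, t):  # the coefficient (5.2), reduced mod p
    if i % (p - 1) == 0 and 0 < i < s + t:
        return ((-1) ** (s - 1) * comb(i - 1, s - 1)
                + (-1) ** (t - 1) * comb(i - 1, t - 1)) % p
    return 0

for d in (1, 2):
    for s in range(1, 5):
        for t in range(1, 5):
            lhs = fmul(S(d, s), S(d, t))
            rhs = S(d, s + t)
            for i in range(1, s + t):
                dl = delta(i, s, t)
                if dl:
                    term = fmul(S(d, s + t - i), S_less(d, i))
                    rhs = tuple((a + dl * b) % p for a, b in zip(rhs, term))
            assert lhs == rhs
```

Checking at a single evaluation point is of course only a necessary test of the rational-function identity, but it exercises both sides of (5.1) for all $d \leq 2$ and $s, t \leq 4$.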
Remark 5.1. When $s + t \leq q$, we deduce from the above formulas that
$$ \begin{align*} S_d \begin{pmatrix} \varepsilon \\ s \end{pmatrix} S_d \begin{pmatrix} \epsilon \\ t \end{pmatrix} = S_d \begin{pmatrix} \varepsilon\epsilon \\ s+t \end{pmatrix}. \end{align*} $$
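The remark can be double-checked mechanically: when $s+t \leq q$, the constraints $q-1 \mid i$ and $0 < i < s+t$ force $i = q-1$ and $s+t = q$ (for $q>2$), and the two binomial terms in Equation (5.2) then cancel modulo p. A small sweep over prime powers (our verification script, not part of the paper):

```python
from math import comb

def delta(i, s, t, q, p):
    # The coefficient (5.2), reduced mod p (q = p^k).
    if i % (q - 1) == 0 and 0 < i < s + t:
        return ((-1) ** (s - 1) * comb(i - 1, s - 1)
                + (-1) ** (t - 1) * comb(i - 1, t - 1)) % p
    return 0

# Whenever s + t <= q, every Delta^i_{s,t} vanishes mod p:
for p, q in [(2, 2), (2, 4), (2, 8), (3, 3), (3, 9), (5, 5), (7, 7)]:
    for s in range(1, q):
        for t in range(1, q - s + 1):  # s + t <= q
            assert all(delta(i, s, t, q, p) == 0 for i in range(1, s + t))
```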
He then proved similar results for products of AMZV's (see [Reference Harada21]):
Proposition 5.2. Let $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, $ \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $ be two arrays. Then

1. There exist $f_i \in \mathbb {F}_q$ and arrays $ \begin {pmatrix} \boldsymbol {\mu }_i \\ \mathfrak {u}_i \end {pmatrix} $ with $ \begin {pmatrix} \boldsymbol {\mu }_i \\ \mathfrak {u}_i \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $ and $\operatorname {\mathrm {depth}}(\mathfrak {u}_i) \leq \operatorname {\mathrm {depth}}(\mathfrak {s}) + \operatorname {\mathrm {depth}}(\mathfrak {t})$ for all i such that
$$ \begin{align*} S_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} S_d \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f_i S_d \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{align*} $$
2. There exist $f^{\prime }_i \in \mathbb {F}_q$ and arrays $ \begin {pmatrix} \boldsymbol {\mu }^{\prime }_i \\ \mathfrak {u}^{\prime }_i \end {pmatrix} $ with $ \begin {pmatrix} \boldsymbol {\mu }^{\prime }_i \\ \mathfrak {u}^{\prime }_i \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $ and $\operatorname {\mathrm {depth}}(\mathfrak {u}^{\prime }_i) \leq \operatorname {\mathrm {depth}}(\mathfrak {s}) + \operatorname {\mathrm {depth}}(\mathfrak {t})$ for all i such that
$$ \begin{align*} S_{<d} \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} S_{<d} \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f^{\prime}_i S_{<d} \begin{pmatrix} \boldsymbol{\mu}^{\prime}_i \\ \mathfrak{u}^{\prime}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{align*} $$
3. There exist $f^{\prime \prime }_i \in \mathbb {F}_q$ and arrays $ \begin {pmatrix} \boldsymbol {\mu }^{\prime \prime }_i \\ \mathfrak {u}^{\prime \prime }_i \end {pmatrix} $ with $ \begin {pmatrix} \boldsymbol {\mu }^{\prime \prime }_i \\ \mathfrak {u}^{\prime \prime }_i \end {pmatrix} \leq \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $ and $\operatorname {\mathrm {depth}}(\mathfrak {u}^{\prime \prime }_i) \leq \operatorname {\mathrm {depth}}(\mathfrak {s}) + \operatorname {\mathrm {depth}}(\mathfrak {t})$ for all i such that
$$ \begin{align*} S_d \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} S_{<d} \begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{t} \end{pmatrix} = \sum \limits_i f^{\prime\prime}_i S_d \begin{pmatrix} \boldsymbol{\mu}^{\prime\prime}_i \\ \mathfrak{u}^{\prime\prime}_i \end{pmatrix} \quad \text{for all } d \in \mathbb{Z}. \end{align*} $$
We denote by $\mathcal {AZ}$ the K-vector space generated by the AMZV's and $\mathcal {AZ}_w$ the K-vector space generated by the AMZV's of weight w. It follows from Proposition 5.2 that $\mathcal {AZ}$ is a K-algebra.
5.1.2. Algebraic theory for AMZV's
We can develop an algebraic theory for AMZV's which follows the same lines as that in §2.
Definition 5.3. A binary relation is a K-linear combination of the form
 $$ \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} =0 \quad \text{for all } d \in \mathbb{Z}, \end{align*} $$
where $a_i,b_i \in K$ and $ \begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix} , \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $ are arrays of the same weight.
A binary relation is called a fixed relation if $b_i = 0$ for all i.
We denote by $\mathfrak {R}_{w}$ the set of all binary relations of weight w. From Lemma 2.2 and the relation $R_{\varepsilon }$ defined in §2.2, we obtain the following binary relation
$$ \begin{align*} R_{\varepsilon} \colon \quad S_d \begin{pmatrix} \varepsilon\\ q \end{pmatrix} + \varepsilon^{-1}D_1 S_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} =0, \end{align*} $$
where $D_1 = \theta ^q - \theta $.
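The relation $R_\varepsilon$ can be spot-checked in a small finite model (our illustration, not from the paper): with $q=3$, evaluate $\theta$ at a generator $g$ of $\mathbb F_{27}=\mathbb F_3[g]/(g^3-g-1)$, where $D_1 = g^3-g = 1$, and expand the depth-two term through $S_{d+1}\begin{pmatrix} \varepsilon & 1 \\ 1 & q-1 \end{pmatrix} = \varepsilon^{d+1} S_{d+1}(1) S_{<d+1}(q-1)$. Only $d \leq 1$ is accessible here, since the minimal polynomial of $g$ is cubic and degree-3 denominators would vanish:

```python
from itertools import product

p = 3  # q = p = 3; characters take values in F_3^x = {1, 2}

def fmul(x, y):  # multiplication in F_27 = F_3[g]/(g^3 = g + 1)
    h = [0] * 5
    for i in range(3):
        for j in range(3):
            h[i + j] = (h[i + j] + x[i] * y[j]) % p
    for k in (4, 3):  # reduce g^k via g^3 = g + 1
        h[k - 3] = (h[k - 3] + h[k]) % p
        h[k - 2] = (h[k - 2] + h[k]) % p
    return tuple(h[:3])

def fpow(x, n):
    r = (1, 0, 0)
    for _ in range(n):
        r = fmul(r, x)
    return r

g = (0, 1, 0)  # theta is evaluated at g; note D_1 = g^3 - g = 1 in this model

def aval(coeffs):  # value at g of a polynomial given by coefficients (low to high)
    r = (0, 0, 0)
    for c in reversed(coeffs):
        r = fmul(r, g)
        r = ((r[0] + c) % p, r[1], r[2])
    return r

def S(d, s):  # S_d(s), summed over the p^d monic polynomials of degree d
    tot = (0, 0, 0)
    for tail in product(range(p), repeat=d):
        term = fpow(fpow(aval(list(tail) + [1]), 25), s)  # x^25 = x^(-1) in F_27^x
        tot = tuple((a + b) % p for a, b in zip(tot, term))
    return tot

def S_less(d, s):
    tot = (0, 0, 0)
    for e in range(d):
        tot = tuple((a + b) % p for a, b in zip(tot, S(e, s)))
    return tot

einv = {1: 1, 2: 2}  # inverses in F_3^x
for eps in (1, 2):
    for d in (0, 1):  # keep all degrees <= 2
        lhs = tuple(eps ** d * a % p for a in S(d, p))            # eps^d S_d(q)
        coeff = einv[eps] * eps ** (d + 1) % p                    # eps^{-1} eps^{d+1}; D_1 = 1 here
        rhs = fmul(S(d + 1, 1), S_less(d + 1, p - 1))             # S_{d+1}(1) S_{<d+1}(q-1)
        assert tuple((a + coeff * b) % p for a, b in zip(lhs, rhs)) == (0, 0, 0)
```

Again, evaluation at one point is only a necessary test, but both instances $d = 0, 1$ of the displayed relation hold on the nose.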
For later definitions, let $R \in \mathfrak {R}_w$ be a binary relation of the form
$$ \begin{align} R(d) \colon \quad \sum \limits_i a_i S_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} =0, \end{align} $$
where $a_i,b_i \in K$ and $ \begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix} , \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $ are arrays of the same weight. We now define some operators on K-vector spaces of binary relations.
First, we define operators $\mathcal B^*$. Let $ \begin {pmatrix} \sigma \\ v \end {pmatrix} $ be an array. We introduce
$$ \begin{align*} \mathcal B^*_{\sigma,v} \colon \mathfrak{R}_{w} \longrightarrow \mathfrak{R}_{w+v} \end{align*} $$
as follows: For each $R \in \mathfrak {R}_{w}$ as given in Equation (5.3), the image $\mathcal B^*_{\sigma ,v}(R) = S_d \begin {pmatrix} \sigma \\ v \end {pmatrix} \sum _{j < d} R(j)$ is a fixed relation of the form
$$ \begin{align*} 0 &= S_d \begin{pmatrix} \sigma \\ v \end{pmatrix} \left(\sum \limits_ia_i S_{<d} \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_{<d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} \right) \\ &= \sum \limits_i a_i S_d \begin{pmatrix} \sigma \\ v \end{pmatrix} S_{<d} \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma \\ v \end{pmatrix} S_{<d} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma \\ v \end{pmatrix} S_{d} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} \\ &= \sum \limits_i a_i S_d \begin{pmatrix} \sigma & \boldsymbol{\varepsilon}_i \\ v& \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma & \boldsymbol{\epsilon}_i \\ v& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i \sum \limits_j f_{i,j} S_d \begin{pmatrix} \boldsymbol{\mu}_{i,j} \\ \mathfrak{u}_{i,j} \end{pmatrix}. \end{align*} $$
The last equality follows from Proposition 5.2.
Let $ \begin {pmatrix} \Sigma \\ V \end {pmatrix} = \begin {pmatrix} \sigma _1 & \dots & \sigma _n \\ v_1 & \dots & v_n \end {pmatrix} $ be an array. We define an operator $\mathcal {B}^*_{\Sigma ,V}$ by
$$ \begin{align*} \mathcal B^*_{\Sigma,V}(R) := \mathcal B^*_{\sigma_1,v_1} \circ \dots \circ \mathcal B^*_{\sigma_n,v_n}(R). \end{align*} $$
Lemma 5.4. Let $ \begin {pmatrix} \Sigma \\ V \end {pmatrix} = \begin {pmatrix} \sigma _1 & \dots & \sigma _n \\ v_1 & \dots & v_n \end {pmatrix} $ be an array. Under the notations of Equation (5.3), suppose that for all i, $v_n + t_{i1} \leq q$, where $\mathfrak {t}_{i} = (t_{i1}, \mathfrak {t}_{i-})$. Then $\mathcal B^*_{\Sigma ,V}(R)$ is of the form
$$ \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \Sigma & \boldsymbol{\varepsilon}_i \\ V& \mathfrak{s}_i \end{pmatrix} & + \sum \limits_i b_i S_d \begin{pmatrix} \Sigma & \boldsymbol{\epsilon}_i \\ V& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_1 & \dots & \sigma_{n-1} & \sigma_n \epsilon_{i1} &\boldsymbol{\epsilon}_{i-} \\ v_1 & \dots & v_{n-1} & v_n+ t_{i1} & \mathfrak{t}_{i-} \end{pmatrix} = 0. \end{align*} $$
Proof. From the definition, $\mathcal {B}^*_{\sigma _n,v_n}(R)$ is of the form
$$ \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \sigma_n & \boldsymbol{\varepsilon}_i \\ v_n& \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_n & \boldsymbol{\epsilon}_i \\ v_n& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_n \\ v_n \end{pmatrix} S_{d} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} = 0. \end{align*} $$
For all i, since $v_n + t_{i1} \leq q$, it follows from Remark 5.1 that
$$ \begin{align*} S_d \begin{pmatrix} \sigma_n \\ v_n \end{pmatrix} S_{d} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} = S_d \begin{pmatrix} \sigma_n \\ v_n \end{pmatrix} S_d \begin{pmatrix} \epsilon_{i1} \\ t_{i1} \end{pmatrix} S_{<d} \begin{pmatrix} {\boldsymbol{\epsilon}_i}_- \\ {\mathfrak{t}_i}_- \end{pmatrix} &= S_d \begin{pmatrix} \sigma_n \epsilon_{i1} \\ v_n + t_{i1} \end{pmatrix} S_{<d} \begin{pmatrix} {\boldsymbol{\epsilon}_i}_- \\ {\mathfrak{t}_i}_- \end{pmatrix}\\ &= S_d \begin{pmatrix} \sigma_n \epsilon_{i1} & {\boldsymbol{\epsilon}_i}_- \\ v_n + t_{i1} & {\mathfrak{t}_i}_- \end{pmatrix}, \end{align*} $$
hence $\mathcal {B}^*_{\sigma _n,v_n}(R)$ is of the form
 $$ \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \sigma_n & \boldsymbol{\varepsilon}_i \\ v_n& \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_n & \boldsymbol{\epsilon}_i \\ v_n& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \sigma_n \epsilon_{i1} & {\boldsymbol{\epsilon}_i}_- \\ v_n + t_{i1} & {\mathfrak{t}_i}_- \end{pmatrix} = 0. \end{align*} $$
Apply the operator $\mathcal B^*_{\sigma _1,v_1} \circ \dots \circ \mathcal B^*_{\sigma _{n - 1},v_{n - 1}}$ to $\mathcal {B}^*_{\sigma _n,v_n}(R)$; the result then follows from the definition.
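The array bookkeeping used throughout (concatenate columns; when the weight budget allows, merge the leading column by multiplying characters and adding weights, as in Remark 5.1) can be sketched as a toy model. Everything here is invented for illustration: the exponent encoding of $\mathbb F_q^{\times }$, the names `weight` and `merge_leading` and the choice $q = 4$ are not from the paper.

```python
# Toy bookkeeping for the array manipulation behind Remark 5.1 (an
# illustration with invented representations, not the paper's objects).
# An array (epsilon; s) is a list of columns (e, s): the character e
# in F_q^x is stored as an exponent in Z/(q-1), and s is the weight entry.

Q = 4  # example: q = 4, so F_q^x is cyclic of order q - 1 = 3

def weight(arr):
    """Weight of an array: the sum of its bottom row."""
    return sum(s for _, s in arr)

def merge_leading(col, arr):
    """When v + t_1 <= q, the product S_d(sigma; v) * S_d(epsilon; t)
    collapses into a single S_d whose first column multiplies the
    characters (exponents add) and adds the weights."""
    sigma, v = col
    (e1, t1), rest = arr[0], arr[1:]
    if v + t1 > Q:
        raise ValueError("the rule requires v + t_1 <= q")
    return [((sigma + e1) % (Q - 1), v + t1)] + rest

arr = [(1, 2), (0, 3)]               # array with weight row t = (2, 3)
out = merge_leading((2, 1), arr)     # multiply by S_d(sigma; v), v = 1
assert weight(out) == 1 + weight(arr)   # weight is additive
assert len(out) == len(arr)             # depth is preserved by the merge
```

The guard mirrors the hypothesis $v + t_{1} \leq q$ under which the collapse of the two leading columns into one is valid.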
Second, we define operators $\mathcal C$. Let $ \begin {pmatrix} \Sigma \\ V \end {pmatrix} $ be an array of weight v. We introduce
 $$ \begin{align*} \mathcal C_{\Sigma,V}(R) \colon \mathfrak{R}_{w} \longrightarrow \mathfrak{R}_{w+v} \end{align*} $$
as follows: For each $R \in \mathfrak {R}_{w}$ as given in Equation (5.3), the image $\mathcal C_{\Sigma ,V}(R) = R(d) S_{<d+1} \begin {pmatrix} \Sigma \\ V \end {pmatrix} $ is a binary relation of the form
 $$ \begin{align*} 0 &= \left( \sum \limits_i a_i S_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} \right) S_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} \\ &= \sum \limits_i a_i S_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} S_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \sum \limits_i a_i S_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} S_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} S_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} \\ &= \sum \limits_i f_i S_d \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} + \sum \limits_i f^{\prime}_i S_{d+1} \begin{pmatrix} \boldsymbol{\mu}^{\prime}_i \\ \mathfrak{u}^{\prime}_i \end{pmatrix}. \end{align*} $$
The last equality follows from Proposition 5.2.
In particular, the following proposition gives the form of $\mathcal C_{\Sigma ,V}(R_{\varepsilon })$.
Proposition 5.5. Let $ \begin {pmatrix} \Sigma \\ V \end {pmatrix}$ be an array with $V = (v_1,V_{-})$ and $\Sigma = (\sigma _1, \Sigma _{-})$. Then $\mathcal C_{\Sigma ,V}(R_{\varepsilon })$ is of the form
 $$ \begin{align*} S_d \begin{pmatrix} \varepsilon\sigma_1 & \Sigma_{-} \\ q + v_1 & V_{-} \end{pmatrix} + \sum \limits_i a_i S_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{align*} $$
where $a_i, b_i \in K$ and $\begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix}, \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $ are arrays satisfying
- $\begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix} \leq \begin {pmatrix} \varepsilon \\ q \end {pmatrix} + \begin {pmatrix} \Sigma \\ V \end {pmatrix}$ and $s_{i1} < q + v_1$ for all i;
- $ \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} \leq \begin {pmatrix} 1 \\ q - 1 \end {pmatrix} + \begin {pmatrix} \Sigma \\ V \end {pmatrix} $ for all i.
Proof. From the definition, $\mathcal C_{\Sigma ,V}(R_{\varepsilon })$ is of the form
 $$ \begin{align*} S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + \varepsilon^{-1} D_1 S_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} S_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} = 0. \end{align*} $$
It follows from Equation (5.1) and Proposition 5.2 that
 $$ \begin{align*} S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} + S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{<d} \begin{pmatrix} \Sigma \\ V \end{pmatrix} &= S_d \begin{pmatrix} \varepsilon\sigma_1 & \Sigma_{-} \\ q + v_1 & V_{-} \end{pmatrix} + \sum \limits_i a_i S_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix}, \\ \varepsilon^{-1}D_1 S_{d+1} \begin{pmatrix} \varepsilon& 1 \\ 1 & q-1 \end{pmatrix} S_{<d+1} \begin{pmatrix} \Sigma \\ V \end{pmatrix} &= \sum \limits_i b_i S_{d+1} \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_i \\ 1 & \mathfrak{t}_i \end{pmatrix} , \end{align*} $$
where $a_i, b_i \in K$ and $\begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix}, \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $ are arrays satisfying
- $\begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix} \leq \begin {pmatrix} \varepsilon \\ q \end {pmatrix} + \begin {pmatrix} \Sigma \\ V \end {pmatrix}$ and $s_{i1} < q + v_1$ for all i;
- $ \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} \leq \begin {pmatrix} 1 \\ q - 1 \end {pmatrix} + \begin {pmatrix} \Sigma \\ V \end {pmatrix} $ for all i.
This proves the proposition.
Finally, we define operators $\mathcal {BC}$. Let $\varepsilon \in \mathbb {F}_q^{\times }$. We introduce
 $$ \begin{align*} \mathcal{BC}_{\varepsilon,q} \colon \mathfrak{R}_{w} \longrightarrow \mathfrak{R}_{w+q} \end{align*} $$
as follows: For each $R \in \mathfrak {R}_{w}$ as given in Equation (5.3), the image $\mathcal {BC}_{\varepsilon ,q}(R)$ is a binary relation given by
 $$ \begin{align*} \mathcal{BC}_{\varepsilon,q}(R) = \mathcal B^*_{\varepsilon,q}(R) - \sum\limits_i b_i \mathcal C_{\boldsymbol{\epsilon}_i,\mathfrak{t}_i} (R_{\varepsilon}). \end{align*} $$
Let us clarify the definition of $\mathcal {BC}_{\varepsilon ,q}$. We know that $\mathcal B^*_{\varepsilon ,q}(R)$ is of the form
 $$ \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \varepsilon& \boldsymbol{\varepsilon}_i \\ q& \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_i \\ q& \mathfrak{t}_i \end{pmatrix} + \sum \limits_i b_i S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{d} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} = 0. \end{align*} $$
Moreover, $\mathcal C_{\boldsymbol {\epsilon }_i,\mathfrak {t}_i} (R_{\varepsilon })$ is of the form
 $$ \begin{align*} S_d \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_i \\ q& \mathfrak{t}_i \end{pmatrix} + S_d \begin{pmatrix} \varepsilon \\ q \end{pmatrix} S_{d} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} + \varepsilon^{-1}D_1 S_{d+1} \begin{pmatrix} \varepsilon \\ 1 \end{pmatrix} S_{<d+1} \begin{pmatrix} 1 \\ q-1 \end{pmatrix} S_{<d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} = 0. \end{align*} $$
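Spelling out the subtraction in the definition of $\mathcal {BC}_{\varepsilon ,q}(R)$ (an intermediate step left implicit in the text): subtracting the relations $\mathcal C_{\boldsymbol {\epsilon }_i,\mathfrak {t}_i} (R_{\varepsilon })$, weighted by $b_i$, from $\mathcal B^*_{\varepsilon ,q}(R)$ cancels both the second sum and the product terms, leaving
 $$ \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \varepsilon& \boldsymbol{\varepsilon}_i \\ q& \mathfrak{s}_i \end{pmatrix} - \sum \limits_i b_i \varepsilon^{-1} D_1 S_{d+1} \begin{pmatrix} \varepsilon \\ 1 \end{pmatrix} S_{<d+1} \begin{pmatrix} 1 \\ q-1 \end{pmatrix} S_{<d+1} \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} = 0. \end{align*} $$
It remains to expand the last products by Proposition 5.2.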
Combining with Proposition 5.2, we have that $\mathcal {BC}_{\varepsilon ,q}(R)$ is of the form
 $$ \begin{align*} \sum \limits_i a_i S_d \begin{pmatrix} \varepsilon& \boldsymbol{\varepsilon}_i \\ q& \mathfrak{s}_i \end{pmatrix} + \sum \limits_{i,j} b_{ij} S_{d+1} \begin{pmatrix} \varepsilon& \boldsymbol{\epsilon}_{ij} \\ 1& \mathfrak{t}_{ij} \end{pmatrix} =0, \end{align*} $$
where $b_{ij} \in K$ and $ \begin {pmatrix} \boldsymbol {\epsilon }_{ij} \\ \mathfrak {t}_{ij} \end {pmatrix} $ are arrays satisfying $ \begin {pmatrix} \boldsymbol {\epsilon }_{ij} \\ \mathfrak {t}_{ij} \end {pmatrix} \leq \begin {pmatrix} 1 \\ q-1 \end {pmatrix} + \begin {pmatrix} \boldsymbol {\epsilon }_{i} \\ \mathfrak {t}_{i} \end {pmatrix} $ for all j.
Proposition 5.6. 1) Let $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \dots & \varepsilon _n \\ s_1 & \dots & s_n \end {pmatrix} $ be an array such that $\operatorname {\mathrm {Init}}(\mathfrak {s}) = (s_1, \dots , s_{k-1})$ for some $1 \leq k \leq n$. Then $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $ can be decomposed as follows:
 $$ \begin{align*} \zeta_A \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \underbrace{\sum\limits_i a_i \zeta_A \begin{pmatrix} \boldsymbol{\varepsilon}^{\prime}_i \\ \mathfrak{s}^{\prime}_i \end{pmatrix} }_{\text{type 1}} + \underbrace{\sum\limits_i b_i\zeta_A \begin{pmatrix} \boldsymbol{\epsilon}_i' \\ \mathfrak{t}^{\prime}_i \end{pmatrix} }_{\text{type 2}} + \underbrace{\sum\limits_i c_i\zeta_A \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} }_{\text{type 3}} , \end{align*} $$
where $a_i, b_i, c_i \in K$ such that for all i, the following properties are satisfied:
- For all arrays $ \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $ appearing on the right-hand side, $$ \begin{align*} \operatorname{\mathrm{depth}}(\mathfrak{t}) \geq \operatorname{\mathrm{depth}}(\mathfrak{s}) \quad \text{and} \quad T_k(\mathfrak{t}) \leq T_k(\mathfrak{s}). \end{align*} $$
- For the array $ \begin {pmatrix} \boldsymbol {\varepsilon }' \\ \mathfrak {s}' \end {pmatrix} $ of type $1$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, we have $\operatorname {\mathrm {Init}}(\mathfrak {s}) \preceq \operatorname {\mathrm {Init}}(\mathfrak {s}')$ and $s^{\prime }_k < s_k$.
- For the array $ \begin {pmatrix} \boldsymbol {\epsilon }' \\ \mathfrak {t}' \end {pmatrix} $ of type $2$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, for all $k \leq \ell \leq n$, $$ \begin{align*} t^{\prime}_{1} + \dots + t^{\prime}_\ell < s_1 + \dots + s_\ell. \end{align*} $$
- For the array $ \begin {pmatrix} \boldsymbol {\mu } \\ \mathfrak {u} \end {pmatrix} $ of type $3$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, we have $\operatorname {\mathrm {Init}}(\mathfrak {s}) \prec \operatorname {\mathrm {Init}}(\mathfrak {u})$.
2) Let $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \dots & \varepsilon _k \\ s_1 & \dots & s_k \end {pmatrix} $ be an array such that $\operatorname {\mathrm {Init}}(\mathfrak {s}) = \mathfrak {s}$ and $s_k = q$. Then $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $ can be decomposed as follows:
 $$ \begin{align*} \zeta_A \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = \underbrace{\sum\limits_i b_i\zeta_A \begin{pmatrix} \boldsymbol{\epsilon}^{\prime}_i \\ \mathfrak{t}^{\prime}_i \end{pmatrix} }_{\text{type 2}} + \underbrace{\sum\limits_i c_i\zeta_A \begin{pmatrix} \boldsymbol{\mu}_i \\ \mathfrak{u}_i \end{pmatrix} }_{\text{type 3}} , \end{align*} $$
where $ b_i, c_i \in K$ such that for all i, the following properties are satisfied:
- For all arrays $ \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $ appearing on the right-hand side, $$ \begin{align*} \operatorname{\mathrm{depth}}(\mathfrak{t}) \geq \operatorname{\mathrm{depth}}(\mathfrak{s}) \quad \text{and} \quad T_k(\mathfrak{t}) \leq T_k(\mathfrak{s}). \end{align*} $$
- For the array $ \begin {pmatrix} \boldsymbol {\epsilon }' \\ \mathfrak {t}' \end {pmatrix} $ of type $2$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, $$ \begin{align*} t^{\prime}_{1} + \dots + t^{\prime}_k < s_1 + \dots + s_k. \end{align*} $$
- For the array $ \begin {pmatrix} \boldsymbol {\mu } \\ \mathfrak {u} \end {pmatrix} $ of type $3$ with respect to $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, we have $\operatorname {\mathrm {Init}}(\mathfrak {s}) \prec \operatorname {\mathrm {Init}}(\mathfrak {u})$.
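The partial-sum condition defining type-2 arrays is purely combinatorial and easy to test mechanically. The helper below is a hypothetical illustration (the function name and example tuples are invented, not from the paper).

```python
# Illustrative check of the "type 2" condition in Proposition 5.6:
# every partial sum of the new weight tuple t' is strictly smaller
# than the matching partial sum of s, for indices l = k, ..., n.

from itertools import accumulate

def is_type2(s, t, k):
    """True iff t_1 + ... + t_l < s_1 + ... + s_l for all k <= l <= len(s)."""
    ps, pt = list(accumulate(s)), list(accumulate(t))
    return all(pt[l] < ps[l] for l in range(k - 1, len(s)))

s = (3, 5, 4)    # hypothetical weight tuple with n = 3
assert is_type2(s, (3, 4, 4), k=2)        # 7 < 8 and 11 < 12
assert not is_type2(s, (3, 5, 3), k=2)    # fails at l = 2: 8 < 8
```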
Proof. The proof follows the same lines as in [Reference Ngo Dac31, Propositions 2.12 and 2.13]. We outline the proof here and refer the reader to [Reference Im, Kim, Le, Ngo Dac and Pham25] for more details. For Part 1, since $\operatorname {\mathrm {Init}}(\mathfrak {s}) = (s_1, \dots , s_{k-1})$, we get $s_k> q$. Set
$ \begin {pmatrix} \Sigma \\ V \end {pmatrix} = \begin {pmatrix} 1 & \varepsilon _{k+1} &\dots & \varepsilon _n \\ s_k - q & s_{k+1} &\dots & s_n \end {pmatrix} $. By Proposition 5.5, $\mathcal {C}_{\Sigma ,V}(R_{\varepsilon _k})$ is of the form
 $$ \begin{align} S_d \begin{pmatrix} \varepsilon_{k} & \dots & \varepsilon_n \\ s_{k} & \dots & s_n \end{pmatrix} + \sum \limits_i a_i S_d \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} + \sum \limits_i b_i S_{d+1} \begin{pmatrix} \varepsilon_k & \boldsymbol{\epsilon}_i \\ 1 & \mathfrak{t}_i \end{pmatrix} =0, \end{align} $$
where $a_i, b_i \in K$ and $\begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix}, \begin {pmatrix} \boldsymbol {\epsilon }_i \\ \mathfrak {t}_i \end {pmatrix} $ are arrays satisfying
 $$ \begin{align} \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} &\leq \begin{pmatrix} \varepsilon_k \\ q \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} \varepsilon_{k} & \dots & \varepsilon_n \\ s_{k} & \dots & s_n \end{pmatrix} \quad \text{and} \quad s_{i1} < q + v_1 = s_k;\\ \notag \begin{pmatrix} \boldsymbol{\epsilon}_i \\ \mathfrak{t}_i \end{pmatrix} &\leq \begin{pmatrix} 1 \\ q - 1 \end{pmatrix} + \begin{pmatrix} \Sigma \\ V \end{pmatrix} = \begin{pmatrix} 1 & \varepsilon_{k + 1} & \dots & \varepsilon_n \\ s_k - 1 & s_{k + 1} & \dots & s_n \end{pmatrix}. \end{align} $$
For $m \in \mathbb {N}$, we recall that $q^{\{m\}}$ is the sequence of length m with all terms equal to q. Setting $s_0 = 0$, there exists a maximal index j with $0 \leq j \leq k-1$ such that $s_j < q$; hence, $\operatorname {\mathrm {Init}}(\mathfrak {s}) = (s_1, \dots , s_j, q^{\{k-j-1\}})$. Then the operator $ \mathcal {BC}_{\varepsilon _{j+1},q} \circ \dots \circ \mathcal {BC}_{\varepsilon _{k-1},q}$ applied to the relation (5.4) gives
 $$ \begin{align} S_d \begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \varepsilon_{k} & \dots & \varepsilon_n \\ q & \dots & q & s_{k} & \dots & s_n \end{pmatrix} & + \sum \limits_i a_i S_d \begin{pmatrix} \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \boldsymbol{\varepsilon}_i \\ q & \dots & q & \mathfrak{s}_i \end{pmatrix} \\ \notag & + \sum \limits_i b_{i_1 \dots i_{k-j}} S_{d+1} \begin{pmatrix} \varepsilon_{j+1} & \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} =0, \end{align} $$
where $b_{i_1 \dots i_{k-j}} \in K$ and
 $$ \begin{align} \begin{pmatrix} \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \leq \begin{pmatrix} \varepsilon_{j+2}& \dots & \varepsilon_k & 1 & \varepsilon_{k+1}& \dots & \varepsilon_n \\ q& \dots & q & s_{k} - 1 & s_{k+1} &\dots & s_n \end{pmatrix}. \end{align} $$
We let $ \begin {pmatrix} \Sigma ' \\ V' \end {pmatrix} = \begin {pmatrix} \varepsilon _{1} &\dots & \varepsilon _j \\ s_1 &\dots & s_j \end {pmatrix} $, and we apply $\mathcal {B}^*_{\Sigma ',V'}$ to Equation (5.6). Since $s_j < q$, that is, $s_j + 1 \leq q$, we can deduce from Lemma 5.4 that
 $$ \begin{align} \zeta_A \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} &=- \sum \limits_i a_i \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \dots & \varepsilon_{k-1} & \boldsymbol{\varepsilon}_i \\ s_1 & \dots & s_j & q & \dots & q & \mathfrak{s}_i \end{pmatrix} - \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_j & \varepsilon_{j+1} & \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_j & 1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix} \notag\\ &\quad - \sum \limits_i b_{i_1 \dots i_{k-j}} \zeta_A \begin{pmatrix} \varepsilon_1 & \dots & \varepsilon_{j-1} & \varepsilon_j \varepsilon_{j+1} & \boldsymbol{\epsilon}_{i_1 \dots i_{k-j}} \\ s_1 & \dots & s_{j-1} & s_j+1 & \mathfrak{t}_{i_1 \dots i_{k-j}} \end{pmatrix}. \end{align} $$
The first term, the second term and the third term on the right-hand side of Equation (5.8) are referred to as type 1, type 2 and type 3, respectively. From Equations (5.5) and (5.7) and Remark 2.1, one verifies that the arrays of type 1, type 2 and type 3 satisfy the desired conditions. We have proved Part 1.
 The proof of Part $2$ is similar to that of Proposition 2.7. We begin with the relation $R_{\varepsilon _k}$. Next, we apply $ \mathcal {BC}_{\varepsilon _{j+1},q} \circ \dots \circ \mathcal {BC}_{\varepsilon _{k-1},q}$ to $R_{\varepsilon _k}$ and then apply $\mathcal {B}^*_{\Sigma ',V'}$. We can decompose $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ in terms of type 2 and type 3 as in Equation (5.8). One verifies that these terms satisfy the desired conditions. This finishes the proof.
Proposition 5.7. For all $k \in \mathbb {N}$ and for all arrays $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $ can be expressed as a K-linear combination of $\zeta _A \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $'s of the same weight such that $\mathfrak {t}$ is k-admissible.
Proof. The proof follows the same lines as that of [Reference Ngo Dac31, Proposition 3.2]. We outline the proof here and refer the reader to [Reference Im, Kim, Le, Ngo Dac and Pham25] for more details. We consider the following statement:
 
 $(H_k) \quad $ For all arrays $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $, we can express $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $ as a K-linear combination of $\zeta _A \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $'s of the same weight such that $\mathfrak {t}$ is k-admissible.
 We will show that $(H_k)$ holds for all $k \in \mathbb {N}$ by induction on k. For $k = 1$, we prove that $(H_1)$ holds by induction on the first component $s_1$ of $\mathfrak {s}$. If $s_1 \leq q$, then either $\mathfrak {s}$ is $1$-admissible, or $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon \\ q \end {pmatrix}$. We deduce from the relation $R_{\varepsilon }$ that $(H_1)$ holds in the case $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon \\ q \end {pmatrix}$. If $s_1 > q$, we assume that $(H_1)$ holds for all arrays $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ with $s_1 < s$, and we need to show that $(H_1)$ holds for the array $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ with $s_1 = s$. Indeed, assume that $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \cdots & \varepsilon _n \\ s_1 & \cdots & s_n \end {pmatrix}$. Set $\begin {pmatrix} \Sigma \\ V \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \varepsilon _2 & \cdots & \varepsilon _n \\ s_1 - q & s_2 & \cdots & s_n \end {pmatrix}$. Applying $ C_{\Sigma ,V}$ to the relation $R_{1}$ and using Proposition 5.5, we can deduce that
 $$ \begin{align*} \zeta_A \begin{pmatrix} \boldsymbol{\varepsilon} \\ \mathfrak{s} \end{pmatrix} = - \sum \limits_i a_i \zeta_A \begin{pmatrix} \boldsymbol{\varepsilon}_i \\ \mathfrak{s}_i \end{pmatrix} - \sum \limits_i b_i \zeta_A \begin{pmatrix} 1 & \boldsymbol{\epsilon}_i \\ 1 & \mathfrak{t}_i \end{pmatrix}, \end{align*} $$
where $a_i, b_i \in K$ and $\begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix}$ are arrays satisfying $s_{i1} < s$ for all i. From the induction hypothesis, we deduce that $(H_1)$ holds for each $\begin {pmatrix} \boldsymbol {\varepsilon }_i \\ \mathfrak {s}_i \end {pmatrix}$, and therefore for $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$.
 We next assume that $(H_{k - 1})$ holds. We need to show that $(H_k)$ holds. The rest of the proof is similar to that of Proposition 2.9. We can restrict our attention to the array $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \cdots & \varepsilon _n \\ s_1 & \cdots & s_n \end {pmatrix}$, where $\mathfrak {s}$ is not k-admissible and $\operatorname {\mathrm {depth}}(\mathfrak {s}) \geq k$. We show that $(H_k)$ holds for the array $\begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ by induction on $s_1 + \cdots + s_k$. For the induction step, by using Proposition 5.6 and the induction hypothesis, we can give an algorithm to decompose $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} $ as a K-linear combination of $\zeta _A \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {t} \end {pmatrix} $'s of the same weight such that $\mathfrak {t}$ is k-admissible. From Proposition 5.6 and arguments similar to those in [Reference Ngo Dac31, Proposition 3.2], we can show that this algorithm stops after a finite number of steps. This proves the result.
Consequently, we obtain a weak version of Brown’s theorem for AMZV's as follows.
Proposition 5.8. The set of all elements $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ such that $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} \in \mathcal {AT}_w$ forms a set of generators for $\mathcal {AZ}_w$. Here, we recall that $\mathcal {AT}_w$ is the set of all AMZV's $\zeta _A \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \operatorname {\mathrm {Li}} \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix}$ of weight w such that $ \begin {pmatrix} \boldsymbol {\varepsilon } \\ \mathfrak {s} \end {pmatrix} = \begin {pmatrix} \varepsilon _1 & \cdots & \varepsilon _n \\ s_1 & \cdots & s_n \end {pmatrix}$ with $s_1, \dots , s_{n-1} \leq q$ and $s_n < q$, as introduced in the paragraph preceding Proposition 2.10.
Proof. It follows from Proposition 5.7 in the case $k = w$.
5.2. Proof of Theorem A
As a direct consequence of Proposition 2.10 and Proposition 5.8, we get
Theorem 5.9. The K-vector space $\mathcal {AZ}_w$ of AMZV's of weight w and the K-vector space $\mathcal {AL}_w$ of ACMPL's of weight w are the same.
By this identification, we apply Theorem 4.6 to obtain Theorem A.
5.3. Zagier–Hoffman’s conjectures in positive characteristic
5.3.1. Known results
 We use freely the notation introduced in §1.2.1. We recall that for $w \in \mathbb {N}$, $\mathcal Z_w$ denotes the K-vector space spanned by the MZV's of weight w, and $\mathcal T_w$ denotes the set of $\zeta _A(\mathfrak {s})$, where $\mathfrak {s}=(s_1,\ldots ,s_r) \in \mathbb {N}^r$ is of weight w with $1\leq s_i\leq q$ for $1\leq i\leq r-1$ and $s_r<q$.
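To make the definition of $\mathcal T_w$ concrete, its index tuples can be enumerated by a short brute-force search. The following sketch is our own illustration (the function name and the sample values of q and w are hypothetical choices, not from the paper); it lists the tuples $(s_1,\ldots ,s_r)$ of weight w with $s_i \leq q$ for $i < r$ and $s_r < q$.

```python
def index_tuples(w, q):
    """Tuples (s_1, ..., s_r) summing to w with s_i <= q for i < r and s_r < q."""
    out = []

    def rec(prefix, rem):
        # Close the tuple here: the final entry rem must satisfy 1 <= rem < q.
        if 1 <= rem < q:
            out.append(tuple(prefix) + (rem,))
        # Or continue with a non-final entry s <= q, leaving at least 1 for the rest.
        for s in range(1, min(q, rem - 1) + 1):
            rec(prefix + [s], rem - s)

    rec([], w)
    return out

# Example with q = 3, w = 3: the indices are (1,1,1), (1,2), (2,1);
# the tuple (3) is excluded because the last entry must be < q.
print(sorted(index_tuples(3, 3)))  # → [(1, 1, 1), (1, 2), (2, 1)]
```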
Recall that the main results of [Reference Ngo Dac31] state that
• For all $w \in \mathbb {N}$, we always have $\dim _K \mathcal Z_w \leq d(w)$ (see [Reference Ngo Dac31, Theorem A]).
• For $w \leq 2q-2$, we have $\dim _K \mathcal Z_w \geq d(w)$ (see [Reference Ngo Dac31, Theorem B]). In particular, Conjecture 1.7 holds for $w \leq 2q-2$ (see [Reference Ngo Dac31, Theorem D]).
However, as stated in [Reference Ngo Dac31, Remark 6.3], it would be very difficult to extend the method of [Reference Ngo Dac31] to general weights.
As an application of our main results, we present a proof of Theorem B, which settles both Conjectures 1.6 and 1.7.
5.3.2. Proof of Theorem B
 As we already know the sharp upper bound for $\mathcal Z_w$ (see [Reference Ngo Dac31, Theorem A]), Theorem B follows immediately from the following proposition.
Proposition 5.10. For all $w \in \mathbb {N}$, we have $\dim _K \mathcal Z_w \geq d(w)$.
Proof. We denote by $\mathcal S_w$ the set of CMPL's consisting of the $\operatorname {\mathrm {Li}}(s_1,\ldots ,s_r)$ of weight w with $q \nmid s_i$ for all i.
 Then $\mathcal {S}_w$ can be considered as a subset of $\mathcal {AS}_w$ by setting $\boldsymbol {\epsilon }=(1,\ldots , 1)$. In fact, all algebraic relations in §2 hold for the CMPL version, that is, for $\operatorname {\mathrm {Si}}_d(s_1,\dots ,s_r)=\operatorname {\mathrm {Si}}_d \begin {pmatrix}1 & \dots &1 \\ s_1& \dots & s_r\end {pmatrix}$ and $\operatorname {\mathrm {Li}}(s_1,\dots ,s_r)=\operatorname {\mathrm {Li}} \begin {pmatrix}1 & \dots &1 \\ s_1& \dots & s_r\end {pmatrix}$. It follows that $\mathcal S_w$ is contained in $\mathcal Z_w$ by Theorem 5.9. Further, by §2.4.1, $|\mathcal S_w|=d(w)$. By Theorem 4.4, the elements of $\mathcal S_w$ are all linearly independent over K. Therefore, $\dim _K \mathcal Z_w \geq |\mathcal S_w|=d(w)$.
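The index set of $\mathcal S_w$ can be enumerated by brute force for small parameters. The sketch below is our own illustration (function name and sample values are hypothetical); it lists the compositions $(s_1,\ldots ,s_r)$ of w with $q \nmid s_i$ for all i. The identity $|\mathcal S_w| = d(w)$ itself comes from §2.4.1 of the paper and is not verified by this code.

```python
def cmpl_indices(w, q):
    """Compositions (s_1, ..., s_r) of w with no part divisible by q."""
    out = []

    def rec(prefix, rem):
        if rem == 0:
            out.append(tuple(prefix))
            return
        for s in range(1, rem + 1):
            if s % q != 0:  # the defining condition: q does not divide s_i
                rec(prefix + [s], rem - s)

    rec([], w)
    return out

# Example with q = 3, w = 3: the indices are (1,1,1), (1,2), (2,1);
# the composition (3) is excluded since q divides 3.
print(sorted(cmpl_indices(3, 3)))  # → [(1, 1, 1), (1, 2), (2, 1)]
```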
5.4. Sharp bounds without ACMPL's
To end this paper, we mention that without ACMPL's it seems very hard to obtain for arbitrary weight w
• either the sharp upper bound $\dim _K \mathcal {AZ}_w \leq s(w)$,
• or the sharp lower bound $\dim _K \mathcal {AZ}_w \geq s(w)$.
We can only do this for small weights with ad hoc arguments. We collect the results below, sketch some ideas for the proofs, and refer the reader to [Reference Im, Kim, Le, Ngo Dac and Pham26] for full details.
Proposition 5.11. Let $w \leq 2q-2$. Then $\dim _K \mathcal {AZ}_w \leq s(w)$.
Proof. We outline the proof and refer the reader to [Reference Im, Kim, Le, Ngo Dac and Pham25] for more details. We denote by $\mathcal {AT}_w^1$ the subset of AMZV's $\zeta _A \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {s} \end {pmatrix}$ of $\mathcal {AT}_w$ such that $\epsilon _i=1$ whenever $s_i=q$, and by $\langle \mathcal {AT}_w^1 \rangle $ the K-vector space spanned by the AMZV's in $\mathcal {AT}_w^1$. We see that $|\mathcal {AT}_w^1|=s(w)$. Thus, it suffices to prove that $\langle \mathcal {AT}_w^1 \rangle =\mathcal {AZ}_w$.
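The count $|\mathcal {AT}_w^1|$ can be made explicit: for each index tuple $\mathfrak {s}$ of $\mathcal {AT}_w$, the sign $\epsilon _i \in \mathbb F_q^\times$ is free except at positions with $s_i = q$, where $\epsilon _i = 1$ is forced. The sketch below is our own hedged illustration of this count (function name and sample values are hypothetical); that it equals $s(w)$ is the paper's assertion, which we do not verify here.

```python
def at1_count(w, q):
    """Count arrays (eps; s) with s of weight w, s_i <= q for i < n, s_n < q,
    and eps_i in F_q^* free except that eps_i = 1 is forced where s_i = q."""
    def tuples(prefix, rem, acc):
        # Terminal entry: 1 <= rem < q closes a valid index tuple.
        if 1 <= rem < q:
            acc.append(tuple(prefix) + (rem,))
        for s in range(1, min(q, rem - 1) + 1):
            tuples(prefix + [s], rem - s, acc)
        return acc

    total = 0
    for s in tuples([], w, []):
        free = sum(1 for si in s if si != q)  # positions with a free sign
        total += (q - 1) ** free
    return total

# Example with q = 3, w = 3: the index tuples are (1,1,1), (1,2), (2,1), no
# entry equals q, so the count is 2**3 + 2**2 + 2**2 = 16.
print(at1_count(3, 3))  # → 16
```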
 Let $U=(u_1,\dots ,u_n)$ and $W=(w_1,\dots ,w_r)$ be tuples of positive integers such that $w(U)+w(W)+ q = w$, $u_n \leq q-1$ and $w_1,\dots ,w_r \leq q$. Let $\boldsymbol {\epsilon }=(\epsilon _1,\dots ,\epsilon _n)\in (\mathbb F_q^\times )^n$ and $\boldsymbol {\lambda }=(\lambda _1,\dots ,\lambda _r)\in (\mathbb F_q^\times )^r$. By direct calculations, we can obtain an explicit formula for $\mathcal B_{\boldsymbol {\epsilon },U} \mathcal C_{\boldsymbol {\lambda },W} (R_{\epsilon })$, whose general form is as follows:
 $$ \begin{align} &S_d\begin{pmatrix} \boldsymbol{\epsilon} & \epsilon & \boldsymbol{\lambda}\\ U & q & W \end{pmatrix}+S_d\begin{pmatrix} \boldsymbol{\epsilon} & \epsilon \lambda_1 & \boldsymbol{\lambda}_-\\ U & q + w_1 & W_- \end{pmatrix}\\ \notag & + \sum_i \alpha_i S_d\begin{pmatrix} \boldsymbol{\epsilon}^{\prime}_i & \lambda^{\prime}_{i1} & \boldsymbol{\lambda}^{\prime}_{i-}\\ U^{\prime}_i & q + w^{\prime}_{i1} & W^{\prime}_{i-} \end{pmatrix} + \sum_i \beta_i S_d\begin{pmatrix} \boldsymbol{\epsilon}^{\prime\prime}_i & \lambda_1 & \boldsymbol{\lambda}_-\\ U^{\prime\prime}_i & q + w_1 - 1 & W_- \end{pmatrix} + \sum_i \gamma_i S_d\begin{pmatrix} \mu_i\\ V_i \end{pmatrix}= 0. \end{align} $$
Here, the coefficients $\alpha _i, \beta _i, \gamma _i$ belong to K. For the third term, the $W^{\prime }_i = (w^{\prime }_{i1}, W^{\prime }_{i-})$ are tuples of positive integers such that $\operatorname {\mathrm {depth}}(W^{\prime }_i) < r$. For the last term, since $w(U)+w(W)=w-q \leq q-2$, the $V_i$ are tuples of positive integers all of whose components are at most $q - 1$.
 We denote by $H_r$ the following claim: For any tuples of positive integers U (of any depth) and $W=(w_1,\dots ,w_r)$ of depth r, and any $\boldsymbol {\epsilon } \in (\mathbb F_q^\times )^{\operatorname {\mathrm {depth}}(U)}$, $\boldsymbol {\lambda }=(\lambda _1,\dots ,\lambda _r)\in (\mathbb F_q^\times )^{r}$ and $\epsilon \in \mathbb F_q^\times $ such that $w(U)+w(W)+q=w$, the AMZV's $\zeta _A\begin {pmatrix} \boldsymbol {\epsilon } & \epsilon & \boldsymbol {\lambda }\\ U & q & W \end {pmatrix}$ and $\zeta _A\begin {pmatrix} \boldsymbol {\epsilon } & \epsilon \lambda _1 & \boldsymbol {\lambda }_-\\ U & q+w_1 & W_- \end {pmatrix}$ belong to $\langle \mathcal {AT}_w^1\rangle $.
 We will show that $H_r$ holds for all $r \geq 0$ by induction on r. For $r=0$, we know that $W=\emptyset $. The explicit expression for $\mathcal B_{\boldsymbol {\epsilon },U}(R_{\epsilon })$ is given by
 $$ \begin{align*} S_d\begin{pmatrix} \boldsymbol{\epsilon} & \epsilon\\ U & q \end{pmatrix}&+\epsilon^{-1}D_1S_d\begin{pmatrix} \boldsymbol{\epsilon} & \epsilon& 1\\ U & 1 & q - 1 \end{pmatrix} +\epsilon^{-1}D_1S_d\begin{pmatrix} \epsilon_1 & \dots & \epsilon_{n-1} & \epsilon_n \epsilon & 1 \\ u_1 & \dots & u_{n-1} & u_n + 1 & q - 1 \end{pmatrix}=0. \end{align*} $$
Since $u_i \leq w(U)=w-q \leq q-2$, we deduce that $\zeta _A\begin {pmatrix} \boldsymbol {\epsilon } & \epsilon \\ U & q \end {pmatrix}\in \langle \mathcal {AT}_w^1 \rangle $ as required.
 Suppose that $H_{r'}$ holds for all $r'<r$. We now show that $H_r$ holds. We proceed again by induction on $w_1$. For $w_1=1$, we apply the formula (5.9). As $w(U)+w(W)=w-q \leq q-2$, by induction we deduce that all the terms in this expression except the first two belong to $\langle \mathcal {AT}_w^1 \rangle $. Thus, for any $\epsilon \in \mathbb F_q^\times $,
 $$ \begin{align} \zeta_A\begin{pmatrix} \boldsymbol{\epsilon} & \epsilon & \boldsymbol{\lambda}\\ U & q & W \end{pmatrix}+\zeta_A\begin{pmatrix} \boldsymbol{\epsilon} & \epsilon \lambda_1 & \boldsymbol{\lambda}_-\\ U & q + 1 & W_- \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \end{align} $$
We take $\epsilon =1$. As the first term lies in $\mathcal {AT}_w^1$ by definition, we deduce that
 $$ \begin{align*} \zeta_A\begin{pmatrix} \boldsymbol{\epsilon} & \lambda_1 & \boldsymbol{\lambda}_-\\ U & q + 1 & W_- \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \end{align*} $$
Thus, in Equation (5.10) we now know that the second term lies in $\langle \mathcal {AT}_w^1 \rangle $, which implies that
 $$\begin{align*}\zeta_A\begin{pmatrix} \boldsymbol{\epsilon} & \epsilon & \boldsymbol{\lambda}\\ U & q & W \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \end{align*}$$
 We suppose that $H_r$ holds for all $W'=(w_1',\dots ,w_r')$ such that $w_1'<w_1$, and we have to show that $H_r$ holds for all $W=(w_1,\dots ,w_r)$. The proof is similar to that of the base step $w_1 = 1$. We first consider the formula (5.9). As $w(U)+w(W)=w-q \leq q-2$, we can deduce from the induction hypothesis that
 $$ \begin{align} \zeta_A\begin{pmatrix} \boldsymbol{\epsilon} & \epsilon & \boldsymbol{\lambda}\\ U & q & W \end{pmatrix}+\zeta_A\begin{pmatrix} \boldsymbol{\epsilon} & \epsilon \lambda_1 & \boldsymbol{\lambda}_-\\ U & q + w_1 & W_- \end{pmatrix} \in \langle \mathcal{AT}_w^1 \rangle. \end{align} $$
By arguments similar to those in the base step $w_1 = 1$, we can deduce that the first term and the second term of Equation (5.11) belong to $\langle \mathcal {AT}_w^1 \rangle $. The proof is complete.
Remark 5.12. The condition $w \leq 2q-2$ is essential in the previous proof, as it allows us to significantly simplify the expression of $\mathcal B_{\boldsymbol {\epsilon },U} \mathcal C_{\boldsymbol {\lambda },W} (R_{\epsilon })$ (see Equation (5.11)). For $w=2q-1$, the situation is already complicated, but we can manage to prove Proposition 5.11. Unfortunately, we are not able to extend it to $w=2q$.
Proposition 5.13. Let either $w \leq 3q-3$, or $w=3q-2$ and $q=2$. Then $\dim _K \mathcal {AZ}_w \geq s(w)$.
Proof. We outline a proof of this theorem and refer the reader to [Reference Im, Kim, Le, Ngo Dac and Pham26] for more details. For $1 \leq w \leq 3q-2$, we denote by $\mathcal I_w'$ the set of tuples $\mathfrak {s}=(s_1,\ldots ,s_r) \in \mathbb N^r$ of weight w defined as follows:
• For $1 \leq w \leq 2q-2$, $\mathcal I_w'$ consists of tuples $\mathfrak {s}=(s_1,\ldots ,s_r) \in \mathbb N^r$ of weight w, where $s_i \neq q$ for all i.
• For $2q-1 \leq w \leq 3q-3$, $\mathcal I_w'$ consists of tuples $\mathfrak {s}=(s_1,\ldots ,s_r) \in \mathbb N^r$ of weight w of the form
 – either $s_i \neq q, 2q-1, 2q$ for all i,
 – or there exists a unique integer $1 \leq i <r$ such that $(s_i,s_{i+1})=(q-1,q)$.
• For $w = 3q-2$ and $q>2$, $\mathcal I_w'$ consists of tuples $\mathfrak {s}=(s_1,\ldots ,s_r) \in \mathbb N^r$ of weight w of the form
 – either $s_i \neq q, 2q-1, 2q, 3q-2$ for all i,
 – or there exists a unique integer $1 \leq i <r$ such that $(s_i,s_{i+1}) \in \{(q-1,q),(2q-2,q)\}$, but $\mathfrak {s} \neq (q-1,q-1,q)$,
 – or $\mathfrak {s}=(q-1,2q-1)$.
• For $q=2$ and $w = 3q-2=4$, $\mathcal I_w'$ consists of the following tuples: $(2,1,1)$, $(1,2,1)$ and $(1,3)$.
We denote by $\mathcal {AT}_w'$ the subset of AMZV's given by
 $$\begin{align*}\mathcal{AT}_w':=\left\{\zeta_A\begin{pmatrix} \boldsymbol{\epsilon} \\ \mathfrak{s} \end{pmatrix} : \mathfrak{s} \in \mathcal I_w', \text{ and } \epsilon_i=1 \text{ whenever } s_i \in \{q,2q-1\} \right\}.\end{align*}$$
Thus, if either $w \leq 3q-3$, or $w=3q-2$ and $q=2$, then one shows that
 $$\begin{align*}|\mathcal{AT}_w'|=s(w). \end{align*}$$
Further, for $w \leq 3q-3$ and any $(\mathfrak {s};\boldsymbol {\epsilon })=(s_1,\dots ,s_r;\epsilon _1,\dots ,\epsilon _r) \in \mathbb {N}^r \times (\mathbb F_q^\times )^r$, if $\zeta _A \begin {pmatrix} \boldsymbol {\epsilon } \\ \mathfrak {s} \end {pmatrix} \in \mathcal {AT}_w'$, then $\zeta _A \begin {pmatrix} \epsilon _1 & \dots & \epsilon _{r-1} \\ s_1 & \dots & s_{r-1} \end {pmatrix}$ belongs to $\mathcal {AT}_{w-s_r}'$. This property allows us to apply Theorem 3.4 and show by induction on $w \leq 3q-3$ that the AMZV's in $\mathcal {AT}_w'$ are all linearly independent over K. The proof is similar to that of Theorem 4.4: we apply Theorem 3.4 and reduce the problem to solving a system of $\sigma$-linear equations. By direct but involved calculations, we show that this system admits no nontrivial solutions, and we are done. The case $w=3q-2$ and $q=2$ can be treated separately by the same method.
Remark 5.14. 1) We note that the MZV's $\zeta _A(1,2q-2)$ and $\zeta _A(2q-1)$ (resp. $\zeta _A(1,3q-3)$ and $\zeta _A(3q-2)$) are linearly dependent over K by [Reference Rodriguez and Thakur28, Theorem 3.1]. This explains the ad hoc construction of $\mathcal {AT}_w'$ above.

2) Despite extensive numerical experiments, we have not been able to find a suitable basis $\mathcal {AT}_w'$ for the case $w=3q-1$.
Acknowledgements
This project is carried out within the framework of the France–Korea International Research Laboratory/Network in Mathematics (FKmath).
Funding statement
The first named author (B.-H. Im) was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2B5B01001835) and (NRF-2023R1A2C1002385). Two of the authors (KN. L. and T. ND.) were partially supported by the Excellence Research Chair ‘L-functions in positive characteristic and applications’ funded by the Normandy Region. The fourth author (T. ND.) was partially supported by the ANR Grant COLOSS ANR-19-CE40-0015-02. The fifth author (LH. P.) was supported by the grant ICRTM.02-2021.05 funded by the International Center for Research and Postgraduate Training in Mathematics (VAST, Vietnam).
Competing interests
The authors have no competing interest to declare.
Ethical standards
The research meets all ethical guidelines, including adherence to the legal requirements of the study country.