1 Introduction
A central feature of random lozenge tilings is that they exhibit boundary-induced phase transitions. Depending on the shape of the domain, they can admit frozen regions, where the associated height function is flat almost deterministically, and liquid regions, where the height function appears more rough and random; the curve separating these two phases is called an arctic boundary. We refer to the papers [CLP98] and [CK01] for some of the earlier analyses of this phenomenon in lozenge tilings of hexagonal domains and to the book [Gor21] for a comprehensive review. A thorough study of arctic boundaries on arbitrary polygons was pursued in [KO07, ADPZ20], where it was shown that their limiting trajectories are algebraic curves.
After realizing that these phase boundaries exist and admit limits, the next question is to understand their fluctuations, known as the edge statistics. On domains of diameter of order n, the general prediction is that their fluctuations are of order $n^{1/3}$ and $n^{2/3}$ in the directions transverse and parallel to their limiting trajectories, respectively. Upon scaling by these exponents, it is further predicted that the boundary converges to the Airy$_2$ process, a universal scaling limit introduced in [PS02] that is believed to govern various phenomena related to the Kardar–Parisi–Zhang universality class. See [Joh18] for a detailed survey.
Following the initial works [Joh00, Joh02, Joh05] (where it was first proven in the related context of domino tilings of the Aztec diamond), this prediction has been established for random lozenge tilings of various families of domains. For example, we refer to [OR03, OR07, FS03] for certain q-weighted random plane partitions, to [BKMM07] for tilings of hexagons and to [Pet14, DM18] for tilings of trapezoids (hexagons with cuts along a single side). These results are all based on exact and analyzable formulas, specific to the domain of study, for the correlation kernel with respect to which the tiling forms a determinantal point process. Although such explicit formulas are not known for lozenge tilings of arbitrary polygons, it is believed that in this generality convergence to the Airy$_2$ process under the above scaling still holds; see [Gor21, Conjecture 18.7] and [ADPZ20, Conjecture 9.1].
In this paper, we prove this statement for simply connected polygonal domains subject to a certain technical assumption on their limit shape that we believe to hold generically (see Assumption 2.8 and Remark 2.9 below). Under the interpretation of lozenge tilings as nonintersecting random Bernoulli walks, we in fact more broadly consider the family of Bernoulli walks around the arctic boundary (not only the extreme one); we prove under the above scaling that it converges to the Airy line ensemble, a multilevel generalization of the Airy$_2$ process, introduced in [PS02, CH14]. An informal formulation of this result is provided as follows; see Theorem 2.10 below for a more precise statement.
Theorem (Theorem 2.10 below).
Consider a uniformly random lozenge tiling of a simply connected polygonal domain, whose arctic boundary does not exhibit any of the configurations depicted in Figure 1. Under appropriate rescaling, the family of associated nonintersecting Bernoulli walks in a neighborhood of any point (that is neither a cusp nor a tangency location) of the limiting arctic boundary converges to the Airy line ensemble.

Figure 1 Depicted above are the four scenarios for the arctic curve $\mathfrak{A}$ forbidden by Assumption 2.8.
In the above theorem, we forbade specific (presumably nongeneric) behaviors for singular points of the arctic boundary associated with the domain. These include the presence of tacnodes and cuspidal turning points; see Assumption 2.8 for the exact condition. At some of these nongeneric singularities, the edge scaling limit is more exotic; see [OR06, DJM16, AJvM18b, AJvM18a, AvM18] for more information. Still, it is believed that such behaviors should not disrupt the convergence to the Airy line ensemble elsewhere along the arctic boundary; that our theorem does not apply for these nongeneric polygons therefore seems to be an artifact of our proof method. Generic singularities along the arctic boundary (which do appear in almost any polygonal domain) are ordinary cusps. The scaling limits at such points are believed to be given by the Pearcey process [OR07]; we do not address this intriguing question here.
The above theorem can be viewed as a universality result for random lozenge tilings since it shows that their statistics converge to the Airy line ensemble at any point (that is not a cusp or tangency location) around the arctic boundary, regardless of the polygonal shape bounding the domain. Recently, universality results for lozenge tilings at other points inside the domain (where different limiting statistics appear) have been established. For example, it was shown that local statistics in the interior of the liquid region converge to the unique translation-invariant, ergodic Gibbs measure of the appropriate slope for hexagons [BKMM07, Gor08, ABR10], domains covered by finitely many trapezoids [Pet14, Gor17], bounded perturbations of these [Las19] and finally for general domains [Agg23]. It was also recently shown in [AG22] that, at tangency locations between the arctic boundary and sides of general domains, the limiting statistics converge to the corners process of the Gaussian unitary ensemble. Both of these phenomena were proven to be quite robust, and they in fact apply on domains beyond polygonal ones.
Although the Airy$_2$ process also serves as the edge scaling limit in random lozenge tilings for certain classes of domains beyond polygons, the precise conditions under which it appears seem subtle. They are not determined by information about the macroscopic shape of the domain alone; microscopic perturbations of it can affect the edge statistics. Indeed, placing a single microscopic defect on an edge of a hexagonal domain corresponds to inserting a new walk in the associated nonintersecting Bernoulli walk ensemble. At the point where this new walk meets the arctic boundary for the original hexagon, the edge statistics should instead be given by the Airy$_2$ process with a wanderer, introduced in [AFvM10].
We now outline our proof of the theorem. We will show a concentration estimate for the tiling height function on a simply connected polygon $\mathsf{P}$ (satisfying Assumption 2.8 below) of diameter of order n, stating that with high probability it is within $n^{\delta}$ of its limit shape, for any $\delta > 0$. Given such a bound, we establish the theorem by locally comparing the random tiling of $\mathsf{P}$ with that of a hexagon, around their arctic boundaries. More specifically, the concentration estimate implies that the extreme paths in the nonintersecting Bernoulli walk ensemble $\mathsf{X}$ associated with a random tiling of $\mathsf{P}$ remain close to their limiting trajectories. We then match the slope and curvature of this limiting curve with those of the arctic boundary for a suitably chosen hexagon $\mathsf{P}'$. Using this, we exhibit a coupling between $\mathsf{X}$ and the nonintersecting Bernoulli walk ensemble associated with a random tiling of $\mathsf{P}'$, in such a way that they likely nearly coincide, up to error $\mathrm{o}(n^{1/3})$, around their arctic boundaries. Known results for random tilings on hexagonal domains [BKMM07, Pet14, DM18], coming from their exact solvability, show that the edge statistics of the random tiling of $\mathsf{P}'$ are given by the Airy line ensemble. It follows that the same holds for the random tiling of $\mathsf{P}$.
The remainder of this paper is devoted to proving the above-mentioned concentration estimate, given by Theorem 3.10 below. Such a concentration phenomenon is ubiquitous in random matrix theory and is known as eigenvalue rigidity. In the context of Wigner matrices, it was first proven in [EYY12], and later for more general classes of random matrices [EKYY13, KY13, BEK+14, HKR18, BES17, AEK19, BES20]. To show this in our tiling context, we begin with a 'preliminary' concentration bound, Theorem 4.3 below, for a family of n nonintersecting random discrete bridges conditioned to start and end at specified locations (equivalently, random lozenge tilings of a strip). Assuming the initial and ending data for these Bernoulli walks are such that the limiting arctic boundary has at most one cusp (see the left and middle of Figure 2 for examples), this bound states that with high probability the associated height function is within $n^{\delta}$ of its limit shape. Its proof is presented in part I of this series [Hua24], which proceeds by first using results of [Hua20] to approximate the random bridge model by a family of nonintersecting Bernoulli random walks with space- and time-dependent drifts. The latter walk model can then be studied through a dynamical version of the loop equations and an analysis of the complex Burgers equation through the characteristic method.

Figure 2 Shown to the left and middle are arctic boundaries exhibiting a single cusp. Shown to the right is an arctic boundary exhibiting two cusps that point in opposite directions and a decomposition of that strip into overlapping regions that each have (at most) one cusp.
Imposing that the arctic boundary has at most one cusp is a substantial constraint; it does not hold for lozenge tilings of most polygons. Its origin can be heuristically attributed to the fact that, while disjoint families of nonintersecting Bernoulli walks often merge, merged ones do not separate unless they are driven by a diverging drift (which is much less amenable to analysis). As such, when comparing the bridge model to a family of nonintersecting Bernoulli walks with drift, one must ensure that these Bernoulli walks only merge and never separate. If the arctic boundary has only one cusp, then by suitably orienting the Bernoulli walks, this cusp can be interpreted as a location where families of Bernoulli walks merge; for example, this is the case if we orient the Bernoulli walks associated with the left and middle diagrams in Figure 2 north and south, respectively. If the arctic boundary for the bridge model exhibits two cusps 'pointing in opposite directions', as in the right side of Figure 2, then any choice of orientation will lead to at least one cusp serving as a point where the Bernoulli walks separate. This issue was circumvented in [Hua20] by restricting to a family of domains in which all cusps point in the same direction. However, on generic polygons, cusps pointing in opposite directions do appear, so this point must be addressed here.
To that end, we decompose our domain into a bounded number of (possibly overlapping) subregions that each have at most one cusp; see the right side of Figure 2. We then introduce a Markov chain, called the alternating dynamics (a form of the block dynamics), that uniformly resamples the tiling in one subregion and leaves it fixed in the others. Known estimates [RT00] for mixing times of Glauber dynamics, together with the censoring inequality of [PW13], imply that this Markov chain mixes to the uniform measure in time that is polynomial in n (for example, $\mathcal{O}(n^{22})$).
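For illustration only, here is a minimal Python sketch of dynamics of this flavor, assuming the height function is stored as a dictionary mapping vertices $(x, y)$ to integer heights; boundary vertices are kept fixed simply by excluding them from the resampleable lists. The representation and names are ours, and the inner Glauber sweep is only an approximate stand-in for the exact uniform resampling of a subregion that the alternating dynamics performs.

```python
import random

# Offsets d with H(u + d) - H(u) in {0, 1} for every vertex u (see Section 2.1).
FORWARD = [(1, 0), (0, -1), (1, 1)]

def flip_allowed(H, v, new_value):
    """Would setting H[v] = new_value keep H a valid height function?"""
    x, y = v
    for dx, dy in FORWARD:
        up = H.get((x + dx, y + dy))      # constraint with v playing the role of u
        down = H.get((x - dx, y - dy))    # constraint with v playing the role of u + d
        if up is not None and up - new_value not in (0, 1):
            return False
        if down is not None and new_value - down not in (0, 1):
            return False
    return True

def glauber_sweep(H, vertices, steps):
    """Symmetric +-1 proposals at the listed vertices, accepted iff still valid;
    this elementary chain is reversible for the uniform measure."""
    for _ in range(steps):
        v = random.choice(vertices)
        delta = random.choice((-1, 1))
        if flip_allowed(H, v, H[v] + delta):
            H[v] += delta

def alternating_dynamics(H, subregions, rounds, steps_per_region=10_000):
    """Cycle through the (possibly overlapping) subregions, updating one at a time
    while the tiling outside the active subregion stays fixed."""
    for _ in range(rounds):
        for interior in subregions:       # each entry: list of resampleable vertices
            glauber_sweep(H, interior, steps_per_region)
```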
Initiating the alternating dynamics from a profile approximating the limit shape, we show that the $n^{\delta}$ concentration bound is with high probability preserved at each step of the alternating dynamics (from which the result follows by running these dynamics until they mix). The preliminary concentration result alone is insufficient to prove this, since the $n^{\delta}$ error it admits could in principle accumulate at each step. To overcome this, we introduce deterministic barrier functions, which we refer to as tilted profiles, and show (with the assistance of the preliminary concentration bound) that they likely bound the tiling height function from above and below throughout the dynamics. To prove that such tilted profiles exist, we exhibit them by perturbing solutions to the complex Burgers equation in a specific way.
The remainder of this paper is organized as follows. In Section 2, we define the model and state our main results. In Section 3, we state the concentration result for the tiling height function on the polygon $\mathsf{P}$ and establish the theorem assuming it. In Section 4, we state the preliminary concentration bound for nonintersecting Bernoulli walks, introduce the alternating dynamics Markov chain and bound its mixing time. In Section 5, we introduce and discuss properties of tilted height functions. In Section 6, we establish the concentration result for the tiling height function on $\mathsf{P}$. In Section 7, we give the proof for the existence of tilted height functions.
Notation
Throughout, we let $\overline{\mathbb{C}} = \mathbb{C} \cup \{ \infty \}$, $\mathbb{H}^+ = \{ z \in \mathbb{C} : \operatorname{\mathrm{Im}} z > 0 \}$ and $\mathbb{H}^- = \{ z \in \mathbb{C} : \operatorname{\mathrm{Im}} z < 0 \}$ denote the compactified complex plane, upper complex plane and lower complex plane, respectively. We further denote by $|u - v|$ the Euclidean distance between any elements $u, v \in \mathbb{R}^2$. For any subset $\mathfrak{R} \subseteq \mathbb{R}^2$, we let $\partial \mathfrak{R}$ denote its boundary, $\overline{\mathfrak{R}}$ denote its closure and $\operatorname{\mathrm{diam}} \mathfrak{R} = \sup_{r, r' \in \mathfrak{R}} |r - r'|$ denote its diameter. For any additional subset $\mathfrak{R}' \subseteq \mathbb{R}^2$, we let $\operatorname{\mathrm{dist}} (\mathfrak{R}, \mathfrak{R}') = \sup_{r \in \mathfrak{R}} \inf_{r' \in \mathfrak{R}'} |r - r'|$ denote the distance between $\mathfrak{R}$ and $\mathfrak{R}'$. For any real number $c \in \mathbb{R}$, we also define the rescaled set $c \mathfrak{R} = \{ cr : r \in \mathfrak{R} \}$, and for any $u \in \mathbb{R}^2$, we define the shifted set $\mathfrak{R} + u = \{ r + u : r \in \mathfrak{R} \}$. For any $u \in \mathbb{R}^2$ and $r \geq 0$, let $\mathfrak{B}_r (u) = \mathfrak{B} (u; r) = \{ v \in \mathbb{R}^2 : |v - u| \leq r \}$ denote the disk centered at u of radius r.
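As a small numerical illustration (not from the paper), the following Python sketch evaluates $\operatorname{\mathrm{diam}}$ and the one-sided distance $\operatorname{\mathrm{dist}}$ above for finite point clouds in $\mathbb{R}^2$.

```python
import numpy as np

def diam(R):
    """diam(R) = sup over r, r' in R of |r - r'|, for a finite array R of points."""
    R = np.asarray(R, dtype=float)
    return np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1).max()

def dist(R, Rp):
    """dist(R, R') = sup_{r in R} inf_{r' in R'} |r - r'|; note it is not symmetric."""
    R, Rp = np.asarray(R, dtype=float), np.asarray(Rp, dtype=float)
    return np.linalg.norm(R[:, None, :] - Rp[None, :, :], axis=-1).min(axis=1).max()
```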
2 Results
2.1 Tilings and height functions
We denote by $\mathbb{T}$ the triangular lattice, namely, the graph whose vertex set is $\mathbb{Z}^2$ and whose edge set consists of edges connecting $(x, y), (x', y') \in \mathbb{Z}^2$ if $(x' - x, y' - y) \in \{ (1, 0), (0, 1), (1, 1) \}$. The axes of $\mathbb{T}$ are the lines $\{ x = 0 \}$, $\{ y = 0 \}$ and $\{ x = y \}$, and the faces of $\mathbb{T}$ are triangles with vertices of the form $\{ (x, y), (x + 1, y), (x + 1, y + 1) \}$ or $\{ (x, y), (x, y + 1), (x + 1, y + 1) \}$. A domain $\mathsf{R} \subseteq \mathbb{T}$ is a simply connected induced subgraph of $\mathbb{T}$. The boundary $\partial \mathsf{R} \subseteq \mathsf{R}$ is the set of vertices $\mathsf{v} \in \mathsf{R}$ adjacent to a vertex in $\mathbb{T} \setminus \mathsf{R}$.
A dimer covering of a domain $\mathsf{R} \subseteq \mathbb{T}$ is defined to be a perfect matching on the dual graph of $\mathsf{R}$. A pair of adjacent triangular faces in any such matching forms a parallelogram, which we will also refer to as a lozenge or tile. Lozenges can be oriented in one of three ways; see the right side of Figure 3 for all three orientations. We refer to the topmost lozenge there (that is, one with vertices of the form $\{ (x, y), (x, y + 1), (x + 1, y + 2), (x + 1, y + 1) \}$) as a type $1$ lozenge. Similarly, we refer to the middle (with vertices of the form $\{ (x, y), (x + 1, y), (x + 2, y + 1), (x + 1, y + 1) \}$) and bottom (with vertices of the form $\{ (x, y), (x, y + 1), (x + 1, y + 1), (x + 1, y) \}$) ones there as type $2$ and type $3$ lozenges, respectively. A dimer covering of $\mathsf{R}$ can equivalently be interpreted as a tiling of $\mathsf{R}$ by lozenges of types $1$, $2$ and $3$. Therefore, we will also refer to a dimer covering of $\mathsf{R}$ as a (lozenge) tiling. We call $\mathsf{R}$ tileable if it admits a tiling.

Figure 3 Depicted to the right are the three types of lozenges. Depicted in the middle is a lozenge tiling of a hexagon. One may view this tiling as a packing of boxes (of the type depicted on the left) into a large corner, which gives rise to a height function (shown in the middle).
Associated with any tiling of $\mathsf{R}$ is a height function $\mathsf{H}: \mathsf{R} \rightarrow \mathbb{Z}$, namely, a function on the vertices of $\mathsf{R}$ that satisfies
$$ \begin{align*} \mathsf{H} (\mathsf{v}) - \mathsf{H} (\mathsf{u}) \in \{ 0, 1 \}, \quad \text{whenever}\; \mathsf{u} = (x, y)\; \text{and}\; \mathsf{v} \in \big\{ (x + 1, y), (x, y - 1), (x + 1, y + 1) \big\}, \end{align*} $$
for some $(x, y) \in \mathbb{Z}^2$. We refer to the restriction $\mathsf{h} = \mathsf{H}|_{\partial \mathsf{R}}$ as a boundary height function. For any boundary height function $\mathsf{h} : \partial \mathsf{R} \rightarrow \mathbb{Z}$, let $\mathscr{G} (\mathsf{h})$ denote the set of all height functions $\mathsf{H}: \mathsf{R} \rightarrow \mathbb{Z}$ with $\mathsf{H}|_{\partial \mathsf{R}} = \mathsf{h}$.
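As a concrete (purely illustrative) rendering of these definitions, the sketch below checks the increment condition for a candidate height function stored as a dictionary over the vertices of $\mathsf{R}$ and extracts its boundary restriction $\mathsf{h} = \mathsf{H}|_{\partial \mathsf{R}}$; the representation and function names are ours, not the paper's.

```python
# Forward offsets d in the rule H(u + d) - H(u) in {0, 1}.
FORWARD = [(1, 0), (0, -1), (1, 1)]
# All six triangular-lattice neighbours, used to detect boundary vertices of R.
NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]

def is_height_function(H):
    """H: dict {(x, y): integer height} on the vertex set of a domain R."""
    for (x, y), hu in H.items():
        for dx, dy in FORWARD:
            hv = H.get((x + dx, y + dy))
            if hv is not None and hv - hu not in (0, 1):
                return False
    return True

def boundary_height_function(H):
    """Restrict H to the vertices of R adjacent to a vertex of T outside R."""
    return {v: H[v] for v in H
            if any((v[0] + dx, v[1] + dy) not in H for dx, dy in NEIGHBORS)}
```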
For a fixed vertex $\mathsf{v} \in \mathsf{R}$ and integer $m \in \mathbb{Z}$, one can associate with any tiling of $\mathsf{R}$ a height function $\mathsf{H}: \mathsf{R} \rightarrow \mathbb{Z}$ as follows. First, set $\mathsf{H} (\mathsf{v}) = m$, and then define $\mathsf{H}$ at the remaining vertices of $\mathsf{R}$ in such a way that the values of the height function at the four vertices of any lozenge in the tiling are of the form depicted on the right side of Figure 3. In particular, we require that $\mathsf{H} (x + 1, y) = \mathsf{H} (x, y)$ if and only if $(x, y)$ and $(x + 1, y)$ are vertices of the same type $1$ lozenge, and that $\mathsf{H} (x, y) - \mathsf{H} (x, y + 1) = 1$ if and only if $(x, y)$ and $(x, y + 1)$ are vertices of the same type $2$ lozenge. Since $\mathsf{R}$ is simply connected, a height function on $\mathsf{R}$ is uniquely determined by these conditions (and the value of $\mathsf{H}(\mathsf{v}) = m$).
We refer to the right side of Figure 3 for an example; as depicted there, we can also view a lozenge tiling of $\mathsf{R}$ as a packing of $\mathsf{R}$ by boxes of the type shown on the left side of Figure 3. In this case, the value $\mathsf{H} (\mathsf{u})$ of the height function associated with this tiling at some vertex $\mathsf{u} \in \mathsf{R}$ denotes the height of the stack of boxes at $\mathsf{u}$. Observe in particular that, if there exists a tiling $\mathscr{M}$ of $\mathsf{R}$ associated with some height function $\mathsf{H}$, then the boundary height function $\mathsf{h} = \mathsf{H} |_{\partial \mathsf{R}}$ is independent of $\mathscr{M}$ and is uniquely determined by $\mathsf{R}$ (except for a global shift, which was above fixed by the value of $\mathsf{H}(\mathsf{v}) = m$).
2.2 Nonintersecting Bernoulli walk ensembles
In this section, we explain the correspondence between tilings and nonintersecting Bernoulli walk ensembles, to which end we begin by defining the latter. A Bernoulli walk is a sequence $\mathsf{q} = \big( \mathsf{q} (s), \mathsf{q} (s + 1), \ldots, \mathsf{q} (t) \big) \in \mathbb{Z}_{\geq 0}^{t - s + 1}$ such that $\mathsf{q} (r + 1) - \mathsf{q} (r) \in \{ 0, 1 \}$ for each $r \in [s, t - 1]$; viewing r as a time index, $\big( \mathsf{q} (r) \big)$ denotes the space-time trajectory of a discrete walk, which may either not move or jump to the right at each step. For this reason, the interval $[s, t]$ is called the time span of the Bernoulli walk $\mathsf{q}$, and a step $(r, r + 1)$ of this Bernoulli walk may be interpreted as a 'nonjump' or a 'right-jump' if $\mathsf{q} (r + 1) = \mathsf{q} (r)$ or $\mathsf{q} (r + 1) = \mathsf{q} (r) + 1$, respectively. A family of Bernoulli walks $\mathsf{Q} = \big( \mathsf{q}_l, \mathsf{q}_{l + 1}, \ldots, \mathsf{q}_m \big)$ is called nonintersecting if $\mathsf{q}_i (r) < \mathsf{q}_j (r)$ whenever $l \leq i < j \leq m$ and r is in the time span of both $\mathsf{q}_i$ and $\mathsf{q}_j$.
Now, fix some tileable domain $\mathsf{R} \subset \mathbb{T}$, with a height function $\mathsf{H}: \mathsf{R} \rightarrow \mathbb{Z}$ corresponding to a tiling $\mathscr{M}$ of $\mathsf{R}$. We may interpret $\mathscr{M}$ as a family of nonintersecting Bernoulli walks by first omitting all type $1$ lozenges from $\mathscr{M}$ and then viewing any type $2$ or type $3$ tile as a right-jump or nonjump of a Bernoulli walk, respectively; see Figure 4 for a depiction.

Figure 4 Depicted to the left is an ensemble $\mathsf{Q} = \big( \mathsf{q}_{-2}, \mathsf{q}_{-1}, \mathsf{q}_0, \mathsf{q}_1, \mathsf{q}_2, \mathsf{q}_3 \big)$ consisting of six nonintersecting Bernoulli walks. Depicted to the right is an associated lozenge tiling.
It will be useful to set more precise notation on this correspondence. Since $\partial_x \mathsf{H} (x, t) \in \{ 0, 1 \}$ for all $(x, t)$, there exist integers $\mathsf{x}_{a(t)} (t) < \mathsf{x}_{a(t) + 1} (t) < \cdots < \mathsf{x}_{b(t)} (t)$ such that
$$ \begin{align*} \partial_x \mathsf{H} (x, t) = \displaystyle\sum_{i = a(t)}^{b(t)} \textbf{1} \Big( x \in \big[ \mathsf{x}_i (t), \mathsf{x}_i (t) + 1 \big] \Big), \end{align*} $$
which are those such that $\mathsf{H} \big( \mathsf{x}_i (t) + 1, t \big) = \mathsf{H} \big( \mathsf{x}_i (t), t \big) + 1$. This fixes the locations of $\mathsf{x}_i (t)$, but in this way the indices $a(t)$ and $b(t)$ are defined up to an overall shift. We will fix this shift by stipulating that $\mathsf{H} \big( \mathsf{x}_{a(t)} + 1, t \big) = a(t)$; this defines a Bernoulli walk $\mathsf{x}_i = \big( \mathsf{x}_i (t) \big)$. Ranging over i, we obtain a nonintersecting ensemble of Bernoulli walks $\mathsf{X} = \big( \mathsf{x}_l, \mathsf{x}_{l + 1}, \ldots, \mathsf{x}_m \big)$ that are indexed through the height function $\mathsf{H}$.
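To illustrate this bookkeeping (with our own, hypothetical representation), the following sketch reads the walker locations $\mathsf{x}_i(t)$ off a single row $\mathsf{H}(\cdot, t)$ of the height function, using the indexing convention $i = \mathsf{H}(\mathsf{x}_i(t) + 1, t)$ fixed above.

```python
def walk_positions(row):
    """row: dict {x: H(x, t)} over consecutive integers x, for a fixed time t.
    Returns {i: x_i(t)}, where the x_i(t) are the x with H(x + 1, t) = H(x, t) + 1
    and the walker index is i = H(x_i(t) + 1, t)."""
    xs = sorted(row)
    return {row[x + 1]: x
            for x in xs[:-1]
            if row[x + 1] == row[x] + 1}

# Example: the row H(., t) = 0, 0, 1, 1, 2 on x = 0, ..., 4 yields {1: 1, 2: 3},
# i.e., walker 1 sits at x = 1 and walker 2 sits at x = 3 at time t.
```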
2.3 Limit shapes and arctic boundaries
To analyze the limits of height functions of random tilings, it will be useful to introduce continuum analogs of the notions considered in Section 2.1. So, set
$$ \begin{align} \mathcal{T} = \big\{ (s, t) \in (0, 1) \times \mathbb{R}_{< 0}: s + t > 0 \big\} \subset \mathbb{R}^2, \end{align} $$
and its closure $\overline{\mathcal{T}} = \big\{ (s, t) \in [0, 1] \times \mathbb{R}_{\leq 0} : s + t \geq 0 \big\}$. We interpret $\overline{\mathcal{T}}$ as the set of possible gradients, also called slopes, for a continuum height function; $\mathcal{T}$ is then the set of 'liquid' slopes, whose associated tilings contain tiles of all types. For any simply connected open subset $\mathfrak{R} \subset \mathbb{R}^2$, we say that a function $H : \mathfrak{R} \rightarrow \mathbb{R}$ is admissible if H is $1$-Lipschitz and $\nabla H(u) \in \overline{\mathcal{T}}$ for almost all $u \in \mathfrak{R}$. We further say that a function $h: \partial \mathfrak{R} \rightarrow \mathbb{R}$ admits an admissible extension to $\mathfrak{R}$ if $\operatorname{\mathrm{Adm}} (\mathfrak{R}; h)$, the set of admissible functions $H: \mathfrak{R} \rightarrow \mathbb{R}$ with $H |_{\partial \mathfrak{R}} = h$, is not empty.
We say that a sequence of domains $\mathsf{R}_1, \mathsf{R}_2, \ldots \subset \mathbb{T}$ converges to a simply connected subset $\mathfrak{R} \subset \mathbb{R}^2$ if $n^{-1} \mathsf{R}_n \subseteq \mathfrak{R}$ for each $n \geq 1$ and $\lim_{n \rightarrow \infty} \operatorname{\mathrm{dist}} (n^{-1} \mathsf{R}_n, \mathfrak{R}) = 0$. We further say that a sequence $\mathsf{h}_1, \mathsf{h}_2, \ldots$ of boundary height functions on $\mathsf{R}_1, \mathsf{R}_2, \ldots$, respectively, converges to a boundary height function $h : \partial \mathfrak{R} \rightarrow \mathbb{R}$ if $\lim_{n \rightarrow \infty} n^{-1} \mathsf{h}_n (n v) = h (v)$ for every point v that lies in $n^{-1} \mathsf{R}_n$ for all sufficiently large n.
To state results on the limiting height function of random tilings, for any $x \in \mathbb{R}_{\geq 0}$ and $(s, t) \in \overline{\mathcal{T}}$ we denote the Lobachevsky function $L: \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}$ and the surface tension $\sigma : \overline{\mathcal{T}} \rightarrow \mathbb{R}$ by
$$ \begin{align} L(x) = - \displaystyle\int_0^x \log |2 \sin z| \, \mathrm{d} z; \qquad \sigma (s, t) = \displaystyle\frac{1}{\pi} \Big( L \big(\pi (1-s) \big) + L (- \pi t) + L \big( \pi (s + t) \big) \Big). \end{align} $$
For any $H \in \operatorname{\mathrm{Adm}} (\mathfrak{R})$, we further denote the entropy functional
$$ \begin{align} \mathcal{E} (H) = \displaystyle\int_{\mathfrak{R}} \sigma \big( \nabla H (z) \big) \, \mathrm{d}z. \end{align} $$
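The following numerical sketch (illustrative only; the helper names are ours) evaluates the Lobachevsky function, the surface tension of Equation (2.2) and a Riemann-sum approximation of the entropy functional (2.3) for slopes sampled on a grid of mesh $\varepsilon$.

```python
import numpy as np
from scipy.integrate import quad

def lobachevsky(x):
    """L(x) = -int_0^x log|2 sin z| dz; the logarithmic singularities are integrable."""
    return -quad(lambda z: np.log(np.abs(2.0 * np.sin(z))), 0.0, x)[0]

def sigma(s, t):
    """Surface tension (2.2) at a slope (s, t) in the closed triangle of slopes."""
    assert 0.0 <= s <= 1.0 and t <= 0.0 and s + t >= 0.0, "slope outside the closure of T"
    return (lobachevsky(np.pi * (1.0 - s))
            + lobachevsky(-np.pi * t)
            + lobachevsky(np.pi * (s + t))) / np.pi

def entropy(slopes, eps):
    """Riemann-sum approximation of E(H), with the gradient of H sampled as a
    list of slope pairs (d_x H, d_t H) on a square grid of mesh eps."""
    return sum(sigma(s, t) for s, t in slopes) * eps ** 2
```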
The following variational principle of [CKP01] states that the height function associated with a uniformly random tiling of a sequence of domains converging to $\mathfrak{R}$ converges to the maximizer of $\mathcal{E}$ with high probability.
Lemma 2.1 [CKP01, Theorem 1.1].
Let $\mathsf{R}_1, \mathsf{R}_2, \ldots \subset \mathbb{T}$ denote a sequence of tileable domains, with associated boundary height functions $\mathsf{h}_1, \mathsf{h}_2, \ldots$, respectively. Assume that they converge to a simply connected subset $\mathfrak{R} \subset \mathbb{R}^2$ with piecewise smooth boundary and to a boundary height function $h : \partial \mathfrak{R} \rightarrow \mathbb{R}$, respectively. Denoting the height function associated with a uniformly random tiling of $\mathsf{R}_n$ by $\mathsf{H}_n$, we have, for any $\varepsilon > 0$,
$$ \begin{align*} \displaystyle\lim_{n \rightarrow \infty} \mathbb{P} \bigg( \displaystyle\max_{\mathsf{v} \in \mathsf{R}_n} \big| n^{-1} \mathsf{H}_n (\mathsf{v}) - H^* (n^{-1} \mathsf{v}) \big| > \varepsilon \bigg) = 0, \end{align*} $$
where $H^*$ is the unique maximizer of $\mathcal{E}$ on $\mathfrak{R}$ with boundary data h,
$$ \begin{align} H^* = \displaystyle\operatorname{\mathrm{argmax}}_{H \in \operatorname{\mathrm{Adm}} (\mathfrak{R}; h)} \mathcal{E} (H). \end{align} $$
The fact that there is a unique maximizer described as in (2.4) follows from [DSS10, Proposition 4.5]. Under a suitable change of coordinates, this maximizer $H^*$ solves a complex variant of the Burgers equation [KO07], which makes it amenable to further analysis; we will discuss this point in more detail in Section 3.1 below. For any simply connected open subset $\mathfrak{R} \subset \mathbb{R}^2$ with Lipschitz boundary and boundary height function $h: \partial \mathfrak{R} \rightarrow \mathbb{R}$ admitting an admissible extension to $\mathfrak{R}$, define the liquid region $\mathfrak{L} = \mathfrak{L} (\mathfrak{R}; h) \subset \mathfrak{R}$ by
$$ \begin{align} \mathfrak{L} = \big\{ u = (x, t) \in \mathfrak{R}: \big( \partial_x H^* (u), \partial_t H^* (u) \big) \in \mathcal{T} \big\}, \end{align} $$
and the arctic boundary $\mathfrak{A} = \mathfrak{A} (\mathfrak{R}; h) \subset \overline{\mathfrak{R}}$ as the set of points $u = (x, t) \in \partial \mathfrak{L}$ such that, for any sequence of points $u_n \in \mathfrak{L}$ converging to u,
$$ \begin{align} \big( \partial_x H^* (u_n), \partial_t H^* (u_n) \big) \rightarrow \partial \mathcal{T}, \end{align} $$
where $H^*$ is as in Equation (2.4). By [DSS10, Proposition 4.1], the set $\mathfrak{L}$ is open. The complement $\mathfrak{R} \setminus \mathfrak{L}$ of the liquid region is called the frozen region.
We will commonly be interested in the case when $\mathfrak{R}$ is a polygonal domain, given as follows.
Definition 2.2. A subset $\mathfrak{P} \subset \mathbb{R}^2$ is called polygonal if it is a simply connected polygon whose boundary edges are parallel to the axes of the triangular lattice. We assume that the domain $\mathsf{P} = \mathsf{P}_n = n \overline{\mathfrak{P}} \cap \mathbb{T}$ is tileable and thus associated with a (unique, up to global shift) boundary height function $\mathsf{h} = \mathsf{h}_n$. By translating $\mathfrak{P}$ if necessary, we will assume that $\overline{\mathfrak{P}} \subset \mathbb{R} \times \mathbb{R}_{\geq 0}$ and that $(0, 0) \in \partial \mathfrak{P}$. Then, by shifting $\mathsf{h}$ if necessary, we will further suppose that $\mathsf{h} (0, 0) = 0$. Under this notation, we set $h: \partial \mathfrak{P} \rightarrow \mathbb{R}$ by $h (u) = n^{-1} \mathsf{h} (nu)$ for each $u \in \partial \mathfrak{P}$. Moreover, we abbreviate $\operatorname{\mathrm{Adm}} (\mathfrak{P}) = \operatorname{\mathrm{Adm}} (\mathfrak{P}; h)$, $\mathfrak{L} (\mathfrak{P}) = \mathfrak{L} (\mathfrak{P}; h)$ and $\mathfrak{A} (\mathfrak{P}) = \mathfrak{A} (\mathfrak{P}; h)$; they do not depend on the above choice of global shift fixing h. We further define the maximizer $H^* \in \operatorname{\mathrm{Adm}} (\mathfrak{P}; h)$ as in Equation (2.4).
We will make use of the following results from [KO07, ADPZ20] on the behavior of the limit shape $H^*$ and the arctic boundary $\mathfrak{A}$ when $\mathfrak{R}$ is polygonal. The first statement in the lemma below is given by [ADPZ20, Theorem 1.9] and the second by [ADPZ20, Theorem 1.2, Theorem 1.10] (see also [KO07, Theorem 2, Proposition 5]).
Lemma 2.3 [KO07, ADPZ20].
Adopt the notation of Definition 2.2, and assume that the domain $\mathfrak{R} = \mathfrak{P}$ is polygonal with at least six sides. Then the following two statements hold.
- (1) On $\mathfrak{P} \setminus \mathfrak{L} (\mathfrak{P})$, $\nabla H^* (x, t)$ is piecewise constant, taking values in $\big\{ (0, 0), (1, 0), (1, -1) \big\}$.
- (2) The arctic boundary $\mathfrak{A} (\mathfrak{P})$ is an algebraic curve, and its singularities are all either ordinary cusps or tacnodes.
The following is an integrality result for the limiting height function $H^*$ outside of the associated liquid region. We provide its proof in Appendix B below.
Proposition 2.4. Adopt the notation of Definition 2.2.
- (1) Fix $(x, t) \in \mathfrak{P} \setminus \mathfrak{L}$ such that $\nabla H^* (x, t) = (s, r)$ exists and $\nabla H^*$ is continuous at $(x, t)$. If $(s, r) \in \big\{ (0, 0), (1, 0), (1, -1) \big\}$, then $n \big( H^* (x, t) - sx - rt \big) \in \mathbb{Z}$.
- (2) For any point $v \in (\mathfrak{P} \setminus \overline{\mathfrak{L}}) \cap (n^{-1} \cdot \mathbb{Z})^2$, we have $n \cdot H^* (v) \in \mathbb{Z}$.
It will also be useful to further set notation for the local parabolic shape of $\mathfrak{A} (\mathfrak{P})$ around any nonsingular point $(x_0, y_0) \in \mathfrak{A}$.
Definition 2.5. Fix a nonsingular point $(x_0, y_0) \in \mathfrak{A} = \mathfrak{A} (\mathfrak{P})$; assume it is not a tangency location of $\mathfrak{A}$, that is, a point on $\mathfrak{A}$ at which the tangent line to $\mathfrak{A}$ has slope in $\{ 0, 1, \infty \}$. Define the curvature parameters $(\mathfrak{l}, \mathfrak{q}) = \big( \mathfrak{l} (x_0, y_0; \mathfrak{A}), \mathfrak{q} (x_0, y_0; \mathfrak{A}) \big) \in \mathbb{R}^2$ associated with $(x_0, y_0)$ so that
$$ \begin{align} x - x_0 = \mathfrak{l} (y - y_0) + \mathfrak{q} (y - y_0)^2 + \mathcal{O} \big( (y - y_0)^3 \big), \end{align} $$
for all $(x, y) \in \mathfrak{A}$ in a sufficiently small neighborhood of $(x_0, y_0)$. Since $(x_0, y_0)$ is nonsingular and is not a tangency location, the parameters $(\mathfrak{l}, \mathfrak{q})$ exist, with $\mathfrak{l} \notin \{ 0, 1, \infty \}$ and $\mathfrak{q} \notin \{ 0, \infty \}$.
2.4 Edge statistics results
In order to state our results, we first require some notation on edge statistics.
Definition 2.6. For any $s, t, x, y \in \mathbb {R}$, the extended Airy kernel is given by
 $$ \begin{align*} \mathcal{K} (s, x; t, y) &= \displaystyle\int_0^{\infty} e^{u (t - s)} \operatorname{\mathrm{Ai}} (x + u) \operatorname{\mathrm{Ai}} (y + u) \mathrm{d}u,\quad\hspace{6pt}\text{if}\; s \geq t; \\ \mathcal{K} (s, x; t, y) &= - \displaystyle\int_{-\infty}^0 e^{u (t - s)} \operatorname{\mathrm{Ai}} (x + u) \operatorname{\mathrm{Ai}} (y + u) \mathrm{d}u,\quad  \text{if}\; s < t, \end{align*} $$
where we recall that the Airy function $\operatorname {\mathrm {Ai}}: \mathbb {R} \rightarrow \mathbb {R}$ is given by
 $$ \begin{align*} \operatorname{\mathrm{Ai}} (x) = \displaystyle\frac{1}{\pi} \displaystyle\int_0^{\infty} \cos \Big( \displaystyle\frac{z^3}{3} + xz \Big) \mathrm{d}z. \end{align*} $$
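For concreteness, the following sketch is a rough numerical approximation of these formulas (the $u$-integral is truncated at a finite cutoff), using standard routines from scipy.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def Ai(z):
    # scipy.special.airy returns (Ai, Ai', Bi, Bi'); keep only Ai.
    return airy(z)[0]

def extended_airy_kernel(s, x, t, y, cutoff=40.0):
    """Approximate K(s, x; t, y) from Definition 2.6 by truncating the
    u-integral at +-cutoff (the integrand decays in u on each half-line)."""
    integrand = lambda u: np.exp(u * (t - s)) * Ai(x + u) * Ai(y + u)
    if s >= t:
        val, _ = quad(integrand, 0.0, cutoff, limit=200)
        return val
    val, _ = quad(integrand, -cutoff, 0.0, limit=200)
    return -val

# Equal-time values recover the classical Airy kernel.
print(extended_airy_kernel(0.0, 0.5, 0.0, 0.5))
```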
From this, we define the Airy line ensemble introduced in [Reference Prähofer and SpohnPS02, Reference Corwin and HammondCH14] (see also [Reference Aggarwal and HuangAH23], which provides another, probabilistic interpretation of this object), which will arise as the limit of our edge statistics.
Definition 2.7. The Airy line ensemble $\mathcal {A} = (\mathcal {A}_1, \mathcal {A}_2, \ldots )$ is an infinite collection of continuous curves $\mathcal {A}_i: \mathbb {R} \rightarrow \mathbb {R}$, ordered so that $\mathcal {A}_1 (t)> \mathcal {A}_2 (t) > \cdots $ for each $t \in \mathbb {R}$, such that
 $$ \begin{align} \mathbb{P} \Bigg( \bigcap_{j = 1}^m \{ (x_j, t_j) \in \mathcal{A} \} \Bigg) = \det \big[ \mathcal{K} (t_i, x_i; t_j, x_j) \big]_{1 \leq i, j \leq m} \displaystyle\prod_{j = 1}^m dx_j, \end{align} $$
for any $(x_1, t_1), (x_2, t_2), \ldots , (x_m, t_m) \in \mathbb {R}^2$. Here, we have written $(x, t) \in \mathcal {A}$ if there exists some integer $k \geq 1$ such that $\mathcal {A}_k (t) = x$. The existence of such an ensemble was shown as [Reference Corwin and HammondCH14, Theorem 3.1] (and the uniqueness follows from the explicit form (2.8) of its multipoint distributions).Footnote 4 We abbreviate $\mathcal {R} = \big (\mathcal {A}_1 (t) - t^2, \mathcal {A}_2 (t) - t^2, \ldots \big )$, which may be viewed as a function $\mathcal {R}: \mathbb {Z}_{> 0} \times \mathbb {R} \rightarrow \mathbb {R}$ by setting $\mathcal {R} (i, t) = \mathcal {R}_i (t) = \mathcal {A}_i (t) - t^2$.
 We next impose the following assumption on a polygonal subset $\mathfrak {P} \subset \mathbb {R}^2$, which excludes certain degenerate configurations of its arctic boundary.
Assumption 2.8. Under the notation of Definition 2.2, assume the following four properties hold.
- 
(1) The arctic boundary $\mathfrak {A} = \mathfrak {A} (\mathfrak {P})$ has no tacnode singularities.
- 
(2) No cusp singularity of $\mathfrak {A}$ is also a tangency location of $\mathfrak {A}$.
- 
(3) There exists an axis $\ell $ of $\mathbb {T}$ such that no line connecting two distinct cusp singularities of $\mathfrak {A}$ is parallel to $\ell $.
- 
(4) Any intersection point between $\mathfrak {A}$ and $\partial \mathfrak {P}$ must be a tangency location of $\mathfrak {A}$. Moreover, $\nabla H^*(x,t)$ is continuous at any point on $\mathfrak {A}$ that is not a tangency location.
 We refer to Figure 1 above for depictions of the four forbidden scenarios. As we will show in Section 6, a polygonal region $\mathfrak {P}$ satisfying Assumption 2.8 can be decomposed into small pieces that are either frozen or are ‘double-sided trapezoids’. These ‘double-sided trapezoids’ contain neither tacnodes nor cusp singularities that are also tangency locations. In this case, the optimal concentration of the height function for such regions has been proven in [Reference HuangHua24, Theorem 2.5] using a dynamical version of the loop equations.
Remark 2.9. It seems likely to us that the constraints listed in Assumption 2.8 hold for a generic polygonal domain with a fixed number of sides, since the failure of each constraint should impose an algebraic relation between the side lengths of $\mathfrak {P}$. However, we will not pursue a rigorous proof of this here. We refer to Figure 5 for the arctic boundaries on a generic octagon and 12-gon (obtained by analytically solving for, and then plotting, the algebraic curves of the appropriate degrees tangent to all sides of these polygons); it is quickly seen that these arctic boundaries satisfy our assumption.

Figure 5 Shown to the left is the arctic boundary of an octagon, and shown to the right is the arctic boundary of a $12$-gon. Both examples satisfy the constraints listed in Assumption 2.8.
Now, we can state the following theorem on convergence to the Airy line ensemble for the edge statistics of uniformly random tilings of polygonal domains satisfying Assumption 2.8. In what follows, we recall the nonintersecting Bernoulli walk ensemble associated with any tiling of a domain from Section 2.2 and the curvature parameters from Definition 2.5. Observe that the quantity K defined in the theorem below is an integer by the first part of Proposition 2.4.
Theorem 2.10. Adopt the notation of Definition 2.2 and the constraints from Assumption 2.8. Fix some point $(x_0, t_0) \in \mathfrak {A} (\mathfrak {P})$ that is not a tangency or cusp location of $\mathfrak {A} (\mathfrak {P})$, and assume that $\nabla H^* (x_0 + \varepsilon , t_0) = (0, 0)$ for all sufficiently small $\varepsilon \ge 0$. Denote the curvature parameters associated with $(x_0, t_0)$ by $(\mathfrak {l}, \mathfrak {q})$, and set
 $$ \begin{align} \mathfrak{s} = \bigg| \displaystyle\frac{\mathfrak{l}^{2/3} (1 - \mathfrak{l})^{2/3}}{4^{1/3} \mathfrak{q}^{1/3}} \bigg|; \qquad \mathfrak{r} = \bigg| \displaystyle\frac{\mathfrak{l}^{1/3} (1 - \mathfrak{l})^{1/3}}{2^{1/3} \mathfrak{q}^{2/3}} \bigg|. \end{align} $$
 Let $\mathscr {M}$ denote a uniformly random tiling of $\mathsf {P}$, which is associated with a (random) family $\big ( \mathsf {x}_j (t) \big )$ of nonintersecting Bernoulli walks. Denote $K = n H^* (x_0, t_0)$, and define the family of functions $\mathcal {X}_n = (\mathsf {X}_1, \mathsf {X}_2, \ldots )$ by, for each $i \geq 0$, setting
 $$ \begin{align} \mathsf{X}_{i+1} (t) = \mathfrak{s}^{-1} n^{-1/3} \Big( \mathsf{x}_{K - i} (t_0 n + \mathfrak{r} n^{2/3} t) - n x_0 - \mathfrak{l} \mathfrak{r} n^{2/3} t \Big). \end{align} $$
 Then $\mathcal {X}_n$ converges to $\mathcal {R}$, uniformly on compact subsets of $\mathbb {Z}_{> 0} \times \mathbb {R}$, as n tends to $\infty $.
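The scaling in Equations (2.9) and (2.10) is straightforward to implement; the sketch below computes $(\mathfrak{s}, \mathfrak{r})$ from $(\mathfrak{l}, \mathfrak{q})$ and applies the recentering and rescaling to a hypothetical walk trajectory (the callable walk below is a placeholder, not an object defined in the text).

```python
def scaling_parameters(l, q):
    """(s, r) from Equation (2.9); absolute values are taken factorwise,
    which agrees with the absolute value of the whole quotient."""
    al, alm, aq = abs(l), abs(1.0 - l), abs(q)
    s = al ** (2 / 3) * alm ** (2 / 3) / (4 ** (1 / 3) * aq ** (1 / 3))
    r = al ** (1 / 3) * alm ** (1 / 3) / (2 ** (1 / 3) * aq ** (2 / 3))
    return s, r

def rescaled_walk(walk, n, x0, t0, l, q):
    """Return t -> X_{i+1}(t) as in Equation (2.10), where walk(tau) gives
    the position x_{K-i}(tau) of one Bernoulli walk at integer time tau."""
    s, r = scaling_parameters(l, q)
    def X(t):
        tau = round(t0 * n + r * n ** (2 / 3) * t)
        return (walk(tau) - n * x0 - l * r * n ** (2 / 3) * t) / (s * n ** (1 / 3))
    return X
```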
 Observe that Theorem 2.10 stipulates $\nabla H^* (x_0 + \varepsilon , t_0) = (0, 0)$ for small $\varepsilon $. Since for a polygonal domain $\mathfrak {P} \subset \mathbb {R}^2$ we have $\nabla H^* (x, y) \in \big \{ (0, 0), (1, 0), (1, -1) \big \}$ for almost any $(x, y) \notin \mathfrak {L} (\mathfrak {P})$ (by the first statement of Lemma 2.3), there are six possibilities for the behavior of $\nabla H^*$ around any $(x_0, t_0) \in \mathfrak {A} (\mathfrak {P})$. Specifically, we either have $\nabla H^* (x_0 + \varepsilon , t_0) \in \big \{ (0, 0), (1, 0), (1, -1) \big \}$ or $\nabla H^* (x_0 - \varepsilon , t_0) \in \big \{ (0, 0), (1, 0), (1, -1) \big \}$, with the former if $(x_0, t_0)$ is on a ‘right part’ of the arctic boundary and the latter if it is on a ‘left part’. By rotating or reflecting the tileable domain $\mathsf {P}$ if necessary, establishing convergence for the edge statistics in any one of these six situations also shows it for the remaining cases, and so for brevity we only stated Theorem 2.10 when $\nabla H^* (x_0 + \varepsilon , t_0) = (0, 0)$.
3 Convergence of edge statistics
In this section, we establish Theorem 2.10, assuming the concentration estimate Theorem 3.10 below. We begin in Section 3.1 by recalling complex analytic properties of tiling limit shapes in relation to the complex Burgers equation; in Section 3.2, we discuss classical locations of these limit shapes. Next, in Section 3.3 we state a concentration bound for the tiling height function of polygonal domains satisfying Assumption 2.8, which we use in Section 3.4 to compare the edge statistics on such polygons to those on hexagonal domains. We then establish Theorem 2.10 in Section 3.5.
3.1 Complex slopes and complex Burgers equation
 In this section, we recall from [Reference Kenyon and OkounkovKO07, Reference Astala, Duse, Prause and ZhongADPZ20] various complex analytic aspects of the tiling limit shapes discussed in Section 2.3; they will be briefly used in the proof of Lemma 3.7 below, and then more extensively in our discussion of tilted height profiles later. In what follows, we fix a simply connected open subset $\mathfrak {R} \subset \mathbb {R}^2$ and a boundary height function $h: \partial \mathfrak {R} \rightarrow \mathbb {R}$. We recall the maximizer $H^* \in \operatorname {\mathrm {Adm}} (\mathfrak {R}; h)$ of $\mathcal {E}$ defined in Equation (2.4), as well as the liquid region $\mathfrak {L} = \mathfrak {L} (\mathfrak {R}; h)$ and arctic boundary $\mathfrak {A} = \mathfrak {A} (\mathfrak {R}; h)$ defined in Equations (2.5) and (2.6).
 Then define the complex slope $f: \mathfrak {L} \rightarrow \mathbb {H}^-$ by, for any $u \in \mathfrak {L}$, setting $f(u) \in \mathbb {H}^-$ to be the unique complex number satisfying
 $$ \begin{align} \arg^* f(u) = - \pi \partial_x H^* (u); \qquad \arg^* \big( f(u) + 1 \big) = \pi \partial_y H^* (u), \end{align} $$
where for any $z \in \overline {\mathbb {H}^-}$ we have set $\arg ^* z = \theta \in [-\pi , 0]$ to be the unique number in $[-\pi , 0]$ satisfying $e^{-\mathrm {i} \theta } z \in \mathbb {R}_{\geq 0}$; see Figure 6 for a depiction, where we interpret $1 - \partial _x H^* (u)$ and $-\partial _y H^* (u)$ as the approximate proportions of tiles of types $1$ and $2$ around $nu \in \mathsf {R}_n$, respectively (which follows from the definition of the height function from Section 2.1).

Figure 6 Shown above is the complex slope $f = f (u)$.
The following result from [Reference Kenyon and OkounkovKO07] indicates that f satisfies the complex Burgers equation.
Proposition 3.1 [Reference Kenyon and OkounkovKO07, Theorem 1].
 For any $(x, t) \in \mathfrak {L}$, writing $f_t (x) = f (x, t)$, we have
 $$ \begin{align} \partial_t f_t (x) + \partial_x f_t (x) \displaystyle\frac{f_t (x)}{f_t (x) + 1} = 0. \end{align} $$
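The complex Burgers equation above is quasilinear of first order, so f is constant along its characteristics: one checks directly that
 $$ \begin{align*} \displaystyle\frac{\mathrm{d}}{\mathrm{d} t} f_t \big( x(t) \big) = 0 \qquad \text{whenever} \qquad \displaystyle\frac{\mathrm{d} x(t)}{\mathrm{d} t} = \displaystyle\frac{f_t \big( x(t) \big)}{f_t \big( x(t) \big) + 1}. \end{align*} $$
Since f is constant along such a curve, each characteristic is a straight line along which the quantity $x - \frac{t f_t(x)}{f_t(x) + 1}$ is constant; this is precisely the combination of variables appearing in Equation (3.4) below.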
Remark 3.2. As explained in [Reference Astala, Duse, Prause and ZhongADPZ20, Section 3.2.2], the composition $\widetilde {f} = M \circ f$ of $f_t (z)$ with a certain Möbius transformation M solves the Beltrami equation $\partial _{\overline {z}} \widetilde {f} = \widetilde {f} \cdot \partial _z \widetilde {f}$. Thus, after a change of variables, any solution of the complex Burgers equation also solves the Beltrami equation.
 The following result from [Reference Kenyon and OkounkovKO07, Reference Astala, Duse, Prause and ZhongADPZ20] describes properties of the complex slope $f_t (x)$ when $\mathfrak {R}$ is polygonal.
Proposition 3.3 [Reference Kenyon and OkounkovKO07, Reference Astala, Duse, Prause and ZhongADPZ20].
 Adopt the notation of Definition 2.2, and assume that the domain $\mathfrak {R} = \mathfrak {P}$ is polygonal with at least six sides. Then the following three statements hold. 
- 
(1) The complex slope $f_t (x)$ extends continuously to the arctic boundary $\mathfrak {A} (\mathfrak {P})$.
- 
(2) Fix $(x_0, t_0) \in \overline {\mathfrak {L}}$. There exists a neighborhood $\mathfrak {U} \subset \mathbb {C}^2$ of $(x_0, t_0)$ and a real analytic function $Q_0 : \mathfrak {U} \rightarrow \mathbb {C}$ such that, for any $(x, t) \in \mathfrak {U} \cap \overline {\mathfrak {L}}$, we have
(3.3) $$ \begin{align} Q_0 \big( f_t (x) \big) = x \big( f_t (x) + 1 \big) - t f_t (x). \end{align} $$
There also exists a nonzero rational function $Q : \mathbb {C}^2 \rightarrow \mathbb {C}$ such that, for any $(x,t)\in \overline {\mathfrak {L}}$, we have
(3.4) $$ \begin{align} Q \bigg( f_t (x), x - \displaystyle\frac{t f_t (x)}{f_t (x) + 1} \bigg) = 0. \end{align} $$
- 
(3) For any $(x, t) \in \mathfrak {U} \cap \overline {\mathfrak {L}}$, $f_t (x)$ is a double root of Equation (3.3) if and only if $(x, t) \in \partial \mathfrak {L}$.
Remark 3.4. The first statement of Proposition 3.3 is [Reference Astala, Duse, Prause and ZhongADPZ20, Theorem 1.10]. The local existence of $Q_0$ in Equation (3.3) in the second is [Reference GorinGor21, Theorem 10.5] (see also [Reference Kenyon and OkounkovKO07, Corollary 2] or [Reference Astala, Duse, Prause and ZhongADPZ20, Theorem 5.2]), and the global existence of Q in Equation (3.4) is a quick consequence of the first part and Equation (3.3); see [Reference HuangHua24, Proposition A.2(3)]. The third statement follows from the facts that $(x, t) \in \mathfrak {A} (\mathfrak {P})$ if and only if $f_t (x) \in \mathbb {R}$, by Equation (3.1), and that any root of $Q_0$ is real if and only if it is a double root, as $Q_0$ is real analytic (see also the discussion at the end of [Reference Kenyon and OkounkovKO07, Section 1.6]).
3.2 Classical locations
 In the remainder of this section, we adopt the notation of Theorem 2.10. To establish Theorem 2.10, we will use a concentration estimate for the Bernoulli walk locations $\mathsf {x}_i$ associated with the uniformly random tiling $\mathscr {M}$ of $\mathsf {P}$. To state this result, we require some additional notation that will be in use throughout the remainder of this paper.
Definition 3.5. For any integer $i \in \mathbb {Z}$ and real number $t \geq 0$, define the classical location $\gamma _i (t)$ to be the (deterministic) real number
 $$ \begin{align} \gamma_i(t):=\inf \big\{x\in \mathbb{R}: n H^* ( x, t ) = i \big\}, \end{align} $$
if it exists (whenever this quantity is used, we will always implicitly assume that the parameters $(i, t)$ are such that it exists).
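Since $\partial _x H^* \in [0, 1]$ (cf. Equation (3.1) below), the map $x \mapsto H^*(x, t)$ is nondecreasing, so $\gamma _i (t)$ can be approximated by a simple scan in $x$; the sketch below assumes a hypothetical callable H_star for the limit shape, which is not an object constructed in the text.

```python
import numpy as np

def classical_location(H_star, n, i, t, x_min, x_max, grid=10**5):
    """Approximate gamma_i(t) = inf{ x : n * H*(x, t) = i } from Definition 3.5
    by scanning x; H_star is a hypothetical callable (x, t) -> H*(x, t).
    Since x -> H*(x, t) is nondecreasing, the first grid point with
    n * H*(x, t) >= i approximates the infimum (None if i is never reached)."""
    for x in np.linspace(x_min, x_max, grid):
        if n * H_star(x, t) >= i:
            return x
    return None
```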
 We will use an estimate for the classical locations $\gamma _i (t)$ around the arctic boundary.
Lemma 3.6. Recall Definition 2.5, and adopt the notation of Theorem 2.10. For any integer $j \geq 0$ and $t \in \mathbb {R}_{\geq 0}$, we have
 $$ \begin{align*} \gamma_{K - j} (t) = x_0 + \mathfrak{l} (t - t_0) + \mathfrak{q} (t - t_0)^2 - \mathfrak{s} \bigg( \displaystyle\frac{3 \pi j}{2 n} \bigg)^{2/3} + \mathcal{O} \big( j n^{-1} + |t - t_0|^3 \big), \end{align*} $$
where the implicit constant in the error is uniform if $(x_0, t_0)$ is bounded away from a singularity or tangency location of $\mathfrak {A} (\mathfrak {P})$.
 To establish Lemma 3.6, we require the following lemma expressing the curvature parameters $(\mathfrak {l}, \mathfrak {q})$ in terms of the analytic function $Q_0$ from Proposition 3.3 (associated with some point $(x_0, t_0) \in \overline {\mathfrak {L}}$). Its proof, which essentially follows from a Taylor expansion, is given in Appendix A below.
Lemma 3.7. Recall Definition 2.5, adopting the notation of Theorem 2.10, and abbreviate $(x, t) = (x_0, t_0)$. We have
 $$ \begin{align*} \mathfrak{l} = \displaystyle\frac{f_t (x)}{f_t (x) + 1}; \qquad \mathfrak{q} = - \frac{1}{2} \big( f_t (x) + 1 \big)^{-3} Q_0'' \big( f_t (x) \big)^{-1}. \end{align*} $$
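In particular, substituting $\mathfrak {l} = \frac{f_t(x)}{f_t(x) + 1}$ (so that $1 - \mathfrak {l} = \frac{1}{f_t(x) + 1}$) and the expression for $\mathfrak {q}$ from Lemma 3.7 into Equation (2.9) yields
 $$ \begin{align*} \bigg( \displaystyle\frac{f_t(x)^2 \big| Q_0'' \big( f_t (x) \big) \big|}{2 \big| f_t (x) + 1 \big|} \bigg)^{1/3} = \bigg( \displaystyle\frac{\mathfrak{l}^2 (1 - \mathfrak{l})^2}{4 |\mathfrak{q}|} \bigg)^{1/3} = \mathfrak{s}, \end{align*} $$
which is the form in which Lemma 3.7 enters the proof of Lemma 3.6 below.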
Now, we can establish Lemma 3.6.
Proof of Lemma 3.6.
 We may assume throughout that $|t - t_0|$ and $jn^{-1}$ are sufficiently small, for otherwise $|t - t_0|^3 + jn^{-1}$ is of order $1$ (and thus of order $\operatorname {\mathrm {diam}} (\mathfrak {P}) \geq \gamma _{K - j} (t)$). Then, observe that $\big ( \gamma _K (s), s \big ) \in \mathfrak {A} (\mathfrak {P})$ for each s in a neighborhood of $t_0$. Indeed, since $\nabla H^* (w, s) = (0, 0)$ for all $(w, s) \in \mathfrak {P} \setminus \mathfrak {L} (\mathfrak {P})$ sufficiently close to $(x_0, t_0)$, we have $n H^* (w, s) = n H^* (x_0, t_0) = K$ for each $(w, s) \in \mathfrak {A} (\mathfrak {P})$ in a neighborhood of $(x_0, t_0)$, implying $\big ( \gamma _K (s), s \big ) = (w, s) \in \mathfrak {A} (\mathfrak {P})$.
 Throughout this proof, set $\widetilde {x} = \gamma _{K - j} (t)$. We first consider the case $t = t_0$. Fix some $x \in [\widetilde {x}, x_0]$, and abbreviate $f_0 = f_{t_0} (x_0)$ and $f = f_{t_0} (x)$. We will approximately express f in terms of $f_0$, and then we will use this with (3.1) to compare the classical locations $\gamma _{K - j} (t_0)$ and $\gamma _K (t_0) = x_0$. To that end, the second part of Proposition 3.3 implies
 $$ \begin{align*} Q_0 (f_0) = x_0 (f_0 + 1) - t_0 f_0; \qquad Q_0 (f) = x (f + 1) - t_0 f. \end{align*} $$
Subtracting these and applying a Taylor expansion yields
 $$ \begin{align*} (f - f_0) Q_0' (f_0) + \displaystyle\frac{(f - f_0)^2}{2} Q_0'' (f_0) + \mathcal{O} \big( |f - f_0|^3 \big) & = Q_0 (f) - Q_0 (f_0) \\ & = (f + 1) (x - x_0) + (x_0 - t_0) (f - f_0), \end{align*} $$
where the implicit constant in the error depends on the first three derivatives of $Q_0$ at f, which are uniformly bounded if $(x_0, t_0)$ is bounded away from a singularity or tangency location of $\mathfrak {A} (\mathfrak {P})$. Since the third part of Proposition 3.3 gives $Q_0' (f_0) = x_0 - t_0$, we find that
 $$ \begin{align*} (f - f_0)^2 = \displaystyle\frac{2 (f_0 + 1)}{Q_0'' (f_0)} (x - x_0) + \mathcal{O} \big( |f - f_0|^3 \big). \end{align*} $$
 In particular, $|f_0 - f| = \mathcal {O} \big ( |x - x_0|^{1/2} \big )$ and, more specifically,
 $$ \begin{align} f - f_0 = \bigg( \displaystyle\frac{2 (f_0 + 1)}{Q_0'' (f_0)} \bigg)^{1/2} (x - x_0)^{1/2} + \mathcal{O} \big( |x - x_0|^{3 / 2} \big). \end{align} $$
 Since $\gamma _{K - j} (t_0) = \widetilde {x} \leq x \leq x_0 \leq \gamma _K (t_0)$, we have $(x - x_0)^{1/2} \in \mathrm {i} \mathbb {R}$. Since moreover $f_0 \in \mathbb {R}$, which implies $Q_0'' (f_0) \in \mathbb {R}$, we deduce
 $$ \begin{align*} \arg^* f & = f_0^{-1} \operatorname{\mathrm{Im}} (f - f_0) + \mathcal{O} \big( |f - f_0|^3 + |x - x_0|^{3/2} \big) \\ & = \bigg( \displaystyle\frac{2 |f_0 + 1|}{f_0^2 \big| Q_0'' (f_0) \big|} \bigg)^{1/2} (x_0 - x)^{1/2} + \mathcal{O} \big( |x_0 - x|^{3 / 2} \big). \end{align*} $$
 In particular, since $n H^* (x_0, t_0) = K$ and $n H^* (\widetilde {x}, t_0) = K - j$, this implies by Equation (3.1) that
 $$ \begin{align*} \displaystyle\frac{j}{n} = H^* (x_0, t_0) - H^* (\widetilde{x}, t_0) & = \displaystyle\int_{\widetilde{x}}^{x_0} \partial_x H^* (w, t_0) \mathrm{d}w \\ & = \displaystyle\frac{1}{\pi} \displaystyle\int_{\widetilde{x}}^{x_0} \arg^* f_{t_0} (w) \mathrm{d} w \\ & = \bigg( \displaystyle\frac{8 |f_0 + 1|}{9 \pi^2 f_0^2 \big| Q_0'' (f_0) \big|} \bigg)^{1 / 2} (x_0 - \widetilde{x})^{3/2} + \mathcal{O} \big( |x_0 - \widetilde{x}|^{5 / 2} \big). \end{align*} $$
Hence,
 $$ \begin{align} \gamma_K (t_0) - \gamma_{K - j} (t_0) = x_0 - \widetilde{x} = \bigg( \displaystyle\frac{f_0^2 \big| Q_0'' (f_0) \big|}{2 |f_0 + 1|} \bigg)^{1/3} \bigg( \displaystyle\frac{3 \pi j}{2 n} \bigg)^{2/3} + \mathcal{O} (j n^{-1}), \end{align} $$
which by Lemma 3.7 and the definition of $\mathfrak {s}$ from (2.9) implies the lemma when $t = t_0$.
 If $t \ne t_0$, then set $\widehat {x}_0 = \gamma _K (t) \in \mathfrak {A} (\mathfrak {P})$ and $\widehat {f}_0 = f_t (\widehat {x}_0)$. Then, the same reasoning as used to deduce Equation (3.7) implies
 $$ \begin{align*} \gamma_K (t) - \gamma_{K - j} (t) & = \Bigg( \displaystyle\frac{\widehat{f}_0^2 \big| Q_0'' (\widehat{f}_0)\big|}{2 |\widehat{f}_0 + 1|} \Bigg)^{1/3} \bigg( \displaystyle\frac{3 \pi j}{2 n} \bigg)^{2/3} + \mathcal{O} (jn^{-1}) \\ & = \bigg( \displaystyle\frac{f_0^2 \big| Q_0'' (f_0) \big|}{2 |f_0 + 1|} \bigg)^{1/3} \bigg( \displaystyle\frac{3 \pi j}{2 n} \bigg)^{2/3} + \mathcal{O} \big( |t - t_0| j^{2/3} n^{-2/3} + jn^{-1} \big), \end{align*} $$
where in the last equality we used the fact that $f_t$ is uniformly smooth in t around $t_0$ along $\mathfrak {A} (\mathfrak {P})$. By Equation (2.7), it follows that
 $$ \begin{align*} \gamma_{K - j} (t) & = \gamma_K (t) - \bigg( \displaystyle\frac{f_0^2 \big| Q_0'' (f_0) \big|}{2 |f_0 + 1|} \bigg)^{1/3} \bigg( \displaystyle\frac{3 \pi j}{2 n} \bigg)^{2/3} + \mathcal{O} \big( |t - t_0| j^{2/3} n^{-2/3} + jn^{-1} \big) \\ & = \gamma_K (t_0) + \mathfrak{l} (t - t_0) + \mathfrak{q} (t - t_0)^2 - \bigg( \displaystyle\frac{f_0^2 \big| Q_0'' (f_0) \big|}{2 |f_0 + 1|} \bigg)^{1/3} \bigg( \displaystyle\frac{3 \pi j}{2 n} \bigg)^{2/3} + \mathcal{O} \big( jn^{-1} + |t - t_0|^3 \big), \end{align*} $$
which implies the lemma due to Lemma 3.7 and Equation (2.9) again.
3.3 Concentration estimate for the height function
 In this section, we state a concentration estimate for the height function of a random tiling of $\mathsf {P}$. We begin with the following definition for events that hold with very high probability.
Definition 3.8. We say that an event $\mathscr {E}_n$ occurs with overwhelming probability if the following holds. For any real number $D> 1$, there exists a constant $C> 1$ (dependent on D and also possibly on other implicit parameters, but not n, involved in the definition of $\mathscr {E}_n$) such that $\mathbb {P} (\mathscr {E}_n) \geq 1 - n^{-D}$ for any integer $n> C$.
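For instance, an event whose complement has probability at most $e^{-c (\log n)^2}$ for some constant $c> 0$ occurs with overwhelming probability, since $e^{-c (\log n)^2} = n^{-c \log n} \leq n^{-D}$ once $n \geq e^{D / c}$.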
 Recalling the notation of Theorem 2.10 and letting $\mathsf {H}$ denote the height function associated with the random tiling $\mathscr {M}$ of $\mathsf {P}$, our concentration estimate will state that the following two points hold with overwhelming probability. First, $\mathsf {H}$ is within $n^{\delta }$ of the deterministic function $n H^*$ everywhere on $\mathfrak {P}$. Second, $\mathsf {H}$ is frozen (deterministic) at a ‘sufficiently far mesoscopic distance’ from the liquid region $\mathfrak {L} (\mathfrak {P})$. To make the latter point precise, we require the following definition.
Definition 3.9. Adopt the notation of Theorem 2.10, and abbreviate $\mathfrak {L} = \mathfrak {L} (\mathfrak {P})$ and $\mathfrak {A} = \mathfrak {A} (\mathfrak {P})$. Then, define the augmented liquid region
 $$ \begin{align*} \mathfrak{L}_+^{\delta} (\mathfrak{P}) = \mathfrak{L} \cup \bigcup_{u \in \mathfrak{A}} \mathfrak{B} (u; n^{\delta - 2/3}). \end{align*} $$
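Concretely, membership in $\mathfrak {L}_+^{\delta } (\mathfrak {P})$ amounts to lying in $\mathfrak {L}$ or within distance $n^{\delta - 2/3}$ of $\mathfrak {A}$; a sketch, with hypothetical callables for the liquid region and the distance to the arctic boundary, is below.

```python
def in_augmented_liquid_region(u, in_liquid, dist_to_arctic, n, delta):
    """Membership test for L_+^delta(P) from Definition 3.9: u belongs to the
    liquid region, or to some ball B(v; n^(delta - 2/3)) centered at a point v
    of the arctic boundary, i.e. dist(u, A) <= n^(delta - 2/3) (up to the
    boundary convention for the balls). The callables in_liquid and
    dist_to_arctic are placeholders for the actual domain data."""
    return in_liquid(u) or dist_to_arctic(u) <= n ** (delta - 2.0 / 3.0)
```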
 Under this notation, the following theorem then provides a concentration bound for the height function associated with $\mathscr {M}$; it will be established in Section 6.2 below.
Theorem 3.10. Adopt the notation of Theorem 2.10, and let $\mathsf {H}: \mathsf {P} \rightarrow \mathbb {Z}$ denote the height function associated with $\mathscr {M}$. For any real number $\delta> 0$, the following two statements hold with overwhelming probability. 
- 
(1) We have $\big | \mathsf {H} (nu) - n H^* (u) \big | < n^{\delta }$ for any $u \in \overline {\mathfrak {P}}$.
- 
(2) For any $u \in \overline {\mathfrak {P}} \setminus \mathfrak {L}_+^{\delta } (\mathfrak {P})$, we have $\mathsf {H} (nu) = n H^* (u)$.
 Together with Lemma 3.6, Theorem 3.10 implies the following corollary that estimates trajectories for the random Bernoulli walks associated with $\mathscr {M}$ (recall the Bernoulli walk locations associated with the uniformly random tiling $\mathscr {M}$ of $\mathsf {P}$ from Theorem 2.10 and Section 2.2) near the arctic boundary. The estimate is optimal up to the extra $n^\delta $ factor and is reminiscent of the optimal rigidity estimates of edge eigenvalues in random matrix theory; see [Reference Erdős, Yau and YinEYY12, Theorem 2.2].
Corollary 3.11. Adopt the notation of Theorem 2.10, and fix a real number $\delta \in \big ( 0, 1/100 \big )$. For any integers $j \in [1, 2n^{10 \delta }]$ and $s \in [-n^{2/3 + 20 \delta }, n^{2/3 + 20 \delta }]$, we have with overwhelming probability that
 $$ \begin{align*} \Bigg| \mathsf{x}_{K - j + 1} (s + t_0 n) - \bigg( x_0 n + \mathfrak{l} s + \mathfrak{q} n^{-1} s^2 - \mathfrak{s} \Big( \displaystyle\frac{3 \pi j}{2} \Big)^{2/3} n^{1/3} \bigg) \Bigg| \leq j^{-1/3} n^{1/3 + \delta}. \end{align*} $$
Proof. We first show that Theorem 3.10 implies, for any t in a sufficiently small (independent of n) neighborhood of $t_0$, that with overwhelming probability we have
 $$ \begin{align} \gamma_{K - j - n^{\delta} + 1} (t) - n^{-1} \leq n^{-1} \mathsf{x}_{K - j + 1} (tn) \leq \min \big\{ \gamma_{K - j + n^{\delta} + 1} (t), \gamma_K (t) + n^{\delta / 2 - 2/3} \big\}, \end{align} $$
where we recall the classical locations $\gamma _i(t)$ from Definition 3.5 (and we assume that $tn \in \mathbb {Z}$ for notational convenience). Let us only show the second bound in Equation (3.8), as the proof of the first is entirely analogous. Then, from the bijection between tilings and nonintersecting Bernoulli walk ensembles described in Section 2.2, we have $n^{-1} (\mathsf {x}_{K - j + 1} (tn)+1) \leq x$ if and only if $\mathsf {H} (xn, tn) \geq K - j + 1$.
 So, setting $\gamma = \gamma _{K - j + n^{\delta } + 1} (t)$, the first part of Theorem 3.10 implies with overwhelming probability that $\mathsf {H} (\gamma n, tn) \geq n H^* (\gamma , t) - n^{\delta } = K - j + 1$. Hence, $n^{-1} \mathsf {x}_{K - j + 1} (tn) \leq \gamma _{K - j + n^{\delta } + 1} (t)$ holds with overwhelming probability. Moreover, denoting $x^{\prime } = \gamma _K (t) + n^{\delta / 2 - 2/3}$, we have by the second part of Theorem 3.10 that $\mathsf {H} (x^{\prime } n, tn) = n H^* (x^{\prime }, t) = n H^* \big ( \gamma _K (t), t \big ) = K$ with overwhelming probability, where in the second equality we used the fact that $\nabla H^* (x, t) = (0, 0)$ for $(x, t)$ in a neighborhood of $(x_0, t_0)$ to the right of $\mathfrak {A}$. Hence, $n^{-1} \mathsf {x}_{K - j + 1} (tn) \leq n^{-1} \mathsf {x}_K (tn) \leq x^{\prime } = \gamma _K (t) + n^{\delta / 2 - 2/3}$ with overwhelming probability. This confirms Equation (3.8).
Now, Equation (3.8) and Lemma 3.6 together imply that
 $$ \begin{align} n^{-1} \mathsf{x}_{K - j + 1} (tn) & = x_0 + \mathfrak{l} (t - t_0) + \mathfrak{q} (t - t_0)^2 - \mathfrak{s} \Big( \displaystyle\frac{3 \pi j}{2n} \Big)^{2/3} \nonumber\\ & \quad + \mathcal{O} \big( j n^{-1} + |t - t_0|^3 + j^{-1/3} n^{\delta - 2/3} \big) \end{align} $$
holds for each $j \in \mathbb {Z}$ and each t in a sufficiently small neighborhood of $t_0$, with overwhelming probability. Here, we have also used the fact that Lemma 3.6 implies that the classical locations $\gamma _j (t)$ (from Definition 3.5) with respect to $\mathfrak {P}$ satisfy $\gamma _j (t) - \gamma _{j - n^{\delta }} (t) = \mathcal {O} (j^{-1/3} n^{\delta - 2/3})$. Since $\delta \in \big ( 0, 1/100 \big )$, we have $jn^{-1} + |t - t_0|^3 + j^{-1/3} n^{\delta - 2/3} \leq 3 j^{-1/3} n^{\delta - 2/3}$ for $j \in [1, 2n^{10 \delta }]$ and $|t - t_0| \leq n^{20 \delta - 2/3}$. Setting $s = (t - t_0) n$ in (3.9) then yields the corollary.
3.4 Comparison to hexagonal edge statistics
 We will prove Theorem 2.10 through a local comparison of a random tiling of $\mathsf {P}$ with one of a suitably chosen hexagonal domain, for which the universality of the edge statistics has been proven in [Reference PetrovPet14, Reference Duse and MetcalfeDM18, Reference Baik, Kriecherbauer, McLaughlin and MillerBKMM07, Reference Nica, Dauvergne and VirágDDV23]. In this section, we set notation and state known properties for such hexagonal domains.
Definition 3.12. For any real numbers $a, b, c> 0$, let $\mathfrak {E}_{a, b, c} \subset \mathbb {R}^2$ denote the $a \times b \times c$ hexagon, that is, the polygon with vertices $\big \{ (0, 0), (a, 0), (a + c, c), (a + c, b + c), (c, b + c), (0, b) \big \}$. By [Reference Cohn, Larsen and ProppCLP98, Theorem 1.1], its liquid region $\mathfrak {L}_{a, b, c} = \mathfrak {L} (\mathfrak {E}_{a, b, c})$ is bounded by the ellipse inscribed in $\mathfrak {E}_{a, b, c}$.
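The vertex list in Definition 3.12 is explicit; a small helper (illustrative only) is below, printed here for the $5 \times 4 \times 3$ hexagon.

```python
def hexagon_vertices(a, b, c):
    """Vertices of the a x b x c hexagon E_{a,b,c}, listed as in Definition 3.12."""
    return [(0, 0), (a, 0), (a + c, c), (a + c, b + c), (c, b + c), (0, b)]

print(hexagon_vertices(5, 4, 3))
```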
 We refer to the middle of Figure 3 for a depiction when $(a, b, c) = (5, 4, 3)$. The following result from [Reference PetrovPet14, Reference Duse and MetcalfeDM18, Reference Baik, Kriecherbauer, McLaughlin and MillerBKMM07, Reference Nica, Dauvergne and VirágDDV23] is the case of Theorem 2.10 when $\mathsf {P}$ is a hexagon.
Proposition 3.13 [Reference PetrovPet14, Reference Duse and MetcalfeDM18, Reference Baik, Kriecherbauer, McLaughlin and MillerBKMM07, Reference Nica, Dauvergne and VirágDDV23].
 Let $a = a_n$, $b = b_n$ and $c = c_n$ be real numbers bounded away from $0$ and $\infty $, and set $(\mathsf {a}, \mathsf {b}, \mathsf {c}) = (\mathsf {a}_n, \mathsf {b}_n, \mathsf {c}_n) = (na, nb, nc)$; assume that $\mathsf {a}, \mathsf {b}, \mathsf {c} \in \mathbb {Z}$. Then Theorem 2.10 holds with the $\mathfrak {P}$ there equal to the $a \times b \times c$ hexagon (and $\mathsf {P}$ equal to the $\mathsf {a} \times \mathsf {b} \times \mathsf {c}$ hexagon).
Remark 3.14. Since Proposition 3.13 does not appear to have been stated exactly in the above form in the literature, let us briefly outline how it follows from known results. First, [Reference Nica, Dauvergne and VirágDDV23, Theorem 4.1] (see also the proof of [Reference Nica, Dauvergne and VirágDDV23, Theorem 1.5]) indicates that, to show uniform convergence of the normalized discrete nonintersecting Bernoulli walks $\mathsf {X}_i (t)$ from Equation (2.10) to the shifted Airy line ensemble $\mathcal {R}$ from Definition 2.7, it suffices to establish convergence in the sense of finite-dimensional distributions, that is,
 $$ \begin{align} \displaystyle\lim_{n \rightarrow \infty} \mathbb{P} \Bigg(\bigcap_{i = 1}^m \big\{ \mathsf{X}_{j_i} (t_i) \leq z_i \big\} \Bigg) = \mathbb{P} \Bigg( \bigcap_{i = 1}^m \big\{ \mathcal{R}_{j_i} (t_i) \leq z_i \big\} \Bigg), \end{align} $$
for any $j_1, j_2, \ldots , j_m \in \mathbb {Z}_{\geq 1}$ and $t_1, t_2, \ldots , t_m, z_1, z_2, \ldots , z_m \in \mathbb {R}$. Next, [Reference PetrovPet14, Theorem 8.1] and [Reference Duse and MetcalfeDM18, Theorem 1.12] show that the nonintersecting Bernoulli walk ensemble $\big ( \mathsf {x}_j (t) \big )$ is a determinantal point process, whose correlation kernel under the scaling (2.10) converges to the extended Airy kernel from Definition 2.6. Since probabilities as in the left side of Equation (3.10) are expressible in terms of unbounded sums involving this correlation kernel (see, for example, [Reference JohanssonJoh05, Equation (3.9)]), to conclude the distributional convergence (3.10) from the kernel limit, it suffices to show one-point tightness of the extremal Bernoulli walk $\mathsf {X}_1$ (in order to effectively cut offFootnote 5 the sum mentioned above). This tightness is provided by [Reference Baik, Kriecherbauer, McLaughlin and MillerBKMM07, Theorem 3.14], which in fact shows that the one-point law of $\mathsf {X}_1$ converges to the Tracy–Widom distribution, originally derived from studying the largest eigenvalue of the Gaussian Unitary Ensemble.
 To proceed, we require some additional notation on nonintersecting Bernoulli walk ensembles. Let $\mathsf {X} = (\mathsf {x}_l, \mathsf {x}_{l + 1}, \ldots , \mathsf {x}_m)$ denote a family of nonintersecting Bernoulli walks, each with time span $[s, t]$, so that $\mathsf {x}_j = \big ( \mathsf {x}_j (s), \mathsf {x}_j (s + 1), \ldots , \mathsf {x}_j (t) \big )$ for each $j \in [l, m]$. Given functions $\mathsf {f}, \mathsf {g}: [s, t] \rightarrow \mathbb {R}$, we say that $\mathsf {X}$ has $(\mathsf {f}; \mathsf {g})$ as a boundary condition if $\mathsf {f} (r) \leq \mathsf {x}_j (r) \leq \mathsf {g} (r)$ for each $j \in [l, m]$ and $r \in [s, t]$. We refer to $\mathsf {f}$ and $\mathsf {g}$ as a left boundary and right boundary for $\mathsf {X}$, respectively, and allow $\mathsf {f}$ and $\mathsf {g}$ to be $-\infty $ or $\infty $. We further say that $\mathsf {X}$ has entrance data $\mathsf {d} = (\mathsf {d}_l, \mathsf {d}_{l + 1}, \ldots , \mathsf {d}_m)$ and exit data $\mathsf {e} = (\mathsf {e}_l, \mathsf {e}_{l + 1}, \ldots , \mathsf {e}_m)$ if $\mathsf {x}_j (s) = \mathsf {d}_j$ and $\mathsf {x}_j (t) = \mathsf {e}_j$, for each $j \in [l, m]$; see Figure 7 for a depiction. Then, there are finitely many nonintersecting Bernoulli walk ensembles with any given entrance and exit data $(\mathsf {d}; \mathsf {e})$ and (possibly infinite) boundary conditions $(\mathsf {f}; \mathsf {g})$.
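The conditions just introduced are simple to verify for a concrete family of paths. The sketch below assumes, in addition to what is stated above, the conventions of Section 2.2 that each walk takes steps of size $0$ or $1$ and that the walks are listed from bottom to top; times are indexed $0, \ldots, t - s$ for simplicity.

```python
import math

def is_valid_ensemble(X, f, g, d, e):
    """Check whether X = [x_l, ..., x_m] (each a list of positions at the
    integer times of [s, t]) is a nonintersecting Bernoulli walk ensemble with
    boundary condition (f; g), entrance data d and exit data e."""
    T = len(X[0])
    for j, path in enumerate(X):
        if path[0] != d[j] or path[-1] != e[j]:
            return False                      # entrance or exit data violated
        if any(path[r + 1] - path[r] not in (0, 1) for r in range(T - 1)):
            return False                      # not a Bernoulli walk
        if any(not (f(r) <= path[r] <= g(r)) for r in range(T)):
            return False                      # boundary condition violated
    # Nonintersection: consecutive walks stay strictly ordered at all times.
    return all(X[j][r] < X[j + 1][r] for j in range(len(X) - 1) for r in range(T))

# Example with unbounded boundaries:
no_floor = lambda r: -math.inf
no_ceiling = lambda r: math.inf
print(is_valid_ensemble([[0, 0, 1], [2, 3, 3]], no_floor, no_ceiling,
                        d=[0, 2], e=[1, 3]))  # True
```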

Figure 7 Shown above is an ensemble of nonintersecting Bernoulli walks $\mathsf {X} = ( \mathsf {x}_{-1}, \mathsf {x}_0, \mathsf {x}_1, \mathsf {x}_2)$ with initial data $\mathsf {d} = (\mathsf {d}_{-1}, \mathsf {d}_0, \mathsf {d}_1, \mathsf {d}_2)$; ending data $\mathsf {e} = (\mathsf {e}_{-1}, \mathsf {e}_0, \mathsf {e}_1, \mathsf {e}_2)$; left boundary $\mathsf {f}$; and right boundary $\mathsf {g}$.
The following lemma from [Reference Cohn, Elkies and ProppCEP96] provides a monotonicity property for nonintersecting Bernoulli walk ensembles randomly sampled under the uniform measure on the set of such families with prescribed entrance, exit and boundary conditions. In what follows, for any functions $\mathsf {f}, \mathsf {f}': [s, t] \rightarrow \mathbb {R}$ we write $\mathsf {f} \leq \mathsf {f}'$ if $\mathsf {f} (r) \leq \mathsf {f}' (r)$ for each $r \in [s, t]$. Similarly, for any sequences $\mathsf {d} = (\mathsf {d}_l, \mathsf {d}_{l + 1}, \ldots , \mathsf {d}_m) \subset \mathbb {R}$ and $\mathsf {d}' = (\mathsf {d}_l', \mathsf {d}_{l + 1}', \ldots , \mathsf {d}_m') \subset \mathbb {R}$, we write $\mathsf {d} \leq \mathsf {d}'$ if $\mathsf {d}_j \leq \mathsf {d}_j'$ for each $j \in [l, m]$.
Lemma 3.15 [Reference Cohn, Elkies and ProppCEP96, Lemma 18].
Fix integers $s \leq t$ and $l \leq m$; functions $\mathsf {f}, \mathsf {f}', \mathsf {g}, \mathsf {g}' : [s, t] \rightarrow \mathbb {R}$; and $(m - l + 1)$-tuples $\mathsf {d}, \mathsf {d}', \mathsf {e}, \mathsf {e}'$ with coordinates indexed by $[l, m]$. Let $\mathsf {X} = (\mathsf {x}_l, \mathsf {x}_{l + 1}, \ldots , \mathsf {x}_m)$ denote a uniformly random nonintersecting Bernoulli walk ensemble with boundary data $(\mathsf {f}; \mathsf {g})$, entrance data $\mathsf {d}$ and exit data $\mathsf {e}$. Define $\mathsf {X}' = (\mathsf {x}_l', \mathsf {x}_{l + 1}', \ldots , \mathsf {x}_m')$ similarly but with respect to $(\mathsf {f}'; \mathsf {g}')$ and $(\mathsf {d}'; \mathsf {e}')$. If $\mathsf {f} \leq \mathsf {f}'$, $\mathsf {g} \leq \mathsf {g}'$, $\mathsf {d} \leq \mathsf {d}'$ and $\mathsf {e} \leq \mathsf {e}'$, then there exists a coupling between $\mathsf {X}$ and $\mathsf {X}'$ such that $\mathsf {x}_j \leq \mathsf {x}_j'$ almost surely, for each $j \in [l, m]$.
Remark 3.16. An equivalent way of stating Lemma 3.15 (as was done in [Reference Cohn, Elkies and ProppCEP96]) is through the height functions associated with the Bernoulli walk ensembles $\mathsf {X}$ and $\mathsf {X}'$. Specifically, let $\mathsf {D} \subset \mathbb {T}$ be a finite domain, and let $\mathsf {h}, \mathsf {h}' : \partial \mathsf {D} \rightarrow \mathbb {Z}$ denote two boundary height functions such that $\mathsf {h} (v) \geq \mathsf {h}' (v)$, for each $v \in \partial \mathsf {D}$. Let $\mathsf {H}, \mathsf {H}' : \mathsf {D} \rightarrow \mathbb {Z}$ denote two uniformly random height functions on $\mathsf {D}$ with boundary data $\mathsf {H} |_{\partial \mathsf {D}} = \mathsf {h}$ and $\mathsf {H}' |_{\partial \mathsf {D}} = \mathsf {h}'$. Then, Lemma 3.15 implies (and is equivalent to) the existence of a coupling between $\mathsf {H}$ and $\mathsf {H}'$ such that $\mathsf {H} (\mathsf {u}) \geq \mathsf {H}' (\mathsf {u})$ almost surely, for each $\mathsf {u} \in \mathsf {D}$.
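One standard mechanism behind couplings of this type, which we record only as a heuristic sketch (it is not necessarily the argument given in [Reference Cohn, Elkies and ProppCEP96]), is to run the same single-site heat-bath dynamics on both height functions $\mathsf {H}$ and $\mathsf {H}'$. Given the values at the neighbors of a vertex, the conditional law of the height there is uniform on an integer interval whose endpoints are nondecreasing functions of those neighboring values, so the ordering is preserved at every update by the elementary stochastic domination
$$ \begin{align*} a \geq a' \quad \text{and} \quad b \geq b' \qquad \Longrightarrow \qquad \operatorname{Unif} \{ a, a + 1, \ldots , b \} \succeq \operatorname{Unif} \{ a', a' + 1, \ldots , b' \}; \end{align*} $$
passing to stationarity of the dynamics then yields the desired coupling of the two uniform measures.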
Remark 3.17. Due to the correspondence from Section 2.2 between tilings and nonintersecting Bernoulli walk ensembles, the uniform measure on the set of free tilingsFootnote 6 of a strip domain of the form $\mathbb {Z} \times [s, t] \subset \mathbb {T}$ is equivalent to that on the set of nonintersecting Bernoulli walk ensembles with time spans $[s, t]$ under specified entrance, exit and boundary conditions. Moreover, if $\mathsf {X} = (\mathsf {x}_l, \mathsf {x}_{l + 1}, \ldots , \mathsf {x}_m)$ is sampled under the uniform measure, then it is quickly verified that it satisfies the following Gibbs propertyFootnote 7 [Reference GorinGor21, Section 13.2]. For any $s \leq s' \leq t' \leq t$ and $l \leq l' \leq m' \leq m$, the law of $(\mathsf {x}_{l'}, \mathsf {x}_{l' + 1}, \ldots , \mathsf {x}_{m'})$ restricted to $\mathbb {Z} \times [s', t']$ is the uniform measure on those nonintersecting Bernoulli walk ensembles with entrance data $\big ( \mathsf {x}_{l'} (s'), \mathsf {x}_{l' + 1} (s'), \ldots , \mathsf {x}_{m'} (s') \big )$; exit data $\big ( \mathsf {x}_{l'} (t'), \mathsf {x}_{l' + 1} (t'), \ldots , \mathsf {x}_{m'} (t') \big )$; and boundary conditions $(\mathsf {x}_{l' - 1}; \mathsf {x}_{m' + 1})$.
3.5 Proof of Theorem 2.10
In this section, we establish Theorem 2.10. We begin with the following proposition that provides edge statistics for nonintersecting random Bernoulli walks with an approximately quadratic boundary condition.
Proposition 3.18. Fix $\delta \in \big ( 0, 1/100 \big )$ and real numbers $\mathfrak {q} = \mathfrak {q}_n$ and $\mathfrak {l} = \mathfrak {l}_n$ bounded away from $0$ and $\infty $. Define $\mathfrak {s}, \mathfrak {r} \in \mathbb {R}$ from $(\mathfrak {l}, \mathfrak {q})$ through Equation (2.9), set $m = \lfloor n^{10 \delta } \rfloor $ and $\mathsf {T} = \lfloor n^{2 / 3 + 20 \delta } \rfloor $ and let $\mathsf {f}: [-\mathsf {T}, \mathsf {T}] \rightarrow \mathbb {R}$ be a function satisfying
$$ \begin{align} \displaystyle\sup_{s \in [-\mathsf{T}, \mathsf{T}]} \big| \mathsf{f} (s) + K_0 - \mathfrak{l} s - \mathfrak{q} s^2 n^{-1} \big| < n^{1/3 - \delta}, \qquad \text{where} \qquad K_0 = \mathfrak{s}^{3/2} n^{1/3} \Big( \displaystyle\frac{3 \pi m}{2} \Big)^{2 / 3}. \end{align} $$
Further, let $\mathsf {d} = (\mathsf {d}_1, \mathsf {d}_2, \ldots , \mathsf {d}_m)$ and $\mathsf {e} = (\mathsf {e}_1, \mathsf {e}_2, \ldots , \mathsf {e}_m)$ be integer sequences satisfying
$$ \begin{align} \big| \mathsf{d}_j - \mathsf{f} (- \mathsf{T}) \big| < n^{1/3 + 10 \delta}; \qquad \big| \mathsf{e}_j - \mathsf{f} (\mathsf{T}) \big| < n^{1/3 + 10 \delta}, \end{align} $$
for each $j \in [1, m]$. Let $\mathsf {X} = (\mathsf {x}_1, \mathsf {x}_2, \ldots , \mathsf {x}_m)$ denote a uniformly random ensemble of nonintersecting Bernoulli walks with time span $[-\mathsf {T}, \mathsf {T}]$; boundary data $(\mathsf {f}; \infty )$; entrance data $\mathsf {d}$; and exit data $\mathsf {e}$. Define the family of functions $\mathcal {X}_n = (\mathsf {X}_1, \mathsf {X}_2, \ldots , \mathsf {X}_m)$ by
$$ \begin{align} \mathsf{X}_i (t) = \mathfrak{s}^{-1} n^{-1/3} \Big( \mathsf{x}_{m - i + 1} (\mathfrak{r} n^{2/3} t) - \mathfrak{l} n^{2 / 3} t \Big). \end{align} $$
Then $\mathcal {X}_n$ converges to $\mathcal {R}$, uniformly on compact subsets of $\mathbb {Z}_{> 0} \times \mathbb {R}$, as n tends to $\infty $.
Proof. Throughout, we assume $\mathfrak {q}> 0$, as the proof when $\mathfrak {q} < 0$ is entirely analogous. This proposition will follow from a comparison between the random Bernoulli walk ensemble $\mathsf {X}$ and the one associated with a random tiling of a suitably chosen hexagon. So, we begin by identifying real numbers $a, b, c> 0$ and a point $(x_0, t_0) \in \mathfrak {A} (a, b, c) = \partial \mathfrak {L}_{a, b, c}$ on the ellipse inscribed in the hexagon $\mathfrak {E}_{a, b, c}$ (recall Definition 3.12) whose curvature parameters are given by $(\mathfrak {l}, \mathfrak {q})$. To that end, first observe that there are two points on $\mathfrak {A} (1, 1, 1)$ at which a line with inverse slope $\mathfrak {l}$ is tangent. Let $(\widetilde {x}, \widetilde {t}) \in \mathfrak {A} (1, 1, 1)$ be the one whose curvature parameters $(\widetilde {\mathfrak {l}}, \widetilde {\mathfrak {q}}) = (\mathfrak {l}, \widetilde {\mathfrak {q}})$ are such that $\operatorname {\mathrm {sgn}} \widetilde {\mathfrak {q}} = \operatorname {\mathrm {sgn}} \mathfrak {q} = 1$. Since $\widetilde {\mathfrak {l}} = \mathfrak {l}$ is bounded away from $0$ and $\infty $, so is $\widetilde {\mathfrak {q}}$. Then, setting $r = \widetilde {\mathfrak {q}} \mathfrak {q}^{-1}$, $(a, b, c) = (r, r, r)$ and $(x_0, t_0) = (r \widetilde {x}, r \widetilde {t}) \in \mathfrak {A} (a, b, c)$, the curvature parameters at $(x_0, t_0)$ are $(\mathfrak {l}, \mathfrak {q})$.
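For the reader's convenience, we record the elementary dilation computation underlying this choice, with the convention (consistent with Equation (3.11)) that the curvature parameters record the first- and second-order coefficients of the arctic boundary written locally as a graph over the time variable. If near $(\widetilde {x}, \widetilde {t})$ the curve $\mathfrak {A} (1, 1, 1)$ is given by
$$ \begin{align*} x = \widetilde{x} + \mathfrak{l} (t - \widetilde{t}\,) + \widetilde{\mathfrak{q}} (t - \widetilde{t}\,)^2 + \mathrm{O} \big( |t - \widetilde{t}\,|^3 \big), \end{align*} $$
then, since $\mathfrak {A} (r, r, r) = r \cdot \mathfrak {A} (1, 1, 1)$, the dilated curve near $(x_0, t_0) = (r \widetilde {x}, r \widetilde {t})$ is given by
$$ \begin{align*} x = x_0 + \mathfrak{l} (t - t_0) + \frac{\widetilde{\mathfrak{q}}}{r} (t - t_0)^2 + \mathrm{O} \big( |t - t_0|^3 \big). \end{align*} $$
Thus, dilation by $r$ preserves the linear parameter and divides the quadratic one by $r$; with $r = \widetilde {\mathfrak {q}} \mathfrak {q}^{-1}$, the quadratic parameter becomes $\mathfrak {q}$, as claimed. The same computation, with $r$ replaced by the factors $\kappa ^{-1}$ and $\nu ^{-1}$ introduced below, gives the curvature parameters $(\mathfrak {l}, \kappa \mathfrak {q})$ and $(\mathfrak {l}, \nu \mathfrak {q})$ appearing in the next paragraph.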
We will now perturb the quadratic curvature parameter of $(x_0, t_0)$ with respect to $\mathfrak {A} (\mathfrak {E}_{a, b, c})$ through scaling by $\kappa $ and $\nu $, where
$$ \begin{align*} \kappa = 1 + n^{-20 \delta}; \qquad \nu = 1 - n^{-20 \delta}. \end{align*} $$
Set $(a^{\prime }, b^{\prime }, c^{\prime }) = (\kappa ^{-1} a, \kappa ^{-1} b, \kappa ^{-1} c)$ and $(a^{\prime \prime }, b^{\prime \prime }, c^{\prime \prime }) = (\nu ^{-1} a, \nu ^{-1} b, \nu ^{-1} c)$; further, let $(x_0^{\prime }, t_0^{\prime }) = (\kappa ^{-1} x_0, \kappa ^{-1} t_0)$ and $(x_0^{\prime \prime }, t_0^{\prime \prime }) = (\nu ^{-1} x_0, \nu ^{-1} t_0)$. Observe that the curvature parameters at $(x_0^{\prime }, t_0^{\prime })$ with respect to $\mathfrak {A} (a^{\prime }, b^{\prime }, c^{\prime })$ and at $(x_0^{\prime \prime }, t_0^{\prime \prime })$ with respect to $\mathfrak {A} (a^{\prime \prime }, b^{\prime \prime }, c^{\prime \prime })$ are given by $(\mathfrak {l}^{\prime }, \mathfrak {q}^{\prime }) = (\mathfrak {l}, \kappa \mathfrak {q})$ and $(\mathfrak {l}^{\prime \prime }, \mathfrak {q}^{\prime \prime }) = (\mathfrak {l}, \nu \mathfrak {q})$, respectively.
Next, define $(\mathsf {a}^{\prime }, \mathsf {b}^{\prime }, \mathsf {c}^{\prime }) = (na^{\prime }, nb^{\prime }, nc^{\prime })$ and $(\mathsf {a}^{\prime \prime }, \mathsf {b}^{\prime \prime }, \mathsf {c}^{\prime \prime }) = (na^{\prime \prime }, nb^{\prime \prime }, nc^{\prime \prime })$, which we assume for notational simplicity are integers. Denote the hexagons $\mathsf {P}^{\prime } = \mathfrak {E}_{\mathsf {a}^{\prime }, \mathsf {b}^{\prime }, \mathsf {c}^{\prime }}$ and $\mathsf {P}^{\prime \prime } = \mathfrak {E}_{\mathsf {a}^{\prime \prime }, \mathsf {b}^{\prime \prime }, \mathsf {c}^{\prime \prime }}$; let $\mathscr {M}^{\prime }$ and $\mathscr {M}^{\prime \prime }$ denote uniformly random tilings of $\mathsf {P}^{\prime }$ and $\mathsf {P}^{\prime \prime }$, which are associated with nonintersecting Bernoulli walk ensembles $\mathsf {Y}^{\prime } = (\mathsf {y}_1^{\prime }, \mathsf {y}_2^{\prime }, \ldots , \mathsf {y}_{\mathsf {a}^{\prime }}^{\prime })$ and $\mathsf {Y}^{\prime \prime } = (\mathsf {y}_1^{\prime \prime }, \mathsf {y}_2^{\prime \prime }, \ldots , \mathsf {y}_{\mathsf {a}^{\prime \prime }}^{\prime \prime })$, respectively. Define the Bernoulli walk ensembles $\mathsf {X}^{\prime } = (\mathsf {x}_1^{\prime }, \mathsf {x}_2^{\prime }, \ldots , \mathsf {x}_m^{\prime })$ and $\mathsf {X}^{\prime \prime } = (\mathsf {x}_1^{\prime \prime }, \mathsf {x}_2^{\prime \prime }, \ldots , \mathsf {x}_m^{\prime \prime })$ through a spatial and index shift of $\mathsf {Y}^{\prime }$ and $\mathsf {Y}^{\prime \prime }$, respectively; specifically, for each j and t, set
$$ \begin{align*} \mathsf{x}_j^{\prime} (t) = \mathsf{y}_{\mathsf{a}^{\prime} + j - m }^{\prime} (t + t_0^{\prime} n) - x_0^{\prime} n + 3 n^{1/3 - \delta}; \qquad \mathsf{x}_j^{\prime\prime} (t) = \mathsf{y}_{\mathsf{a}^{\prime\prime} + j - m}^{\prime\prime} (t + t_0^{\prime\prime} n) - x_0^{\prime\prime} n - 3 n^{1/3 - \delta}. \end{align*} $$
Given this notation, we will first use Lemma 3.15 to bound the ensemble $\mathsf {X}$ between $\mathsf {X}'$ and $\mathsf {X}^{\prime \prime }$; see Figure 8. Then, we will apply Proposition 3.13 to show that $\mathsf {X}'$ and $\mathsf {X}^{\prime \prime }$ converge to the same Airy line ensemble under the normalization (3.13). To implement the former, define the sequences $\mathsf {d}', \mathsf {e}', \mathsf {d}^{\prime \prime }, \mathsf {e}^{\prime \prime } \in \mathbb {Z}^m$ by
$$ \begin{align*} & \mathsf{d}' = \big( \mathsf{x}_1' (-\mathsf{T}), \mathsf{x}_2' (-\mathsf{T}), \ldots , \mathsf{x}_m' (-\mathsf{T}) \big); \quad \mathsf{e}' = \big( \mathsf{x}_1' (\mathsf{T}), \mathsf{x}_2' (\mathsf{T}), \ldots , \mathsf{x}_m' (\mathsf{T}) \big); \\ & \mathsf{d}^{\prime\prime} = \big( \mathsf{x}_1^{\prime\prime} (-\mathsf{T}), \mathsf{x}_2^{\prime\prime} (-\mathsf{T}), \ldots , \mathsf{x}_m^{\prime\prime} (-\mathsf{T}) \big); \quad \mathsf{e}^{\prime\prime} = \big( \mathsf{x}_1^{\prime\prime} (\mathsf{T}), \mathsf{x}_2^{\prime\prime} (\mathsf{T}), \ldots , \mathsf{x}_m^{\prime\prime} (\mathsf{T}) \big), \end{align*} $$

Figure 8 Shown above are trajectories of the paths $\mathsf {x}_j'' \leq \mathsf {x}_j \leq \mathsf {x}_j'$ in the proof of Proposition 3.18; they approximately coincide in the shaded region.
and the functions $\mathsf {f}', \mathsf {f}^{\prime \prime } : [-\mathsf {T}, \mathsf {T}] \rightarrow \mathbb {Z}$ by
$$ \begin{align*} \mathsf{f}' (t) = \mathsf{x}_0' (t); \qquad \mathsf{f}^{\prime\prime} (t) = \mathsf{x}_0^{\prime\prime} (t). \end{align*} $$
Then, by the Gibbs property described in Remark 3.17, $\mathsf {X}'$ is a uniformly random nonintersecting Bernoulli walk ensemble with entrance data $\mathsf {d}'$, exit data $\mathsf {e}'$ and boundary conditions $(\mathsf {x}_0'; \infty )$; a similar statement holds for $\mathsf {X}^{\prime \prime }$. Let us show with overwhelming probability that
$$ \begin{align} \mathsf{d}^{\prime\prime} \leq \mathsf{d} \leq \mathsf{d}'; \qquad \mathsf{e}^{\prime\prime} \leq \mathsf{e} \leq \mathsf{e}'; \qquad \mathsf{f}^{\prime\prime} \leq \mathsf{f} \leq \mathsf{f}'. \end{align} $$
To verify the first statement of Equation (3.14), observe for any $j \in [1, m]$ that Equations (3.11) and (3.12) imply
$$ \begin{align} \mathsf{d}_j \leq \mathfrak{q} n^{-1} \mathsf{T}^2 - \mathfrak{l} \mathsf{T} + 5 (\mathfrak{s}^2 + 1) n^{1/3 + 10 \delta} \leq \mathfrak{q}' n^{-1} \mathsf{T}^2 - \mathfrak{l} \mathsf{T} - 5 (\mathfrak{s}^2 + 1) n^{1/3 + 10 \delta}. \end{align} $$
To deduce the first bound, we used the facts that $n^{1/3 - \delta } \leq \mathfrak {s}^{-3/2} K_0 \leq 3 m^{2/3} n^{1/3} \leq n^{1/3 + 10 \delta }$ since $m = \lfloor n^{10 \delta } \rfloor $; to deduce the second, we used the facts that $\mathfrak {q}' = \kappa \mathfrak {q} = (1 + n^{-20 \delta }) \mathfrak {q}$ and $\mathsf {T} = \lfloor n^{2/3 + 20 \delta } \rfloor $.
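For completeness, the second inequality in Equation (3.15) is an elementary consequence of these choices of $\kappa $ and $\mathsf {T}$: since $\mathfrak {q}$ and $\mathfrak {s}$ are bounded away from $0$ and $\infty $, for $n$ sufficiently large we have
$$ \begin{align*} (\mathfrak{q}' - \mathfrak{q}) n^{-1} \mathsf{T}^2 = n^{-20 \delta} \mathfrak{q} n^{-1} \lfloor n^{2/3 + 20 \delta} \rfloor^2 \geq \frac{\mathfrak{q}}{2} n^{1/3 + 20 \delta} \geq 10 (\mathfrak{s}^2 + 1) n^{1/3 + 10 \delta}. \end{align*} $$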
Moreover, Corollary 3.11 (applied with the $\mathsf {P}$ there equal to $\mathsf {P}'$ here) implies with overwhelming probability that
$$ \begin{align} \mathsf{x}_{m - j}^{\prime} (-\mathsf{T}) = \mathsf{y}_{\mathsf{a}^{\prime} - j}^{\prime} (t_0^{\prime} n - \mathsf{T}) - x_0^{\prime} n + 3 n^{1/3 - \delta} \geq \mathfrak{q}^{\prime} n^{-1} \mathsf{T}^2 - \mathfrak{l} \mathsf{T} - 5 (\mathfrak{s}^2 + 1) n^{1/3 + 10 \delta}, \end{align} $$
where we have again used the facts that $n^{1/3 - \delta } \leq \mathfrak {s}^{-3/2} K_0 \leq 3 m^{2/3} n^{1/3} \leq n^{1/3 + 10 \delta }$. Combining Equations (3.15) and (3.16), it follows that $\mathsf {d} \leq \mathsf {d}^{\prime }$. The proof that $\mathsf {d}^{\prime \prime } \leq \mathsf {d}$ is entirely analogous, thereby establishing the first statement of Equation (3.14); the second is shown similarly.
The third statement of Equation (3.14) follows from the fact that for any $t \in [-\mathsf {T}, \mathsf {T}]$ we have
$$ \begin{align} \mathsf{f}^{\prime} (t) = \mathsf{x}_0^{\prime} (t) & = \mathsf{y}_{\mathsf{a}^{\prime}- m}^{\prime} (t + t_0^{\prime} n) - x_0^{\prime} n + 3 n^{1/3 - \delta} \nonumber\\ & \geq \mathfrak{l} t + \mathfrak{q}^{\prime} n^{-1} t^2 - K_0^{\prime} - m^{-1/3} n^{1/3 + \delta} + 3 n^{1/3 - \delta} \\ & \nonumber\geq \mathsf{f} (t) + K_0 - K_0^{\prime} + (\mathfrak{q}^{\prime} - \mathfrak{q}) n^{-1} t^2 + n^{1/3 - \delta} \geq \mathsf{f} (t), \end{align} $$
where we have set
$$ \begin{align*} K_0^{\prime} = \mathfrak{s}^{\prime3/2} n^{1/3} \Big( \displaystyle\frac{3 \pi m}{2} \Big)^{2/3} = \kappa^{-1/2} K_0, \end{align*} $$
and we have denoted
$$ \begin{align} \mathfrak{s}^{\prime} = \Big| \displaystyle\frac{\mathfrak{l}^{2/3} (1 - \mathfrak{l})^{2/3}}{4^{1/3} \mathfrak{q}^{\prime1/3}} \Big|; \quad \mathfrak{r}^{\prime} = \Big| \displaystyle\frac{\mathfrak{l}^{1/3} (1 - \mathfrak{l})^{1/3}}{2^{1/3} \mathfrak{q}^{\prime1/3}} \Big|. \end{align} $$
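For clarity, the identity $K_0^{\prime } = \kappa ^{-1/2} K_0$ recorded above follows from Equation (3.18), assuming (as that equation suggests) that Equation (2.9) defines $\mathfrak {s}$ and $\mathfrak {r}$ by the same formulas with $\mathfrak {q}$ in place of $\mathfrak {q}'$: since $\mathfrak {q}' = \kappa \mathfrak {q}$,
$$ \begin{align*} \mathfrak{s}^{\prime} = \Big| \displaystyle\frac{\mathfrak{l}^{2/3} (1 - \mathfrak{l})^{2/3}}{4^{1/3} (\kappa \mathfrak{q})^{1/3}} \Big| = \kappa^{-1/3} \mathfrak{s}; \qquad \mathfrak{r}^{\prime} = \kappa^{-1/3} \mathfrak{r}; \qquad K_0^{\prime} = \mathfrak{s}^{\prime3/2} n^{1/3} \Big( \displaystyle\frac{3 \pi m}{2} \Big)^{2/3} = \kappa^{-1/2} K_0. \end{align*} $$
In particular, $\mathfrak {s}' = \mathfrak {s} \big ( 1 + \mathrm {o}(1) \big )$ and $\mathfrak {r}' = \mathfrak {r} \big ( 1 + \mathrm {o}(1) \big )$, a fact used again at the end of the proof.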
The first and second statements of Equation (3.17) follow from the definitions of $\mathsf {f}^{\prime }$ and $\mathsf {x}^{\prime }$, the third from Corollary 3.11 (applied with j there equal to m here), the fourth from Equation (3.11) and the definition $m = \lfloor n^{10 \delta } \rfloor $ and the fifth from the facts that $\mathfrak {q}^{\prime } = \kappa \mathfrak {q} \geq \mathfrak {q}$ and that
$$ \begin{align*} |K_0 - K_0^{\prime}| \leq |1 - \kappa^{-1/2}| K_0 \leq n^{-10 \delta} K_0 \leq 5 \mathfrak{s}^{3/2} n^{1/3 -10 \delta} m^{2/3} = \mathrm{o}(n^{1/3 - \delta}), \end{align*} $$
as $m \leq n^{10 \delta }$. Hence, $\mathsf {f} \leq \mathsf {f}^{\prime }$; similarly, $\mathsf {f}^{\prime \prime } \leq \mathsf {f}$. This verifies that Equation (3.14) holds with overwhelming probability. So, it follows from Lemma 3.15 that there exists a coupling of $(\mathsf {X}, \mathsf {X}^{\prime }, \mathsf {X}^{\prime \prime })$ such that $\mathsf {x}_j^{\prime \prime } \leq \mathsf {x}_j \leq \mathsf {x}_j^{\prime }$ holds for each $j \in [1, m]$, with overwhelming probability.
Define normalizations of these Bernoulli walk ensembles, denoted $\mathcal {X}_n^{\prime } = (\mathsf {X}_1^{\prime }, \mathsf {X}_2^{\prime }, \ldots , \mathsf {X}_m^{\prime })$, $\mathcal {X}_n^{\prime \prime } = (\mathsf {X}_1^{\prime \prime }, \mathsf {X}_2^{\prime \prime }, \ldots , \mathsf {X}_m^{\prime \prime })$, $\mathcal {W}_n' = (\mathsf {W}_1', \mathsf {W}_2', \ldots , \mathsf {W}_m')$ and $\mathcal {W}_n^{\prime \prime } = (\mathsf {W}_1^{\prime \prime }, \mathsf {W}_2^{\prime \prime }, \ldots , \mathsf {W}_m^{\prime \prime })$ by (recall the notation from Equation (3.18)) setting
$$ \begin{align*}  \mathsf{X}_i^{\prime} (t) &= \mathfrak{s}^{-1} n^{-1/3} \Big( \mathsf{x}_{m - i + 1}^{\prime} (\mathfrak{r} n^{2/3} t) - \mathfrak{l} n^{2/3} t\Big); \qquad \mathsf{X}_i^{\prime\prime} (t) = \mathfrak{s}^{-1} n^{-1/3} \Big( \mathsf{x}_{m - i + 1}^{\prime\prime} (\mathfrak{r} n^{2/3} t) - \mathfrak{l} n^{2/3} t \Big); \\  \mathsf{W}_i^{\prime} (t) &= \mathfrak{s}^{\prime-1} {n^{-1/3}} \Big( \mathsf{x}_{m - i + 1}^{\prime} (\mathfrak{r}^{\prime} {n^{2/3}} t) - {\mathfrak{l}} {n^{2/3}} t\Big); \qquad \mathsf{W}_i^{\prime\prime} (t) = \mathfrak{s}^{\prime\prime-1} {n^{-1/3}} \Big( \mathsf{x}_{m - i + 1}^{\prime\prime} (\mathfrak{r}^{\prime\prime} {n^{2/3}} t) - \mathfrak{l} {n^{2/3}} t \Big). \end{align*} $$
Then, Proposition 3.13 implies that $\mathcal {W}_n^{\prime }$ and $\mathcal {W}_n^{\prime \prime }$ converge to $\mathcal {R}$, uniformly on compact subsets of $\mathbb {Z}_{> 0} \times \mathbb {R}$, as n tends to $\infty $. Since Equation (3.18) and the facts that $(\mathfrak {q}', \mathfrak {q}^{\prime \prime }) = (\kappa \mathfrak {q}, \nu \mathfrak {q})$ and $\kappa - 1 = 1 - \nu = n^{-20 \delta } = \mathrm {o}(1)$ imply $\mathfrak {s}' = \mathfrak {s} \big ( 1 + \mathrm {o}(1) \big )$, $\mathfrak {r}' = \mathfrak {r} \big ( 1 + \mathrm {o}(1) \big )$, $\mathfrak {s}^{\prime \prime } = \mathfrak {s} \big ( 1 + \mathrm {o}(1) \big )$ and $\mathfrak {r}^{\prime \prime } = \mathfrak {r} \big ( 1 + \mathrm {o}(1) \big )$, we deduce that $\big | \mathsf {W}_j^{\prime } (t) - \mathsf {X}_j' (t) \big | = \mathrm {o}(1)$ and $\big | \mathsf {W}_j^{\prime \prime } (t) - \mathsf {X}_j^{\prime \prime } (t) \big | = \mathrm {o}(1)$ for each $j \in [1, m]$, uniformly on compact subsets of $\mathbb {Z}_{> 0} \times \mathbb {R}$. Hence, $\mathcal {X}_n'$ and $\mathcal {X}_n^{\prime \prime }$ both converge to $\mathcal {R}$. Since $\mathsf {x}_j^{\prime \prime } \leq \mathsf {x}_j \leq \mathsf {x}_j^{\prime }$, it follows that $\mathsf {X}_j^{\prime \prime } \leq \mathsf {X}_j \leq \mathsf {X}_j'$, and thus the same convergence holds for $\mathcal {X}_n$.
We can now establish Theorem 2.10.
Proof of Theorem 2.10.
This will follow from Proposition 3.18. Fix a real number $\delta \in \big ( 0, 1/100 \big )$, and define $m = n^{10 \delta }$ and $\mathsf {T} = n^{2/3 + 20 \delta }$ (as in Proposition 3.18), which we assume for notational convenience are integers. Define the nonintersecting Bernoulli walk ensemble $\mathsf {Y} = (\mathsf {y}_1, \mathsf {y}_2, \ldots , \mathsf {y}_m)$ by
$$ \begin{align} \mathsf{y}_i (t) = \mathsf{x}_{K + i - m} (t_0 n + t) - x_0 n. \end{align} $$
Denoting the sequences $\mathsf {d}, \mathsf {e} \in \mathbb {Z}^m$ and function $\mathsf {f} : [-\mathsf {T}, \mathsf {T}] \rightarrow \mathbb {Z}$ by
$$ \begin{align*} \mathsf{d} = \big( \mathsf{y}_1 (- \mathsf{T}), \mathsf{y}_2 (-\mathsf{T}), \ldots , \mathsf{y}_m (-\mathsf{T}) \big); \qquad \mathsf{e} = \big( \mathsf{y}_1 (\mathsf{T}), \mathsf{y}_2 (\mathsf{T}), \ldots , \mathsf{y}_m (\mathsf{T}) \big); \qquad \mathsf{f} (t) = \mathsf{x}_{K - m} (t_0 n + t) - x_0 n, \end{align*} $$
it follows from the Gibbs property described in Remark 3.17 that $\mathsf {Y}$ is a uniformly random nonintersecting Bernoulli walk ensemble with entrance and exit data $(\mathsf {d}; \mathsf {e})$ and boundary conditions $(\mathsf {f}; \infty )$. Let us verify that $(\mathsf {d}, \mathsf {e}, \mathsf {f})$ satisfy Equations (3.12) and (3.11).
To that end, since $m = n^{10 \delta }$, the $j = m$ case of Corollary 3.11 implies with overwhelming probability that
$$ \begin{align} \displaystyle\max_{s \in [-\mathsf{T}, \mathsf{T}]} \bigg| \mathsf{f} (s) - \mathfrak{l} s - \mathfrak{q} n^{-1} s^2 + \mathfrak{s}^{3/2} \Big( \displaystyle\frac{3 \pi m}{2} \Big)^{2/3} n^{1/3} \bigg| < m^{-1/3} n^{1/3 + \delta} < n^{1/3 - \delta}, \end{align} $$
and so $\mathsf {f}$ satisfies Equation (3.11). Applying Corollary 3.11 with $j \in [0, m - 1]$ and also using Equation (3.20) yields
$$ \begin{align*} \big| \mathsf{y}_1 (-\mathsf{T}) - \mathsf{f} (-\mathsf{T}) \big| \leq 2 \mathfrak{s}^{3/2} \Big( \displaystyle\frac{3 \pi m}{2} \Big)^{2/3} n^{1/3} + 2n^{1/3} \leq n^{1/3 + 10 \delta}, \end{align*} $$
where to deduce the last inequality we used the fact that $m = n^{10 \delta }$. This verifies that $\mathsf {d}$ satisfies Equation (3.12) with overwhelming probability; the proof that $\mathsf {e}$ does as well is very similar and thus omitted.
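For completeness, the last inequality above amounts to the elementary bound (using that $\mathfrak {s}$ is bounded above, that $m = n^{10 \delta }$ and that $20 \delta / 3 < 10 \delta $)
$$ \begin{align*} 2 \mathfrak{s}^{3/2} \Big( \displaystyle\frac{3 \pi}{2} \Big)^{2/3} n^{1/3 + 20 \delta / 3} + 2 n^{1/3} = \mathrm{O} \big( n^{1/3 + 20 \delta / 3} \big) \leq n^{1/3 + 10 \delta}, \end{align*} $$
valid for $n$ sufficiently large.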
Hence, Proposition 3.18 applies and gives that $\mathcal {Y}_n = (\mathsf {Y}_1, \mathsf {Y}_2, \ldots , \mathsf {Y}_m)$ converges to $\mathcal {R}$, uniformly on compact subsets of $\mathbb {Z}_{> 0} \times \mathbb {R}$, as n tends to $\infty $, where
$$ \begin{align*} \mathsf{Y}_i (t) = \mathfrak{s}^{-1} n^{-1/3} \Big( \mathsf{y}_{m - i + 1} (\mathfrak{r} n^{2/3} t) - \mathfrak{l} n^{2/3} t \Big). \end{align*} $$
By Equation (3.19), $\mathsf {Y}_i (t) = \mathsf {X}_i (t)$, meaning that $\mathcal {Y}_n = \mathcal {X}_n$, so the same convergence holds for $\mathcal {X}_n$.
4 Mixing and concentration bounds
By the content in Section 3.5, it remains to establish Theorem 3.10. In this section, we collect several miscellaneous results that will be used in its proof, to appear in Section 6 below. More specifically, in Section 4.1 we state a preliminary concentration estimate for a class of tilings whose arctic boundaries are constrained to only have one cusp (in addition to other, less essential conditions); in Section 4.2, we state a mixing time bound for certain dynamics on the set of tilings, which we prove in Section 4.3.
4.1 Preliminary concentration estimate
In this section, we state a concentration estimate for tiling height functions on ‘double-sided trapezoid domains’ (one may also view these as tilings of a strip). These domains are different from the ones considered in earlier works, such as [Reference PetrovPet14, Reference PetrovPet15, Reference Duse and MetcalfeDM18], since they will accommodate nonfrozen boundary conditions along both their north and south edges (instead of only their south ones).
Throughout this section, we fix real numbers $\mathfrak {t}_1 < \mathfrak {t}_2$ and denote $\mathfrak {t} = \mathfrak {t}_2 - \mathfrak {t}_1$. We fix linear functions $\mathfrak {a}, \mathfrak {b} : [\mathfrak {t}_1, \mathfrak {t}_2] \rightarrow \mathbb {R}$ with $\mathfrak {a}' (s), \mathfrak {b}' (s) \in \{ 0, 1 \}$ such that $\mathfrak {a} (s) \leq \mathfrak {b} (s)$ for each $s \in [\mathfrak {t}_1, \mathfrak {t}_2]$. Define the trapezoid domain
$$ \begin{align} \mathfrak{D} = \mathfrak{D} (\mathfrak{a}, \mathfrak{b}; \mathfrak{t}_1, \mathfrak{t}_2) = \big\{ (x, t) \in \mathbb{R} \times (\mathfrak{t}_1, \mathfrak{t}_2) : \mathfrak{a} (t) < x < \mathfrak{b} (t) \big\}, \end{align} $$
and denote its four boundaries by
$$ \begin{align} &\partial_{\operatorname{\mathrm{so}}} (\mathfrak{D}) = \big\{ (x, t) \in \overline{\mathfrak{D}}: t = \mathfrak{t}_1 \big\}; \qquad \quad \partial_{\operatorname{\mathrm{no}}} (\mathfrak{D}) = \big\{ (x, t) \in \overline{\mathfrak{D}}: t = \mathfrak{t}_2 \big\}; \nonumber\\ & \partial_{\operatorname{\mathrm{we}}} (\mathfrak{D}) = \big\{ (x, t) \in \overline{\mathfrak{D}}: x = \mathfrak{a} (t) \big\}; \qquad \partial_{\operatorname{\mathrm{ea}}} (\mathfrak{D}) = \big\{ (x, t) \in \overline{\mathfrak{D}}: x = \mathfrak{b} (t) \big\}. \end{align} $$
We refer to Figure 9 for a depiction.
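As a concrete illustration (an example of ours, not one taken from the references), the choices
$$ \begin{align*} \mathfrak{t}_1 = 0; \qquad \mathfrak{t}_2 = 1; \qquad \mathfrak{a} (s) = 0; \qquad \mathfrak{b} (s) = 1 + s \end{align*} $$
satisfy the above constraints (here $\mathfrak {a}' \equiv 0$, $\mathfrak {b}' \equiv 1$ and $\mathfrak {a} \leq \mathfrak {b}$ on $[0, 1]$) and yield the trapezoid $\mathfrak {D} = \big \{ (x, t) \in \mathbb {R} \times (0, 1) : 0 < x < 1 + t \big \}$, which is of one of the four types depicted in Figure 9.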

Figure 9 Shown above are the four possibilities for $\mathfrak {D}$.
Let $h: \partial \mathfrak {D} \rightarrow \mathbb {R}$ denote a function admitting an admissible extension to $\mathfrak {D}$. We assume throughout that h is constant along both $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$ and $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$. Let $H^* \in \operatorname {\mathrm {Adm}} (\mathfrak {D}; h)$ denote the maximizer of $\mathcal {E}$ from Equation (2.3), as in Equation (2.4), and let the liquid region $\mathfrak {L} = \mathfrak {L} (\mathfrak {D}; h)$ and arctic boundary $\mathfrak {A} = \mathfrak {A} (\mathfrak {D}; h)$ be as in Equation (2.6). Recall that a point on $\mathfrak {A}$ is a tangency location if the tangent line to $\mathfrak {A}$ through it has slope in $\{ 0, 1, \infty \}$.
We may then define the complex slope $f: \mathfrak {L} \rightarrow \mathbb {H}^-$ as in Equation (3.1), which upon denoting $f_t (x) = f (x, t)$ satisfies the complex Burgers equation (3.2). Further let $\mathfrak {L}_{\operatorname {\mathrm {no}}} = \mathfrak {L}_{\operatorname {\mathrm {no}}} (\mathfrak {D}; h)$ denote the interior of $\overline {\mathfrak {L}} \cap \partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$, and let $\mathfrak {L}_{\operatorname {\mathrm {so}}} = \mathfrak {L}_{\operatorname {\mathrm {so}}} (\mathfrak {D}; h)$ denote the interior of $\overline {\mathfrak {L}} \cap \partial _{\operatorname {\mathrm {so}}} (\mathfrak {D})$; these are the extensions of the liquid region $\mathfrak {L}$ to the north and south boundaries of $\mathfrak {D}$. For all $t \in (\mathfrak {t}_1, \mathfrak {t}_2)$, we define slices of the liquid region (along the horizontal line $y = t$) by
$$ \begin{align*} I_t = \big\{ x : (x, t) \in \overline{\mathfrak{L}} \big\}; \qquad I_{\mathfrak{t}_1} = \overline{\mathfrak{L}_{\operatorname{\mathrm{so}}}}; \qquad I_{\mathfrak{t}_2} = \overline{\mathfrak{L}_{\operatorname{\mathrm{no}}}}. \end{align*} $$
For any real number $\delta> 0$, we define the augmented variant of $\mathfrak {L}$ (as in Definition 3.9) by
$$ \begin{align*} \mathfrak{L}_+^{\delta} (\mathfrak{D}) = \mathfrak{L}_+^{\delta} (\mathfrak{D}; h) = \mathfrak{L} \cup \bigcup_{u \in \mathfrak{A}} \mathfrak{B} (u; n^{\delta - 2/3}). \end{align*} $$
Next, let us formulate certain conditions on the limit shape $H^*$. For any $\mathfrak {t}' \geq \mathfrak {t}_2$, we say that $H^*$ can be extended to time $\mathfrak {t}'$ if there exists a simply connected, open subset $\widetilde {\mathfrak {L}} \subset \mathbb {R}^2$ containing $\mathfrak {L}$ such that the set $\big \{ x: (x, t) \in \widetilde {\mathfrak {L}} \big \}$ is nonempty and connected for each $t \in [\mathfrak {t}_2, \mathfrak {t}']$, and there exists an extension $\widetilde {f}: \widetilde {\mathfrak {L}} \rightarrow \mathbb {H}^-$ of $f_t (x)$ satisfying the complex Burgers equation (3.2). In this case, $\mathfrak {L}_{\operatorname {\mathrm {no}}} = I_{\mathfrak {t}_2}$ is a single interval. We also call $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$ packed (with respect to h) if $\partial _x h (v) = 1$ for each $v \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$; in this case, $\overline {\mathfrak {L}} \cap \partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$ constitutes at most a single point, and so $\mathfrak {L}_{\operatorname {\mathrm {no}}}$ is empty. We refer to Figure 10 for a depiction.

Figure 10 Shown to the left is an example of a limit shape admitting an extension to time $\mathfrak {t}'> \mathfrak {t}_2$; shown to the right is a liquid region that is packed with respect to h.
Now, let $n \geq 1$ be an integer; denote $\mathsf {t}_1 = \mathfrak {t}_1 n$ and $\mathsf {t}_2 = \mathfrak {t}_2 n$. Suppose that $\mathsf {D} = n \mathfrak {D} \subset \mathbb {T}$ and that $\mathsf {t}_1, \mathsf {t}_2 \in \mathbb {Z}_{> 0}$. Let $\mathsf {h}: \partial \mathsf {D} \rightarrow \mathbb {Z}$ denote a boundary height function. We next stipulate the following assumption on the continuum limit shape $H^*$. Here, we fix a real number $\delta> 0$ and a (large) positive integer n. Below, we view the quantities $\mathfrak {t}_1 < \mathfrak {t}_2 < \mathfrak {t}'$; functions $\mathfrak {a}, \mathfrak {b}$; and polygonal domain $\mathfrak {P}$ as independent of n. In what follows, a horizontal tangency location of $\partial \mathfrak {L}$ is a tangency location on $\mathfrak {A}$ at which the tangent line is horizontal (parallel to the x-axis).
Assumption 4.1. Assume the following constraints hold.
- (1) The boundary height function h is constant along both $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ and $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$.
- (2) Either $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$ is packed with respect to h or there exists $\mathfrak {t}'> \mathfrak {t}_2$ such that $H^*$ admits an extension to time $\mathfrak {t}'$.
- (3) There exists $\widetilde {\mathfrak {t}} \in [\mathfrak {t}_1, \mathfrak {t}_2)$ such that the following holds. For $t \in [\mathfrak {t}_1, \widetilde {\mathfrak {t}})$, the set $I_t$ consists of two nonempty disjoint intervals, and for $t \in [\widetilde {\mathfrak {t}}, \mathfrak {t}_2]$, the set $I_t$ consists of one nonempty interval.Footnote 8
- (4) Any tangency location along $\partial \overline {\mathfrak {L}}$ is of the form $\min I_t$ or $\max I_t$, for some $t \in (\mathfrak {t}_1, \mathfrak {t}_2)$. At most one tangency location is of the form $\min I_t$, and at most one is of the form $\max I_t$.
- (5) There exists an algebraic curve Q such that, for any $(x, t) \in \overline {\mathfrak {L}}$, we have
$$ \begin{align*} Q \bigg( f_t (x), x - \displaystyle\frac{t f_t (x)}{f_t (x) + 1} \bigg) = 0. \end{align*} $$
Furthermore, the curve Q ‘approximately comes from a polygonal domain’ in the following sense. There exists a polygonal domain $\mathfrak {P}$ satisfying Assumption 2.8 with liquid region $\mathfrak {L} (\mathfrak {P})$ and a real number $\alpha = \alpha _n \in \mathbb {R}$ with $|\alpha - 1| < n^{-\delta }$ such that, if $Q_{\mathfrak {L} (\mathfrak {P})}$ is the algebraic curve associated with $\mathfrak {L} (\mathfrak {P})$ from Proposition 3.3, then
$$ \begin{align*} Q (u, v) = \displaystyle\frac{u + 1}{\alpha^{-1} u + 1} Q_{\mathfrak{L} (\mathfrak{P})} (\alpha^{-1} u, v). \end{align*} $$
Let us briefly comment on these constraints. The first guarantees that the associated tiling is one of a trapezoid, as depicted in Figure 9. The second and third guarantee that the arctic boundary for the tiling has only one cusp. The fourth implies that there are at most two tangency locations along the arctic boundary (and that they lie along the leftmost and rightmost components of the arctic curve, if they exist); the fifth implies that the limit shape for the tiling is part of (an explicit perturbation of) one given by a polygonal domain. The last two conditions could in principle be weakened; we impose them since doing so will substantially simplify notation in the proofs later.
The next assumption indicates how the tiling boundary data $\mathsf {h}$ approximate h along $\partial \mathfrak {D}$.
Assumption 4.2. Adopt Assumption 4.1, and assume the following on how $\mathsf {h}$ converges to h.
- (1) For each $v \in \partial \mathfrak {D}$, we have $\big | \mathsf {h} (nv) - n h(v) \big | < n^{\delta / 2}$.
- (2) For each $v \in \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$, and each $v \in \partial _{\operatorname {\mathrm {no}}} \mathfrak {D}\cup \partial _{\operatorname {\mathrm {so}}} \mathfrak {D}$ such that $v \notin \mathfrak {L}_+^{\delta / 2} (\mathfrak {D})$, we have $\mathsf {h} (nv) = n h(v)$.
The first assumption states that $\mathsf {h}$ approximates its limit shape, and the second states that it coincides with its limit shape in the frozen region.Footnote 9 Recall that $\mathscr {G} (\mathsf {h})$ denotes the set of height functions on $\mathsf {D}$ with boundary data $\mathsf {h}$. We can now state the following concentration estimate for a uniformly random element of $\mathscr {G} (\mathsf {h})$ from part I of this series [Reference HuangHua24]. In particular, the below result appears as [Reference HuangHua24, Theorem 2.5], where Assumption 2.3 there is verified by Assumption 4.1 and [Reference HuangHua24, Proposition A.4].
Theorem 4.3 [Reference HuangHua24, Theorem 2.5].
There exists a constant $\mathfrak {c} = \mathfrak {c} (\mathfrak {P})> 0$ such that the following holds. Adopt Assumption 4.1 and Assumption 4.2, and further assume that $\mathfrak {t}_2 - \mathfrak {t}_1 < \mathfrak {c}$. Let $\mathsf {H} : \mathsf {D} \rightarrow \mathbb {Z}$ denote a uniformly random element of $\mathscr {G} (\mathsf {h})$. Then, the following two statements hold with overwhelming probability.
- (1) We have $\big | \mathsf {H} (nu) - n H^* (u) \big | < n^{\delta }$, for any $u \in \mathfrak {D}$.
- (2) For any $u \in \mathfrak {D} \setminus \mathfrak {L}_+^{\delta } (\mathfrak {D})$, we have $\mathsf {H} (nu) =n H^* (u)$.
Remark 4.4. Recall from Section 2.2 that the height function $\mathsf {H}$ from Theorem 4.3 can equivalently be interpreted as a family of nonintersecting Bernoulli walks on $\mathsf {D}$. The fact that h is constant along both $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ and $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$ implies that $\mathsf {h}$ is constant along the east and west boundaries of $\mathsf {D}$. This is equivalent to not imposing any left or right boundary constraints $(\mathsf {f}; \mathsf {g})$ for these nonintersecting random Bernoulli walks (in the sense described in Section 3.4).
4.2 Mixing time estimates
As mentioned in Section 1, we will establish the general concentration estimate Theorem 3.10 by decomposing our domain into a bounded number of subregions that each satisfy Assumption 4.1. We introduce a Markov chain, called the alternating dynamics, that uniformly resamples the tiling in one subregion and leaves it fixed in the others. At each step of these dynamics, we show that our concentration bound is preserved with overwhelming probability, by using the preliminary concentration estimate Theorem 4.3. Theorem 3.10 would then follow by running the alternating dynamics until they mix.
A point of caution here is that Theorem 4.3 is not deterministic; there is some subpolynomially small probability that the preliminary concentration estimate does not hold. So, if the number of steps required for the alternating dynamics to mix were sufficiently (superpolynomially) large, then in principle the concentration estimate could be lost at some point in these dynamics. In this section, we will indicate that this does not happen; after introducing the alternating dynamics (Definition 4.5), we will state that they mix in polynomial time (Proposition 4.6 below).
In what follows, we fix a domain $\mathsf {R} \subset \mathbb {T}$ and a boundary height function $\mathsf {h}: \partial \mathsf {R} \rightarrow \mathbb {Z}$. Let us introduce the following Markov dynamics on $\mathscr {G} (\mathsf {h})$ that, given a certain decomposition of $\mathsf {R}$ as a union $\mathsf {R} = \bigcup _{i = 1}^k \mathsf {R}_i$ of domains, ‘alternate’ between uniformly resampling $\mathsf {H} \in \mathscr {G} (\mathsf {h})$ on each of the $\mathsf {R}_i$.
Definition 4.5. Fix an integer $k \geq 1$, a domain $\mathsf {R} \subset \mathbb {T}$, and a boundary height function $\mathsf {h}: \partial \mathsf {R} \rightarrow \mathbb {Z}$. Suppose that $\mathsf {R}\setminus \partial \mathsf {R}$ and $\mathscr {G} (\mathsf {h})$ are both nonempty and that $k \le \operatorname {\mathrm {diam}} \mathsf {R}$. Let $\mathsf {R}_1, \mathsf {R}_2, \ldots , \mathsf {R}_k \subseteq \mathsf {R}$ denote domains such that $\mathsf {R} = \bigcup _{i = 1}^k \mathsf {R}_i$ and such that any interior vertex of $\mathsf {R}$ is an interior vertex of some $\mathsf {R}_i$, that is, for each $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$, there exists $i \in [1, k]$ for which $\mathsf {v} \in \mathsf {R}_i \setminus \partial \mathsf {R}_i$. The alternating dynamics on $\mathsf {R}$ with respect to $(\mathsf {R}_1, \mathsf {R}_2, \ldots , \mathsf {R}_k)$, denoted by $M_{\operatorname {\mathrm {alt}}}$, is the discrete-time Markov chain on $\mathscr {G} (\mathsf {h})$, whose state $\mathsf {H}_{t+1} \in \mathscr {G} (\mathsf {h})$ at any time $t+1 \in \mathbb {Z}_{\geq 1}$ is defined from $\mathsf {H}_t$ as follows.
Let $i \in [1, k]$ denote the integer such that k divides $t - i + 1$, and let $\mathsf {h}_{t+1} = \mathsf {H}_t |_{\partial \mathsf {R}_i}$. Further, let $\mathsf {F}_{t+1} \in \mathscr {G} (\mathsf {h}_{t+1})$ denote a uniformly random height function on $\mathsf {R}_i$. Then, define $\mathsf {H}_{t+1}: \mathsf {R} \rightarrow \mathbb {Z}$ by setting $\mathsf {H}_{t+1} |_{\mathsf {R}_i} = \mathsf {F}_{t+1}$ and $\mathsf {H}_{t+1} |_{\mathsf {R} \setminus \mathsf {R}_i} = \mathsf {H}_t |_{\mathsf {R}\setminus \mathsf {R}_i}$.
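For orientation only, one update of these alternating dynamics can be summarized by the following schematic Python sketch; the dictionary encoding of height functions and the routine `sample_uniform` (an exact uniform sampler of height functions on a subregion with prescribed boundary data) are illustrative placeholders, not constructions from this paper.

```python
def alternating_step(H, regions, t, sample_uniform):
    """One update H_t -> H_{t+1} of the alternating dynamics M_alt.

    H              : dict {vertex: integer height}, the current state H_t on R
    regions        : list of (vertices, boundary) pairs describing R_1, ..., R_k
    t              : current time; the resampled region R_i satisfies i - 1 = t mod k
    sample_uniform : placeholder for an exact uniform sampler of height functions
                     on R_i with the prescribed boundary data
    """
    vertices, boundary = regions[t % len(regions)]   # the region R_i used at this step
    boundary_data = {v: H[v] for v in boundary}      # h_{t+1} = H_t restricted to the boundary of R_i
    F = sample_uniform(vertices, boundary_data)      # F_{t+1}: uniformly random extension on R_i
    H_next = dict(H)                                 # keep H_t on R \ R_i
    H_next.update(F)                                 # overwrite with F_{t+1} on R_i
    return H_next
```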
Observe (as is quickly verified by induction on $|\mathsf {R}|$) that the alternating dynamics are irreducible. Thus, they admit a unique stationary measure [Reference Levin, Peres and WilmerLPW09, Corollary 1.17], which is the uniform one on $\mathscr {G} (\mathsf {h})$.
We will bound the rate of convergence to stationarity for these alternating dynamics, so let us recall some notions concerning mixing times. Fix a discrete state space $\mathscr {S}$, and let $\mathscr {P} (\mathscr {S})$ denote the set of probability measures on $\mathscr {S}$. The total variation distance between two measures $\nu , \nu ' \in \mathscr {P} (\mathscr {S})$ is
$$ \begin{align*} d_{\operatorname{\mathrm{TV}}} (\nu, \nu') = \displaystyle\max_{\mathscr{A} \subseteq \mathscr{S}} \big| \nu (\mathscr{A}) - \nu' (\mathscr{A}) \big|. \end{align*} $$
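As a small aside, this maximum over subsets coincides with half of the $\ell^1$-distance between the two probability vectors, which the following short Python helper computes for measures on a finite state space (the dictionary encoding is purely for illustration).

```python
def tv_distance(nu, nu_prime):
    """Total variation distance between two probability measures on a finite set,
    encoded as dicts {state: probability}. The maximum of |nu(A) - nu'(A)| over
    subsets A equals half of the l1-distance between the two probability vectors."""
    states = set(nu) | set(nu_prime)
    return 0.5 * sum(abs(nu.get(s, 0.0) - nu_prime.get(s, 0.0)) for s in states)

# Example: the maximizing subset is A = {2}, giving total variation distance 0.5.
print(tv_distance({0: 0.5, 1: 0.5}, {0: 0.25, 1: 0.25, 2: 0.5}))
```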
In addition, fix an irreducible Markov chain $M: \mathscr {P} (\mathscr {S}) \rightarrow \mathscr {P} (\mathscr {S})$ on $\mathscr {S}$, whose unique stationary measure is denoted by $\rho $. For any real number $\varepsilon> 0$, the mixing time with respect to M is given by
$$ \begin{align} t_{\operatorname{\mathrm{mix}}} (\varepsilon; M) = \min \Big\{ t \in \mathbb{Z}_{\geq 0}: \displaystyle\max_{\nu \in \mathscr{P} (\mathscr{S})} d_{\operatorname{\mathrm{TV}}} (M^t \nu, \rho) \le \varepsilon \Big\}, \end{align} $$
which by [Reference Levin, Peres and WilmerLPW09, Exercise 4.3] (which is a quick consequence of [Reference Levin, Peres and WilmerLPW09, Proposition 4.7]) satisfies
 $$ \begin{align} d_{\operatorname{\mathrm{TV}}} (M^t \nu, \rho) \le d_{\operatorname{\mathrm{TV}}} (M^{t_{\operatorname{\mathrm{mix}}} (\varepsilon; M)} \nu, \rho) \le \varepsilon, \qquad \text{whenever}\ t \ge t_{\operatorname{\mathrm{mix}}} (\varepsilon; M). \end{align} $$
We now state the following (very coarse) estimate on the mixing time for the dynamics $M_{\operatorname {\mathrm {alt}}}$. Its proof will appear in Section 4.3 below.
Proposition 4.6. There exists a constant $C> 1$ such that the following holds. Adopt the notation of Definition 4.5, and set $A = (\operatorname {\mathrm {diam}} \mathsf {R})^2$. If $A \geq C$, then $t_{\operatorname {\mathrm {mix}}} (e^{-A}; M_{\operatorname {\mathrm {alt}}}) \leq A^{11}$.
4.3 Proof of Proposition 4.6
There are likely many ways of establishing Proposition 4.6; the proof below will proceed through a comparison between the alternating dynamics and the Glauber (‘flip’) dynamics. To define the latter, given a height function $\mathsf {H} : \mathsf {R} \rightarrow \mathbb {Z}$ and an interior vertex $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$, we say $\mathsf {v}$ is increasable with respect to $\mathsf {H}$ if the function $\mathsf {H}' : \mathsf {R} \rightarrow \mathbb {Z}$ defined by
$$ \begin{align*} \mathsf{H}' (\mathsf{v}) = \mathsf{H} (\mathsf{v}) + 1; \qquad \mathsf{H}' (\mathsf{u}) = \mathsf{H} (\mathsf{u}), \quad \text{for}\; \mathsf{u} \in \mathsf{R}\setminus \{ \mathsf{v} \}, \end{align*} $$
is a height function on $\mathsf {R}$. In this case, we say that $\mathsf {H}'$ is the unit increase of $\mathsf {H}$ at $\mathsf {v}$. We define decreasable vertices and unit decreases of $\mathsf {H}$ analogously. Observe that a vertex of $\mathsf {R}$ cannot simultaneously be increasable and decreasable with respect to a given height function $\mathsf {H}$ on $\mathsf {R}$.
Definition 4.7. Given a height function $\mathsf {H} \in \mathscr {G} (\mathsf {h})$ and an interior vertex $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$, the random flip of $\mathsf {H}$ at $\mathsf {v}$ is the random height function $\mathsf {H}' \in \mathscr {G} (\mathsf {h})$ defined as follows.
- • If $\mathsf {v}$ is neither increasable nor decreasable with respect to $\mathsf {H}$, then set $\mathsf {H}' = \mathsf {H}$.
- • If $\mathsf {v}$ is increasable with respect to $\mathsf {H}$, then with probability $\frac {1}{2}$ set $\mathsf {H}'$ to be the unit increase of $\mathsf {H}$ at $\mathsf {v}$. Otherwise, set $\mathsf {H}' = \mathsf {H}$.
- • If $\mathsf {v}$ is decreasable with respect to $\mathsf {H}$, then with probability $\frac {1}{2}$ set $\mathsf {H}'$ to be the unit decrease of $\mathsf {H}$ at $\mathsf {v}$. Otherwise, set $\mathsf {H}' = \mathsf {H}$.
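As an illustrative sketch (not the paper's formalism), a random flip can be coded as follows; the predicate `is_height_function`, which decides whether a candidate function still satisfies the height-function constraints on $\mathsf {R}$, is an assumed placeholder. One step of the flip dynamics defined next then simply applies `random_flip` at a uniformly random interior vertex.

```python
import random

def random_flip(H, v, is_height_function):
    """Random flip of a height function H at an interior vertex v.

    H                  : dict {vertex: integer height}
    v                  : interior vertex of R
    is_height_function : assumed placeholder predicate deciding whether a
                         candidate dict is a valid height function on R
    """
    up = dict(H); up[v] = H[v] + 1        # candidate unit increase at v
    down = dict(H); down[v] = H[v] - 1    # candidate unit decrease at v
    if is_height_function(up):            # v is increasable
        return up if random.random() < 0.5 else H
    if is_height_function(down):          # v is decreasable (cannot also be increasable)
        return down if random.random() < 0.5 else H
    return H                              # neither increasable nor decreasable
```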
Now, we define two Markov chains on $\mathscr {G} (\mathsf {h})$, the flip dynamics and the region-flip dynamics. Each update of either chain is obtained by applying a random flip to an interior vertex $\mathsf {v}$ of $\mathsf {R}$. In the flip dynamics, $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$ is selected uniformly at random; in the region-flip dynamics, $\mathsf {v}$ is selected uniformly at random from $\mathsf {R}_i \setminus \partial \mathsf {R}_i$, where i is determined from the time of the update.
Definition 4.8. The flip dynamics on $\mathsf {R}$, denoted by $M_{\operatorname {\mathrm {fl}}}$, is the discrete-time Markov chain on $\mathscr {G} (\mathsf {h})$ whose state $\mathsf {H}_{t+1}$ at time $t+1 \geq 1$ is defined from $\mathsf {H}_t$ as follows. Select an interior vertex $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$ uniformly at random, and set $\mathsf {H}_{t+1}$ to be the random flip of $\mathsf {H}_t$ at $\mathsf {v}$.
The region-flip dynamics on $\mathsf {R}$ with respect to $(\mathsf {R}_1, \mathsf {R}_2, \ldots , \mathsf {R}_k)$, denoted by $M_{\operatorname {\mathrm {rf}}}$, is the discrete-time Markov chain on $\mathscr {G} (\mathsf {h})$ whose state $\mathsf {H}_{t+1}$ at time $t+1 \geq 1$ is defined from $\mathsf {H}_t$ as follows. Let $i \in [1, k]$ denote the integer such that $(k t_0 + i - 1) A^5 < t+1 \leq (k t_0 + i) A^5$, for some $t_0 \in \mathbb {Z}_{\geq 0}$. Select a vertex $\mathsf {v} \in \mathsf {R}_i \setminus \partial \mathsf {R}_i$ uniformly at random, and set $\mathsf {H}_{t+1}$ to be the random flip of $\mathsf {H}_t$ at $\mathsf {v}$.
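For concreteness, under this schedule each subregion receives $A^5$ consecutive updates before the dynamics move on to the next one: with $k = 2$, the times $t + 1 \in [1, A^5]$ draw the vertex from $\mathsf {R}_1$, the times $t + 1 \in (A^5, 2A^5]$ draw it from $\mathsf {R}_2$, the times $t + 1 \in (2A^5, 3A^5]$ again from $\mathsf {R}_1$, and so on.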
We then have the following result from [Reference Randall and TetaliRT00] bounding the mixing time for the flip dynamics.
Proposition 4.9 [Reference Randall and TetaliRT00, Theorem 5].
Adopting the notation of Proposition 4.6, we have for any real number $\varepsilon \in (0, 1)$ that
$$ \begin{align*} t_{\operatorname{\mathrm{mix}}} (\varepsilon; M_{\operatorname{\mathrm{fl}}}) < C A^4 \log A + C A^3 \log A \log \varepsilon^{-1}. \end{align*} $$
We now state the following two lemmas, which will be established below.
Lemma 4.10. Under the notation of Proposition 4.6,
 $$ \begin{align*} t_{\operatorname{\mathrm{mix}}} (e^{-2A}; M_{\operatorname{\mathrm{rf}}}) \leq 144A^{11} \cdot \bigg( t_{\operatorname{\mathrm{mix}}} \Big( \frac{e^{-2A}}{32A^4}; M_{\operatorname{\mathrm{fl}}} \Big) + 1 \bigg). \end{align*} $$
Lemma 4.11. Under the notation of Proposition 4.6, $A^5 t_{\operatorname {\mathrm {mix}}} (e^{-A}; M_{\operatorname {\mathrm {alt}}}) \leq t_{\operatorname {\mathrm {mix}}} (e^{-2A}; M_{\operatorname {\mathrm {rf}}})$.
Given these two results, we can quickly establish Proposition 4.6.
Proof of Proposition 4.6.
This follows from Lemma 4.11, Lemma 4.10 and Proposition 4.9 (the last applied with the $\varepsilon $ there equal to $\frac {e^{-2A}}{32A^4}$ here).
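In more detail, chaining the three statements (with $C$ denoting the constant from Proposition 4.9, and noting that $\log \big( 32 A^4 e^{2A} \big) = 2A + \log (32 A^4)$) gives
$$ \begin{align*} t_{\operatorname{\mathrm{mix}}} (e^{-A}; M_{\operatorname{\mathrm{alt}}}) &\le A^{-5} t_{\operatorname{\mathrm{mix}}} (e^{-2A}; M_{\operatorname{\mathrm{rf}}}) \le 144 A^{6} \cdot \bigg( t_{\operatorname{\mathrm{mix}}} \Big( \frac{e^{-2A}}{32A^4}; M_{\operatorname{\mathrm{fl}}} \Big) + 1 \bigg) \\ &\le 144 A^{6} \cdot \Big( C A^4 \log A + C A^3 \log A \cdot \big( 2A + \log (32 A^4) \big) + 1 \Big), \end{align*} $$
which is $O(A^{10} \log A)$ and hence at most $A^{11}$ for all sufficiently large $A$.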
The proofs of Lemma 4.10 and Lemma 4.11 will use ‘weighted’ and ‘censored’ forms of the flip dynamics from Definition 4.8. To define these, adopting the notation of Definition 4.5, for each $i \in [1, k]$ let
$$ \begin{align} p_i = |\mathsf{R}_i \setminus \partial \mathsf{R}_i| \cdot \Bigg(\displaystyle\sum_{j=1}^k |\mathsf{R}_j \setminus \partial \mathsf{R}_j| \Bigg)^{-1} \in [0, 1]. \end{align} $$
For any vertex $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$, we further let
$$ \begin{align*} \mathfrak{m}(\mathsf{v}) = \# \big\{ j \in [1, k] : \mathsf{v} \in \mathsf{R}_j \setminus \partial \mathsf{R}_j \big\}; \quad \mathfrak{M} = \displaystyle\sum_{\mathsf{v} \in \mathsf{R} \setminus \partial \mathsf{R}} \mathfrak{m} (\mathsf{v}) = \displaystyle\sum_{j = 1}^k |\mathsf{R}_j \setminus \partial \mathsf{R}_j|; \quad \mathfrak{m}= \displaystyle\sum_{\mathsf{v} \in \mathsf{R} \setminus \partial \mathsf{R}} \mathfrak{m} (\mathsf{v})^{-1}. \end{align*} $$
Definition 4.12. The weighted flip dynamics, denoted by $M_{\operatorname {\mathrm {wf}}}$, is the discrete-time Markov chain on $\mathscr {G} (\mathsf {h})$, whose state $\mathsf {H}_{t+1}$ at time $t+1$ is defined from $\mathsf {H}_t$ as follows. Select an interior vertex $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$ with probability $\mathfrak {m} (\mathsf {v}) \cdot \mathfrak {M}^{-1}$, and set $\mathsf {H}_{t+1}$ to be the random flip of $\mathsf {H}_t$ at $\mathsf {v}$.
The censored weighted flip dynamics, denoted by $M_{\operatorname {\mathrm {cwf}}}$, is the discrete-time Markov chain on $\mathscr {G} (\mathsf {h})$, whose state $\mathsf {H}_{t+1}$ at time $t+1$ is defined from $\mathsf {H}_t$ as follows. Select an interior vertex $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$ with probability $\mathfrak {m} (\mathsf {v}) \cdot \mathfrak {M}^{-1}$. Then set $\mathsf {H}_{t+1}$ to be the random flip of $\mathsf {H}_t$ at $\mathsf {v}$ with probability $\mathfrak {m}(\mathsf {v})^{-1} \cdot \mathfrak {m}^{-1}$, and set $\mathsf {H}_{t+1} = \mathsf {H}_t$ with the complementary probability $1 - \mathfrak {m} (\mathsf {v})^{-1} \cdot \mathfrak {m}^{-1}$.
Remark 4.13. By Definition 4.12, we may interpret the censored weighted flip dynamics $M_{\operatorname {\mathrm {cwf}}}$ as the following ‘lazy’ version of the flip dynamics from Definition 4.8. With probability $1 - |\mathsf {R} \setminus \partial \mathsf {R}| \cdot (\mathfrak {M} \cdot \mathfrak {m})^{-1}$, we perform a lazy step and set $\mathsf {H}_{t+1} = \mathsf {H}_t$. Otherwise, we perform an active step, by selecting a vertex $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$ uniformly at random and setting $\mathsf {H}_{t+1}$ to be the random flip of $\mathsf {H}_t$ at $\mathsf {v}$.
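To see this identification, note that in one step of $M_{\operatorname {\mathrm {cwf}}}$ the probability that a given vertex $\mathsf {v}$ is selected and the flip at $\mathsf {v}$ is actually performed is
$$ \begin{align*} \frac{\mathfrak{m} (\mathsf{v})}{\mathfrak{M}} \cdot \frac{1}{\mathfrak{m} (\mathsf{v}) \cdot \mathfrak{m}} = \frac{1}{\mathfrak{M} \cdot \mathfrak{m}}, \end{align*} $$
which does not depend on $\mathsf {v}$; summing over the $|\mathsf {R} \setminus \partial \mathsf {R}|$ interior vertices shows that an active step occurs with probability $|\mathsf {R} \setminus \partial \mathsf {R}| \cdot (\mathfrak {M} \cdot \mathfrak {m})^{-1}$ and that, conditionally on an active step, the flipped vertex is uniformly distributed.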
The following lemma compares the mixing time for $M_{\operatorname {\mathrm {cwf}}}$ to that of the flip dynamics $M_{\operatorname {\mathrm {fl}}}$, using Remark 4.13.
Lemma 4.14. For any real number $\varepsilon \in \big ( 0, \frac {1}{2} \big ]$, we have
$$ \begin{align*} t_{\operatorname{\mathrm{mix}}} (2 \varepsilon; M_{\operatorname{\mathrm{cwf}}}) \le 4 A^3 (\log \varepsilon^{-1})^2 \cdot \big( t_{\operatorname{\mathrm{mix}}} (\varepsilon; M_{\operatorname{\mathrm{fl}}}) + 1 \big). \end{align*} $$
Proof. Throughout this proof, we recall the interpretation of $M_{\operatorname {\mathrm {cwf}}}$ as a lazy version of $M_{\operatorname {\mathrm {fl}}}$ from Remark 4.13; we also recall the notation of that remark and set $T_0 = 4A^3 (\log \varepsilon ^{-1})^2 \cdot \big ( t_{\operatorname {\mathrm {mix}}} (\varepsilon; M_{\operatorname {\mathrm {fl}}}) \big ) + 2 \le 4 A^3 (\log \varepsilon ^{-1})^2 \cdot \big ( t_{\operatorname {\mathrm {mix}}} (\varepsilon; M_{\operatorname {\mathrm {fl}}}) + 1 \big )$. By Chernoff’s inequality, with probability at least $1 - \varepsilon $, the number of active steps in this walk after some time $T \ge T_0 \ge 4A^3 (\log \varepsilon ^{-1})^2$ is at least
$$ \begin{align*} \displaystyle\frac{|\mathsf{R} \setminus \partial \mathsf{R}|}{\mathfrak{m} \cdot \mathfrak{M}} \cdot T - T^{1/2} \log \varepsilon^{-1} \ge \displaystyle\frac{T}{\mathfrak{M}} - T^{1/2} \log \varepsilon^{-1} \ge A^{-3/2} T - (\log \varepsilon^{-1}) T^{1/2} \ge \displaystyle\frac{T_0}{2A^{3/2}}, \end{align*} $$
where we used the fact that $\mathfrak {m} \le |\mathsf {R} \setminus \partial \mathsf {R}|$ (as $\mathfrak {m}(\mathsf {v}) \ge 1$ for each $\mathsf {v}\in \mathsf {R} \setminus \partial \mathsf {R}$) in the first bound, the fact that $\mathfrak {M} \le k \cdot |\mathsf {R} \setminus \partial \mathsf {R}| \le (\operatorname {\mathrm {diam}} \mathsf {R})^3 = A^{3/2}$ (as $\mathfrak {m} (\mathsf {v}) \le k \le \operatorname {\mathrm {diam}} \mathsf {R}$ for each $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$) in the second and the fact that $T \ge T_0 \ge 4 A^3 (\log \varepsilon^{-1})^2$ (so that $(\log \varepsilon^{-1}) T^{1/2} \le \frac{T}{2A^{3/2}}$) in the third. It follows for $T \ge T_0$ that the number of active steps in these dynamics is at least $t_{\operatorname {\mathrm {mix}}} (\varepsilon; M_{\operatorname {\mathrm {fl}}})$, with probability at least $1 - \varepsilon $. Conditioning on this event, we find for any $\mathsf {H}_0 \in \mathscr {G} (\mathsf {h})$ that we may couple $M_{\operatorname {\mathrm {cwf}}}^{\lceil T_0 \rceil } \mathsf {H}_0$ to coincide with a uniformly random element $\mathsf {F} \in \mathscr {G} (\mathsf {h})$ with probability $1 - \varepsilon $. Hence, by a union bound, we may couple $M_{\operatorname {\mathrm {cwf}}}^{\lceil T_0 \rceil } \mathsf {H}_0$ to coincide with $\mathsf {F}$ with probability $1 - 2 \varepsilon $, from which the lemma follows, as $T_0 \le 4A^3 (\log \varepsilon ^{-1})^2 \cdot \big ( t_{\operatorname {\mathrm {mix}}} (\varepsilon; M_{\operatorname {\mathrm {fl}}}) + 1 \big )$.
We further require a censored version of the region-flip dynamics from Definition 4.8.
Definition 4.15. The censored region-flip dynamics, denoted by $M_{\operatorname {\mathrm {crf}}}$, is the discrete-time Markov chain on $\mathscr {G} (\mathsf {h})$ whose state $\mathsf {H}_{t+1}$ at time $t+1$ is defined from $\mathsf {H}_t$ as follows. First, let $\mathcal {X} = (X_1, X_2, \ldots )$ denote the sequence of $\mathbb {Z}_{\geq 1}$-valued random variables defined by first setting $\mathbb {P} (X_1 = i) = p_i$ for each $i \in [1, k]$. Then, given $X_r$, we define $X_{r + 1} \in \{ X_r + 1, X_r + 2, \ldots , X_r + k \}$ by setting $\mathbb {P} (X_{r + 1} = X_r + i) = p_j$, where $j \in [1, k]$ is such that k divides $X_r + i - j$.
Now, let $s \geq 1$ denote the integer such that $(s - 1) A^{5} < t+1 \leq s A^{5}$, and let $i' \in [1, k]$ denote the integer such that $k t_0 + i' = s$ for some $t_0 \in \mathbb {Z}_{\geq 0}$. If $s \in \mathcal {X}$ and $t = (s - 1) A^{5}$, then select an interior vertex $\mathsf {v} \in \mathsf {R}_{i'} \setminus \partial \mathsf {R}_{i'}$ uniformly at random, and let $\mathsf {H}_{t + 1}$ denote the random flip of $\mathsf {H}_t$ at $\mathsf {v}$. If instead $s \notin \mathcal {X}$ or $t - (s - 1) A^{5}> 0$, then set $\mathsf {H}_{t + 1} = \mathsf {H}_t$.
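For instance, if $k = 2$, then an epoch $s$ corresponds to $\mathsf {R}_1$ when $s$ is odd and to $\mathsf {R}_2$ when $s$ is even; starting from an odd $X_r$, the rule above sets $X_{r+1} = X_r + 1$ (an $\mathsf {R}_2$-epoch) with probability $p_2$ and $X_{r+1} = X_r + 2$ (the next $\mathsf {R}_1$-epoch) with probability $p_1$.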
The process $M_{\operatorname {\mathrm {crf}}}$ censors any step in the region-flip dynamics $M_{\operatorname {\mathrm {rf}}}$ from Definition 4.8 in the time interval $\big [ (s - 1) A^{5} + 2, s A^{5} \big ]$ and also the step at time $(s - 1) A^{5}$ if $s \notin \mathcal {X}$.
Denote the maximal and minimal configurations of $\mathscr {G} (\mathsf {h})$ by $\mathsf H^{\mathrm {top}}, \mathsf H^{\mathrm {btm}}\in \mathscr {G} (\mathsf {h})$, which for any $\mathsf H\in \mathscr {G} (\mathsf {h})$ satisfy $\mathsf H^{\mathrm {top}}(\mathsf v)\geq \mathsf H(\mathsf v)\geq \mathsf H^{\mathrm {btm}}(\mathsf v)$, for each $\mathsf v\in \mathsf R$. Further, let $\delta _{\mathsf H^{\mathrm {top}}}, \delta _{\mathsf H^{\mathrm {btm}}} \in \mathscr {P} \big (\mathscr {G} (\mathsf {h}) \big )$ denote the delta masses at $\mathsf H^{\mathrm {top}}$ and $\mathsf H^{\mathrm {btm}}$, respectively. The following result from [Reference Peres and WinklerPW13] shows that the above censorings (weakly) increase the mixing time for these dynamics when started from the top configuration or the bottom one.Footnote 10
Proposition 4.16 [Reference Peres and WinklerPW13, Theorem 1.1].
Letting $\rho $ denote the uniform measure on $\mathscr {G} (\mathsf {h})$, we have for any integer $t \ge 0$ that
$$ \begin{align*} & d_{\operatorname{\mathrm{TV}}} (M_{\operatorname{\mathrm{wf}}}^t \delta_{\mathsf{H}^{\operatorname{\mathrm{top}}}}, \rho) \le d_{\operatorname{\mathrm{TV}}} (M_{\operatorname{\mathrm{cwf}}}^t \delta_{\mathsf{H}^{\operatorname{\mathrm{top}}}}, \rho); \qquad d_{\operatorname{\mathrm{TV}}} (M_{\operatorname{\mathrm{wf}}}^t \delta_{\mathsf{H}^{\operatorname{\mathrm{btm}}}}, \rho) \le d_{\operatorname{\mathrm{TV}}} (M_{\operatorname{\mathrm{cwf}}}^t \delta_{\mathsf{H}^{\operatorname{\mathrm{btm}}}}, \rho); \\ & d_{\operatorname{\mathrm{TV}}} (M_{\operatorname{\mathrm{rf}}}^t \delta_{\mathsf{H}^{\operatorname{\mathrm{top}}}}, \rho) \le d_{\operatorname{\mathrm{TV}}} (M_{\operatorname{\mathrm{crf}}}^t \delta_{\mathsf{H}^{\operatorname{\mathrm{top}}}}, \rho); \qquad d_{\operatorname{\mathrm{TV}}} (M_{\operatorname{\mathrm{rf}}}^t \delta_{\mathsf{H}^{\operatorname{\mathrm{btm}}}}, \rho) \le d_{\operatorname{\mathrm{TV}}} (M_{\operatorname{\mathrm{crf}}}^t \delta_{\mathsf{H}^{\operatorname{\mathrm{btm}}}}, \rho). \end{align*} $$
Recalling the notation from Equation (4.3), we also denote for any irreducible Markov chain $M : \mathscr {P} \big ( \mathscr {G} (\mathsf {h}) \big ) \rightarrow \mathscr {P} \big (\mathscr {G} (\mathsf {h}) \big )$ the quantities (where below $\rho $ denotes the stationary measure of M)
$$ \begin{align*} &R (\varepsilon; M) = \min \Big\{ t \in \mathbb{Z}_{\geq 0}: d_{\operatorname{\mathrm{TV}}} (M^t \delta_{\mathsf H^{\mathrm{top}}}, \rho) < \varepsilon \Big\}; \\ &S (\varepsilon; M) = \min \Big\{ t \in \mathbb{Z}_{\geq 0}: d_{\operatorname{\mathrm{TV}}} (M^t \delta_{\mathsf H^{\mathrm{btm}}}, \rho) < \varepsilon \Big\}. \end{align*} $$
The following lemma bounds the mixing times of the weighted flip and region-flip dynamics in terms of the associated quantities R and S.
Lemma 4.17. For any $\varepsilon \in (0, 1)$, we have
$$ \begin{align} t_{\operatorname{\mathrm{mix}}} (\varepsilon, M_{\operatorname{\mathrm{wf}}}) \le R \Big( \displaystyle\frac{\varepsilon}{4A^2}; M_{\operatorname{\mathrm{wf}}} \Big) + S \Big( \displaystyle\frac{\varepsilon}{4A^2}; M_{\operatorname{\mathrm{wf}}} \Big); \quad t_{\operatorname{\mathrm{mix}}} (\varepsilon, M_{\operatorname{\mathrm{rf}}}) \le R \Big( \displaystyle\frac{\varepsilon}{4A^2}; M_{\operatorname{\mathrm{rf}}} \Big) + S \Big( \displaystyle\frac{\varepsilon}{4A^2}; M_{\operatorname{\mathrm{rf}}} \Big). \end{align} $$
Proof. We only establish the second statement of Equation (4.6), as the proof of the first is entirely analogous. First, observe that there exists a grand coupling of the region-flip dynamics $M_{\operatorname {\mathrm {rf}}}$ over all choices of initial data in $\mathscr {G} (\mathsf {h})$, obtained by running them under the same choices of sequences $\mathcal {X} = (X_1, X_2, \ldots )$ and vertices $\mathsf {v}$ (at which each flip is made) from Definition 4.15. It is quickly verified (see [Reference GorinGor21, Proposition 25.7], for example) that this coupling is monotone, meaning that if for some $\mathsf {H}_1, \mathsf {H}_2 \in \mathscr {G} (\mathsf {h})$ we have $\mathsf {H}_1 (\mathsf {v}) \le \mathsf {H}_2 (\mathsf {v})$ for each $\mathsf {v} \in \mathsf {R}$, then it holds that $M_{\operatorname {\mathrm {rf}}}^t \mathsf {H}_1 (\mathsf {v}) \le M_{\operatorname {\mathrm {rf}}}^t \mathsf {H}_2 (\mathsf {v})$ for each $t \ge 0$ and $\mathsf {v} \in \mathsf {R}$.
Observe that it suffices to show under these coupled dynamics that, with probability at least $1 - \varepsilon $, the models started at $\mathsf {H}^{\operatorname {\mathrm {top}}}$ and at $\mathsf {H}^{\operatorname {\mathrm {btm}}}$ coincide after time $R \big ( \frac {\varepsilon }{4A^2}, M_{\operatorname {\mathrm {rf}}} \big ) + S \big ( \frac {\varepsilon }{4A^2}, M_{\operatorname {\mathrm {rf}}} \big )$, that is,
$$ \begin{align} \mathbb{P} [ M_{\operatorname{\mathrm{rf}}}^T \mathsf{H}^{\operatorname{\mathrm{top}}} = M_{\operatorname{\mathrm{rf}}}^T \mathsf{H}^{\operatorname{\mathrm{btm}}}] \ge 1 - \varepsilon, \qquad \text{if}\; T \ge R \Big( \displaystyle\frac{\varepsilon}{4A^2}; M_{\operatorname{\mathrm{rf}}} \Big) + S \Big( \displaystyle\frac{\varepsilon}{4A^2}; M_{\operatorname{\mathrm{rf}}} \Big). \end{align} $$
Indeed, given Equation (4.7), it follows, since $\mathsf {H}^{\operatorname {\mathrm {btm}}} \le \mathsf {F} \le \mathsf {H}^{\operatorname {\mathrm {top}}}$ for each $\mathsf {F} \in \mathscr {G}(\mathsf {h})$, that with probability at least $1 - \varepsilon $ the height functions $M_{\operatorname {\mathrm {rf}}}^T \mathsf {F}$, over every $\mathsf {F} \in \mathscr {G} (\mathsf {h})$, all coincide for $T \ge R \big ( \frac {\varepsilon }{4A^2}, M_{\operatorname {\mathrm {rf}}} \big ) + S \big ( \frac {\varepsilon }{4A^2}, M_{\operatorname {\mathrm {rf}}} \big )$. In particular, sampling $\mathsf {F}$ under the stationary measure $\rho $ for $M_{\operatorname {\mathrm {rf}}}$, we deduce for any $\mathsf {H} \in \mathscr {G}(\mathsf {h})$ that one can couple $M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}$ to coincide with a height function sampled under $\rho $, with probability $1 - \varepsilon $; hence, $t_{\operatorname {\mathrm {mix}}} (\varepsilon; M_{\operatorname {\mathrm {rf}}}) \le R \big ( \frac {\varepsilon }{4A^2}, M_{\operatorname {\mathrm {rf}}} \big ) + S \big ( \frac {\varepsilon }{4A^2}, M_{\operatorname {\mathrm {rf}}} \big )$, confirming the lemma.
It remains to verify Equation (4.7). Since $T \ge R \big ( \frac {\varepsilon }{4A^2}; M_{\operatorname {\mathrm {rf}}} \big )$, we have by Equation (4.4) that it is possible to couple $M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {top}}}$ with a height function $\mathsf {F}$ sampled under the stationary measure $\rho $ of $M_{\operatorname {\mathrm {rf}}}$ such that $M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {top}}} = \mathsf {F}$ with probability at least $1 - \frac {\varepsilon }{4A^2}$. Moreover, since $M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {top}}} (\mathsf {u}) = \mathsf {h} (\mathsf {u}) = \mathsf {F}(\mathsf {u})$ for each $\mathsf {u} \in \partial \mathsf {R}$ and since $\operatorname {\mathrm {diam}} \mathsf {R} \le A$, it follows (as any height function is $1$-Lipschitz) that $\big | M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {top}}} (\mathsf {v}) - \mathsf {F} (\mathsf {v}) \big | \le 2A$. Combining these two statements, we deduce that $\mathbb {E} \big [ M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {top}}} (\mathsf {v}) \big ] \le \mathbb {E} \big [ \mathsf {F} (\mathsf {v}) \big ] + \frac {\varepsilon }{2A}$, for each $\mathsf {v} \in \mathsf {R}$. Similarly, we have $\mathbb {E} \big [ M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {btm}}} (\mathsf {v}) \big ] \ge \mathbb {E} \big [ \mathsf {F} (\mathsf {v}) \big ] - \frac {\varepsilon }{2A}$.
Therefore, $\mathbb {E} \big [ M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {top}}} (\mathsf {v}) \big ] \le \mathbb {E} \big [ M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {btm}}} (\mathsf {v}) \big ] + \frac {\varepsilon }{A}$ for any $\mathsf {v} \in \mathsf {R}$. Together with the above grand coupling satisfying $M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {btm}}} \le M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {top}}}$, the fact that any height function is integer-valued and Markov's inequality, we deduce that $\mathbb {P} \big [ M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {top}}} (\mathsf {v}) \ne M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {btm}}} (\mathsf {v}) \big ] \le \frac {\varepsilon }{A}$ under this grand coupling. A union bound over all $|\mathsf {R}| \le A$ vertices $\mathsf {v} \in \mathsf {R}$ then yields Equation (4.7) and thus the lemma.
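Spelled out, the Markov step uses that $M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {top}}} (\mathsf {v}) - M_{\operatorname {\mathrm {rf}}}^T \mathsf {H}^{\operatorname {\mathrm {btm}}} (\mathsf {v})$ is a nonnegative integer under the grand coupling, so that
 $$ \begin{align*} \mathbb{P} \big[ M_{\operatorname{\mathrm{rf}}}^T \mathsf{H}^{\operatorname{\mathrm{top}}} (\mathsf{v}) \ne M_{\operatorname{\mathrm{rf}}}^T \mathsf{H}^{\operatorname{\mathrm{btm}}} (\mathsf{v}) \big] = \mathbb{P} \big[ M_{\operatorname{\mathrm{rf}}}^T \mathsf{H}^{\operatorname{\mathrm{top}}} (\mathsf{v}) - M_{\operatorname{\mathrm{rf}}}^T \mathsf{H}^{\operatorname{\mathrm{btm}}} (\mathsf{v}) \geq 1 \big] \leq \mathbb{E} \big[ M_{\operatorname{\mathrm{rf}}}^T \mathsf{H}^{\operatorname{\mathrm{top}}} (\mathsf{v}) - M_{\operatorname{\mathrm{rf}}}^T \mathsf{H}^{\operatorname{\mathrm{btm}}} (\mathsf{v}) \big] \leq \frac{\varepsilon}{A}. \end{align*} $$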
Next, we have the following lemma that compares the mixing times of the flip and censored region-flip dynamics.
Lemma 4.18. Adopting the notation of Proposition 4.6, and fixing a real number $\varepsilon \in \big ( 0, \frac {1}{2} \big ]$, we have $t_{\operatorname {\mathrm {mix}}} (8A^2 \varepsilon; M_{\operatorname {\mathrm {crf}}}) \leq 8A^9 (\log \varepsilon ^{-1})^2 \cdot \big ( t_{\operatorname {\mathrm {mix}}} (\varepsilon; M_{\operatorname {\mathrm {fl}}}) + 1 \big )$.
Proof. We first bound the mixing time of $M_{\operatorname {\mathrm {crf}}}$ in terms of that of $M_{\operatorname {\mathrm {wf}}}$. To that end, recall that the state $\mathsf {H}_t$ at time $t \geq 1$ under $M_{\operatorname {\mathrm {wf}}}$ is defined from $\mathsf {H}_{t - 1}$ by performing a random flip at a vertex $\mathsf {v} \in \mathsf {R} \setminus \partial \mathsf {R}$ chosen with probability $\mathfrak {m} (\mathsf {v}) \cdot \mathfrak {M}^{-1}$. Observe that we may equivalently sample $\mathsf {v}$ by first selecting an index $i \in [1, k]$ with probability $p_i$ and then selecting $\mathsf {v} \in \mathsf {R}_i \setminus \partial \mathsf {R}_i$ uniformly at random. Recalling the random sequence $\mathcal {X} = (X_1, X_2, \ldots )$ from Definition 4.15, this is in turn equivalent to sampling $\mathsf {v} \in \mathsf {R}_{X_t} \setminus \partial \mathsf {R}_{X_t}$ uniformly at random, where we have denoted $\mathsf {R}_j = \mathsf {R}_i$ for $i \in [1, k]$ the integer such that k divides $j - i$.
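As a quick illustration of this two-stage resampling, the following is a minimal sketch (not the paper's construction; for concreteness it assumes that $\mathfrak {m} (\mathsf {v})$ counts the regions $\mathsf {R}_i$ containing $\mathsf {v}$ and that $p_i$ is proportional to $|\mathsf {R}_i \setminus \partial \mathsf {R}_i|$, which is one choice making the two procedures agree). Both functions below induce the same distribution on vertices.

```python
import random
from collections import Counter

def sample_direct(regions):
    """Pick a vertex v with probability m(v)/M, where m(v) = #{i : v in R_i} (an assumption for this sketch)."""
    weights = Counter(v for R in regions for v in R)
    vertices = list(weights)
    return random.choices(vertices, weights=[weights[v] for v in vertices])[0]

def sample_two_stage(regions):
    """First pick a region index i with probability p_i proportional to |R_i|, then a uniform vertex of R_i."""
    i = random.choices(range(len(regions)), weights=[len(R) for R in regions])[0]
    return random.choice(sorted(regions[i]))

# Example usage: two overlapping collections of interior vertices.
regions = [{(0, 0), (0, 1), (1, 0)}, {(1, 0), (1, 1)}]
print(sample_direct(regions), sample_two_stage(regions))
```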
It follows that $M_{\operatorname {\mathrm {fl}}}$ and $M_{\operatorname {\mathrm {crf}}}$ can be coupled so that the former at time t coincides with the latter at time $(X_t - 1) A^{5} \le k t A^{5}$, where in the last inequality we used the fact that $X_t \leq kt$ (as $X_t - X_{t - 1} \leq k$). Hence,
 $$ \begin{align} t_{\operatorname{\mathrm{mix}}} (8A^2 \varepsilon; M_{\operatorname{\mathrm{crf}}}) \leq k A^{5} t_{\operatorname{\mathrm{mix}}} (8A^2 \varepsilon; M_{\operatorname{\mathrm{wf}}}) \le A^6 t_{\operatorname{\mathrm{mix}}} (8 A^2 \varepsilon; M_{\operatorname{\mathrm{wf}}}). \end{align} $$
Moreover, we have
 $$ \begin{align*} t_{\operatorname{\mathrm{mix}}} (8 A^2 \varepsilon; M_{\operatorname{\mathrm{wf}}}) \le R (2 \varepsilon; M_{\operatorname{\mathrm{wf}}}) + S(2 \varepsilon; M_{\operatorname{\mathrm{wf}}}) & \le 2 t_{\operatorname{\mathrm{mix}}} (2\varepsilon; M_{\operatorname{\mathrm{cwf}}}) \\ & \le 8A^3 (\log \varepsilon^{-1})^2 \cdot \big( t_{\operatorname{\mathrm{mix}}} (\varepsilon; M_{\operatorname{\mathrm{fl}}}) + 1 \big), \end{align*} $$
where in the first inequality we applied Lemma 4.17, in the second we applied Proposition 4.16 and in the third we applied Lemma 4.14. Combining this with Equation (4.8) (and using that $A^6 \cdot 8A^3 = 8A^9$) yields the lemma.
Given the above, we can quickly establish Lemma 4.10 and Lemma 4.11.
Proof of Lemma 4.10.
By Lemma 4.17, Proposition 4.16, and Lemma 4.18, we have for sufficiently large A that
 $$ \begin{align*} t_{\operatorname{\mathrm{mix}}} (e^{-2A}; M_{\operatorname{\mathrm{rf}}}) &\leq R \Big(\frac{e^{-2A}}{4A^2}, M_{\operatorname{\mathrm{rf}}} \Big) + S \Big(\frac{e^{-2A}}{4A^2}, M_{\operatorname{\mathrm{rf}}} \Big)\\ &\leq R \Big(\frac{e^{-2A}}{4A^2}, M_{\operatorname{\mathrm{crf}}} \Big) + S \Big(\frac{e^{-2A}}{4A^2}, M_{\operatorname{\mathrm{crf}}} \Big)\\ &\leq 2t_{\operatorname{\mathrm{mix}}} \Big(\frac{e^{-2A}}{4A^2}, M_{\operatorname{\mathrm{crf}}} \Big)\leq 16A^9 (3A)^2 \cdot \bigg( t_{\operatorname{\mathrm{mix}}} \Big(\frac{e^{-2A}}{32A^4}; M_{\operatorname{\mathrm{fl}}} \Big) + 1 \bigg), \end{align*} $$
which yields the lemma.
Proof of Lemma 4.11.
First, observe that for sufficiently large A we have $t_{\operatorname {\mathrm {mix}}} \big (\frac {e^{-2A}}{4A^2}; M_{\operatorname {\mathrm {fl}}} \big ) \leq \frac {A^5}{200}$, by Proposition 4.9; thus, Lemma 4.10 implies that $t_{\operatorname {\mathrm {mix}}} (e^{-2A}; M_{\operatorname {\mathrm {rf}}}) \leq A^{16}$. So, to establish the lemma it suffices to couple the dynamics $M_{\operatorname {\mathrm {alt}}}$ at time t to coincide with $M_{\operatorname {\mathrm {rf}}}$ at time $A^5 t$ for each $t \in [0, A^{11}]$, away from an event of probability at most $A^{11} e^{-2A} \leq e^{-A} - e^{-2A}$.
To that end, let $\mathsf {H}_t$ denote the state after $t \geq 0$ steps of the dynamics $M_{\operatorname {\mathrm {rf}}}$. Furthermore, for any integer $s \geq 0$ and $i \in [1, k]$ such that k divides $s - i + 1$, set $\mathsf {H}_s' = \mathsf {H}_{s A^5}$ and $\mathsf {h}_{s+1}' = \mathsf {H}_{s A^5} |_{\partial \mathsf {R}_i}$. Then, Definition 4.8 implies that $\mathsf {H}_{s+1}' \in \mathscr {G} (\mathsf {h}_{s+1}')$ is obtained from $\mathsf {H}_s' \in \mathscr {G} (\mathsf {h}_s')$ by applying the flip dynamics $M_{\operatorname {\mathrm {fl}}}$ on $\mathsf {R}_i$ for time $A^5$. Hence, since $t_{\operatorname {\mathrm {mix}}} (e^{-2A}; M_{\operatorname {\mathrm {fl}}})\leq t_{\operatorname {\mathrm {mix}}} \big ( \frac {e^{-2A}}{4A^2}; M_{\operatorname {\mathrm {fl}}} \big ) \leq A^5$, we may couple $\mathsf {H}_s' |_{\mathsf {R}_i}$ with a uniformly random element of $\mathscr {G} (\mathsf {h}_s')$, away from an event of probability at most $e^{-2A}$.
It follows that the sequence $\{ \mathsf {H}_0', \mathsf {H}_1', \ldots , \mathsf {H}_s' \}$ can be coupled with s steps of the alternating dynamics $M_{\operatorname {\mathrm {alt}}}$ with initial data $\mathsf {H}_0$, away from an event of probability at most $s e^{-2A}$. Taking $s = t_{\operatorname {\mathrm {mix}}} (e^{-2A}; M_{\operatorname {\mathrm {fl}}}) \leq A^{11}$ and recalling that $\mathsf {H}_s' = \mathsf {H}_{s A^5}$, we deduce the lemma.
5 Tilted height functions and comparison estimates
In this section, we discuss how height functions can be ‘tilted’ in a specific way. Section 5.1 introduces the notion of a tilted height function and states results comparing tilted height functions to random tiling height functions; we prove the latter comparison results in Section 5.2 and Section 5.3.
5.1 Tilted height functions
In this section, we describe a way of ‘tilting’ the height function of a random tiling that will enable us to apply Theorem 4.3 in an effective way. Throughout this section, we recall the notation from Definition 2.2 and more generally from Section 2.3 and Section 3.1; this includes the polygonal domain $\mathfrak {P}$ and associated boundary height function $h : \partial \mathfrak {P} \rightarrow \mathbb {R}$; the liquid region $\mathfrak {L} (\mathfrak {P})$ and arctic curve $\mathfrak {A} (\mathfrak {P})$ from Equations (2.5) and (2.6); the associated complex slope $f_t (x) = f(x, t)$ from Equation (3.1); the polygonal domain $\mathsf {P} = n \overline {\mathfrak {P}} \cap \mathbb {T}^2$ and its associated boundary height function $\mathsf {h} : \mathsf {P} \rightarrow \mathbb {Z}$. For any $(x, s) \in \overline {\mathfrak {L}(\mathfrak {P})}$, we also set
 $$ \begin{align} \Omega_s (x) = \frac{1}{\pi} \operatorname{\mathrm{Im}} \displaystyle\frac{f_s (x)}{f_s (x) + 1}=\frac{1}{\pi}\frac{\operatorname{\mathrm{Im}} f_s (x)}{|f_s (x) + 1|^2}; \qquad \Upsilon_s (x) = \displaystyle\frac{f_s (x)}{\big( f_s (x) + 1 \big)^2}, \end{align} $$
and further set $\Omega _s (x) = 0$ when $(x, s) \notin \overline {\mathfrak {L}(\mathfrak {P})}$. The parameters $\Omega _s (x)$ and $\Upsilon _s (x)$ will quantitatively govern how height functions change under ‘tilts’, to be described further in Proposition 5.4 below.
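As a small numerical illustration (not part of the paper's arguments), both quantities in Equation (5.1) can be evaluated directly from a value of the complex slope; the helper below is hypothetical, with the slope supplied as a complex number in the closed lower half-plane.

```python
import math

def tilt_parameters(f: complex):
    """Compute (Omega, Upsilon) of Equation (5.1) from a complex slope value f = f_s(x)."""
    Omega = (f / (f + 1)).imag / math.pi   # equals Im f_s(x) / (pi |f_s(x) + 1|^2), nonpositive for f in the lower half-plane
    Upsilon = f / (f + 1) ** 2             # real (and typically nonzero) when f_s(x) is real, i.e. on the arctic curve
    return Omega, Upsilon

# Example: a slope in the lower half-plane gives Omega < 0, consistent with Remark 5.1 below.
print(tilt_parameters(complex(-0.5, -0.3)))
```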
Remark 5.1. Observe that $\Omega _s (x) \leq 0$ since $f_s (x) \in \mathbb {H}^- \cup \mathbb {R}$. Moreover, if $(x, s) \in \mathfrak {A}(\mathfrak {P})$, then $f_s (x) \in \mathbb {R}$, so $\Upsilon _s (x) \in \mathbb {R}$. Then $\Upsilon _s (x) > 0$ holds if $f_s (x) > 0$, and $\Upsilon _s (x) < 0$ holds if $f_s (x) < 0$. The former implies $\arg ^* f_s (x) = 0$, which by Equation (3.1) implies $\partial _x H^* (x, s) = 0$. Similarly, the latter implies $\partial _x H^* (x, s) = 1$.
Throughout this section, we fix real numbers $\mathfrak {t}_1 < \mathfrak {t}_2$ with $\mathfrak {t} = \mathfrak {t}_2 - \mathfrak {t}_1$; linear functions $\mathfrak {a}, \mathfrak {b} : [\mathfrak {t}_1, \mathfrak {t}_2] \rightarrow \mathbb {R}$, with slopes in $\{ 0, 1 \}$; and the domain $\mathfrak {D} = \mathfrak {D} (\mathfrak {a}, \mathfrak {b}; \mathfrak {t}_1, \mathfrak {t}_2)$ from Equation (4.1) with boundaries (4.2), as in Section 4.1. We view them all as independent of n. We will impose the following condition on $\mathfrak {D}$ concerning its relation to $\mathfrak {P}$ (see Figure 10 for possible depictions).
Assumption 5.2. Adopt the notation of Theorem 2.10, and suppose $\mathfrak {D} \subseteq \mathfrak {P}$, with $\mathsf {D} = n \mathfrak {D} \subseteq \mathbb {T}$. Assume that the second, third and fourth constraints listed in Assumption 4.1 hold for $\mathfrak {D}$ (with respect to $H^*$). Further, suppose that either $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ is disjoint from $\overline {\mathfrak {L}(\mathfrak {P})}$ or that $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}) \subset \partial \mathfrak {P}$; similarly, suppose that either $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$ is disjoint from $\overline {\mathfrak {L}(\mathfrak {P})}$ or that $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D}) \subset \partial \mathfrak {P}$. We denote the liquid region inside $\mathfrak D$ by $\mathfrak {L}=\mathfrak {L}(\mathfrak P)\cap \mathfrak D$. Additionally, fix a real number $\mathfrak {s} \in [\mathfrak {t}_1, \mathfrak {t}_2]$, and assume that no cusp or tangency location of $\mathfrak {A}(\mathfrak {P})$ in $\overline {\mathfrak {D}}$ is of the form $(x, y)$, with $x \in \mathbb {R}$ and $y \in \{ \mathfrak {t}_1, \mathfrak {s}, \mathfrak {t}_2 \}$.
Remark 5.3. Under Assumption 5.2, $\Upsilon _t (x)$ is uniformly bounded away from $0$ and $\infty $, for any $(x, t) \in \mathfrak {A}(\mathfrak {P})$ with $t \in \{ \mathfrak {t}_1, \mathfrak {s}, \mathfrak {t}_2 \}$. This holds since $(f_t (x) + 1)/f_t (x)$ is the slope of the tangent line to $\mathfrak {A}(\mathfrak {P})$ at $(x, t)$ (by Lemma 3.7), and since no tangency location of $\mathfrak {A}(\mathfrak {P})$ has y-coordinate in $\{ \mathfrak {t}_1, \mathfrak {s}, \mathfrak {t}_2 \}$. Moreover, if $u = (x, t) \in \mathfrak {L}(\mathfrak {P})$ satisfies $t \in \{ \mathfrak {t}_1, \mathfrak {s}, \mathfrak {t}_2 \}$, and we denote $d = \operatorname {\mathrm {dist}} (u, \mathfrak {A}(\mathfrak {P}))$, then there exists a constant $c = c(\mathfrak {P}, \mathfrak {D})> 0$ such that $c d^{1/2} \leq -\Omega _t (x) \leq c^{-1} d^{1/2}$. This follows from the square root decay of $\operatorname {\mathrm {Im}} f_t (x)$ around smooth points of $\mathfrak {A}(\mathfrak {P})$ (see Lemma A.1 below) and the fact that no cusp or tangency location of $\mathfrak {A}(\mathfrak {P})$ in $\overline {\mathfrak {D}}$ has y-coordinate in $\{ \mathfrak {t}_1, \mathfrak {s}, \mathfrak {t}_2 \}$.
Next, we state the following proposition, to be established in Section 7 below, indicating how a height function can be ‘tilted’. Here, the parameters $\Omega _s (x)$ and $\Upsilon _s (x)$ from Equation (5.1) will govern how the height function and edge of the liquid region change under such a tilt, respectively. In what follows, all implicit constants (including notions of being ‘sufficiently small’) will only depend on the parameters $\mathfrak {P}$, $\mathfrak {D}$ and $\varepsilon $ in the statement of the proposition. We also recall maximizers of $\mathcal {E}$ from Equation (2.4), and the liquid regions $\mathfrak {L} (\mathfrak {D}; g)$, $\mathfrak {L}_{\operatorname {\mathrm {no}}} (\mathfrak {D}; g)$, and $\mathfrak {L}_{\operatorname {\mathrm {so}}} (\mathfrak {D}; g)$ for any function $g : \partial \mathfrak {D} \rightarrow \mathbb {R}$ from Equations (2.5) and (2.6) and Section 4.1.
Proposition 5.4. Fix $\varepsilon> 0$, and adopt Assumption 5.2. Also, let $\xi _1, \xi _2 \in \mathbb {R}$ be real numbers of the same sign (that is, $\xi _1 \xi _2 \geq 0$), with $|\xi _1|, |\xi _2|$ sufficiently small. Further, assume that $|\xi _2 - \xi _1| \geq \varepsilon \max \big \{ |\xi _1|, |\xi _2| \big \}$, and define the function $\omega : [\mathfrak {t}_1, \mathfrak {t}_2] \rightarrow \mathbb {R}$ interpolating $\xi _1$ and $\xi _2$:
 $$ \begin{align} \omega (t) = \xi_2 \displaystyle\frac{t - \mathfrak{t}_1}{\mathfrak{t}_2 - \mathfrak{t}_1} + \xi_1 \displaystyle\frac{\mathfrak{t}_2 - t}{\mathfrak{t}_2 - \mathfrak{t}_1}. \end{align} $$
 Then there exists a function $\widehat {h} : \partial \mathfrak {D} \rightarrow \mathbb {R}$ admitting an admissible extension to $\mathbb {R}^2$ such that the maximizer $\widehat {H}^* = \operatorname {\mathrm {argmax}}_{F \in \operatorname {\mathrm {Adm}} (\mathfrak {D}, \widehat {h})} \mathcal {E} (F)$ of $\mathcal {E}$ satisfies the following properties. In the below, we fix one of the three real numbers $t \in \{ \mathfrak {t}_1, \mathfrak {s}, \mathfrak {t}_2 \}$, and we abbreviate $\widehat {\mathfrak {L}} = \mathfrak {L} (\mathfrak {D}; \widehat {h}) \cup \mathfrak {L}_{\operatorname {\mathrm {no}}} (\mathfrak {D}; \widehat {h}) \cup \mathfrak {L}_{\operatorname {\mathrm {so}}} (\mathfrak {D}; \widehat {h})$.
- (1) For any $(x, t) \in \mathfrak {L} \cap \widehat {\mathfrak {L}}$, we have (5.3)
 $$ \begin{align} \big| \widehat{H}^* (x, t) - H^* (x, t) - \omega (t) \Omega_t (x) \big| = \mathcal{O} \big( |\xi_1|^{3/2} + |\xi_2|^{3/2} \big). \end{align} $$
- (2) Suppose $\big \{ x: (x, t) \in \mathfrak {L} \big \}$ is a union of $k \geq 1$ disjoint open intervals $(x_1, x_1') \cup (x_2, x_2') \cup \cdots \cup (x_k, x_k')$. Then, $\big \{ x : (x, t) \in \widehat {\mathfrak {L}} \big \}$ is also a union of k disjoint open intervals $(\widehat {x}_1, \widehat {x}_1') \cup (\widehat {x}_2, \widehat {x}_2') \cup \cdots \cup (\widehat {x}_k, \widehat {x}_k')$. Moreover, for any index $1\leq j\leq k$, we have (5.4)
 $$ \begin{align} \widehat{x}_j - x_j = \omega (t) \Upsilon_t (x_j) + \mathcal{O} (\xi_1^2 + \xi_2^2); \qquad \widehat{x}_j' - x_j' = \omega (t) \Upsilon_t (x_j') + \mathcal{O} ( \xi_1^2 + \xi_2^2), \end{align} $$
 and $\widehat {H}^* (x, t) = H^* (x, t)$ whenever $(x, t) \in \overline {\mathfrak {D}}$ and $(x, t) \notin \mathfrak {L} \cup \widehat {\mathfrak {L}}$.
- (3) Under the notation of Equation (5.4), fix any endpoint $x \in \bigcup _{i = 1}^k \{ x_i, x_i' \}$; set $\widehat {x} = \widehat {x}_i$ or $\widehat {x} = \widehat {x}_i'$ if $x = x_i$ or $x = x_i'$, respectively. For any real number $\Delta $ with $|\Delta |$ sufficiently small, we have (5.5)
 $$ \begin{align} \widehat{H}^* (\widehat{x} + \Delta) - \widehat{H}^* (\widehat{x}) = H^* (x + \Delta) - H^* (x) + \mathcal{O} \big( (|\xi_1| + |\xi_2|) |\Delta|^{3/2} + \Delta^2 \big). \end{align} $$
- (4) The domain $\mathfrak {D}$ satisfies the five assumptions listed in Assumption 4.1, with respect to $\widehat {H}^*$.
Let us briefly comment on Proposition 5.4. We view $\omega (t)$ as quantifying how ‘tilted’ $\widehat {H}^*$ is with respect to $H^*$ along a fixed horizontal slice $t \in \{ \mathfrak {t}_1, \mathfrak {s}, \mathfrak {t}_2 \}$. In particular, $\xi _1$ and $\xi _2$ parameterize this tiltedness along the south and north boundaries of $\mathfrak {D}$, respectively, and Equation (5.2) implies that this tiltedness linearly interpolates between these two boundaries. The first part of Proposition 5.4 quantifies how the height function in the liquid region tilts in terms of $\Omega _s$, and the second part quantifies how the edges of the liquid region tilt in terms of $\Upsilon _s$. The tilted function $\widehat {H}^*$ will eventually be obtained by perturbing solutions of the complex Burgers equation (3.2), and these functions $\Omega _s$ and $\Upsilon _s$ can be viewed as derivatives arising from this procedure; see Lemma 7.2 and Lemma 7.3 below. The third part of Proposition 5.4 states that the gradient around the edges does not change too much under the tilting. The fourth part verifies properties of $\widehat {H}^*$ that will enable us to later apply Theorem 4.3.
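To make the leading-order content of Equations (5.3) and (5.4) concrete, here is a purely illustrative sketch (not from the paper; the callables for $H^*$, the complex slope and the edge point are hypothetical placeholders): to first order, the tilted profile is $H^* + \omega (t) \Omega _t$ and a tilted edge point is $x_j + \omega (t) \Upsilon _t (x_j)$.

```python
import math

def omega(t, t1, t2, xi1, xi2):
    """Linear interpolation of Equation (5.2): omega(t1) = xi1 and omega(t2) = xi2."""
    return xi2 * (t - t1) / (t2 - t1) + xi1 * (t2 - t) / (t2 - t1)

def tilted_height_first_order(H_star, f, x, t, t1, t2, xi1, xi2):
    """First-order prediction of Equation (5.3): H_hat ~ H* + omega(t) * Omega_t(x)."""
    w = omega(t, t1, t2, xi1, xi2)
    Omega = (f(x, t) / (f(x, t) + 1)).imag / math.pi
    return H_star(x, t) + w * Omega

def tilted_edge_first_order(x_edge, f, t, t1, t2, xi1, xi2):
    """First-order prediction of Equation (5.4): x_hat_j ~ x_j + omega(t) * Upsilon_t(x_j)."""
    w = omega(t, t1, t2, xi1, xi2)
    Upsilon = (f(x_edge, t) / (f(x_edge, t) + 1) ** 2).real  # real on the arctic curve
    return x_edge + w * Upsilon
```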
Remark 5.5. We will use notation such as $H^*$ and $\widehat H^*$ for deterministic height functions (which will be maximizers of $\mathcal {E}$), and notation such as H and $\widehat H$ for random height functions (which are associated with random tilings).
In view of Equations (5.3) and (5.4), we introduce the following more precise notion of tiltedness. It will be useful to define it through estimates, instead of the close approximations provided by Proposition 5.4.
Definition 5.6. Suppose $\mathfrak {D} \subseteq \mathfrak {P}$; fix real numbers $\xi , \mu , \zeta \geq 0$; and fix an admissible function $H \in \operatorname {\mathrm {Adm}} (\mathfrak {D})$. For any $(x, s) \in \overline {\mathfrak {D}}$, we say that H is $(\xi; \mu )$-tilted with respect to $H^*$ at $(x, s)$ if (recalling that $\Omega _s (x) \leq 0$ by Remark 5.1) we have
 $$ \begin{align*} \big| H (x, s) - H^* (x, s) \big| \leq \mu - \xi \Omega_s (x). \end{align*} $$
 We also say that the edge of H is $\zeta $-tilted with respect to $H^*$ at level s if the following two conditions hold. For $u = (x, s) \in \overline {\mathfrak {D}}$, we let $(x_0, s) \in \mathfrak {A}(\mathfrak {P})$ denote any point on $\mathfrak {A}(\mathfrak {P})$ with $|x - x_0|$ minimal so that $\partial _x H^* (x_0, s) \in \{ 0, 1 \}$ (that is, $H^*$ is frozen at $(x_0, s)$).
- (1) Fix any $u = (x, s) \notin \mathfrak {L}$ with $|x_0 - x| \geq \zeta \big | \Upsilon _s (x_0) \big |$. We have $H (u) = H^* (u)$.
- (2) Fix any $u = (x, s) \in \overline {\mathfrak {D}}$ with $|x_0 - x| \leq \zeta ^{8/9}$. If $\partial _x H^* (x_0, s) = 0$ (so $\Upsilon _s (x_0) > 0$ by Remark 5.1), then
 $$ \begin{align*} H^* \big(x - \zeta \Upsilon_s (x_0), s \big)\leq H (x, s) \leq H^* \big( x + \zeta \Upsilon_s (x_0), s \big). \end{align*} $$
 If instead $\partial _x H^* (x_0, s) = 1$ (so $\Upsilon _s (x_0) < 0$ by Remark 5.1), then
 $$ \begin{align*} H^* \big(x - \zeta \Upsilon_s (x_0), s \big) + \zeta \Upsilon_s (x_0) \leq H (x, s) \leq H^* \big( x + \zeta \Upsilon_s (x_0),s \big) - \zeta \Upsilon_s (x_0). \end{align*} $$
We sometimes refer to the former notion described in Definition 5.6 as a ‘bulk’ form of tiltedness, and the latter as an ‘edge’ form. The bulk form imposes a bound on $|H - H^*|$ of a similar form to Equation (5.3) in Proposition 5.4. The edge form constitutes two parts. The first is an estimate for the edge points of H, of a similar form to Equation (5.4); the second bounds $|H - H^*|$ near these edge points (this is eventually related to (5.5)). We will often view the tiltedness parameters $\xi , \mu , \zeta $ as small (decaying as a negative power of n), even though this was not needed to formulate Definition 5.6.
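For orientation, checking the ‘bulk’ condition of Definition 5.6 at a finite set of points amounts to the following minimal sketch (hypothetical helper names, not from the paper): one verifies $|H - H^*| \leq \mu - \xi \Omega _s (x)$ pointwise, recalling that $\Omega _s (x) \leq 0$, so the allowed deviation is always at least $\mu $.

```python
def is_bulk_tilted(H, H_star, Omega, points, xi, mu):
    """Check the (xi; mu)-tilted condition of Definition 5.6 at the given points (x, s).

    H, H_star and Omega are assumed callables standing in for the height function,
    the limit shape and the parameter from Equation (5.1), respectively.
    """
    for (x, s) in points:
        # Omega(x, s) <= 0, so the bound mu - xi * Omega(x, s) is at least mu.
        if abs(H(x, s) - H_star(x, s)) > mu - xi * Omega(x, s):
            return False
    return True
```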
To proceed, for any real number $\delta> 0$, we require the ‘reduced’ version of the liquid region inside $\mathfrak D$:
 $$ \begin{align} \mathfrak{L}_-^{\delta} = \big\{ u \in \mathfrak{L} : \operatorname{\mathrm{dist}} ( u, \mathfrak{A}(\mathfrak{P}))> n^{\delta - 2/3} \big\}. \end{align} $$
Given this notation, we will state two results concerning the tiltedness of a random height function on $\mathsf {D}$ along a middle horizontal slice, given its tiltedness on the north and south boundaries of $\mathsf {D}$. Let us introduce the following notation and assumption to set this context.
Assumption 5.7. Adopt Assumption 5.2; fix $\varepsilon , \varsigma , \delta \in ( 0, 1/50)$, and suppose that $\mathfrak {s} \in [ \mathfrak {t}_1 + \varepsilon \mathfrak {t}, \mathfrak {t}_2 - \varepsilon \mathfrak {t}]$. Let $\widetilde {\mathsf {h}}: \partial \mathsf {D} \rightarrow \mathbb {Z}$ denote a boundary height function that is constant along the east and west boundaries of $\mathsf {D}$; if $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$ is packed with respect to h, then we further assume that $\widetilde {\mathsf {h}} = \mathsf {h}$ along $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$. Let $\widetilde {\mathsf {H}} : \mathsf {D} \rightarrow \mathbb {Z}$ denote a uniformly random element of $\mathscr {G} (\widetilde {\mathsf {h}})$, and define $\widetilde {H} \in \operatorname {\mathrm {Adm}}(\mathfrak {D})$ by setting $\widetilde {H} (u) = n^{-1} \widetilde {\mathsf {H}} (nu)$ for each $u \in \overline {\mathfrak {D}}$. Further, let $\xi _1, \xi _2, \zeta _1, \zeta _2, \mu \in [0, n^{\delta - 2/3}]$ be real numbers satisfying the inequalities $\min \big \{ \xi _1, \xi _2, |\xi _1 - \xi _2| \big \} \geq \varsigma (\xi _1 + \xi _2)$ and $\min \big \{ \zeta _1, \zeta _2, |\zeta _1 - \zeta _2| \big \} \geq \varsigma (\zeta _1 + \zeta _2)$. Assume the following two statements hold for each $j \in \{ 1, 2 \}$.
- (1) At each $(x, \mathfrak {t}_j) \in \mathfrak {L}_-^{\delta }$, we have that $\widetilde {H}$ is $(\xi _j; \mu )$-tilted with respect to $H^*$.
- (2) The edge of $\widetilde {H}$ is $\zeta _j$-tilted with respect to $H^*$ at level $\mathfrak {t}_j$.
Observe that the latter two points in the above assumption are more constraints on the deterministic boundary data $\widetilde {\mathsf {h}}$ (equivalently, $\widetilde {h}$) than on the random height function $\widetilde {H}$. Indeed, the restriction of $\widetilde {H}$ to levels $\mathfrak {t}_1$ and $\mathfrak {t}_2$ is fully determined by $\widetilde {h}$ since these levels constitute the south and north boundaries of $\mathfrak {D}$, respectively.
Now, we state the following two results, to be established in Section 5.2 and Section 5.3 below. Qualitatively, they both state that the tiltedness of $\widetilde {H}$ along the intermediate horizontal slice $t = \mathfrak {s}$ lies between its tiltedness along the top and bottom boundaries of $\mathfrak {D}$. The two statements differ in that Proposition 5.8 addresses both the bulk and edge forms of tiltedness but imposes that its tiltedness parameters $\zeta _1, \zeta _2 \gg n^{-2/3}$; Proposition 5.9 only addresses the bulk form of tiltedness but allows for smaller tiltedness parameters $\xi _1, \xi _2 \gg n^{-1}$.
Proposition 5.8. Adopt Assumption 5.7, and set
 $$ \begin{align} \zeta = \displaystyle\max \bigg\{ \displaystyle\frac{\varepsilon}{2} \zeta_1 + \Big( 1 - \displaystyle\frac{\varepsilon}{2} \Big) \zeta_2, \Big( 1 - \displaystyle\frac{\varepsilon}{2} \Big) \zeta_1 + \displaystyle\frac{\varepsilon}{2} \zeta_2 \bigg\}. \end{align} $$
 Assume that $\mu = 0$, and that $\xi _j \leq \zeta _j \leq n^{\delta / 2 - 2/3}$ and $\zeta _j \geq n^{\delta / 100 - 2/3}$ for each $j \in \{ 1, 2 \}$. Then, the following two statements hold with overwhelming probability.
- (1) At each $(x, \mathfrak {s}) \in \mathfrak {L}_-^{\delta }$, we have that $\widetilde {H}$ is $(\zeta; 0)$-tilted with respect to $H^*$.
- (2) The edge of $\widetilde {H}$ is $\zeta $-tilted with respect to $H^*$ at level $\mathfrak {s}$.
Proposition 5.9. Adopt Assumption 5.7, and set
 $$ \begin{align*} \xi = \displaystyle\max \bigg\{ \displaystyle\frac{\varepsilon}{2} \xi_1 + \Big( 1 - \displaystyle\frac{\varepsilon}{2} \Big) \xi_2, \Big( 1 - \displaystyle\frac{\varepsilon}{2} \Big) \xi_1 + \displaystyle\frac{\varepsilon}{2} \xi_2 \bigg\}. \end{align*} $$
 Assume that $\xi _j \leq n^{- 2/3}$ and $\zeta _j \leq n^{\delta / 50 - 2/3}$ for each $j \in \{ 1, 2 \}$. Further, fix $U_0 = (X_0, \mathfrak {s}) \in \mathfrak {L}_-^{\delta }$, and assume that $\xi _j \geq n^{\delta / 4 - 1} \operatorname {\mathrm {dist}} (U_0, \mathfrak {A}(\mathfrak {P}))^{-1/2}$ for each $j \in \{ 1, 2 \}$. Then, $\widetilde {H}$ is $(\xi; \mu )$-tilted with respect to $H^*$ at $U_0$, with overwhelming probability.
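For orientation, since $\varepsilon < 1/2$, the maximum in Equation (5.7) (and in the analogous definition of $\xi $ above) is attained by the convex combination weighted toward the larger boundary parameter: for instance, if $\zeta _1 \geq \zeta _2$, then $\zeta = \big ( 1 - \frac {\varepsilon }{2} \big ) \zeta _1 + \frac {\varepsilon }{2} \zeta _2 \in [\zeta _2, \zeta _1]$. In this sense, the tiltedness parameter guaranteed at the intermediate level $\mathfrak {s}$ is only slightly smaller than the larger of the two boundary parameters.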
5.2 Proof of Proposition 5.8
In this section, we establish Proposition 5.8. Throughout this section, we adopt the notation of that proposition. For any $u = (x, \mathfrak {s}) \in \mathfrak {D}$, with $(x_0, \mathfrak {s}) \in \mathfrak {A}(\mathfrak {P})$ denoting a point with $|x - x_0|$ minimal, it suffices to show with overwhelming probability that
 $$ \begin{align} H^* (u) + \zeta \Omega_{\mathfrak{s}} (x) \leq \widetilde{H} (u) \leq H^* (u) - \zeta \Omega_{\mathfrak{s}} (x), \qquad & \text{if}\ u \in \mathfrak{L}_-^{\delta}; \nonumber\\ \widetilde{H} (u) = H^* (u), \qquad & \text{if}\ u \notin \mathfrak{L}\ \text{and}\ |x - x_0| \geq \zeta \big| \Upsilon_{\mathfrak{s}} (x_0) \big|, \end{align} $$
and, if $|x - x_0| \leq \zeta ^{8/9}$, that
 $$ \begin{align} H^* \big( x - \zeta \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big) \leq \widetilde{H} (u) \leq H^* \big( x + \zeta \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big), \qquad \text{if}\ \partial_x H^* (x_0, \mathfrak{s}) = 0; \nonumber\\ H^* \big( x - \zeta \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big) + \zeta \Upsilon_{\mathfrak{s}} (x_0) \leq \widetilde{H} (u) \leq H^* \big( x + \zeta \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big) - \zeta \Upsilon_{\mathfrak{s}} (x_0), \qquad \text{if}\ \partial_x H^* (x_0, \mathfrak{s}) = 1. \end{align} $$
We only establish the upper bounds in Equations (5.8) and (5.9), as the proofs of the lower bounds are entirely analogous. In what follows, we will assume that $\zeta _1 \geq \zeta _2$, as the proof in the complementary case $\zeta _1 < \zeta _2$ is entirely analogous. Let us also fix a small real number $\theta \in ( 0, 1/50)$ (it suffices to take $\theta = 1/100$), and we define slightly larger versions of $\zeta _1, \zeta _2$ by
 $$ \begin{align} \zeta_1' = (1 + \theta \varepsilon) \zeta_1; \qquad \zeta_2' = (1 + \theta \varepsilon) \zeta_2. \end{align} $$
Throughout, we further set $\widetilde {h} = \widetilde {H} |_{\partial \mathfrak {D}}$ from Assumption 5.7.
Before continuing, let us briefly outline how we will proceed. First, we use Proposition 5.4 to obtain a ‘$(-\zeta _1', -\zeta _2')$-tilted’ boundary function $\widehat {h}: \partial \mathfrak {D} \rightarrow \mathbb {R}$, with associated maximizer $\widehat {H}^* \in \operatorname {\mathrm {Adm}} (\mathfrak {D}; \widehat {h})$ of $\mathcal {E}$; properties of this tilting from Proposition 5.4 will imply $\widehat {h} \geq \widetilde {h}$. Next, we consider a tiling of $\mathsf {D}$ whose (scaled) boundary height function is given by $\widehat {h}$. Applying Theorem 4.3, we will deduce that the (scaled) height function $\widehat {H}$ associated with this tiling is close to $\widehat {H}^*$. Together with the bound $\widehat {h}\geq \widetilde {h}$ and the monotonicity result Lemma 3.15, this will essentially imply that $\widehat {H}^* \approx \widehat {H} \geq \widetilde {H}$. This, with the fact (implied by Proposition 5.4) that $\widehat {H}^*$ is approximately $\zeta $-tilted with respect to $H^*$, will yield the upper bounds in Equations (5.8) and (5.9).
Now, let us implement this procedure in detail. Apply Proposition 5.4 with the $(\xi _1, \xi _2)$ there equal to $(-\zeta _1', -\zeta _2')$ here. This yields a function $\widehat {h} : \partial \mathfrak {D} \rightarrow \mathbb {R}$ and its associated maximizer $\widehat {H}^* \in \operatorname {\mathrm {Adm}} (\mathfrak {D}; \widehat {h})$ of $\mathcal {E}$ satisfying the four properties listed there. Define the discretization $\widehat {\mathsf {h}} : \partial \mathsf {D} \rightarrow \mathbb {Z}$ of $\widehat {h}$ by setting $\widehat {\mathsf {h}} (nv) = \big \lfloor n \widehat {h} (v) \big \rfloor $ for each $v \in \partial \mathfrak {D}$.
Lemma 5.11. For each $\mathsf {v} \in \partial \mathsf {D}$, we have that $\widetilde {\mathsf {h}} (\mathsf {v}) \leq \widehat {\mathsf {h}} (\mathsf {v})$.
Proof. Let us first verify the lemma when $v = n^{-1} \mathsf {v} \in \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$. In this case, Assumption 5.2 implies that the four corners $\big \{ \big ( \mathfrak {a} (\mathfrak {t}_1), \mathfrak {t}_1 \big ), \big ( \mathfrak {b} (\mathfrak {t}_1), \mathfrak {t}_1 \big ), \big ( \mathfrak {a} (\mathfrak {t}_2), \mathfrak {t}_2 \big ), \big ( \mathfrak {b} (\mathfrak {t}_2), \mathfrak {t}_2 \big ) \big \}$ of $\mathfrak {D}$ are outside of $\overline {\mathfrak {L}}$; they are thus bounded away from $\overline {\mathfrak {L}}$ (recall we view $\mathfrak {D}$ and $\mathfrak {P}$ as fixed with respect to n). Since $\zeta _1, \zeta _2 \leq n^{\delta - 2/ 3}$, the edge of $\widetilde {H}$ is $n^{\delta - 2/3} \ll 1$ tilted with respect to $H^*$ at levels $\mathfrak {t}_1$ and $\mathfrak {t}_2$. Hence, $\widetilde {H} (v) = H^* (v)$ at these four corners, so $\widetilde {\mathsf {h}} (nv) = \widetilde {\mathsf {H}} (nv) = n H^* (v) = n h (v) = \mathsf {h} (n v)$ there. The second part of Proposition 5.4, together with the fact that $\zeta _j \Upsilon _{\mathfrak {t}_j} (x) + \mathcal {O} (\zeta _1^2) = \mathcal {O} (n^{\delta -2 / 3})$, implies that the edge of $\widehat {H}^*$ is also $n^{\delta - 2/3} \ll 1$ tilted with respect to $H^*$. So, similar reasoning gives $\widehat {h} (v) = h(v)$ at these four corners, yielding $\widehat {\mathsf {h}} (nv) = \mathsf {h} (nv) = \widetilde {\mathsf {h}} (nv)$ there. Since $\widetilde {\mathsf {h}}$ is constant along the east and west boundaries of $\mathsf {D}$, it follows that $\widehat {\mathsf {h}} (\mathsf {v}) = \widetilde {\mathsf {h}} (\mathsf {v})$ there.
 We next verify $\widetilde {\mathsf {h}} (\mathsf {v}) \leq \widehat {\mathsf {h}} (\mathsf {v})$ when $v = n^{-1} \mathsf {v} \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {D}) \cup \partial _{\operatorname {\mathrm {so}}} (\mathfrak {D})$. Set $v = (x, \mathfrak {t}_j)$, and let $v_0 = (x_0, \mathfrak {t}_j) \in \partial \mathfrak {A}(\mathfrak {P})$ denote a point with $|x - x_0|$ minimal.
 First, suppose $|x - x_0| \geq (\zeta _j')^{8/ 9}$ and $v \notin \overline {\mathfrak {L}}$. Then, by Remark 5.3 and the bound $|\zeta _j| = \mathcal {O} (n^{\delta - 2/3})$, we have $|x - x_0| \geq \zeta _j' \big | \Upsilon _{\mathfrak {t}_j} (x_0) \big |$ for sufficiently large n. Since the edges of $\widetilde {H}$ and $\widehat {H}^*$ are $\zeta _j'$-tilted with respect to $H^*$ at level $\mathfrak {t}_j$ (as $\zeta _j \leq \zeta _j'$), we have $\widetilde {h} (v) = h(v) = \widehat {h} (v)$, so $\widehat {\mathsf {h}} (\mathsf {v}) = \widetilde {\mathsf {h}} (\mathsf {v})$.
 Next, suppose that $|x - x_0| \leq (\zeta _j')^{8/9}$. Then, $v_0 = (x_0, \mathfrak {t}_j)$ is either a left or right endpoint of $\mathfrak {A}(\mathfrak {P})$, and $\partial _x H^* (v_0) \in \{ 0, 1 \}$. We assume in what follows that $v_0$ is a right endpoint of $\mathfrak {A}(\mathfrak {P})$ and that $\partial _x H^* (v_0) = 0$, as the alternative cases are entirely analogous. Define $\widehat {x}_0 \in \mathbb {R}$ so that $\widehat {v}_0 = (\widehat {x}_0, \mathfrak {t}_j)$ is an endpoint of $ \mathfrak {L}_{\operatorname {\mathrm {so}}} (\mathfrak {D}; \widehat {h})$ or $ \mathfrak {L}_{\operatorname {\mathrm {no}}} (\mathfrak {D}; \widehat {h})$ (depending on whether $j = 1$ or $j = 2$, respectively) with $|\widehat {x}_0 - x_0|$ minimal. By Equation (5.4), we have $\widehat {x}_0 - x_0 + \zeta _j' \Upsilon _{\mathfrak {t}_j} (x_0) = \mathcal {O} (\zeta _1^2 + \zeta _2^2)$ (recall that the $\xi _j$ there is $-\zeta _j'$ here). In particular, since $\Upsilon _{\mathfrak {t}_j} (x_0)$ is positive (by Remark 5.1) and bounded away from $0$ (by Remark 5.3), we have $\widehat {x}_0 \leq x_0$ and $x_0 - \widehat {x}_0 = \mathcal {O} (\zeta _1 + \zeta _2)$.
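 To spell out the first of these two conclusions (a brief verification), rearranging the previous display gives
 $$ \begin{align*} x_0 - \widehat{x}_0 = \zeta_j' \Upsilon_{\mathfrak{t}_j} (x_0) + \mathcal{O} (\zeta_1^2 + \zeta_2^2) \geq c_0 \zeta_j' + \mathcal{O} (\zeta_1^2 + \zeta_2^2)> 0 \end{align*} $$
 for sufficiently large n, where $c_0> 0$ denotes the lower bound on $\Upsilon _{\mathfrak {t}_j} (x_0)$ from Remark 5.3 (the notation $c_0$ is introduced only for this check) and we used that $\zeta _1^2 + \zeta _2^2 \ll \zeta _j'$, which holds under the assumed bounds on $\zeta _1$ and $\zeta _2$.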
 If $x \geq \widehat {x}_0$, then $v = (x, \mathfrak {t}_j) \notin \widehat {\mathfrak {L}}$ (as $(\widehat {x}_0, \mathfrak {t}_j)$ must be a right endpoint of $\widehat {\mathfrak {L}}$). Since $x_0 \geq \widehat {x}_0$ and $\partial _x \widehat {H}^* (x', \mathfrak {t}_j) = \partial _x H^* (x_0, \mathfrak {t}_j) = 0$ for $x'> \widehat {x}_0$, this implies $\widehat {H}^* (x, \mathfrak {t}_j) = \widehat {H}^* (x_0, \mathfrak {t}_j) = H^* (x_0, \mathfrak {t}_j)$ (where the last equality follows from Equation (5.4) of Proposition 5.4, since $(x_0, \mathfrak {t}_j) \notin \mathfrak {L} \cup \widehat {\mathfrak {L}}$). Hence, $\widehat {h} (v) = h(x_0, \mathfrak {t}_j)$. Since the edge of $\widetilde {H}$ is $\zeta _j$-tilted with respect to $H^*$ and $\zeta _j = \mathrm {o}(n^{-1/2})$, we also have $\widetilde {h} (v) \leq \widetilde {h} (x + n^{-1/2}, \mathfrak {t}_j) = H^* (x + n^{-1/2}, \mathfrak {t}_j) = H^* (x_0, \mathfrak {t}_j) = h (x_0, \mathfrak {t}_j)$. Thus $\widetilde {h} (v) \leq h(v) = \widehat {h} (v)$, meaning $\widehat {\mathsf {h}} (\mathsf {v}) \geq \widetilde {\mathsf {h}} (\mathsf {v})$ if $x \geq \widehat {x}_0$.
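 Regarding the equality $H^* (x + n^{-1/2}, \mathfrak {t}_j) = H^* (x_0, \mathfrak {t}_j)$ in the last chain, note (as a brief check, using the bound $x_0 - \widehat {x}_0 = \mathcal {O} (\zeta _1 + \zeta _2)$ obtained above) that
 $$ \begin{align*} x + n^{-1/2} \geq \widehat{x}_0 + n^{-1/2} \geq x_0 \end{align*} $$
 for sufficiently large n, since $x_0 - \widehat {x}_0 = \mathcal {O} (n^{\delta - 2/3}) \ll n^{-1/2}$; hence $(x + n^{-1/2}, \mathfrak {t}_j)$ lies to the right of $x_0$, where $\partial _x H^*$ vanishes.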
 If instead $x < \widehat {x}_0$, then
 $$ \begin{align} \widehat{H}^* (x, \mathfrak{t}_j) & \geq \widehat{H}^* (\widehat{x}_0, \mathfrak{t}_j) + H^* (x + x_0 - \widehat{x}_0, \mathfrak{t}_j) - H^* (x_0, \mathfrak{t}_j) + \mathcal{O} (\zeta_1^{16 / 9}) \nonumber\\ & = H^* (x + x_0 - \widehat{x}_0, \mathfrak{t}_j) + \mathcal{O} (\zeta_1^{16 / 9}) \geq H^* \big(x + \zeta_j' \Upsilon_{\mathfrak{t}_j} (x_0), \mathfrak{t}_j \big) + \mathcal{O} (\zeta_1^{16 / 9}). \end{align} $$
 Here, the first statement follows from Equation (5.5) and the fact that $|x - \widehat {x}_0| = \mathcal {O} \big ( |x - x_0| + |x_0 - \widehat {x}_0| \big ) = \mathcal {O} (\zeta _1^{8/9})$; the second from the fact that $\widehat {H}^* (\widehat {x}_0, \mathfrak {t}_j) = \widehat {H}^* (x_0, \mathfrak {t}_j) = H^* (x_0, \mathfrak {t}_j)$, which holds by the equality $\partial _x \widehat {H}^* (\widehat {x}_0, \mathfrak {t}_j) = \partial _x H^* (x_0, \mathfrak {t}_j) = 0$ (as $x_0 \geq \widehat {x}_0$) and the second part of Proposition 5.4; and the third from the facts that $H^*$ is $1$-Lipschitz and that $\widehat {x}_0 - x_0 + \zeta _j' \Upsilon _{\mathfrak {t}_j} (x_0) = \mathcal {O} (\zeta _1^2)$ by Equation (5.4).
 Next, observe that $(x, \mathfrak {t}_j) \in \mathfrak {L} \cap \widehat {\mathfrak {L}}$ since $x < \widehat {x}_0 \leq x_0$ and $|x - \widehat {x}_0| = \mathcal {O} (\zeta _1^{8/9})$. By Remark A.2 concerning the square root decay of $\partial _x H^*$ around $\mathfrak {A}(\mathfrak {P})$, and the fact that $\partial _x H^* (x_0, \mathfrak {t}_j) = 0$, we therefore deduce the existence of a constant $c = c(\mathfrak {P}, \mathfrak {D}, \theta )> 0$ such that
 $$ \begin{align} H^* \big(x + \zeta_j' \Upsilon_{\mathfrak{t}_j} (x_0), \mathfrak{t}_j \big) \geq H^* \big( x_0 + \zeta_j \Upsilon_{\mathfrak{t}_j} (x_0), \mathfrak{t}_j \big) + c \zeta_j^{3/2}, \end{align} $$
where we have used the fact (5.10) that $\zeta _j' - \zeta _j = \theta \zeta _j$ (as well as the fact that $\Upsilon _{\mathfrak {t}_j} (x_0)$ is bounded away from $0$, from Remark 5.3). Inserting Equation (5.12) into Equation (5.11) yields
 $$ \begin{align*} \widehat{H}^* (x, \mathfrak{t}_j) & \geq H^* \big( x + \zeta_j \Upsilon_{\mathfrak{t}_j} (x_0), \mathfrak{t}_j \big) + c \zeta_j^{3/2} + \mathcal{O} (\zeta_1^{16 / 9}) \geq \widetilde{H} (x, \mathfrak{t}_j) + c \zeta_j^{3/2} + \mathcal{O} (n^{-1}) \geq \widetilde{H} (x, \mathfrak{t}_j), \end{align*} $$
where the second inequality holds since $|\zeta _1| \leq n^{\delta - 2/3}$, and the third follows from the facts that the edge of $\widetilde {H}$ is $\zeta _j$-tilted and that $\zeta _j \leq n^{\delta / 100 - 2/3}$. Hence, $\widehat {h} (v) \geq \widetilde {h} (v)$, meaning $\widehat {\mathsf {h}} (\mathsf {v}) \geq \widetilde {\mathsf {h}} (\mathsf {v})$.
 It thus remains to consider the case when $|x - x_0| \geq (\zeta _j')^{8/9}$ and $v = (x, \mathfrak {t}_j) \in \mathfrak {L}$. In particular, $|x - x_0| \geq \zeta _j' \Upsilon _{\mathfrak {t}_j} (x_0) + \mathcal {O} (\zeta _j^2)$, which implies by Equation (5.4) that $v = (x, \mathfrak {t}_j) \in \mathfrak {L} \cap \widehat {\mathfrak {L}}$. Thus, Equation (5.3) yields
 $$ \begin{align*} \widehat{H}^* (v) & \geq H^* (v) - \zeta_j' \Omega_{\mathfrak{t}_j} (x) + \mathcal{O} \big( (\zeta_1' + \zeta_2')^{3/2} \big) \geq \widetilde{H} (v) - \theta \zeta_j \Omega_{\mathfrak{t}_j} (x) + \mathcal{O} (\zeta_1^{3/2}), \end{align*} $$
where to deduce the last inequality we used the facts that $\zeta _j' = (1 + \theta ) \zeta _j$ and that $\widetilde {H}$ is $(\zeta _j; 0)$-tilted at v (since $v \in \mathfrak {L}_-^{\delta }$, as $|x - x_0| \geq (\zeta _j')^{8 / 9} \geq n^{\delta - 2/3}$ and $\xi _j \leq \zeta _j$). By Remark 5.3, there exists a constant $c = c(\mathfrak {P})> 0$ such that $-\Omega _{\mathfrak {t}_j} (x) \geq c |x - x_0|^{1/2}$. In particular, since $|x - x_0| \geq (\zeta _j')^{8 / 9} \geq n^{\delta } \zeta _1$ (as $\zeta _j \leq n^{\delta / 2 - 2 / 3}$ and $\delta < 1/50$), we deduce that $-\Omega _{\mathfrak {t}_j} (x) \geq n^{\delta / 3} \zeta _1^{1/2}$. So,
 $$ \begin{align*} \widehat{H}^* (v) \geq \widetilde{H} (v) + c \theta \zeta_j |x - x_0|^{1/2} + \mathcal{O} (\zeta_1^{3/2}) \geq \widetilde{H} (v), \end{align*} $$
which once again implies that $\widehat {h} (v) \geq \widetilde {h} (v)$ so that $\widehat {\mathsf {h}} (\mathsf {v}) \geq \widetilde {\mathsf {h}} (\mathsf {v})$. This verifies the lemma in all cases.
Given this lemma, we can establish Proposition 5.8.
Proof of Proposition 5.8.
 Let $\widehat {\mathsf {H}}: \mathsf {D} \rightarrow \mathbb {Z}$ denote a uniformly random element of $\mathscr {G} (\widehat {\mathsf {h}})$. By Lemma 5.11 and Lemma 3.15 (alternatively, Remark 3.16), we may couple $\widehat {\mathsf {H}}$ with $\widetilde {\mathsf {H}}$ such that $\widehat {\mathsf {H}}(\mathsf {u}) \geq \widetilde {\mathsf {H}} (\mathsf {u})$, for each $\mathsf {u} \in \mathsf {D}$. In particular, defining $\widehat {H} : \overline {\mathfrak {D}} \rightarrow \mathbb {R}$ by $\widehat {H} (u) = n^{-1} \widehat {\mathsf {H}} (nu)$ for each $u \in \overline {\mathfrak {D}}$, we have $\widehat {H} (u) \geq \widetilde {H} (u)$.
 Apply Theorem 4.3, with the $(h; H^*; \mathsf {H}; \delta )$ there equal to $\big ( \widehat {h}; \widehat {H}^*; \widehat {\mathsf {H}}; \delta /500 \big )$ here. By the fourth part of Proposition 5.4, Assumption 4.1 applies. Moreover, Assumption 4.2 applies since $\widehat {\mathsf {h}} (nv) = \big \lfloor n \widehat h(v) \big \rfloor $ for each $v \in \partial \mathfrak {D}$. Then, letting $\mathscr {E}$ denote the event on which
 $$ \begin{align} \big| \widehat{H} (u) - \widehat{H}^* (u) \big| < n^{\delta / 500 - 1}, \qquad & \text{for each}\ u \in \overline{\mathfrak{D}}; \nonumber\\ \widehat{H} (u) = \widehat{H}^* (u), \qquad & \text{if}\ u \notin \widehat{\mathfrak{L}}\ \text{and}\ \operatorname{\mathrm{dist}} (u, \partial \widehat{\mathfrak{L}}) \geq n^{\delta / 500 - 2/3}, \end{align} $$
 Theorem 4.3 implies that $\mathscr {E}$ holds with overwhelming probability. In what follows, let us fix $u = (x, \mathfrak {s}) \in \overline {\mathfrak {D}}$, and let $u_0 = (x_0, \mathfrak {s}) \in \mathfrak {A}(\mathfrak {P})$ denote a point with $|x - x_0|$ minimal. We will show that the upper bounds in Equations (5.8) and (5.9) hold on $\mathscr {E}$.
 First, assume that $u \in \mathfrak {L}_-^{\delta }$, in which case $u \in \mathfrak {L}$ and $|x - x_0| \geq n^{\delta - 2/3} \geq (\zeta _1 + \zeta _2) \big | \Upsilon _{\mathfrak {s}} (x) \big |$. Hence, (5.4) implies that $u \in \widehat {\mathfrak {L}}$, and so Equation (5.3) applies and gives
 $$ \begin{align} \widehat{H} (x, \mathfrak{s}) \leq \widehat{H}^* (x, \mathfrak{s}) + n^{\delta / 500 - 1} \leq H^* (x, \mathfrak{s}) - \omega (\mathfrak{s}) \Omega_{\mathfrak{s}} (x) + n^{\delta / 500 - 1} + \mathcal{O} (\zeta_1^{3/2}), \end{align} $$
where $\omega : [\mathfrak {t}_1, \mathfrak {t}_2] \rightarrow \mathbb {R}$ is given by
 $$ \begin{align*} \omega (t) = \zeta_2' \displaystyle\frac{t - \mathfrak{t}_1}{\mathfrak{t}_2 - \mathfrak{t}_1} + \zeta_1' \displaystyle\frac{\mathfrak{t}_2 - t}{\mathfrak{t}_2 - \mathfrak{t}_1}. \end{align*} $$
 In particular, since $(1 - \varepsilon ) \mathfrak {t}_1 + \varepsilon \mathfrak {t}_2 = \mathfrak {t}_1 + \varepsilon \mathfrak {t} \leq \mathfrak {s} \leq \mathfrak {t}_2 - \varepsilon \mathfrak {t} = \varepsilon \mathfrak {t}_1 + (1 - \varepsilon ) \mathfrak {t}_2$, we have (recalling the definition (5.7) of $\zeta $, as well as the bounds $\zeta _1 \geq \zeta _2$, $\zeta _1 - \zeta _2 \geq \varsigma (\zeta _1 + \zeta _2)$ and $\theta < 1/50$) that
 $$ \begin{align} \omega (\mathfrak{s}) \leq (1 - \varepsilon) \zeta_1' + \varepsilon \zeta_2' \leq (1 - 3 \theta \varepsilon) \zeta. \end{align} $$
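 To spell out the first inequality in Equation (5.15) (the second also requires the definition (5.7) of $\zeta $, which we do not reproduce here): since $\zeta _1' \geq \zeta _2'$ (as $\zeta _1 \geq \zeta _2$), the linear function $\omega $ is nonincreasing on $[\mathfrak {t}_1, \mathfrak {t}_2]$, so
 $$ \begin{align*} \omega (\mathfrak{s}) \leq \omega \big( \mathfrak{t}_1 + \varepsilon (\mathfrak{t}_2 - \mathfrak{t}_1) \big) = (1 - \varepsilon) \zeta_1' + \varepsilon \zeta_2'. \end{align*} $$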
 Together, Equations (5.14) and (5.15) (with the fact that $n^{\delta / 500 - 1} \leq \zeta _1^{3/2}$) yield on $\mathscr {E}$ that
 $$ \begin{align*} \widetilde{H} (u) \leq \widehat{H} (x, \mathfrak{s}) \leq H^* (x, \mathfrak{s}) - (1 - 3 \theta \varepsilon) \zeta \Omega_{\mathfrak{s}} (x) + \mathcal{O} (\zeta_1^{3/2}) \leq H^* (x, \mathfrak{s}) - \zeta \Omega_{\mathfrak{s}} (x) = H^* (u) - \zeta \Omega_{\mathfrak{s}} (x), \end{align*} $$
where the third inequality follows from the fact that Remark 5.3 implies $-\Omega _{\mathfrak {s}} (x) \geq |x - x_0|^{1/2}$ and $|x - x_0| \geq n^{\delta - 2/3} \geq n^{\delta / 4} (\zeta _1 + \zeta _2)$ (as $\zeta _1, \zeta _2 \leq n^{\delta / 2 - 2/3}$). This verifies the upper bound in the first statement of (5.8).
 To verify the second, assume that $u \notin \mathfrak {L}$ and $|x - x_0| \geq \zeta \big | \Upsilon _{\mathfrak {s}} (x) \big |$. Since $\zeta _1, \zeta _2 \geq n^{\delta / 100 - 2/3}$, we deduce $\operatorname {\mathrm {dist}} (u, \mathfrak {A}(\mathfrak {P})) \gg n^{\delta / 500 - 2/3}$; by Equation (5.13) this implies $\widehat {H} (u) = \widehat {H}^* (u)$ on the event $\mathscr {E}$. Additionally, the second part of Proposition 5.4 implies that $\widehat {H}^* (u) = H^* (u)$ if $|x - x_0| \geq \omega (\mathfrak {s}) \big | \Upsilon _{\mathfrak {s}} (x_0) \big | + \mathcal {O} (\zeta _1^2)$. Since Equation (5.15) yields $\omega (\mathfrak {s}) \leq (1 - 3 \theta \varepsilon ) \zeta $, we obtain $|x - x_0| \geq \zeta \big | \Upsilon _{\mathfrak {s}} (x_0) \big | \geq \omega (\mathfrak {s}) \big | \Upsilon _{\mathfrak {s}} (x_0) \big | + \mathcal {O} (\zeta _1^2)$, and so this condition is satisfied. Hence, on $\mathscr {E}$, we have $\widetilde {H} (u) \leq \widehat {H} (u) = H^* (u)$, and so the upper bound in the second statement of (5.8) holds.
 It thus remains to assume $|x - x_0| \leq \zeta ^{8 / 9}$ and verify that the upper bound in Equation (5.9) holds on $\mathscr {E}$. To that end, we assume in what follows that $(x_0, \mathfrak {s})$ is a right endpoint of $\mathfrak {A}(\mathfrak {P})$ and that $\partial _x H^* (x_0, \mathfrak {s}) = 0$, as the proofs in all other cases are entirely analogous. Then, let $(\widehat {x}_0, \mathfrak {s}) \in \partial \widehat {\mathfrak {L}}$ denote the point such that $|x - \widehat {x}_0|$ is minimal. By Equation (5.4), we have $x_0 - \widehat {x}_0 = \omega (\mathfrak {s}) \Upsilon _{\mathfrak {s}} (x_0) + \mathcal {O} (\zeta _1^2)$. In particular, since $\Upsilon _{\mathfrak {s}} (x_0)> 0$ (by Remark 5.3), it follows that $x_0 \geq \widehat {x}_0$ and $x_0 - \widehat {x}_0 = \mathcal {O} (\zeta )$.
 Let us first assume that $x \geq \widehat {x}_0 - \theta \varepsilon \zeta \Upsilon _{\mathfrak {s}} (x_0)$. Then, since $\partial _x \widehat {H}^* (x', \mathfrak {s}) = \partial _x H^* (x_0, \mathfrak {s})$ for $x' \geq \widehat {x}_0$, we have
 $$ \begin{align} \widehat{H}^* (x, \mathfrak{s}) \leq \widehat{H}^* (\widehat{x}_0, \mathfrak{s}) \leq \widehat{H}^* (x_0, \mathfrak{s}) = H^* (x_0, \mathfrak{s}) \leq H^* \big( x + \zeta \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big). \end{align} $$
 Here, the first inequality follows from the fact that either $x \leq \widehat {x}_0$ (in which case $\widehat {H}^* (x, \mathfrak {s}) \leq \widehat {H}^* (\widehat {x}_0, \mathfrak {s})$) or $x> \widehat {x}_0$ (in which case $\widehat {H}^* (x, \mathfrak {s}) = \widehat {H}^* (\widehat {x}_0, \mathfrak {s})$); the second inequality holds since $x_0 \geq \widehat {x}_0$. The equality follows from the second statement of Equation (5.4), and the last inequality follows from the fact that
 $$ \begin{align*} x + \zeta \Upsilon_{\mathfrak{s}} (x_0) \geq \widehat{x}_0 + (1 - \theta \varepsilon) \zeta \Upsilon_{\mathfrak{s}} (x_0) \geq \widehat{x}_0 + \omega (\mathfrak{s}) \Upsilon_{\mathfrak{s}} (x_0) + \mathcal{O} (\zeta_1^2) \geq x_0, \end{align*} $$
where we have used Equation (5.4). Additionally, Equation (5.13) implies that $\widehat {H} (x, \mathfrak {s}) = \widehat {H}^* (x, \mathfrak {s})$, since $|x - \widehat {x}_0| \gg \zeta ^{1 + \delta / 500} \gg n^{\delta / 500 - 2/3}$. Together with Equation (5.16), this implies that on $\mathscr {E}$ we have $\widetilde {H} (u) \leq \widehat {H} (x, \mathfrak {s}) = \widehat {H}^* (x, \mathfrak {s}) \leq H^* \big ( x + \zeta \Upsilon _{\mathfrak {s}} (x_0), \mathfrak {s} \big )$, thereby verifying the upper bound in Equation (5.9).
 So, let us instead assume that $x < \widehat {x}_0 - \theta \varepsilon \zeta \Upsilon _{\mathfrak {s}} (x_0)$. Then the bound $x_0 - \widehat {x}_0 = \omega (\mathfrak {s}) \Upsilon _{\mathfrak {s}} (x_0) + \mathcal {O} (\zeta ^2)$, together with Equation (5.5) and the fact that $H^*$ is $1$-Lipschitz, implies
 $$ \begin{align} \widehat{H}^* (u) & \leq \widehat{H}^* (\widehat{x}_0, \mathfrak{s}) + H^* (x + x_0 - \widehat{x}_0, \mathfrak{s}) - H^* (x_0, \mathfrak{s}) + \mathcal{O} (\zeta_1^2) \nonumber\\ & \leq H^* \big( x + \omega (\mathfrak{s}) \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big) + \widehat{H}^* (\widehat{x}_0, \mathfrak{s}) - H^* (x_0, \mathfrak{s}) + \mathcal{O} (\zeta^2) \\ & \nonumber = H^* \big( x + \omega (\mathfrak{s}) \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big) + \mathcal{O} (\zeta^2). \end{align} $$
 Here, to deduce the last equality we used the fact that $\widehat {H}^* (\widehat {x}_0, \mathfrak {s}) = \widehat {H}^* (x_0, \mathfrak {s}) = H^* (x_0, \mathfrak {s})$, which holds since $x_0 \geq \widehat {x}_0$, since $\partial _x \widehat {H}^* (x', \mathfrak {s}) = \partial _x H^* (x_0, \mathfrak {s}) = 0$ for $x' \geq \widehat {x}_0$, and by the second statement of Proposition 5.4. Next, since $\omega (\mathfrak {s}) \leq (1 - 3 \varepsilon \theta ) \zeta $ (by Equation (5.15)), the square root decay of $\partial _x H^*$ around $\mathfrak {A}(\mathfrak {P})$ (see Remark A.2) yields a constant $c = c (\mathfrak {P}, \mathfrak {D}, \varepsilon , \theta )> 0$ such that
 $$ \begin{align*} H^* \big( x + \omega (\mathfrak{s}) \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big) \leq H^* \big( x + \zeta \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big) - c \zeta^{3/2}. \end{align*} $$
 This, together with Equations (5.13) and (5.17), implies on $\mathscr {E}$ that
 $$ \begin{align*} \widetilde{H}(u) \leq \widehat{H} (u) \leq \widehat{H}^* (u) + n^{\delta / 500 - 1} & \leq H^* \big( x + \zeta \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big) - c \zeta^{3/2} + n^{\delta / 500 - 1} + \mathcal{O} (\zeta^2) \\ & \leq H^* \big( x_0 + \zeta \Upsilon_{\mathfrak{s}} (x_0), \mathfrak{s} \big), \end{align*} $$
where for the last bound we used the fact that $\zeta \geq n^{\delta / 100 - 2/3}$. Thus, the upper bound in Equation (5.9) holds on $\mathscr {E}$. As mentioned earlier, the proofs of all lower bounds are entirely analogous and therefore omitted; this establishes the proposition.
5.3 Proof of Proposition 5.9
In this section, we establish Proposition 5.9.
Proof of Proposition 5.9 (Outline).
Since the proof of this proposition is similar to that of Proposition 5.8, we only outline it. It suffices to show that, with overwhelming probability, we have
 $$ \begin{align} H^* (X_0, \mathfrak{s}) + \xi \Omega_{\mathfrak{s}} (X_0) - \mu \leq \widetilde{H}(X_0, \mathfrak{s}) \leq H^* (X_0, \mathfrak{s}) - \xi \Omega_{\mathfrak{s}} (X_0) + \mu. \end{align} $$
 We only establish the upper bound in Equation (5.18), as the proof of the lower bound is entirely analogous. Throughout, we set $\widetilde {h} = \widetilde {H} |_{\partial \mathfrak {D}}$ from Assumption 5.7.
 To do this, we apply Proposition 5.4 with the $(\xi _1, \xi _2)$ there equal to $(-\xi _1, -\xi _2)$ here. This yields a function $\widehat {h} : \partial \mathfrak {D} \rightarrow \mathbb {R}$ and its associated maximizer $\widehat {H}^* \in \operatorname {\mathrm {Adm}} (\mathfrak {D}; \widehat {h})$ of $\mathcal {E}$ satisfying the four properties listed there. Define $\breve {h} : \partial \mathfrak {D} \rightarrow \mathbb {R}$; the associated maximizer $\breve {H}^* \in \operatorname {\mathrm {Adm}} (\mathfrak {D}; \breve {h})$ of $\mathcal {E}$; and the discretization $\breve {\mathsf {h}} : \partial \mathsf {D} \rightarrow \mathbb {Z}$ of $\breve {h}$ by setting
 $$ \begin{align} \breve{h} (v) = \widehat{h} (v) + \mu + n^{\delta / 20 - 1}, \quad \text{so that} \quad \breve{H}^* (u) = \widehat{H}^* (u) + \mu + n^{\delta / 20 - 1}, \quad \text{and} \quad \breve{\mathsf{h}} (nv) = \big\lfloor n \breve{h} (v) \big\rfloor, \end{align} $$
for each $u \in \overline {\mathfrak {D}}$ and $v \in \partial \mathfrak {D}$. We claim that $\widetilde {\mathsf {h}} (\mathsf {v}) \leq \breve {\mathsf {h}} (\mathsf {v})$, for each $\mathsf {v} \in \partial \mathsf {D}$.
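 The middle identity in Equation (5.19) uses that $\mathcal {E}$ depends on its argument only through its gradient (the standard gradient form of the variational functional; we record this brief justification for convenience): for every $H \in \operatorname {\mathrm {Adm}} (\mathfrak {D}; \widehat {h})$,
 $$ \begin{align*} \mathcal{E} \big( H + \mu + n^{\delta / 20 - 1} \big) = \mathcal{E} (H), \end{align*} $$
 so adding the constant $\mu + n^{\delta / 20 - 1}$ to the boundary data $\widehat {h}$ shifts the maximizer $\widehat {H}^*$ by that same constant.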
 The proof that this holds when $v = n^{-1} \mathsf {v} \in \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$ is very similar to that in the proof of Lemma 5.11, so it is omitted. Thus, suppose that $v = (x, \mathfrak {t}_j) \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {D}) \cup \partial _{\operatorname {\mathrm {so}}} (\mathfrak {D})$. If $v \notin \overline {\mathfrak {L}}$ and $\operatorname {\mathrm {dist}} (v, \mathfrak {A}(\mathfrak {P})) \geq \zeta _j^{8/9}$, then the proof is again entirely analogous to that in the proof of Lemma 5.11.
 So, let us first assume that $v = (x, \mathfrak {t}_j) \in \mathfrak {L}_-^{\delta }$. Since $\widetilde {H}$ is $(\xi _j; \mu )$-tilted at v, we have
 $$ \begin{align*} \widetilde{H} (v) \leq H^* (v) - \xi_j \Omega_{\mathfrak{t}_j} (x) + \mu. \end{align*} $$
 This, together with Equations (5.3), (5.19) and the bound $\xi _j \leq n^{-2/3}$, gives
 $$ \begin{align} \breve{H}^* (v) = \widehat{H}^* (v) + \mu + n^{\delta / 20 - 1} & \geq H^* (v) - \xi_j \Omega_{\mathfrak{t}_j} (x) + \mu + n^{\delta / 20 - 1} + \mathcal{O} (\xi_j^{3/2}) \nonumber\\ & \geq \widetilde{H} (v) + n^{\delta / 20 - 1} + \mathcal{O} (\xi_j^{3/2}) \geq \widetilde{H} (v) + n^{- 1}. \end{align} $$
 Hence, $n \breve {h} (v) \geq n \widetilde {h} (v) + 1$, and so $\breve {\mathsf {h}} (\mathsf {v}) \geq \widetilde {\mathsf {h}} (\mathsf {v})$ whenever $v = n^{-1} \mathsf {v} \in \mathfrak {L}_-^{\delta }$.
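 As a quick check of the final inequality in Equation (5.20) (using only the stated bound $\xi _j \leq n^{-2/3}$), we have
 $$ \begin{align*} \xi_j^{3/2} \leq n^{-1} \ll n^{\delta / 20 - 1}, \end{align*} $$
 so the error term $\mathcal {O} (\xi _j^{3/2})$ is absorbed by $n^{\delta / 20 - 1}$ for sufficiently large n, leaving a surplus of at least $n^{-1}$.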
 Thus, assume instead $v \notin \mathfrak {L}_-^{\delta }$ and $\operatorname {\mathrm {dist}} (v, \mathfrak {A}(\mathfrak {P})) \leq \zeta _j^{8/9}$, and let $v_0 = (x_0, \mathfrak {t}_j) \in \mathfrak {A}(\mathfrak {P})$ be such that $|x - x_0|$ is minimal. We suppose that $v_0$ is a right endpoint of $\mathfrak {A}(\mathfrak {P})$ and that $\partial _x H^* (v_0) = 0$, as the alternative cases are entirely analogous; then, $\Upsilon _{\mathfrak {t}_j} (x_0)> 0$ by Remark 5.1. The fact that the edge of $\widetilde {H}$ is $\zeta _j$-tilted with respect to $H^*$ at level $\mathfrak {t}_j$ implies that $\widetilde {H} (x, \mathfrak {t}_j) \leq H^* \big ( x + \zeta _j \Upsilon _{\mathfrak {t}_j} (x_0) , \mathfrak {t}_j \big )$. Applying the square root decay of $\partial _x H^*$ around $\mathfrak {A}(\mathfrak {P})$ from Remark A.2 (and the fact that $\partial _x H^* (x_0, \mathfrak {t}_j) = 0$), we therefore deduce the existence of a constant $c = c (\mathfrak {P}, \mathfrak {D})> 0$ such that
 $$ \begin{align} \widetilde{H} (x, \mathfrak{t}_j) \leq H^* \big( x + \zeta_j \Upsilon_{\mathfrak{t}_j} (x_0) , \mathfrak{t}_j \big) \leq H^* (x, \mathfrak{t}_j) + c \zeta_j^{3/2} \leq H^* (v) + \mathcal{O} (n^{\delta / 30 - 1}). \end{align} $$
 Now, let us compare $\widehat {H}^* (v)$ and $H^* (v)$. To that end, define $\widehat {x}_0 \in \mathbb {R}$ such that $\widehat {v}_0 = (\widehat {x}_0, \mathfrak {t}_j)$ is an endpoint of $\partial _{\operatorname {\mathrm {so}}} (\mathfrak {L})$ or $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {L})$ and $|\widehat {x}_0 - x_0|$ is minimal. By Equation (5.4), we have $\widehat {x}_0 - x_0 = - \xi _j' \Upsilon _{\mathfrak {t}_j} (x_0) + \mathcal {O} (\xi _1^2 + \xi _2^2)$. In particular, $\widehat {x}_0 \leq x_0$ and $x_0 - \widehat {x}_0 = \mathcal {O} (\xi _j)$, the former of which implies that $v_0 = (x_0, \mathfrak {t}_j) \notin \mathfrak {L} \cup \widehat {\mathfrak {L}}$. Hence, the second statement of Proposition 5.4 implies that $\widehat {H}^* (x_0, \mathfrak {t}_j) = \widehat {H}^* (\widehat {x}_0, \mathfrak {t}_j)$. Since $\partial _x \widehat {H}^* (x, \mathfrak {t}_j) = 0$ for $x \geq \widehat {x}_0$, we find that $\widehat {H}^* (\widehat {x}_0, \mathfrak {t}_j) = \widehat {H}^* (x_0, \mathfrak {t}_j) = H^* (x_0, \mathfrak {t}_j)$, and so Equation (5.5) yields
 $$ \begin{align} \widehat{H}^* (x, \mathfrak{t}_j) & = \widehat{H}^* (\widehat{x}_0, \mathfrak{t}_j) + H^* (x - \widehat{x}_0 + x_0, \mathfrak{t}_j) - H^* (x_0, \mathfrak{t}_j) + \mathcal{O} \big( (\xi_1 + \xi_2) (x - \widehat{x}_0)^{3/2} + (x - \widehat{x}_0)^2 \big) \nonumber\\ & = H^* (x - \widehat{x}_0 + x_0, \mathfrak{t}_j) + \mathcal{O} (\xi_1^2) \geq H^* (x, \mathfrak{t}_j) - c (\widehat{x}_0 - x_0)^{3/2} - n^{-1} \geq H^* (v) + \mathcal{O} (n^{- 1}) \end{align} $$
after decreasing c if necessary. Here, to deduce the third statement we used Remark A.2 and to deduce the fourth we used the facts that $x_0 - \widehat {x}_0 = \mathcal {O} (\xi _j) = \mathcal {O} (n^{- 2/3})$. Combining Equations (5.20), (5.21) and (5.22) then gives
 $$ \begin{align*} \breve{h} (v) = \breve{H}^* (v) = \widehat{H}^* (v) + \mu + n^{\delta / 20 - 1} & \geq H^* (v) + n^{\delta / 20 - 1} + \mathcal{O} (n^{\delta / 30 - 1}) \\ & \geq \widetilde{H} (v) + n^{\delta / 20 - 1} + \mathcal{O} (n^{\delta / 30 - 1}) \\ & \geq \widetilde{H} (v) + n^{-1} = \widetilde{h} (v) + n^{-1}, \end{align*} $$
which again gives $\breve {\mathsf {h}} (\mathsf {v}) = \big \lfloor n \breve {h} (v) \big \rfloor \geq \big \lfloor n \widetilde {h} (v)\big \rfloor = \widetilde {\mathsf {h}} (\mathsf {v})$. This verifies $\breve {\mathsf {h}} (\mathsf {v}) \geq \widetilde {\mathsf {h}} (\mathsf {v})$, for each $\mathsf {v} \in \partial \mathsf {D}$.
 Now, let $\breve {\mathsf {H}} : \mathsf {D} \rightarrow \mathbb {Z}$ denote a uniformly random element of $\mathscr {G} (\breve {\mathsf {h}})$, and set $\breve {H} (u) = n^{-1} \breve {\mathsf {H}} (nu)$ for each $u \in \overline {\mathfrak {D}}$. By Lemma 3.15 (and Remark 3.16), we may couple $\breve {\mathsf {H}}$ with $\widetilde {\mathsf {H}}$ so that $\breve {\mathsf {H}} (\mathsf {u}) \geq \widetilde {\mathsf {H}} (\mathsf {u})$, for each $\mathsf {u} \in \mathsf {D}$; thus, under this coupling we have $\breve {H} (u) \geq \widetilde {H} (u)$ for each $u \in \overline {\mathfrak {D}}$.
 Let us next apply Theorem 4.3, with the $(h; H^*; \mathsf {H}; \delta )$ there equal to $\big ( \breve {h}; \breve {H}^*; \breve {\mathsf {H}};\delta /30 \big )$ here. This yields with overwhelming probability that
 $$ \begin{align} \widetilde{H} (U_0) \leq \breve{H} (U_0) \leq \breve{H}^* (X_0, \mathfrak{s}) + n^{\delta / 30 - 1} \leq \widehat{H}^* (X_0, \mathfrak{s}) + \mu + 2n^{\delta / 20 - 1}. \end{align} $$
 Defining $\omega : [\mathfrak {t}_1, \mathfrak {t}_2] \rightarrow \mathbb {R}$ as in Equation (5.2), the first property listed there yields
 $$ \begin{align} \widehat{H}^* (X_0, \mathfrak{s}) \leq H^* (X_0, \mathfrak{s}) - \omega (\mathfrak{s}) \Omega_{\mathfrak{s}} (X_0) + \mathcal{O} \big( \xi_1^{3/2} + \xi_2^{3/2} \big). \end{align} $$
The hypotheses of the proposition and Remark 5.3 together imply (after decreasing c if necessary) that
 $$ \begin{align*} \omega (\mathfrak{s}) \leq \displaystyle\max \big\{ (1 - \varepsilon) & \xi_1 + \varepsilon \xi_2, \varepsilon \xi_1 + (1 - \varepsilon) \xi_2 \big\} < \bigg(1 - \displaystyle\frac{\varepsilon}{10} \bigg) \xi \leq \bigg( 1 - \displaystyle\frac{\varepsilon}{10} \bigg) n^{\delta / 10 - 1} \operatorname{\mathrm{dist}} (U_0, \mathfrak{A}(\mathfrak{P}))^{-1/2}; \\ & - \Omega_{\mathfrak{s}} (X_0) \geq c \operatorname{\mathrm{dist}} (U_0, \mathfrak{A}(\mathfrak{P}))^{1/2}; \qquad \xi_1^{3/2} + \xi_2^{3/2} = \mathcal{O} (n^{\delta / 30 - 1}). \end{align*} $$
Together with Equations (5.23) and (5.24), this gives
 $$ \begin{align*} \widetilde{H} (U_0) & \leq H^* (X_0, \mathfrak{s}) - \xi \Omega_{\mathfrak{s}} (X_0) + \mu + \big( \xi - \omega (\mathfrak{s}) \big) \Omega_{\mathfrak{s}} (X_0) + \mathcal{O} (n^{\delta / 30 - 1}) \\ & \leq H^* (X_0, \mathfrak{s}) - \xi \Omega_{\mathfrak{s}} (X_0) + \mu - \displaystyle\frac{c \varepsilon}{10} n^{\delta / 10 - 1} + \mathcal{O} (n^{\delta / 30 - 1}) \leq H^* (U_0) - \xi \Omega_{\mathfrak{s}} (X_0) + \mu, \end{align*} $$
which confirms the upper bound in Equation (5.18).
6 Proof of concentration estimate on polygons
In this section, we establish Theorem 3.10. Before proceeding, let us briefly outline the proof; we adopt the notation of Theorem 3.10 throughout this section.
 Since the preliminary concentration result Theorem 4.3 is by itself too restrictive for this purpose, we will first decompose our polygonal subset $\mathfrak {P} = \bigcup _{i = 1}^k \mathfrak {R}_i$ into subregions such that each $\mathfrak {R}_i$ is either frozen (outside the liquid region of $\mathfrak {P}$) or is a ‘double-sided trapezoid’ $\mathfrak {D} (\mathfrak {a}, \mathfrak {b}; \mathfrak {t}_1, \mathfrak {t}_2)$ from Equation (4.1). Scaling by n, this induces a decomposition $\mathsf {P} = \bigcup _{i = 1}^k \mathsf {R}_i$ of our (tileable) polygonal domain. We then apply the alternating dynamics from Section 4.2 to this decomposition. Each step corresponds to a resampling of our tiling on some $\mathsf {R}_i$ (conditioned on its restriction to $\mathsf {P} \setminus \mathsf {R}_i$), to which Theorem 4.3 applies and shows that its tiling height function is within $n^{\delta }$ of its limit shape. Unfortunately, Proposition 4.6 shows that these dynamics only mix after about $n^{22}$ steps, which could in principle allow the previously mentioned $n^{\delta }$ error to accumulate macroscopically.
To remedy this, we use the notion of tiltedness from Section 5.1. In particular, we introduce parameters quantifying the tiltedness of certain horizontal levels of $\mathsf {P}$ (that include the north and south boundaries of any $\mathsf {R}_i$). Then Proposition 5.8 and Proposition 5.9 will imply that, under any step of the alternating dynamics applied to some $\mathsf {R}_i$, the tiltedness along a middle row of $\mathsf {R}_i$ is likely bounded between those along its north and south boundaries. Since the tiltedness along $\partial \mathsf {P}$ is zero, we will be able to show in this way that ‘small tiltedness’ is preserved under the alternating dynamics with high probability. Running these dynamics until they mix, this indicates that the uniformly random tiling height function $\mathsf {H} : \mathsf {P} \rightarrow \mathbb {Z}$ has small tiltedness, which will establish Theorem 3.10.
6.1 Decomposition of $\mathsf {P}$
In this section, we explain a decomposition of $\mathsf {P} = n \mathfrak {P}$ into subregions that are either frozen or where Theorem 4.3 will eventually apply. Recall that we have adopted the notation of Theorem 3.10; let us abbreviate the liquid region $\mathfrak {L} = \mathfrak {L} (\mathfrak {P})$ and arctic boundary $\mathfrak {A} = \mathfrak {A} (\mathfrak {P})$. By Assumption 2.8, there exists an axis $\ell $ of $\mathbb {T}$ such that no line connecting two distinct cusp singularities of $\mathfrak {A}$ is parallel to $\ell $. By rotating $\mathfrak {P}$ if necessary, we may assume that $\ell $ is the x-axis. We also call a tangency location of $\mathfrak {A}$ horizontal if the tangent line to $\mathfrak {A}$ through it is parallel to the x-axis (has slope $0$). In what follows, we recall the trapezoid $\mathfrak {D} = \mathfrak {D} (\mathfrak {a}, \mathfrak {b}; \mathfrak {t}_1, \mathfrak {t}_2)$ from Equation (4.1) and its boundaries from Equation (4.2).
We begin with the following definition that will (partially) constrain what types of regions can appear in our decomposition; observe that the assumptions below are similar to Assumption 4.1.
Definition 6.1. A trapezoid $\mathfrak {D}$ is adapted to $H^*$ if the following five conditions hold.
- (1) The boundary $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ is disjoint with $\overline {\mathfrak {L}}$, unless $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}) \subset \partial \mathfrak {P}$ and $\mathfrak {A}$ is tangent to $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$; the same must hold for $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$.
- (2) The function $H^*$ is constant along $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ and along $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$.
- (3) There exists $\widetilde {\mathfrak {t}} \in [\mathfrak {t}_1, \mathfrak {t}_2]$ such that one of the following two conditions holds.
  - (a) For $t \in [\mathfrak {t}_1, \widetilde {\mathfrak {t}}]$, the set $I_t$ consists of one nonempty interval, and for $t \in (\widetilde {\mathfrak {t}}, \mathfrak {t}_2]$ the set $I_t$ consists of two nonempty disjoint intervals.
  - (b) For $t \in [\mathfrak {t}_1, \widetilde {\mathfrak {t}})$, the set $I_t$ consists of two nonempty disjoint intervals, and for $t \in (\widetilde {\mathfrak {t}}, \mathfrak {t}_2]$ the set $I_t$ consists of one nonempty interval.
- (4) Any tangency location of $\mathfrak {A} \cap \overline {\mathfrak {D}}$ is of the form $\max I_t$ or $\min I_t$, for some $t \in (\mathfrak {t}_1, \mathfrak {t}_2)$. Moreover, at most one is of the form $\max I_t$, and at most one is of the form $\min I_t$.
- (5) We have $\mathfrak {t}_2 - \mathfrak {t}_1 \leq \mathfrak {c}$, where $\mathfrak {c} = \mathfrak {c} (\mathfrak {P})> 0$ is given by Theorem 4.3.
Lemma 6.2. If $u \in \overline {\mathfrak {L}}$ is not a tangency location of $\mathfrak {A}$, then there exists a trapezoid $\mathfrak {D} (u)$ adapted to $H^*$, containing u in its interior.
Proof. Let $u = (x_0, t_0)$ and $\ell _0 = \{ t = t_0 \} \subset \mathbb {R}^2$ denote the line through u parallel to the x-axis. To create $\mathfrak {D} (u) = \mathfrak {D} (\mathfrak {a}, \mathfrak {b}; \mathfrak {t}_1, \mathfrak {t}_2)$, we will first specify segments containing its east and west boundaries, and then specify $(\mathfrak {t}_1, \mathfrak {t}_2)$ to make it sufficiently ‘short’ (that is, with $\mathfrak {t}_2 - \mathfrak {t}_1$ small).
To that end, first assume that u is a cusp of $\mathfrak {A}$; we refer to Figure 11 for a depiction. Let $x_1 \in \mathbb {R}$ be maximal and $x_2 \in \mathbb {R}$ be minimal such that $x_1 < x_0 < x_2$; $u_1 = (x_1, t_0) \in \mathfrak {A}$, and $u_2 = (x_2, t_0) \in \mathfrak {A}$. By Assumption 2.8, neither $u_1$ nor $u_2$ is a cusp of $\mathfrak {A}$. If $u_1 \in \partial \mathfrak {P}$, then it is a (nonhorizontal) tangency location of $\mathfrak {A}$, so it lies along a side of $\partial \mathfrak {P}$ with slope $1$ or $\infty $. We may then let this side contain the west boundary of $\mathfrak {D}(u)$. If instead $u_1 \notin \partial \mathfrak {P}$, then there exists a real number $\varepsilon = \varepsilon (\mathfrak {P}, u) \in (0, 1)$ such that $(x, t_0) \in \mathfrak {P} \setminus \overline {\mathfrak {L}}$ for each $x_1 - \varepsilon ^{1/2} < x < x_1$, and such that either $\big ( (x_1 - \varepsilon ^{1/2}, x_2 + \varepsilon ^{1/2}) \times (t_0, t_0 + \varepsilon ) \big ) \cap \mathfrak {L}$ or $\big ( (x_1 - \varepsilon ^{1/2}, x_2 + \varepsilon ^{1/2}) \times (t_0 - \varepsilon , t_0) \big ) \cap \mathfrak {L}$ is connected (see Figure 11).

Figure 11 Shown above is an example of the trapezoid $\mathfrak {D} (u)$ from Lemma 6.2; only part of the polygon $\mathfrak {P}$ and its liquid region $\mathfrak {L}$ are depicted.
Next, recall from the first statement in Lemma 2.3 that, on $\mathfrak P\setminus \mathfrak L(\mathfrak P)$, $\nabla H^*$ is piecewise constant, taking values in $\big \{(0,0), (1,0), (1,-1) \big \}$. If $(x_1,t_0)\in {\mathfrak A}$ is a continuity point of $\nabla H^*$, then (upon decreasing $\varepsilon $ if necessary) there exists $\lambda = \lambda (\mathfrak {P}, u) \in (0, \varepsilon )$ such that the disk $\mathfrak {B}_{\lambda } (x_1 - \varepsilon ^{1/2}, t_0)$ does not intersect $\overline {\mathfrak {L}}$, and $\nabla H^*$ is constant, equal to $\nabla H^* (x_1, t_0)$, on $\mathfrak {B}_{\lambda } (x_1 - \varepsilon ^{1/2}, t_0)$. Then, depending on whether $\nabla H^* (x_1, t_0) = (1, -1)$ or $\nabla H^* (x_1, t_0) \in \big \{ (0, 0), (1, 0) \big \}$, the west boundary of $\mathfrak {D} (u)$ is contained in the segment obtained as the intersection between $\mathfrak {B}_{\lambda } (x_1 - \varepsilon ^{1/2}, t_0)$ and the line passing through $(x_1 - \varepsilon ^{1/2}, t_0)$ with slope $1$ or $\infty $, respectively; then, $H^*$ is constant along this line.
If $(x_1,t_0)\in {\mathfrak A}$ is a discontinuity point of $\nabla H^*$, then, by Assumption 2.8, $(x_1,t_0)\in {\mathfrak A}$ is a tangency location. From our choice of $x_1$, $(x_1,t_0)$ cannot be a horizontal tangency location. Thus, its tangent line has slope $1$ or $\infty $. For $\varepsilon $ small enough, the part of the tangent line between $t=t_0-\varepsilon $ and $t=t_0+\varepsilon $ is contained in $\mathfrak P\setminus \mathfrak L(\mathfrak P)$. By the relations (3.1) between $\nabla H^*$ and the complex slope, and the relation (3.7) between the complex slope and the slope of the tangent line of the arctic curve, if the tangent line has slope $1$, then $\nabla H^*\in \big \{ (0,0),(1,-1) \big \}$, and if the tangent line has slope $\infty $, then $\nabla H^*\in \big \{ (0,0), (1,0) \big \}$. In both cases, $H^*$ is constant along the tangent line. This again specifies a segment containing the west boundary of $\mathfrak {D} (u)$ along which $H^*$ is constant, and a segment containing its east boundary can be specified similarly.
In either case (whether $(x_1, t_0)$ is a continuity or discontinuity point of $\nabla H^*$), we let $\mathfrak {t}_1 = t_0 - \lambda _0$ and $\mathfrak {t}_2 = t_0 + \lambda _0$, where $\lambda _0$ is chosen sufficiently small so that the east and west boundaries of $\mathfrak {D} (u)$ are contained in the segments specified above and so that $\mathfrak {D}$ satisfies the third, fourth and fifth conditions of Definition 6.1. This determines $\mathfrak {D} (u)$, which contains u in its interior and is quickly seen to be adapted to $H^*$.
The proof is similar if instead u is not a cusp of $\mathfrak {A}$, so we only outline it. If $u \in \mathfrak {L}$, then the above reasoning applies, unless either $u_1$ or $u_2$ is a cusp of $\mathfrak {A}$. Assuming for example that the former is, there exists a trapezoid $\mathfrak {D} (u_1)$ adapted to $H^*$ that contains $u_1$ in its interior. Then $\mathfrak {D} (u_1)$ must also contain u in its interior, since $\ell _0$ passes through u before intersecting $\mathfrak {A}$, so we may set $\mathfrak {D} (u) = \mathfrak {D} (u_1)$. If instead $u \in \mathfrak {A}$ and is not a cusp, then the above reasoning (in the case when u is a cusp) again applies, with the mild modification that we allow $u = u_1$ or $u = u_2$, depending on whether u is a left or right boundary point of $\mathfrak {A}$, respectively.
Now, for each point $u \in \overline {\mathfrak {P}}$, we define in the following an open subset $\mathfrak {R} (u) \subset \mathfrak {P}$ such that $u \in \mathfrak {R} (u)$ if $u \in \mathfrak {P}$, and $u \in \overline {\mathfrak {R} (u)}$ if $u \in \partial \mathfrak {P}$. In each case, $\mathfrak {R} (u)$ will be the union of at most two trapezoids (intersected with $\mathfrak {P}$). We always assume (by applying a small shift if necessary) that the north and south boundaries of any such trapezoid do not contain any cusps or tangency locations of $\mathfrak {A}$, except possibly when u is a horizontal tangency location.
- (1) If $u \in \overline {\mathfrak {L}}$ is not a horizontal tangency location of $\mathfrak {A}$, then let $\mathfrak {R} (u) \subseteq \mathfrak {P}$ denote a trapezoid adapted to $H^*$, containing u in its interior.
- (2) If $u \in \mathfrak {A}$ is a horizontal tangency location on $\mathfrak {A} \cap \partial \mathfrak {P}$, then let $\mathfrak {R} (u) \subseteq \mathfrak {P}$ denote a trapezoid adapted to $H^*$ such that u is in the interior of either $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$ or $\partial _{\operatorname {\mathrm {so}}} (\mathfrak {D})$.
- (3) If $u \in \mathfrak {A}$ is a horizontal tangency location on $\mathfrak {A}$ not in $\partial \mathfrak {P}$, then let $\mathfrak {R} (u) = \mathfrak {D}_1 (u) \cup \mathfrak {D}_2 (u)$. Here, $\mathfrak {D}_1 = \mathfrak {D}_1 (u)$ and $\mathfrak {D}_2 = \mathfrak {D}_2 (u)$ are trapezoids adapted to $H^*$ such that u is in the interior of $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D}_2)$ and of $\partial _{\operatorname {\mathrm {so}}} (\mathfrak {D}_1)$ and such that either $\mathfrak {D}_1$ or $\mathfrak {D}_2$ is disjoint with $\mathfrak {L}$.
- (4) If $u \notin \overline {\mathfrak {L}}$, then let $\mathfrak {R} (u) = \mathfrak {D} (u) \cap \mathfrak {P}$, for some trapezoid $\mathfrak {D} (u) \subset \mathbb {R}^2$ containing u in its interior such that $\mathfrak {D} (u)$ is disjoint with $\overline {\mathfrak {L}}$.
We may further assume (after applying a small shift, if necessary) that $n \mathfrak {R} (u) \subset \mathbb {T}$, for each $u \in \mathfrak {P}$. The existence of these regions $\mathfrak {R} (u)$ follows from Lemma 6.2 in the first case and is quickly verified from the definitions in all other cases (see footnote 12). We refer to Figure 11 for a depiction in the first case and to Figure 12 for depictions in the remaining three cases.

Figure 12 Shown to the left, middle and right are examples of the $\mathfrak {R} (u)$ (shaded) when $u \in \partial \mathfrak {P}$ is a horizontal tangency location of $\mathfrak {A}$, when $u \notin \partial \mathfrak {P}$ is a horizontal tangency location of $\mathfrak {A}$, and when $u \notin \overline {\mathfrak {L}}$, respectively. In all cases, only part of the polygon $\mathfrak {P}$ and its liquid region $\mathfrak {L}$ are depicted.
Since $\overline {\mathfrak {P}}$ is compact, the $\mathfrak {R} (u)$ are open, and $\bigcup _{u \in \mathfrak {P}} \overline {\mathfrak {R} (u)} = \overline {\mathfrak {P}}$, there exists a finite subcover $\bigcup _{i = 1}^k \overline {\mathfrak {R}_i} = \overline {\mathfrak {P}}$; here, each $\mathfrak {R}_i = \mathfrak {R} (u_i)$ for some $u_i \in \overline {\mathfrak {P}}$. In what follows, we fix such a cover and let $\mathfrak {t}_0 < \mathfrak {t}_1 < \cdots < \mathfrak {t}_m$ denote all real numbers for which either a north or south boundary of some $\mathfrak {R}_i$ lies along a line $\{ t = \mathfrak {t}_j \}$. Observe that, if such a boundary of some $\mathfrak {R}_i$ lies along $\{ t = \mathfrak {t}_0 \}$ or $\{ t = \mathfrak {t}_m \}$, then it must lie along $\partial \mathfrak {P}$. Moreover, since the $\mathfrak {t}_j$ are pairwise distinct, there exists a constant $\varepsilon _0 = \varepsilon _0 (\mathfrak {P})> 0$ such that
$$ \begin{align} \displaystyle\min_{1\leq j\leq m} (\mathfrak{t}_j - \mathfrak{t}_{j - 1}) \geq \varepsilon_0 (\mathfrak{t}_m - \mathfrak{t}_0). \end{align} $$
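Let us record an elementary consequence of Equation (6.1), stated here only for convenience when tracking the constants $\nu _i$ introduced below: summing the gaps and applying Equation (6.1) to each of them gives
$$ \begin{align*} \mathfrak{t}_m - \mathfrak{t}_0 = \displaystyle\sum_{j = 1}^m (\mathfrak{t}_j - \mathfrak{t}_{j - 1}) \geq m \varepsilon_0 (\mathfrak{t}_m - \mathfrak{t}_0), \end{align*} $$
so any admissible choice of $\varepsilon _0$ necessarily satisfies $\varepsilon _0 \leq m^{-1} \leq 1$.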
Further, let $\mathsf {R}_i = n \mathfrak {R}_i \subset \mathbb {T}$ for each $1\leq i\leq k$. Observe, since the $\mathfrak {R}_i$ are open and cover $\mathfrak {P}$, that any interior vertex of $\mathsf {P}$ is an interior vertex of some $\mathsf {R}_i$. Thus, we may consider the alternating dynamics (from Definition 4.5) on $\mathsf {P}$ with respect to $(\mathsf {R}_1, \mathsf {R}_2, \ldots , \mathsf {R}_k)$. In particular, let us fix the height function $\mathsf {H}_0 \in \mathscr {G} (\mathsf {h})$ by setting $\mathsf {H}_0 (u) = \big \lfloor n H^* (n^{-1} u) \big \rfloor $, for each $u \in \mathsf {P}$; observe that $\mathsf {H}_0 (u) = nH^* ( n^{-1} u)$ for each $u \in \mathsf {P} \setminus (n \cdot \mathfrak {L})$, by Proposition 2.4. Then, run the alternating dynamics on $\mathscr {G} (\mathsf {h})$ with initial state $\mathsf {H}_0$. For each integer $r \geq 0$, let $\mathsf {H}_r \in \mathscr {G} (\mathsf {h})$ denote the state of this Markov chain at time r; define its scaled version $H_r \in \operatorname {\mathrm {Adm}} (\mathfrak {P}; h)$ by $H_r (u) = n^{-1} \mathsf {H}_r (nu)$ for each $u \in \overline {\mathfrak {P}}$.
6.2 Proof of Theorem 3.10
In this section, we establish Theorem 3.10. To that end, recalling the notation from Section 6.1, we define for any integer $r \geq 0$ the events
$$ \begin{align} \mathscr{F}_r & = \bigcap_{u \in \overline{\mathfrak{P}}} \Big\{ \big| H_r (u) - H^* (u) \big| < n^{\delta - 1} \Big\} \cap \bigcap_{u \in \overline{\mathfrak{P}} \setminus \mathfrak{L}_+^{\delta} (\mathfrak{P})} \big\{ H_r (u) = H^* (u)\big\}; \nonumber\\ \mathscr{F}_{\infty} & = \bigcap_{u \in \overline{\mathfrak{P}}} \Big\{ \big| \mathsf{H} (n u) - n H^* (u) \big| < n^{\delta} \Big\} \cap \bigcap_{u \in \overline{\mathfrak{P}} \setminus \mathfrak{L}_+^{\delta} (\mathfrak{P})} \big\{ \mathsf{H} (nu) = n H^* (u)\big\}. \end{align} $$
Then, Theorem 3.10 indicates that $\mathscr {F}_{\infty }$ should hold with overwhelming probability. As r becomes large, the alternating dynamics tend to stationarity, so $n H_r (u)$ converges in law to $\mathsf {H} (nu)$; hence, it instead suffices to show that $\mathscr {F}_r$ likely holds for large r. We would like to proceed inductively by showing that $\mathscr {F}_r$ implies $\mathscr {F}_{r + 1}$ with high probability. It is not transparent to us how to do this directly; we instead prove a stronger version of this implication involving the notion of tilting, which we recall from Definition 5.6.
We first require some notation. Recall that $\varepsilon _0 = \varepsilon _0 (\mathfrak {P})> 0$ satisfies Equation (6.1), and set
$$ \begin{align} \nu_i = 1 - \bigg( \displaystyle\frac{\varepsilon_0}{10} \bigg)^{i + 1}, \qquad 0\leq i\leq m. \end{align} $$
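Note that, since $\varepsilon _0 \leq 1$ (as observed after Equation (6.1)), the sequence $(\nu _i)$ is increasing in i and satisfies
$$ \begin{align*} \frac{9}{10} \leq 1 - \frac{\varepsilon_0}{10} = \nu_0 < \nu_1 < \cdots < \nu_m < 1. \end{align*} $$
This monotonicity will be used below when comparing the parameter $\nu '$ from Equation (6.10) with $\nu _{i_0}$.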
Also, recall the subset $\mathfrak {L}_-^{\delta } = \mathfrak {L}_-^{\delta } (\mathfrak {P}) \subset \mathfrak {L}$ from Equation (5.6).
For any real number $z \geq n^{-1}$, define the functions
$$ \begin{align} \varkappa (z) = n^{\delta / 12 -1} z^{-1}; \qquad \varpi (z) = n^{\delta / 12 - 1} \log (nz). \end{align} $$
The explicit forms of $\varkappa $ and $\varpi $ above will not be central for our purposes, but a useful point will be that $A \varkappa (z) + \varpi (z)$ is minimized when $z = A$.
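To see this minimization (for any $A \geq n^{-1}$, with z ranging over $z \geq n^{-1}$), write
$$ \begin{align*} A \varkappa (z) + \varpi (z) = n^{\delta / 12 - 1} \big( A z^{-1} + \log (nz) \big), \qquad \displaystyle\frac{d}{dz} \big( A z^{-1} + \log (nz) \big) = \displaystyle\frac{z - A}{z^2}, \end{align*} $$
so the expression is decreasing for $z < A$ and increasing for $z > A$, with minimum value $n^{\delta / 12 - 1} \big( 1 + \log (nA) \big)$ attained at $z = A$.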
Now, recall the subset $\mathfrak {L}_-^{\delta } = \mathfrak {L}_-^{\delta } (\mathfrak {P}) \subset \mathfrak {L}$ from Equation (5.6), and for any integers $r \geq 0$ and $0\leq i\leq m$ define the events
$$ \begin{align} \mathscr{E}_r^{(1)} (i) & = \big\{ \text{The edge of}\ H_r\ \text{is}\ \nu_i n^{\delta / 240 - 2/3}\ \text{-tilted with respect to}\ H^*\ \text{at level}\ \mathfrak{t}_i \big\}; \nonumber\\ \mathscr{E}_r^{(2)} (i) & = \bigcap_{(x, \mathfrak{t}_i) \in \mathfrak{L}_-^{\delta / 4}} \big\{ \text{At}\ (x, \mathfrak{t}_i), H_r\ \text{is}\ (\nu_i n^{\delta / 240 - 2/3}; 0)\ \text{-tilted with respect to}\ H^* \big\}. \end{align} $$
For any $x \in \mathbb {R}$ such that $(x, \mathfrak {t}_i) \in \mathfrak {L}_-^{\delta / 4}$, further define the event
$$ \begin{align}  \mathscr{E}_r^{(3)} (i; x) &= \bigg\{ \displaystyle\sup_{z \geq n^{-1}} \Big( H^* (x, \mathfrak{t}_i) + \nu_i \varkappa (z) \Omega_{\mathfrak{t}_i} (x) - \varpi (z) \Big) \nonumber\\ & \leq H_r (x, \mathfrak{t}_i) \leq \displaystyle\inf_{z \geq n^{-1}} \Big( H^* (x, \mathfrak{t}_i) - \nu_i \varkappa (z) \Omega_{\mathfrak{t}_i} (x) + \varpi (z) \Big) \bigg\}, \end{align} $$
where we recall $\Omega _s (x)$ from Equation (5.1), and let
$$ \begin{align*} \mathscr{E}_r^{(3)} (i) = \bigcap_{(x, \mathfrak{t}_i) \in \mathfrak{L}_-^{\delta / 4}} \mathscr{E}_r^{(3)} (i; x). \end{align*} $$
Then define the events
$$ \begin{align} \mathscr{E}_r (i) & = \mathscr{E}_r^{(1)} (i) \cap \mathscr{E}_r^{(2)} (i) \cap \mathscr{E}_r^{(3)} (i); \qquad \mathscr{E}_r = \bigcap_{i = 0}^m \mathscr{E}_r (i); \qquad \mathscr{A}_r = \mathscr{E}_r \cap \mathscr{F}_r. \end{align} $$
Under this notation, we have the following proposition.
Proposition 6.3. For any real number $D> 1$, there exists a constant $C = C(\mathfrak {P}, D)> 1$ such that the following holds whenever $n> C$. For any integer $r \geq 0$, we have $\mathbb {P} (\mathscr {A}_{r + 1}) \geq \mathbb {P} (\mathscr {A}_r) - n^{-D}$.
Given Proposition 6.3, we can quickly establish Theorem 3.10.
Proof of Theorem 3.10.
First, observe that $\mathscr {A}_0$ holds deterministically since $H_0 (u) = n^{-1} \mathsf {H}_0 (nu)$ and $\mathsf {H}_0 (nu) = \big \lfloor n H^* (u) \big \rfloor $ hold for each $u \in \overline {\mathfrak {P}}$ (and $H_0 (u) = H^* (u)$ for each $u \in \mathfrak {P} \setminus \mathfrak {L}$, by Proposition 2.4). Then, inductively applying Proposition 6.3, with the D there equal to $2D + 45$ here, yields $\mathbb {P} \big ( \mathscr {A}_{n^{40}} \big ) \geq 1 - n^{-2D}$ for sufficiently large n. By the definition (6.7) of $\mathscr {A}_r$, this implies $\mathbb {P} \big ( \mathscr {F}_{n^{40}} \big ) \geq 1 - n^{-2D}$. Since $\operatorname {\mathrm {diam}} \mathsf {R}_i = n \operatorname {\mathrm {diam}} \mathfrak {R}_i$, the bound from Proposition 4.6 on the mixing time of the alternating dynamics (together with the definitions (6.2) of $\mathscr {F}_r$ and $\mathscr {F}_{\infty }$) gives
$$ \begin{align*} \mathbb{P} ( \mathscr{F}_{\infty}) \geq \mathbb{P} \big( \mathscr{F}_{n^{40}} \big) - e^{-n} \geq 1 - n^{-2D} - e^{-n} \geq 1 - n^{-D}, \end{align*} $$
for sufficiently large n. Since this holds for any D (if n is sufficiently large), this implies the theorem.
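For clarity, the inductive step above unwinds as follows: since $\mathbb {P} (\mathscr {A}_0) = 1$, applying Proposition 6.3 (with the D there equal to $2D + 45$ here) once per step of the dynamics gives
$$ \begin{align*} \mathbb{P} \big( \mathscr{A}_{n^{40}} \big) \geq \mathbb{P} ( \mathscr{A}_0 ) - n^{40} \cdot n^{-2D - 45} = 1 - n^{-2D - 5} \geq 1 - n^{-2D}. \end{align*} $$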
6.3 Proof of Proposition 6.3
In this section, we establish Proposition 6.3.
Proof of Proposition 6.3.
Throughout this proof, we restrict to the event $\mathscr {A}_r$; it suffices to show that $\mathscr {A}_{r + 1}$ holds with overwhelming probability. In what follows, we will frequently use the equality
$$ \begin{align} H_r (u) = H^* (u), \qquad \text{if}\ u \in \overline{\mathfrak{P}} \setminus \overline{\mathfrak{L}}\ \text{is bounded away from}\ \mathfrak{L}, \end{align} $$
which holds since we have restricted to the event $\mathscr {F}_r \subseteq \mathscr {A}_r$ from Equation (6.2).
Updating $\mathsf {H}_r$ to $\mathsf {H}_{r + 1}$ involves resampling it on a subdomain $\mathsf {R}_j = n \mathfrak {R}_j$, for some index $1\leq j\leq k$, in the decomposition $\mathsf {P} = \bigcup _{j = 1}^k \mathsf {R}_j$. Recall from Section 6.1 that $\mathfrak {R}_j = \mathfrak {R} (u_j)$, for some $u_j \in \overline {\mathfrak {P}}$, and that there are four possible cases for $\mathfrak {R} (u_j)$, depending on whether $u_j \in \overline {\mathfrak {L}}$ is not a tangency location of $\mathfrak {A}$; $u_j \in \mathfrak {A}$ is a tangency location of $\mathfrak {A}$ that lies on $\partial \mathfrak {P}$; $u_j \in \mathfrak {A}$ is a tangency location that does not lie on $\partial \mathfrak {P}$; or $u_j \notin \overline {\mathfrak {L}}$ is outside the liquid region. We will address each of these cases.
To that end, first consider the fourth case, when $u_j \notin \overline {\mathfrak {L}}$. Then, $\mathfrak {R}_j \subseteq \mathfrak {P} \setminus \overline {\mathfrak {L}}$; in particular, it is bounded away from $\overline {\mathfrak {L}}$, so Equation (6.8) implies $\mathsf {H}_r (nu) = n H^* (u)$ for each $u \in \mathfrak {R}_j$. Since $\nabla H^* (u) \in \big \{ (0, 0), (1, 0), (1, -1) \big \}$ for almost every $u \notin \mathfrak {L}$ (by the first statement of Lemma 2.3), it follows that there is only one height function on $\mathsf {R}_j$ with boundary data $\mathsf {H}_r |_{\partial \mathsf {R}_j}$, that is, this domain is frozen. This implies that $\mathsf {H}_{r + 1} = \mathsf {H}_r$, so each of the estimates involved in the definitions of the events in Equations (6.2), (6.5) and (6.6) for $H_{r + 1}$ follows from its counterpart for $H_r$ guaranteed by $\mathscr {A}_r$. Thus, $\mathscr {A}_{r + 1}$ holds deterministically in this case.
Next, we consider the first or second case, namely, when $u_j \in \overline {\mathfrak {L}}$ and is not a horizontal tangency location outside of $\partial \mathfrak {P}$. If $u_j$ is a horizontal tangency location of $\mathfrak {A}$, then we assume $u_j \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {R}_j)$, as the proof when $u_j \in \partial _{\operatorname {\mathrm {so}}} (\mathfrak {R}_j)$ is entirely analogous by rotation (see Remark 5.10). Then $\mathfrak {R}_j = \mathfrak {R} (u_j)$ is a double-sided trapezoid satisfying the conditions of Assumption 5.2, since it is adapted to $H^*$ (recall Definition 6.1). By Equation (6.7), it suffices to show that $\mathscr {E}_{r + 1}$ and $\mathscr {F}_{r + 1}$ both hold with overwhelming probability.
We begin with the former. Fix an index $1\leq i_0\leq m$; we must show that $\mathscr {E}_{r + 1} (i_0)$ holds with overwhelming probability. Define indices $1\leq i, i' \leq m$ so that $\partial _{\operatorname {\mathrm {so}}} (\mathfrak {R}_j)$ and $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {R}_j)$ are contained in the horizontal lines $\{ t = \mathfrak {t}_i \}$ and $\{ t = \mathfrak {t}_{i'} \}$, respectively. Without loss of generality, we assume that $i < i'$. Since the update from $H_r$ to $H_{r + 1}$ only affects its restriction to $\mathfrak {R}_j \subset \big \{ (x, t) \in \mathfrak {P} : \mathfrak {t}_i < t < \mathfrak {t}_{i'} \big \}$, for $i_0 \notin (i, i')$ the event $\mathscr {E}_{r + 1} (i_0)$ holds deterministically if $\mathscr {E}_r (i_0)$ does. Hence, we may assume that $i < i_0 < i'$. In what follows, we further denote the restrictions $h_r = H_r |_{\mathfrak {R}_j}$ and $\mathsf {h}_r = \mathsf {H}_r |_{\mathsf {R}_j}$.
We first verify that $\mathscr {E}_{r + 1}^{(1)} (i_0)$ and $\mathscr {E}_{r+1}^{(2)} (i_0)$ from Equation (6.5) both hold with overwhelming probability by suitably applying Proposition 5.8. To that end, observe that Assumption 5.7 holds with the parameters $(\varepsilon , \varsigma , \delta; \mathfrak {D}, \mathsf {D}; \mathfrak {s}; \mathfrak {t}_1, \mathfrak {t}_2; \widetilde {\mathsf {h}}, \widetilde {\mathsf {H}}; \xi _1, \xi _2; \zeta _1, \zeta _2; \mu )$ there equal to
$$ \begin{align} \bigg( \varepsilon_0, \Big( \displaystyle\frac{\varepsilon_0}{20} \Big)^{m + 1}, \frac{\delta}{4}; \mathfrak{R}_j, \mathsf{R}_j; \mathfrak{t}_{i_0}; \mathfrak{t}_i, \mathfrak{t}_{i'}; \mathsf{h}_r, \mathsf{H}_{r + 1}; & \nu_i n^{\delta / 240 - 2/3}, \nu_{i'} n^{\delta / 240 - 2/3}; \nu_i n^{\delta / 240 - 2/3}, \nu_{i'} n^{\delta / 240 - 2 / 3}; 0 \bigg), \end{align} $$
here. To see this, first observe that $\mathsf {h}_r$ is constant along the east and west boundaries of $\partial \mathsf {R}_j$. Indeed, Equation (6.8) and the fact that $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {R}_j)$ and $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {R}_j)$ are either subsets of $\partial \mathfrak {P}$ or bounded away from $\overline {\mathfrak {L}}$ together imply that $H_r = H^*$ along $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {R}_j) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {R}_j)$. In particular, $\mathsf {h}_r (nv) = n h_r (v) = n h(v) = n H^* (v)$ holds for each $v \in \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {R}_j) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {R}_j)$. So, since $\mathfrak {R}_j$ is adapted to $H^*$, h is constant on both $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {R}_j)$ and $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {R}_j)$; thus, $h_r$ is as well. Next, the inequalities on $(\mathfrak {t}_1, \mathfrak {s}, \mathfrak {t}_2)$ with respect to $\varepsilon $, and on $(\xi _1, \xi _2)$ and $(\zeta _1, \zeta _2)$ with respect to $\varsigma $, in Assumption 5.7 follow from Equations (6.1) and (6.3). Moreover, the edge-tiltedness of $H_r$ with respect to $H^*$ along $\partial _{\operatorname {\mathrm {so}}} (\mathfrak {R}_j) \cup \partial _{\operatorname {\mathrm {no}}} (\mathfrak {R}_j)$ is a consequence of our restriction to the event $\mathscr {E}_r^{(1)} (i) \cap \mathscr {E}_r^{(1)} (i') \subseteq \mathscr {A}_r$. Similarly, the bulk-tiltedness of $H_r$ with respect to $H^*$ at each $(x, \mathfrak {t}_i), (x, \mathfrak {t}_{i'}) \in \mathfrak {L}_-^{\delta / 4}$ follows from our restriction to $\mathscr {E}_r^{(2)} (i) \cap \mathscr {E}_r^{(2)} (i') \subseteq \mathscr {A}_r$.
Thus, Proposition 5.8 applies. Under the choice of parameters (6.9), the $\zeta $ in Proposition 5.8 equals
$$ \begin{align} \nu' = \displaystyle\frac{\varepsilon_0}{2} \nu_i + \bigg( 1 - \displaystyle\frac{\varepsilon_0}{2} \bigg) \nu_{i'} < \nu_{i_0}, \end{align} $$
where to deduce the last inequality we used Equation (6.3) and the fact that $i < i_0 < i'$. In particular, Proposition 5.8 implies with overwhelming probability that the edge of $H_{r + 1}$ is $\nu _{i_0} n^{\delta / 240 - 2/3}$-tilted with respect to $H^*$ at level $\mathfrak {t}_{i_0}$, and that $H_{r + 1}$ is $(\nu _{i_0} n^{\delta / 240 - 2/3}; 0)$-tilted with respect to $H^*$ at any $(x, \mathfrak {t}_{i_0}) \in \mathfrak {L}_-^{\delta / 4}$. So, by Equation (6.5), $\mathscr {E}_{r + 1}^{(1)} (i_0)$ and $\mathscr {E}_{r + 1}^{(2)} (i_0)$ both hold with overwhelming probability.
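As a quick check of the inequality $\nu ' < \nu _{i_0}$ in Equation (6.10), write $q = \varepsilon _0 / 10 \leq \frac {1}{10}$ (recall from the discussion after Equation (6.1) that $\varepsilon _0 \leq 1$), so that $\frac {\varepsilon _0}{2} = 5q$; then Equation (6.3) gives
$$ \begin{align*} \nu_{i_0} - \nu' = 5 q^{i + 2} - q^{i_0 + 1} + (1 - 5q) q^{i' + 1} \geq 4 q^{i_0 + 1}> 0, \end{align*} $$
where we used that $i + 2 \leq i_0 + 1$ (so $q^{i + 2} \geq q^{i_0 + 1}$) and that $5q \leq \frac {1}{2}$.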
Next, let us show that $\mathscr {E}_{r + 1}^{(3)} (i_0)$ holds with overwhelming probability. To that end, we fix $x \in \mathbb {R}$ such that $(x, \mathfrak {t}_{i_0}) \in \mathfrak {L}_-^{\delta / 4}$, and set
$$ \begin{align} \lambda = - \nu' \Omega_{\mathfrak{t}_{i_0}} (x) \geq n^{-1}, \end{align} $$
where the latter inequality follows from Remark 5.3 (and we recall $\nu '$ from Equation (6.10)). We will first use Proposition 5.9 to show with overwhelming probability that
$$ \begin{align} H^* (x, \mathfrak{t}_{i_0}) + \nu' \varkappa (\lambda) \Omega_{\mathfrak{t}_{i_0}} (x) - \varpi (\lambda) \leq H_{r + 1} (x, \mathfrak{t}_{i_0}) \leq H^* (x, \mathfrak{t}_{i_0}) - \nu' \varkappa (\lambda) \Omega_{\mathfrak{t}_{i_0}} (x) + \varpi (\lambda), \end{align} $$
which, since $A \varkappa (z) + \varpi (z)$ is minimized at $z = A$, implies
$$ \begin{align} \displaystyle\sup_{z \geq n^{-1}} \Big( H^* & (x, \mathfrak{t}_{i_0}) + \nu' \varkappa (z) \Omega_{\mathfrak{t}_{i_0}} (x) - \varpi (z) \Big) \nonumber\\ & \leq H_{r + 1} (x, \mathfrak{t}_{i_0}) \leq \displaystyle\inf_{z \geq n^{-1}} \Big( H^* (x, \mathfrak{t}_{i_0}) - \nu' \varkappa (z) \Omega_{\mathfrak{t}_{i_0}} (x) + \varpi (z) \Big). \end{align} $$
We will then deduce that the event $\mathscr {E}_{r + 1}^{(3)} (i_0)$ likely holds by taking a union bound over x.
To implement this, first observe that Assumption 5.7 applies, with the parameters
$$ \begin{align*} (\varepsilon, \varsigma, \delta; \mathfrak{D}, \mathsf{D}; \mathfrak{s}; \mathfrak{t}_1, \mathfrak{t}_2; \widetilde{\mathsf{h}}, \widetilde{\mathsf{H}}; \xi_1, \xi_2; \zeta_1, \zeta_2; \mu), \end{align*} $$
there equal to
$$ \begin{align} \bigg( \varepsilon_0, \Big( \displaystyle\frac{\varepsilon_0}{20} \Big)^{m + 1}, \frac{\delta}{4}; \mathfrak{R}_j, \mathsf{R}_j; \mathfrak{t}_{i_0}; \mathfrak{t}_i, \mathfrak{t}_{i'}; \mathsf{h}_r, \mathsf{H}_{r + 1}; & \nu_i \varkappa (\lambda), \nu_{i'} \varkappa (\lambda); \nu_i n^{\delta / 240 - 2/3}, \nu_{i'} n^{\delta / 240 - 2 / 3}; \varpi (\lambda) \bigg), \end{align} $$
here, where we recall that $\lambda = - \nu ' \Omega _{\mathfrak {t}_{i_0}} (x) \geq n^{-1}$. The verification that this assumption holds is very similar to that in the previous setting, except that the $(\xi; \mu )$-tiltedness condition now follows from the fact that we restricted to the event $\mathscr {E}_r^{(3)} (i) \cap \mathscr {E}_r^{(3)} (i') \subseteq \mathscr {A}_r$. To verify that the parameters (6.14) satisfy the inequalities stipulated in Proposition 5.9, let $d = \min \big \{ |x - x'| : (x', \mathfrak {t}_{i_0}) \in \mathfrak {A} \big \}$; then Remark 5.3 implies the existence of a constant $c = c(\mathfrak {P})> 0$ such that $cd^{1/2} < \lambda < c^{-1} d^{1/2}$. Under our choice (6.11) of $\lambda $ and the definitions (6.4) of $\varkappa $ and $\varpi $, we have
$$ \begin{align*} \varkappa (\lambda) \geq c n^{\delta / 12 - 1} d^{-1/2} & \geq n^{\delta / 16 - 2/3} d^{-1/2}; \qquad \varkappa (\lambda) \leq c^{-1} n^{\delta / 12 - 1} d^{-1/2} \leq n^{ - 2/3}, \end{align*} $$
where in the second inequality we used the fact that $d \geq n^{\delta / 4 - 2/3}$, since $(x, \mathfrak {t}_{i_0}) \in \mathfrak {L}_-^{\delta / 4}$.
Thus, we may apply Proposition 5.9 to deduce that $H_{r + 1}$ is $\big ( \nu ' \varkappa (\lambda ), \varpi (\lambda )\big )$-tilted at $(x, \mathfrak {t}_{i_0})$ with overwhelming probability, where $\nu '$ is given by Equation (6.10). This implies with overwhelming probability that Equation (6.12) holds, which as mentioned above yields Equation (6.13) with overwhelming probability. This applies to a single x, so from a union bound, it follows that Equation (6.13) holds simultaneously for all $x \in n^{-2} \mathbb {Z}$ such that $(x, \mathfrak {t}_{i_0}) \in \mathfrak {L}_-^{\delta / 4}$. From the $1$-Lipschitz property of $H_{r + 1}$, we deduce with overwhelming probability that, for all $x \in \mathbb {R}$ such that $(x, \mathfrak {t}_{i_0}) \in \mathfrak {L}_-^{\delta / 4}$, we have
$$ \begin{align*} \displaystyle\sup_{z \geq n^{-1}} \Big( H^* & (x, \mathfrak{t}_{i_0}) + \nu' \varkappa (z) \Omega_{\mathfrak{t}_{i_0}} (x) - \varpi (z) - n^{-2} \Big) \\ & \leq H_{r + 1} (x, \mathfrak{t}_{i_0}) \leq \displaystyle\inf_{z \geq n^{-1}} \Big( H^* (x, \mathfrak{t}_{i_0}) + \nu' \varkappa (z) \Omega_{\mathfrak{t}_{i_0}} (x) - \varpi (z) + n^{-2} \Big). \end{align*} $$
Using the fact (6.10) that $\nu ' < \nu _{i_0}$ and that $\Omega _{\mathfrak {t}_{i_0}} (x) \geq n^{-1}$ for $(x, \mathfrak {t}_{i_0}) \in \mathfrak {L}_-^{\delta / 4}$, it follows that $\mathscr {E}_{r + 1}^{(3)} (i_0)$ holds with overwhelming probability.
This shows that the event $\mathscr {E}_{r + 1}^{(1)} (i_0) \cap \mathscr {E}_{r + 1}^{(2)} (i_0) \cap \mathscr {E}_{r + 1}^{(3)} (i_0)$ holds with overwhelming probability, for any index $0\leq i_0 \leq m$. By a union bound, we deduce that the event $\mathscr {E}_{r + 1}$ (from Equation (6.7)) holds with overwhelming probability. So, it remains to show that $\mathscr {F}_{r + 1}$ does as well.
This will follow from an application of Theorem 4.3; let us verify that $(h, H^*; \mathsf {h}_r; \mathfrak {R}_j, \mathsf {R}_j)$ satisfies the constraints on $(h, H^*;\mathsf {h}; \mathfrak {D}, \mathsf {D})$ listed in Assumption 4.1 and Assumption 4.2. First, observe that if $u_j \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {R}_j)$ is a tangency location for $\mathfrak {A}$, then $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {R}_j) \subset \partial \mathfrak {P}$. Thus, $\mathsf {H}_{r + 1} (nv) = n h(v)$, and $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {R}_j)$ is packed with respect to h. Otherwise, $\mathfrak {L}$ extends beyond $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {R}_j)$ and $\partial _{\operatorname {\mathrm {so}}} (\mathfrak {R}_j)$, and so $H^*$ admits an extension beyond the north and south boundaries of $\mathfrak {R}_j$. This shows that $\mathfrak {R}_j$ satisfies the second condition on $\mathfrak {D}$ in Assumption 4.1. The first, third and fourth follow from the adaptedness of $\mathfrak {R}_j$ to $H^*$; the fifth follows from taking the $(\mathfrak {P}, 1)$ there equal to $(\mathfrak {P}, 1)$ here.
To verify that it satisfies Assumption 4.2, observe that Equation (6.8), together with the fact that $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {R}_j)$ and $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {R}_j)$ are either subsets of $\partial \mathfrak {P}$ or bounded away from $\overline {\mathfrak {L}}$, gives $H_r = H^*$ along $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {R}_j) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {R}_j)$. In particular, $\mathsf {h}_r (nv) = n h_r (v) = n h(v)$ for each $v \in \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {R}_j) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {R}_j)$. Since $\mathfrak {R}_j$ is adapted to $H^*$, h is constant on both $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {R}_j)$ and $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {R}_j)$. Further, observe that, since we have restricted to $\mathscr {E}_r^{(1)} (i) \cap \mathscr {E}_r^{(1)} (i')$, the edge of $H_r$ is $n^{\delta / 240 - 2/3}$-tilted with respect to $H^*$. This implies that $\mathsf {h}_r (nv) = n h_r (v) = n h(v)$ for any $v \notin \mathfrak {L}$ with $\operatorname {\mathrm {dist}} (v, \mathfrak {A}) \geq \mathcal {O} (n^{\delta / 200 - 2/3})$. In particular, this holds for any $v \notin \mathfrak {L}_+^{\delta / 2}$, thereby verifying the second property listed in Assumption 4.2.
It remains to verify the first one, namely that $\big | \mathsf {h}_r (nv) - n h (v) \big | \leq n^{\delta / 2}$ for each $v \in \partial \mathfrak {R}_j$. This is already implied by the second property for $v \in \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {R}_j) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {R}_j)$, so we must show it for $v \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {R}_j) \cup \partial _{\operatorname {\mathrm {so}}} (\mathfrak {R}_j)$. We only consider $v \in \partial _{\operatorname {\mathrm {so}}} (\mathfrak {R}_j)$ since the case when $v \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {R}_j)$ is entirely analogous. To that end, observe that, since we restricted to the event $\mathscr {E}_r^{(3)} (i)$,
$$ \begin{align*} \displaystyle\sup_{z \geq n^{-1}}  \Big( H^* (x, \mathfrak{t}_i) + \nu_i \varkappa (z) \Omega_{\mathfrak{t}_i} (x) - \varpi (z) \Big) \leq H_r (x, \mathfrak{t}_i) \leq \displaystyle\inf_{z \geq n^{-1}} \Big( H^* (x, \mathfrak{t}_i) - \nu_i \varkappa (z) \Omega_{\mathfrak{t}_i} (x) + \varpi (z) \Big), \end{align*} $$
holds for each $(x, \mathfrak {t}_i) \in \mathfrak {L}_-^{\delta / 4}$. In particular, we may take $z = -\Omega _{\mathfrak {t}_i} (x) \geq n^{-1}$, which by (6.4) gives
$$ \begin{align*} \big| H_r (x, \mathfrak{t}_i) - H^* (x, \mathfrak{t}_i) \big| \leq n^{\delta / 12 - 1} + n^{\delta / 12 - 1} \log n \leq 2n^{\delta / 12 - 1} \log n \leq n^{\delta / 2 - 1}, \end{align*} $$
for any $(x, \mathfrak {t}_i) \in \partial _{\mathrm {so}}(\mathfrak {R}_j)$. This yields the first property listed in Assumption 4.2, so that assumption holds. In particular, Theorem 4.3 applies (with the $(\mathsf {h}, \mathsf {H})$ there equal to $(\mathsf {h}_r, \mathsf {H}_{r + 1})$ here), implying that $\mathscr {F}_{r + 1}$ holds with overwhelming probability. Hence, $\mathscr {E}_{r + 1} \cap \mathscr {F}_{r + 1} = \mathscr {A}_{r + 1}$ holds with overwhelming probability upon restricting to $\mathscr {A}_r$, thereby establishing the proposition if $\mathfrak {R}_j$ is of the first or second type listed in Section 6.1.
It remains to consider the case when $\mathfrak {R}_j = \mathfrak {R} (u_j)$ is of the third type listed in Section 6.1, that is, when $u_j$ is a horizontal tangency location of $\mathfrak {A}$ that does not lie on $\partial \mathfrak {P}$. Then, $\mathfrak {R}_j = \mathfrak {D}_1 \cup \mathfrak {D}_2$ for two trapezoids $\mathfrak {D}_1 = \mathfrak {D}_1 (u_j)$ and $\mathfrak {D}_2 = \mathfrak {D}_2 (u_j)$ that contain $u_j$ in the interiors of their south and north boundaries, respectively; set $\mathsf {D}_1 = n \mathfrak {D}_1 \subset \mathbb {T}$ and $\mathsf {D}_2 = n \mathfrak {D}_2 \subset \mathbb {T}$. Either $\mathfrak {D}_1$ or $\mathfrak {D}_2$ is disjoint from $\mathfrak {L}$; let us assume the former is, as the proof in the alternative case is entirely analogous.
Then, the top three boundaries $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D}_1) \cup \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}_1) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {D}_1)$ of $\mathfrak {D}_1$ are bounded away from $\overline {\mathfrak {L}}$. By Equation (6.8), this yields $h_r (v) = H^* (v)$ for any $v \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {D}_1) \cup \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}_1) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {D}_1)$. Moreover, $\nabla H^*$ is constant (see Footnote 13), and an element of $\{ (0, 0), (1, 0), (1, -1) \}$, along $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D}_1) \cup \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}_1) \cup \partial _{\operatorname {\mathrm {we}}} (\mathfrak {D}_1)$. Hence, the same holds for $\nabla H_r$, from which it follows that there is only one height function on $\mathsf {D}_1$ with boundary data $\mathsf {H}_r |_{\mathsf {D}_1}$. In particular, $\mathsf {H}_{r + 1} (nv) = n H^* (v) = \mathsf {H}_r (nv)$ for each $v \in \mathfrak {D}_1$. The inequalities (6.2), (6.5) and (6.6) thus hold deterministically inside $\mathfrak {D}_1$.
Furthermore, the fact that $\mathsf {H}_r (nv) = n H^* (v)$ for $v \in \mathfrak {D}_1$ implies that $\partial _{\operatorname {\mathrm {so}}} (\mathfrak {D}_1)$ is packed with respect to $h_r$; thus, the same statement holds for $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D}_2)$. Therefore, the same reasoning as applied above in the second case for $\mathfrak {R}_j$ listed in Section 6.1 (when $u_j$ is a horizontal tangency location of $\mathfrak {A}$ that lies on $\partial \mathfrak {P}$) applies to show that the inequalities (6.2), (6.5) and (6.6) hold with overwhelming probability inside $\mathfrak {D}_2$. It follows that these inequalities hold with overwhelming probability on $\mathfrak {D}_1 \cup \mathfrak {D}_2 = \mathfrak {R}_j$. This implies that $\mathscr {A}_{r + 1}$ holds with overwhelming probability, which yields the proposition.
7 Existence of tilted height functions
In this section, we establish Proposition 5.4. Instead of directly producing the tilted height function $\widehat {H}^*$ as described there, we first in Section 7.1 define its complex slope as a solution to Equation (3.3), with the function $Q_0$ there modified in an explicit way. In Section 7.2, we solve this equation perturbatively and compare its solution to the original one. We then define the tilted height function from its complex slope using Equation (3.1) and establish Proposition 5.4 in Section 7.3.
7.1 Modifying Q
In this section, we introduce a function $\mathcal {F}_t (x; \alpha _0)$ that will eventually be the complex slope for our tilted height profile. Throughout this section, we adopt the notation from Proposition 5.4, recalling in particular the complex slope $f_t (x)$ associated with $H^*$, defined for $(x, t) \in \mathfrak {L} (\mathfrak {P})$. We denote the liquid region inside $\mathfrak {D}$ as $\mathfrak L=\mathfrak {D}\cap \mathfrak {L}(\mathfrak {P})$.
The function $\mathcal {F}_t (x; \alpha _0)$ may be interpreted as the solution to the complex Burgers equation
$$ \begin{align} \partial_t \mathcal{F}_t (x; \alpha_0) + \partial_x \mathcal{F}_t (x; \alpha_0) \displaystyle\frac{\mathcal{F}_t (x; \alpha_0)}{\mathcal{F}_t (x; \alpha_0) + 1} = 0, \qquad \text{with initial data}\ \mathcal{F}_0 (x; \alpha_0) = \alpha_0 f_{\mathfrak{t}_0} (x), \end{align} $$
for suitable real numbers $\alpha _0 = 1 + \mathcal {O} \big ( |\xi _1 + \xi _2| \big )$ and $\mathfrak {t}_0$; stated alternatively, we first time-shift the solution $f_t (x)$ of Equation (3.2) by $\mathfrak {t}_0$ and then multiply its initial data by a ‘drift’ $\alpha _0$. However, making this precise would involve justifying the existence and uniqueness of a solution to Equation (7.1). To circumvent this, we instead define $\mathcal {F}_t (x; \alpha _0)$ as the solution to an ‘$\alpha _0$-deformation’ of Equation (7.7).
To implement this, we define real numbers $\mathfrak {t}_0$ and $\alpha _0$ by
$$ \begin{align} \mathfrak{t}_0 = \displaystyle\frac{\xi_2 \mathfrak{t}_1 - \xi_1 \mathfrak{t}_2}{\xi_2 - \xi_1}; \qquad \alpha_0 = \displaystyle\frac{\xi_2 - \xi_1}{\mathfrak{t}_2 - \mathfrak{t}_1} + 1 = \displaystyle\frac{\xi_1}{\mathfrak{t}_1 - \mathfrak{t}_0} + 1 = \displaystyle\frac{\xi_2}{\mathfrak{t}_2 - \mathfrak{t}_0} + 1 \end{align} $$
so that $\alpha _0 = 1 + \mathcal {O} \big ( |\xi _1 + \xi _2| \big )$.
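In Equation (7.2), the three expressions for $\alpha _0$ agree by a direct computation from the definition of $\mathfrak {t}_0$:
$$ \begin{align*} \mathfrak{t}_1 - \mathfrak{t}_0 = \displaystyle\frac{\xi_1 (\mathfrak{t}_2 - \mathfrak{t}_1)}{\xi_2 - \xi_1}, \qquad \mathfrak{t}_2 - \mathfrak{t}_0 = \displaystyle\frac{\xi_2 (\mathfrak{t}_2 - \mathfrak{t}_1)}{\xi_2 - \xi_1}, \qquad \text{so that} \quad \displaystyle\frac{\xi_1}{\mathfrak{t}_1 - \mathfrak{t}_0} = \displaystyle\frac{\xi_2}{\mathfrak{t}_2 - \mathfrak{t}_0} = \displaystyle\frac{\xi_2 - \xi_1}{\mathfrak{t}_2 - \mathfrak{t}_1}. \end{align*} $$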
We next introduce a time-shifted variant of $f_t (x)$ given by
$$ \begin{align} \mathcal{F}_s (x; 1) = f_{s + \mathfrak{t}_0} (x), \qquad \text{whenever}\ (x, s + \mathfrak{t}_0) \in \overline{\mathfrak{L}}. \end{align} $$
Although the complex slope for the tilted height function $\widehat {H}^*$ from Proposition 5.4 will eventually be related to an $\alpha _0$-deformation of $\mathcal {F}_s (x; 1)$ with $\alpha _0$ given explicitly by Equation (7.2), it will be useful to define this deformation (denoted by $\mathcal {F}_t (x; \alpha )$) for any $\alpha \in \mathbb {R}$ with $|\alpha - 1|$ sufficiently small. To that end, recalling the rational function $Q: \mathbb {C}^2 \rightarrow \mathbb {C}$ satisfying Equation (3.4) with respect to $f_t$, we also define its time-shift $\mathcal {Q}_1$ and $\alpha $-deformation $\mathcal {Q}_{\alpha }$, for any $\alpha \in \mathbb {R}$, by
$$ \begin{align*} \mathcal{Q}_1 (u, v) = Q \bigg( u, v - \displaystyle\frac{\mathfrak{t}_0 u}{u + 1} \bigg); \qquad \mathcal{Q}_{\alpha} (u, v) = \displaystyle\frac{u + 1}{\alpha^{-1} u + 1} \mathcal{Q}_1 (\alpha^{-1} u, v), \end{align*} $$
observing in particular that Equations (3.4) and (7.3) together imply
$$ \begin{align} \mathcal{Q}_1 \bigg( \mathcal{F}_t (x; 1), x - \frac{t \mathcal{F}_t (x; 1)}{\mathcal{F}_t (x; 1) + 1}\bigg) = 0. \end{align} $$
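To see why, note that Equation (7.3) merely re-indexes time; assuming that Equation (3.4) expresses the vanishing $Q \big ( f_t (x), x - t f_t (x) / (f_t (x) + 1) \big ) = 0$ on $\overline {\mathfrak {L}}$, the verification of Equation (7.4) is the substitution
$$ \begin{align*} \mathcal{Q}_1 \bigg( \mathcal{F}_t (x; 1), x - \frac{t \mathcal{F}_t (x; 1)}{\mathcal{F}_t (x; 1) + 1}\bigg) = Q \bigg( f_{t + \mathfrak{t}_0} (x), x - \frac{(t + \mathfrak{t}_0) f_{t + \mathfrak{t}_0} (x)}{f_{t + \mathfrak{t}_0} (x) + 1}\bigg) = 0. \end{align*} $$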
If $|\alpha - 1|$ is sufficiently small (in a way only dependent on $\mathfrak {P}$), this implies the existence of an analytic function $\mathcal {F}_t (x; \alpha )$, defined for $(x, t+\mathfrak t_0)$ in an open subset of $\mathfrak {L}$, that is continuous in $\alpha $ and satisfies
$$ \begin{align} \mathcal{Q}_{\alpha} \bigg( \mathcal{F}_t (x; \alpha), x - \displaystyle\frac{t \mathcal{F}_t (x; \alpha)}{\mathcal{F}_t (x; \alpha) + 1} \bigg) = 0. \end{align} $$
For example, recall from the third part of Proposition 3.3 that there exists a real analytic function $\mathcal {Q}_{0; 1}$, locally defined around any solution of Equation (7.4) (and obtained by solving that equation), such that
$$ \begin{align*} \mathcal{Q}_{0; 1} \big( \mathcal{F}_t (x; 1) \big) = x \big( \mathcal{F}_t (x; 1) + 1 \big) - t \mathcal{F}_t (x; 1). \end{align*} $$
Then, we may set
$$ \begin{align} \mathcal{Q}_{0; \alpha} (u) = \displaystyle\frac{u + 1}{\alpha^{-1}u + 1} \mathcal{Q}_{0; 1} (\alpha^{-1} u), \end{align} $$
for u in the domain of $\mathcal {Q}_{0; 1}$, and let $\mathcal {F}_t (x; \alpha )$ denote the root of
$$ \begin{align} \mathcal{Q}_{0; \alpha} \big( \mathcal{F}_t (x; \alpha) \big) = x \big( \mathcal{F}_t (x; \alpha) + 1 \big) - t \mathcal{F}_t (x; \alpha), \end{align} $$
chosen so that it is continuous in $\alpha $; it is directly verified that it satisfies Equation (7.5). Such a root is well defined on a nonempty open subset of $\mathfrak {L}$ that contains no double root of Equation (7.5) or equivalently of Equation (7.7) (such a subset exists for $|\alpha - 1|$ sufficiently small). Since any double root of Equation (7.5) is real, we may extend $\mathcal {F}_t (x; \alpha )$ to the $\alpha $-deformed liquid region $\mathfrak {L}_{\alpha }$ and its arctic curve $\mathfrak {A}_{\alpha }$, defined by
$$ \begin{align*} &\mathfrak{L}_{\alpha} = \big\{ (x, t)\in \mathbb R^2 : (x, t+\mathfrak t_0)\in \mathfrak{D}, \mathcal{F}_t (x; \alpha) \in \mathbb{H}^- \big\}; \\ &\mathfrak{A}_{\alpha} = \big\{(x,t)\in \partial \mathfrak{L}_{\alpha}: \mathcal{F}_t (x; \alpha) \in \mathbb R\big\}, \end{align*} $$
where we observe for $|\alpha - 1|$ sufficiently small that $\mathfrak {L}_{\alpha }$ is simply connected since $\mathfrak {L}$ (and thus $\mathfrak {L}_1$) is.
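The direct verification, mentioned above, that the root $\mathcal {F}_t (x; \alpha )$ of Equation (7.7) satisfies Equation (7.5) amounts to unwinding Equation (7.6). Indeed, Equation (7.7) can be rewritten as
$$ \begin{align*} \displaystyle\frac{\mathcal{Q}_{0; 1} \big( \alpha^{-1} \mathcal{F}_t (x; \alpha) \big)}{\alpha^{-1} \mathcal{F}_t (x; \alpha) + 1} = x - \displaystyle\frac{t \mathcal{F}_t (x; \alpha)}{\mathcal{F}_t (x; \alpha) + 1}, \end{align*} $$
and (assuming, as the discussion above indicates, that $\mathcal {Q}_{0; 1}$ locally parametrizes the zero set of $\mathcal {Q}_1$ through $\mathcal {Q}_1 \big ( u, \mathcal {Q}_{0; 1} (u) / (u + 1) \big ) = 0$) this states that $\mathcal {Q}_1$ vanishes at the pair $\big ( \alpha ^{-1} \mathcal {F}_t (x; \alpha ), x - t \mathcal {F}_t (x; \alpha ) / (\mathcal {F}_t (x; \alpha ) + 1) \big )$; multiplying by $\frac {u + 1}{\alpha ^{-1} u + 1}$ as in the definition of $\mathcal {Q}_{\alpha }$ then yields Equation (7.5).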
We next state the following lemma which, given a solution of an equation of the type (7.7), evaluates its derivatives with respect to x and t and shows that it satisfies the complex Burgers equation. Its proof essentially follows from a Taylor expansion and will be provided in Appendix A below.
Lemma 7.1. Fix $(x_0, t_0) \in \mathbb {R}^2$; let $\mathcal {F}_t (x) $ denote a function that is real analytic in a neighborhood of $(x_0, t_0)$, and let $\mathcal {Q}_0$ denote a function that is real analytic in a neighborhood of $\mathcal {F}_{t_0} (x_0)$. Assume in a neighborhood of $(x_0, t_0)$ that
$$ \begin{align} \mathcal{Q}_0 \big( \mathcal{F}_t (x) \big) = x \big( \mathcal{F}_t (x) + 1 \big) - t \mathcal{F}_t (x). \end{align} $$
Then for all $(x ,t)$ in a neighborhood of $(x_0, t_0)$ we have
$$ \begin{align} \partial_x \mathcal{F}_t (x) = \displaystyle\frac{\mathcal{F}_t (x) + 1}{\mathcal{Q}_0' \big( \mathcal{F}_t (x) \big) - x + t}; \qquad \partial_t \mathcal{F}_t (x) = - \displaystyle\frac{\mathcal{F}_t (x)}{\mathcal{Q}_0' \big( \mathcal{F}_t (x) \big) - x + t}, \end{align} $$
and in particular
$$ \begin{align} \partial_t \mathcal{F}_t (x) + \partial_x \mathcal{F}_t (x) \displaystyle\frac{\mathcal{F}_t (x)}{\mathcal{F}_t (x) + 1} = 0. \end{align} $$
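For orientation, Equation (7.9) arises from implicitly differentiating Equation (7.8) (the full justification, including the analyticity statements, is deferred to Appendix A as indicated above). Differentiating Equation (7.8) in x gives
$$ \begin{align*} \mathcal{Q}_0' \big( \mathcal{F}_t (x) \big) \partial_x \mathcal{F}_t (x) = \mathcal{F}_t (x) + 1 + (x - t) \partial_x \mathcal{F}_t (x), \qquad \text{so that} \quad \partial_x \mathcal{F}_t (x) = \displaystyle\frac{\mathcal{F}_t (x) + 1}{\mathcal{Q}_0' \big( \mathcal{F}_t (x) \big) - x + t}, \end{align*} $$
and the analogous differentiation in t gives the second identity in Equation (7.9); combining the two then yields Equation (7.10).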
7.2 Comparison between $\mathcal {F}_t (x; \alpha )$ and $\mathcal {F}_t (x; 1)$
In this section, we provide two estimates comparing $\mathcal {F}_t (x; \alpha )$ and $\mathcal {F}_t (x; 1)$. The first (given by Lemma 7.2 below) compares their logarithms, where in what follows we take the branch of the logarithm so that $\operatorname {\mathrm {Im}} \log u \in [-\pi , \pi )$. The second (given by Lemma 7.3 below) compares the endpoints of their arctic boundaries. These results will eventually be the sources of the quantities $\Omega $ and $\Upsilon $ from Equation (5.1).
Lemma 7.2. Suppose $|\alpha - 1|$ is sufficiently small, and $v = (x, t) \in \mathfrak {L}_1 \cap \mathfrak {L}_{\alpha }$ is any point bounded away from a cusp or tangency location of $\mathfrak {A}_1$. Setting $d = \operatorname {\mathrm {dist}} (v, \mathfrak {A}_1)$, we have the following estimates, where the implicit constants in the error below only depend on the first three derivatives of $\mathcal {Q}_{0; 1}$ at $\mathcal {F}_t (x; 1)$.
- 1. If $d \geq |\alpha - 1|$, then
$$ \begin{align*} \log \mathcal{F}_t (x; \alpha) - \log \alpha - \log \mathcal{F}_t (x; 1) = t (1 - \alpha) \partial_x \bigg( \displaystyle\frac{\mathcal{F}_t (x; 1)}{\mathcal{F}_t (x; 1) + 1} \bigg) \Big( 1 + \mathcal{O} \big( d^{-1} |\alpha - 1| + |\alpha - 1|^{1/2} \big) \Big). \end{align*} $$
- 2. For any $d> 0$, we have $\big | \log \mathcal {F}_t (x; \alpha ) - \log \alpha - \log \mathcal {F}_t (x; 1) \big | = \mathcal {O} \big ( |\alpha - 1|^{1/2} \big )$.
Proof. Letting $\mathcal {F} = \mathcal {F}_t (x; 1)$ and $\mathcal {F}' = \mathcal {F}_t (x; \alpha )$, we have by Equation (7.7) that
$$ \begin{align} \mathcal{Q}_{0; 1} (\mathcal{F}) = x (\mathcal{F} + 1) - t \mathcal{F}; \qquad \mathcal{Q}_{0; \alpha} (\mathcal{F}') = x (\mathcal{F}' + 1) - t\mathcal{F}'. \end{align} $$
Next, Equation (7.6) implies for $( u, x - tu/(u + 1) )$ in a neighborhood of $( \mathcal {F}', x - t \mathcal {F}'/(\mathcal {F}' + 1))$ that
$$ \begin{align*} \mathcal{Q}_{0; \alpha} (u) = \displaystyle\frac{u + 1}{\alpha^{-1} u + 1} \mathcal{Q}_{0; 1} (\alpha^{-1} u). \end{align*} $$
Letting $\widetilde {\mathcal {F}} = \alpha ^{-1} \mathcal {F}'$, we deduce from the second statement of Equation (7.11) that
$$ \begin{align*} \mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}}) + \displaystyle\frac{(\alpha - 1) \widetilde{\mathcal{F}}}{\widetilde{\mathcal{F}} + 1} \mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}}) = \mathcal{Q}_{0; \alpha} (\mathcal{F}') = x (\widetilde{\mathcal{F}} + 1) - t \widetilde{\mathcal{F}}+ (\alpha - 1) (x - t) \widetilde{\mathcal{F}}. \end{align*} $$
Together with the first statement of Equation (7.11), this gives
$$ \begin{align} \mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}}) - \mathcal{Q}_{0; 1} (\mathcal{F}) = (\widetilde{\mathcal{F}} - \mathcal{F})(x - t) + \displaystyle\frac{(1 - \alpha) \widetilde{\mathcal{F}}}{\widetilde{\mathcal{F}} + 1} \mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}}) + (\alpha - 1) (x - t) \widetilde{\mathcal{F}}. \end{align} $$
By a Taylor expansion, we have
$$ \begin{align*} \mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}}) - \mathcal{Q}_{0; 1} (\mathcal{F}) = (\widetilde{\mathcal{F}} - \mathcal{F}) \mathcal{Q}_{0; 1}' (\mathcal{F}) + \frac{1}{2} (\widetilde{\mathcal{F}} - \mathcal{F})^2 \mathcal{Q}_{0; 1}^{\prime\prime} (\mathcal{F}) + \mathcal{O} \big( |\widetilde{\mathcal{F}} - \mathcal{F}|^3 \big), \end{align*} $$
which by Equation (7.12) implies, since $\mathcal {F}$ is bounded away from $\{ -1, 0, \infty \}$ (as $(x, t)$ is bounded away from a tangency location of $\mathfrak {A}_1$), that
$$ \begin{align} (\widetilde{\mathcal{F}} - \mathcal{F}) \big( \mathcal{Q}_{0; 1}' (& \mathcal{F} )- x + t \big) + \displaystyle\frac{1}{2} (\widetilde{\mathcal{F}} - \mathcal{F})^2 \mathcal{Q}_{0; 1}^{\prime\prime} (\mathcal{F}) \nonumber\\ & = (\alpha - 1) \bigg( (x - t) \mathcal{F} - \displaystyle\frac{\mathcal{F}}{\mathcal{F} + 1} \mathcal{Q}_{0; 1} (\mathcal{F}) \bigg) + \mathcal{O} \big( |\alpha - 1| |\widetilde{\mathcal{F}} - \mathcal{F}| + |\widetilde{\mathcal{F}} - \mathcal{F}|^3 \big) \\ & = \displaystyle\frac{(1 - \alpha) t\mathcal{F}}{\mathcal{F} + 1} + \mathcal{O} \big( |\alpha - 1| |\widetilde{\mathcal{F}} - \mathcal{F}| + |\widetilde{\mathcal{F}} - \mathcal{F}|^3 \big),\nonumber \end{align} $$
where to deduce the last equality we used the first statement of Equation (7.11).
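Explicitly, the last equality above is the one-line computation
$$ \begin{align*} (x - t) \mathcal{F} - \displaystyle\frac{\mathcal{F}}{\mathcal{F} + 1} \mathcal{Q}_{0; 1} (\mathcal{F}) = (x - t) \mathcal{F} - \displaystyle\frac{\mathcal{F} \big( x (\mathcal{F} + 1) - t \mathcal{F} \big)}{\mathcal{F} + 1} = - t \mathcal{F} + \displaystyle\frac{t \mathcal{F}^2}{\mathcal{F} + 1} = - \displaystyle\frac{t \mathcal{F}}{\mathcal{F} + 1}, \end{align*} $$
obtained by substituting $\mathcal {Q}_{0; 1} (\mathcal {F}) = x (\mathcal {F} + 1) - t \mathcal {F}$ from Equation (7.11).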
Now, if $d < \delta $ for some $\delta = \delta (\mathfrak {P})> 0$, then $\mathcal {Q}_{0; 1}^{\prime \prime } (\mathcal {F})$ is bounded away from $0$ since $(x, t)$ is bounded away from a cusp of $\mathfrak {A}_1$, so Equation (7.13) implies $|\widetilde {\mathcal {F}} - \mathcal {F}| = \mathcal {O} \big ( |\alpha - 1|^{1/2} \big )$. If instead $d \geq \delta $, then $\operatorname {\mathrm {Im}} \mathcal {Q}_{0; 1}' (\mathcal {F})$ is bounded away from $0$, which implies that $|\widetilde {\mathcal {F}} - \mathcal {F}| = \mathcal {O} \big ( |\alpha - 1| \big )$. In either case, this gives $|\widetilde {\mathcal {F}} - \mathcal {F}| = \mathcal {O} \big ( |\alpha - 1|^{1/2} \big )$, which verifies the second statement of the lemma.
Next, observe from the first statement of Equation (7.9) that
$$ \begin{align} \frac{1}{\mathcal{Q}_{0; 1}' (\mathcal{F}) - x + t} = \displaystyle\frac{\partial_x \mathcal{F}}{\mathcal{F} + 1}. \end{align} $$
By the square root decay of $\mathcal {F}_t (x)$ as $(x, t)$ nears $\mathfrak {A}_1$ (see Remark A.2 below), there exist constants $c = c(\mathfrak {P})> 0$ and $C = C(\mathfrak {P})> 1$ such that $c d^{-1/2} < \partial _x \mathcal {F}_t (x; 1) < C d^{-1/2}$. After decreasing c and increasing C if necessary, Equation (7.14) then implies
$$ \begin{align*} c d^{1/2} < \mathcal{Q}_{0; 1}' (\mathcal{F}) - x + t < C d^{1/2}. \end{align*} $$
This, together with Equation (7.13), the bound $|\widetilde {\mathcal {F}} - \mathcal {F}| = \mathcal {O} \big ( |\alpha - 1|^{1/2} \big )$ and the fact that $\mathcal {Q}_{0; 1}^{\prime \prime } (\mathcal {F})$ is bounded away from $0$ for $(x, t)$ sufficiently close to $\mathfrak {A}_1$ quickly gives for $d \geq |\alpha - 1|$ that
$$ \begin{align*} \frac{\widetilde{\mathcal{F}} - \mathcal{F}}{\mathcal F} = t (1 - \alpha) \displaystyle\frac{\partial_x \mathcal{F}}{(\mathcal{F} + 1)^2} \Big( 1 + \mathcal{O} \big( d^{-1} |\alpha - 1| + |\alpha - 1|^{1 / 2} \big) \Big). \end{align*} $$
It follows that
$$ \begin{align*} \log \bigg( \displaystyle\frac{\widetilde{\mathcal{F}}}{\mathcal{F}} \bigg) & = \displaystyle\frac{\widetilde{\mathcal{F}}}{\mathcal{F}} - 1 + \mathcal{O} \big( (\widetilde{\mathcal{F}} - \mathcal{F})^2 \big) \\ & = t (1 - \alpha) \Bigg( \displaystyle\frac{\partial_x \mathcal{F}_t (x; 1)}{\big(\mathcal{F}_t (x; 1) + 1 \big)^2} \Bigg) \Big( 1 + \mathcal{O} \big( d^{-1} |\alpha - 1| + |\alpha - 1|^{1/2} \big) \Big), \end{align*} $$
which implies the first statement of the lemma.
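Indeed, since $\widetilde {\mathcal {F}} = \alpha ^{-1} \mathcal {F}_t (x; \alpha )$, the left side above equals $\log \mathcal {F}_t (x; \alpha ) - \log \alpha - \log \mathcal {F}_t (x; 1)$, and the right side matches the derivative appearing in the first statement by the quotient rule
$$ \begin{align*} \partial_x \bigg( \displaystyle\frac{\mathcal{F}_t (x; 1)}{\mathcal{F}_t (x; 1) + 1} \bigg) = \displaystyle\frac{\partial_x \mathcal{F}_t (x; 1)}{\big( \mathcal{F}_t (x; 1) + 1 \big)^2}. \end{align*} $$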
Next, we show that if no cusp or tangency location of ${\mathfrak A}_1$ is of the form $(x,t)$ with $x\in \mathbb R$, then the time slices $\big \{ x: (x, t) \in \mathfrak {L}_1 \big \}$ and $\big \{ x: (x, t) \in \mathfrak {L}_\alpha \big \}$ contain the same number of intervals; we also estimate the distance between their endpoints.
Lemma 7.3. The following holds for $|\alpha - 1|$ sufficiently small.
- (1) Let $(x_0, t) \in \mathfrak {A}_1$ denote a right (or left) boundary point of $\mathfrak {A}_1$, bounded away from a cusp or horizontal tangency location of $\mathfrak {A}_1$. Then, there exists $(x_0', t) \in \mathfrak {A}_{\alpha }$, which is a right (or left, respectively) boundary point of $\mathfrak {A}_{\alpha }$, so that
$$ \begin{align} x_0' - x_0 = t (\alpha - 1) \displaystyle\frac{\mathcal{F}_t (x_0)}{\big( \mathcal{F}_t (x_0) + 1\big)^2} + \mathcal{O} \big( |\alpha - 1|^2 \big). \end{align} $$
- (2) Fix $t \in \mathbb {R}$; suppose that $\big \{ x: (x, t) \in \mathfrak {L}_1 \big \}$ is a union of $k \geq 1$ disjoint open intervals $(x_1, x_1') \cup (x_2, x_2') \cup \cdots \cup (x_k, x_k')$, and that it is bounded away from a cusp or horizontal tangency location of $\mathfrak {A}_1$. Then, $\big \{ x : (x, t) \in \mathfrak {L}_\alpha \big \}$ is also a union of k disjoint open intervals $(\widehat {x}_1, \widehat {x}_1') \cup (\widehat {x}_2, \widehat {x}_2') \cup \cdots \cup (\widehat {x}_k, \widehat {x}_k')$. Moreover, for any index $j \in [1, k]$, we have
$$ \begin{align} &\widehat{x}_j - x_j = t (\alpha - 1) \displaystyle\frac{\mathcal{F}_t (x_j)}{\big( \mathcal{F}_t (x_j) + 1\big)^2} + \mathcal{O} \big( |\alpha - 1|^2 \big) \nonumber\\ & \widehat{x}_j' - x_j' = t (\alpha - 1) \displaystyle\frac{\mathcal{F}_t (x^{\prime}_j)}{\big( \mathcal{F}_t (x^{\prime}_j) + 1\big)^2} + \mathcal{O} \big( |\alpha - 1|^2 \big). \end{align} $$
Proof. For the first statement, we will only prove the case that $(x_0,t)$ is a left boundary point of ${\mathfrak A}_1$, as the proof of the case that it is a right boundary point is entirely analogous. Observe by the second and third parts of Proposition 3.3 that $(x, t) \in \mathfrak {A}_{\alpha }$ if and only if
$$ \begin{align*} \mathcal{Q}_{0; \alpha} \big( \mathcal{F}_t (x; \alpha) \big) = x \big( \mathcal{F}_t (x; \alpha) + 1 \big) - t \mathcal{F}_t (x; \alpha); \qquad \mathcal{Q}_{0; \alpha}' \big( \mathcal{F}_t (x; \alpha) \big) = x - t, \end{align*} $$
that is, if and only if
$$ \begin{align} \mathcal{Q}_{0; \alpha} \big( \mathcal{F}_t (x; \alpha) \big) = \big( \mathcal{F}_t (x; \alpha) + 1 \big) \mathcal{Q}_{0; \alpha}' \big( \mathcal{F}_t (x; \alpha) \big) + t; \qquad x = \mathcal{Q}_{0; \alpha}' \big( \mathcal{F}_t (x; \alpha) \big) + t. \end{align} $$
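The equivalence of the two displays is a quick substitution: the second equations are identical, and inserting $x = \mathcal {Q}_{0; \alpha }' \big ( \mathcal {F}_t (x; \alpha ) \big ) + t$ into the first gives
$$ \begin{align*} x \big( \mathcal{F}_t (x; \alpha) + 1 \big) - t \mathcal{F}_t (x; \alpha) = \Big( \mathcal{Q}_{0; \alpha}' \big( \mathcal{F}_t (x; \alpha) \big) + t \Big) \big( \mathcal{F}_t (x; \alpha) + 1 \big) - t \mathcal{F}_t (x; \alpha) = \big( \mathcal{F}_t (x; \alpha) + 1 \big) \mathcal{Q}_{0; \alpha}' \big( \mathcal{F}_t (x; \alpha) \big) + t. \end{align*} $$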
In particular, Equation (7.17) holds if $(x, \alpha ) = (x_0, 1)$. Let us produce a solution to Equation (7.17) for $\alpha $ close to $1$ by perturbing around $x_0$. To that end, abbreviate $\mathcal {F} = \mathcal {F} (x_0; 1)$, let $x$ be close to $x_0$ and abbreviate $\mathcal {F}' = \mathcal {F} (x; \alpha )$. Then recall from the first statement of Equations (7.7) and (7.6) that for $|\alpha - 1|$ sufficiently small and $\big ( u, x -tu/(u + 1) \big )$ in a neighborhood of $( \mathcal {F}, x - t \mathcal {F}/(\mathcal {F} + 1))$, we have
$$ \begin{align*} \mathcal{Q}_{0; \alpha} (u) = \displaystyle\frac{u + 1}{\alpha^{-1} u + 1} \mathcal{Q}_{0; 1} (\alpha^{-1} u) = \alpha \mathcal{Q}_{0; 1} (\alpha^{-1} u) + \displaystyle\frac{\alpha (1 - \alpha)}{u + \alpha} \mathcal{Q}_{0; 1} (\alpha^{-1} u) \end{align*} $$
so that (7.18)
$$ \begin{align} \mathcal{Q}_{0; \alpha}' (u) = \mathcal{Q}_{0; 1}' (\alpha^{-1} u) + \displaystyle\frac{\alpha - 1}{u + \alpha} \bigg( \displaystyle\frac{\alpha}{u + \alpha} \mathcal{Q}_{0; 1} (\alpha^{-1} u) - \mathcal{Q}_{0; 1}' (\alpha^{-1} u) \bigg). \end{align} $$
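As an informal aside (not part of the proof), the algebraic identity (7.18) is elementary to verify symbolically. The snippet below is a minimal sanity check with a cubic polynomial standing in for $\mathcal{Q}_{0;1}$; the true $\mathcal{Q}_{0;1}$ of the paper is not used, and the identity itself holds for any differentiable $\mathcal{Q}_{0;1}$.

```python
import sympy as sp

u, a = sp.symbols('u alpha')
Q1 = sp.Lambda(u, u**3 + 2*u - 5)      # stand-in for Q_{0;1}; any smooth function works
Q1p = sp.Lambda(u, sp.diff(Q1(u), u))  # its derivative

# Q_{0;alpha}(u) = (u+1)/(alpha^{-1} u + 1) * Q_{0;1}(alpha^{-1} u)
Q_alpha = (u + 1) / (u / a + 1) * Q1(u / a)

# Right-hand side of Equation (7.18)
rhs = Q1p(u / a) + (a - 1) / (u + a) * (a / (u + a) * Q1(u / a) - Q1p(u / a))

# The difference simplifies to zero, confirming the identity for this Q_{0;1}.
assert sp.simplify(sp.diff(Q_alpha, u) - rhs) == 0
```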
Thus, denoting $\widetilde {\mathcal {F}} = \alpha ^{-1} \mathcal {F}'$, the first statement of Equation (7.17) holds if and only if
$$ \begin{align*} \displaystyle\frac{\alpha \widetilde{\mathcal{F}} + 1}{\widetilde{\mathcal{F}} + 1} \mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}}) = (\alpha \widetilde{\mathcal{F}} + 1) \Bigg( \mathcal{Q}_{0; 1}' (\widetilde{\mathcal{F}}) + \displaystyle\frac{\alpha - 1}{\alpha \widetilde{\mathcal{F}} + \alpha} \bigg( \displaystyle\frac{1}{\widetilde{\mathcal{F}} + 1} \mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}}) - \mathcal{Q}_{0; 1}' (\widetilde{\mathcal{F}}) \bigg) \Bigg) + t. \end{align*} $$
This is equivalent to
$$ \begin{align*} \mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}}) & = (\widetilde{\mathcal{F}} + \alpha^{-1}) \mathcal{Q}_{0; 1}' (\widetilde{\mathcal{F}}) + \displaystyle\frac{\alpha - 1}{\alpha \widetilde{\mathcal{F}} + \alpha} \mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}}) + \displaystyle\frac{t (\widetilde{\mathcal{F}} + 1)}{\alpha \widetilde{\mathcal{F}} + 1} \\ & = (\widetilde{\mathcal{F}} + 1) \mathcal{Q}_{0; 1}' (\widetilde{\mathcal{F}}) + t + (\alpha - 1) \Bigg( \displaystyle\frac{\mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}})}{\alpha \widetilde{\mathcal{F}} + \alpha} - \displaystyle\frac{t \widetilde{\mathcal{F}}}{\alpha \widetilde{\mathcal{F}} + 1} - \alpha^{-1} \mathcal{Q}_{0; 1}' (\widetilde{\mathcal{F}}) \Bigg), \end{align*} $$
which upon subtracting from the $(x, \alpha ) = (x_0, 1)$ case of Equation (7.17) is true if and only if (7.19)
$$ \begin{align} \mathcal{Q}_{0; 1} (\mathcal{F}) - (\mathcal{F} + 1) \mathcal{Q}_{0; 1}' (\mathcal{F}) - \mathcal{Q}_{0; 1} ( & \widetilde{\mathcal{F}}) + (\widetilde{\mathcal{F}} + 1) \mathcal{Q}_{0; 1}' (\widetilde{\mathcal{F}}) \nonumber\\ & = (1 - \alpha) \Bigg( \displaystyle\frac{\mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}})}{\alpha \widetilde{\mathcal{F}} + \alpha} - \displaystyle\frac{t \widetilde{\mathcal{F}}}{\alpha \widetilde{\mathcal{F}} + 1} - \alpha^{-1} \mathcal{Q}_{0; 1}' (\widetilde{\mathcal{F}}) \Bigg). \end{align} $$
Now, observe that the derivative of $\mathcal {Q}_{0; 1} (z) - (z + 1) \mathcal {Q}_{0; 1}' (z)$ is $-(z + 1) \mathcal {Q}_{0; 1}^{\prime \prime } (z)$, which is bounded away from $0$ for $z = \mathcal {F}$ since $(x, t)$ is bounded away from a horizontal tangency location or singularity of $\mathfrak {A}$. Hence, the implicit function theorem implies that, for $|\alpha - 1|$ sufficiently small, Equation (7.19) admits a solution with $|\widetilde {\mathcal {F}} - \mathcal {F}| = \mathcal {O} \big ( |\alpha - 1| \big )$. Taylor expanding then gives (7.20)
$$ \begin{align} (\widetilde{\mathcal{F}} - \mathcal{F}) (\mathcal{F} + 1) \mathcal{Q}_{0; 1}^{\prime \prime } (\mathcal{F}) & = \displaystyle\frac{1 - \alpha}{\mathcal{F} + 1} \big( \mathcal{Q}_{0; 1} (\mathcal{F}) - (\mathcal{F} + 1) \mathcal{Q}_{0; 1}' (\mathcal{F}) - t\mathcal{F} \big) + \mathcal{O} \big( |\alpha - 1|^2 \big) \nonumber\\ & = \displaystyle\frac{t (\alpha - 1) (\mathcal{F} - 1)}{\mathcal{F} + 1} + \mathcal{O} \big( |\alpha - 1|^2 \big), \end{align} $$
where in the last equality we used the first statement of Equation (7.17). Inserting this into the second statement of Equation (7.17) and applying Equation (7.18) yields
$$ \begin{align*} x - x_0 & = \mathcal{Q}_{0; \alpha}' (\mathcal{F}') - \mathcal{Q}_{0; 1}' (\mathcal{F}) \\ & = \mathcal{Q}_{0; 1}' (\widetilde{\mathcal{F}}) - \mathcal{Q}_{0; 1}' (\mathcal{F}) + \displaystyle\frac{\alpha - 1}{\alpha \widetilde{\mathcal{F}} + \alpha} \bigg( \displaystyle\frac{\mathcal{Q}_{0; 1} (\widetilde{\mathcal{F}})}{\widetilde{\mathcal{F}} + 1} - \mathcal{Q}_{0; 1}' (\widetilde{\mathcal{F}}) \bigg) \\ & = (\widetilde{\mathcal{F}} - \mathcal{F}) \mathcal{Q}_{0; 1}^{\prime \prime } (\mathcal{F}) + \displaystyle\frac{\alpha - 1}{\mathcal{F} + 1} \bigg( \displaystyle\frac{\mathcal{Q}_{0; 1} (\mathcal{F})}{\mathcal{F} + 1} - \mathcal{Q}_{0; 1}' (\mathcal{F}) \bigg) + \mathcal{O} \big( |\alpha - 1|^2 \big) \\ & = \displaystyle\frac{t (\alpha - 1) \mathcal{F}}{(\mathcal{F} + 1)^2} + \mathcal{O} \big( |\alpha - 1|^2 \big), \end{align*} $$
where in the last equality we applied Equation (7.20) and the first statement of Equation (7.17). Setting $x_0' = x$ implies the first statement of Equation (7.15).
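The quantitative content of the implicit function theorem step above is simply that perturbing the right-hand side of an equation by an amount of size $\mathcal{O}(|\alpha - 1|)$, near a point where the derivative does not vanish, moves the solution by a proportional amount. The snippet below illustrates this first-order response numerically with a hypothetical stand-in function $g$; it is only a toy check of the perturbation principle and involves none of the actual functions $\mathcal{Q}_{0;1}$ or $\mathcal{F}_t$.

```python
from scipy.optimize import brentq

def g(z):
    # Stand-in for z -> Q_{0;1}(z) - (z+1) Q_{0;1}'(z); only its nonvanishing
    # derivative near the reference point matters for this illustration.
    return z**3 - 2.0 * z + 1.0

def g_prime(z):
    return 3.0 * z**2 - 2.0

F = 1.2                                      # g_prime(F) = 2.32, bounded away from 0
for eps in (1e-2, 1e-3, 1e-4):
    target = g(F) + eps                      # perturb the right-hand side by eps
    F_tilde = brentq(lambda z: g(z) - target, F - 0.1, F + 0.1)
    # The observed shift matches the first-order prediction eps / g'(F).
    print(eps, F_tilde - F, eps / g_prime(F))
```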
The second statement of Lemma 7.3 that $\big \{ x : (x, t) \in \mathfrak {L}_\alpha \big \}$ is a union of $k$ disjoint open intervals follows from the first statement and the fact that $\mathfrak {L}_\alpha $ is continuous in $\alpha $. The estimates (7.16) follow from Equation (7.15) and the assumption that $\big \{ x: (x, t) \in \mathfrak {L}_1 \big \}$ is bounded away from a cusp or horizontal tangency location of $\mathfrak {A}_1$.
7.3 Proof of Proposition 5.4
In this section, we establish Proposition 5.4; throughout, we recall the notation from that proposition and from Section 7.1. In particular, $\mathfrak {t}_0$ and $\alpha _0$ are given by Equation (7.2). We first define a height function $H_{\alpha }$ from $\mathcal {F}_t (x; \alpha )$ for $\alpha \in \mathbb {R}$ with $|\alpha - 1|$ sufficiently small, whose complex slope is given by $\mathcal {F}_{t - \mathfrak {t}_0} (x; \alpha )$. The eventual tilted height function $\widehat {H}^* : \overline {\mathfrak {D}} \rightarrow \mathbb {R}$ given by Proposition 5.4 will be defined by setting $\widehat {H}^* = H_{\alpha _0}$.
To that end, first set $H_{\alpha } (v_0) = H^* (v_0)$ for $v_0 = \big ( \mathfrak {a} (\mathfrak {t}_1), \mathfrak {t}_1 \big )$ equal to the southwest corner of $\partial \mathfrak {D}$. To define $H_{\alpha }$ on the remainder of $\overline {\mathfrak {D}}$, it suffices to fix its gradient almost everywhere. Similarly to Equation (3.1), for any $u = (x, s) \in \overline {\mathfrak {D}}$ such that $(x, s - \mathfrak {t}_0) \in \mathfrak {L}_{\alpha }$, define $\nabla H_{\alpha } (u) = \big ( \partial _x H_{\alpha } (u), \partial _t H_{\alpha } (u) \big )$ by setting (7.21)
$$ \begin{align} \partial_x H_{\alpha} (u) = - \pi^{-1} \arg^* \mathcal{F}_{s - \mathfrak{t}_0} (x; \alpha); \qquad \partial_t H_{\alpha} (u) = \pi^{-1} \arg^* \big( \mathcal{F}_{s - \mathfrak{t}_0} (x; \alpha) + 1 \big), \end{align} $$
where we observe that we are implementing the same shift by $\mathfrak {t}_0$ as in Equation (7.3).
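For intuition, Equation (7.21) packages both components of the gradient into the argument of a single complex number. The snippet below is a minimal numerical sketch of this encoding; it assumes, purely for illustration, that $\arg^*$ denotes the branch of the argument valued in $(-\pi, 0)$ for slopes in the lower half-plane, so that the resulting gradient lands in the triangle spanned by the three frozen slopes appearing in the next paragraph.

```python
import cmath

def gradient_from_complex_slope(F):
    # Sketch of Equation (7.21): dH/dx = -arg*(F)/pi, dH/dt = arg*(F + 1)/pi.
    # Here arg* is taken (as an assumption of this sketch) to be the branch of
    # the argument valued in (-pi, 0) for a slope F in the lower half-plane.
    return (-cmath.phase(F) / cmath.pi, cmath.phase(F + 1) / cmath.pi)

# A sample complex slope in the lower half-plane.
F = complex(-0.3, -0.6)
dx_H, dt_H = gradient_from_complex_slope(F)

# The resulting gradient lies strictly inside the triangle with vertices
# (0, 0), (1, 0) and (1, -1), which are the three frozen slopes.
assert 0.0 < dx_H < 1.0 and -dx_H < dt_H < 0.0
print(dx_H, dt_H)
```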
If instead $u = (x, s) \in \overline {\mathfrak {D}}$ satisfies $(x, s - \mathfrak {t}_0) \notin \mathfrak {L}_{\alpha }$, then define $\nabla H_{\alpha } (u)$ as follows.
- (1) For $\alpha = 1$, set $\nabla H_{\alpha } (u) = \nabla H^* (u) \in \big \{ (0, 0), (1, 0), (1, -1) \big \}$.
- (2) For $\alpha \ne 1$ with $|\alpha - 1|$ sufficiently small, define $\nabla H_{\alpha } (u)$ so that it remains among the three frozen slopes $\big \{ (0, 0), (1, 0), (1, -1) \big \}$ and is continuous in $\alpha $ at almost every $u \in \overline {\mathfrak {D}}$.
For $|\alpha - 1|$ sufficiently small, the existence of $\nabla H_{\alpha } (u)$ satisfying these properties follows from the fact that $\mathfrak {A}_{\alpha }$ deforms continuously in $\alpha $ (by Lemma 7.3); alternatively, it can be viewed as a consequence of [Reference Astala, Duse, Prause and ZhongADPZ20, Remark 8.6]. This determines $\nabla H_{\alpha } (u)$ for almost all $u \in \overline {\mathfrak {D}}$, fixing $H_{\alpha } : \overline {\mathfrak {D}} \rightarrow \mathbb {R}$.
We then define $\widehat {H}^* : \overline {\mathfrak {D}} \rightarrow \mathbb {R}$ and $\widehat {h} : \partial \mathfrak {D} \rightarrow \mathbb {R}$ by setting
$$ \begin{align*} \widehat{H}^* = H_{\alpha_0}; \qquad \widehat{h} = \widehat{H}^* |_{\partial \mathfrak{D}}. \end{align*} $$
By Lemma 7.1, the complex slope $\mathcal {F}_t (x; \alpha _0)$ associated with $\widehat {H}^*$ satisfies the complex Burgers equation (3.2). We will later use [Reference Astala, Duse, Prause and ZhongADPZ20, Remark 8.6] or [Reference Astala, Duse, Prause and ZhongADPZ20, Theorem 8.3] to show that $\widehat {H}^* \in \operatorname {\mathrm {Adm}} (\mathfrak {D}; \widehat {h})$ is a maximizer of $\mathcal {E}$.
Recalling the liquid region $\widehat {\mathfrak {L}}$ associated with $\widehat {H}^*$ from Proposition 5.4, observe that (7.22)
$$ \begin{align} \widehat{\mathfrak{L}} = \big\{ u \in \overline{\mathfrak{D}} : u - (0, \mathfrak{t}_0) \in \mathfrak{L}_{\alpha_0} \big\}. \end{align} $$
Given these points, we can now establish Proposition 5.4. In the below, we will frequently use the identity (7.23)
$$ \begin{align} (\alpha_0 - 1) (t - \mathfrak{t}_0) = \omega (t), \qquad \text{for any}\ t \in \mathbb{R}, \end{align} $$
which follows from Equations (5.2) and (7.2).
Proof of Proposition 5.4.
We will verify that $\widehat {H}^*$ satisfies the second, third, first and fourth statements of the proposition, in this order. Throughout this proof, we fix $t \in \{ \mathfrak {t}_1, \mathfrak {s}, \mathfrak {t}_2 \}$.
We first establish the second statement of the proposition. To show Equation (5.4), observe from the $(t, \alpha ) = (t - \mathfrak {t}_0, \alpha )$ case of the second statement of Lemma 7.3 that the time slice $\{x:(x,t-\mathfrak t_0)\in \mathfrak L_\alpha \}$ is also a union of $k$ disjoint open intervals $(\widehat {x}_1, \widehat {x}_1') \cup (\widehat {x}_2, \widehat {x}_2') \cup \cdots \cup (\widehat {x}_k, \widehat {x}_k')$. Moreover, for any index $1\leq i\leq k$, we have (7.24)
$$ \begin{align} \widehat x_i - x_i & = (t - \mathfrak{t}_0) (\alpha - 1) \displaystyle\frac{\mathcal{F}_{t - \mathfrak{t}_0} (x_i; 1)}{\big( \mathcal{F}_{t - \mathfrak{t}_0} (x_i; 1) + 1 \big)^2} + \mathcal{O} \big( |\alpha - 1|^2 \big) \nonumber\\ & = (t - \mathfrak{t}_0) (\alpha - 1) \displaystyle\frac{f_t (x_i)}{\big( f_t (x_i) + 1 \big)^2} + \mathcal{O} \big( |\alpha - 1|^2 \big) = (t - \mathfrak{t}_0) (\alpha - 1) \Upsilon_{t} (x_i) + \mathcal{O} \big( |\alpha - 1|^2 \big), \end{align} $$
where to deduce the second equality we used Equation (7.3) and to deduce the third we used the definition (5.1) of $\Upsilon _t$. Implementing similar reasoning for the difference $\widehat x_i'-x_i'$, setting $\alpha = \alpha _0$ in Equation (7.24), applying Equations (7.23) and (7.22) and using the fact that $|\alpha _0 - 1| = \mathcal {O} \big ( |\xi _1| + |\xi _2| \big )$, we deduce Equation (5.4).
We next show $\widehat {H}^* (u) = H^* (u)$ for $u \in \mathfrak {D}$ with $u \notin \mathfrak {L} \cup \widehat {\mathfrak {L}}$. We will more generally prove $H_{\alpha } (u) = H^* (u)$ for $(x, t) = u \in \mathfrak {D}$ satisfying $(x, t - \mathfrak {t}_0) \notin \mathfrak {L}_1 \cup \mathfrak {L}_{\alpha }$, for $|\alpha - 1|$ sufficiently small. From this, $\widehat {H}^* (u) = H^* (u)$ would follow by taking $\alpha = \alpha _0$.
First, we verify that $H_{\alpha } (u)$ is constant for $u \in \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ and for $u \in \partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$; we only consider the case $u \in \partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$, as the proof is entirely analogous if $u \in \partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$. By Assumption 5.2, $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ is either disjoint from $\overline {\mathfrak {L}}$ or is a subset of $\partial \mathfrak {P}$. In the former case, $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ is bounded away from $\overline {\mathfrak {L}}$, so the first statement in Lemma 7.3 implies for $|\alpha - 1|$ sufficiently small that $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ is disjoint from $\mathfrak {L}_{\alpha } + (0, \mathfrak {t}_0)$. Since $H_{\alpha }$ is Lipschitz, $\nabla H_1 (u) = \nabla H^* (u)$, the gradient $\nabla H_{\alpha } (u) \in \big \{ (0, 0), (1, 0), (1, -1) \big \}$ and is continuous in $\alpha $ for almost every $u \notin \mathfrak {L}_{\alpha } + (0, \mathfrak {t}_0)$ and $H^*$ is constant along $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$, it follows that $H_{\alpha }$ is also constant along $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$.
If instead $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ and $\overline {\mathfrak {L}}$ are not disjoint, then $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}) \subset \partial \mathfrak {P}$, so $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ is tangent to $\overline {\mathfrak {L}}$ at some point $(x_0, t_0)$. In particular, the shift $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}) - (0, \mathfrak {t}_0)$ is tangent to $\mathfrak {A}_1$ at $(x_0, t_0 - \mathfrak {t}_0)$. Then, it is quickly verified from Equations (7.6) and (7.7) that $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D}) - (0, \mathfrak {t}_0)$ remains tangent to $\mathfrak {A}_{\alpha }$. In particular, $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ is disjoint from the open set $\mathfrak {L}_{\alpha } + (0, \mathfrak {t}_0)$. Then following the same reasoning as above gives that $H_{\alpha }$ is constant along $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$.
This verifies that $H_{\alpha }$ and $H^*$ are constant along $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$; by similar reasoning, they are also constant along $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$. This, together with the fact that they coincide at the southwest vertex of $\overline {\mathfrak {D}}$, implies that $H_{\alpha } = H^*$ along $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$. Since the definition of $\nabla H_{\alpha }$ implies that $\nabla H_{\alpha } (x, t) = \nabla H^* (x, t)$ whenever $(x, t - \mathfrak {t}_0) \notin \mathfrak {L}_1 \cup \mathfrak {L}_{\alpha }$, to show $H_{\alpha } (x, t) = H^* (x, t)$ for $(x, t - \mathfrak {t}_0) \notin \mathfrak {L}_1 \cup \mathfrak {L}_{\alpha }$, it suffices to show that height differences are ‘conserved along liquid regions’ of $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$ and $\partial _{\operatorname {\mathrm {so}}} (\mathfrak {D})$. More specifically, let us fix a left endpoint and a right endpoint $\big ( E_1 (\alpha ), t - \mathfrak {t}_0 \big ), \big ( E_2 (\alpha ), t - \mathfrak {t}_0 \big ) \in \mathfrak {A}_{\alpha }$, respectively, such that $E_j (\alpha )$ is continuous in $\alpha $ for each $j \in \{ 1 , 2\}$ and such that $\big ( x, t - \mathfrak {t}_0 \big ) \in \mathfrak {L}_{\alpha }$ for each $E_1 (\alpha ) < x < E_2 (\alpha )$. Abbreviating $E_1 = E_1 (1)$ and $E_2 = E_2 (1)$, we must show that (7.25)
$$ \begin{align} & \Big( H_{\alpha} \big( E_2 (\alpha), t \big) - H_{\alpha} \big( E_1 (\alpha), t \big) \Big) - \big( H^* (E_2, t) - H^* (E_1, t) \big) \nonumber\\ & \qquad \qquad \qquad = \big( E_2 (\alpha) - E_2 \big) \partial_x H^* (E_2, t) - \big( E_1 (\alpha) - E_1 \big) \partial_x H^* (E_1, t). \end{align} $$
Indeed, the equality $H_{\alpha } (u) = H^* (u)$ for $(x, t) = u \in \overline {\mathfrak {D}}$ satisfying $(x, t - \mathfrak {t}_0) \notin \mathfrak {L}_1 \cup \mathfrak {L}_{\alpha }$ would then follow from the fact that $\nabla H_{\alpha } (u) = \nabla H^* (u)$ for all such $u$ and the fact that $H_{\alpha }$ and $H^*$ coincide at the northwest and southwest corners of $\overline {\mathfrak {D}}$.
Observe that Equation (7.25) holds at $\alpha = 1$, since $H_1 = H^*$. Thus, it suffices to show that the derivatives with respect to $\alpha $ of both sides of Equation (7.25) are equal, namely, (7.26)
$$ \begin{align} \partial_{\alpha} \Big( H_{\alpha} \big( E_2 (\alpha), t \big) - H_{\alpha} \big( E_1 (\alpha), t \big) \Big) = \partial_x H^* (E_2, t) \partial_{\alpha} E_2 (\alpha) - \partial_x H^* (E_1, t) \partial_{\alpha} E_1 (\alpha). \end{align} $$
To do this, observe from Equation (7.21) that the left side of Equation (7.26) is given by
$$ \begin{align*} \partial_{\alpha} \Big( H_{\alpha} \big( E_2 (\alpha), t \big) - H_{\alpha} \big( E_1 (\alpha), t \big) \Big) & = \partial_{\alpha} \displaystyle\int_{E_1 (\alpha)}^{E_2 (\alpha)} \partial_x H_{\alpha} (x, t) \mathrm{d} x \\ & = - \pi^{-1} \partial_{\alpha} \bigg( \displaystyle\int_{E_1 (\alpha)}^{E_2 (\alpha)} \operatorname{\mathrm{Im}} \log \mathcal{F}_{t - \mathfrak{t}_0} (x; \alpha) \mathrm{d} x \bigg). \end{align*} $$
Thus, (7.27)
$$ \begin{align} \partial_{\alpha} \Big( H_{\alpha} \big( E_2 (\alpha), t \big) - H_{\alpha} \big( E_1 (\alpha), t \big) \Big) &= - \pi^{-1} \operatorname{\mathrm{Im}} \displaystyle\int_{E_1 (\alpha)}^{E_2 (\alpha)} \partial_{\alpha} \log \mathcal{F}_{t - \mathfrak{t}_0} (x; \alpha) \mathrm{d} x \nonumber\\ &\quad + \pi^{-1} \operatorname{\mathrm{Im}} \Big( \log \mathcal{F}_{t - \mathfrak{t}_0} \big( E_1 (\alpha); \alpha \big) \Big) \partial_{\alpha} E_1 (\alpha)\nonumber\\ &\quad  - \pi^{-1} \operatorname{\mathrm{Im}} \Big( \log \mathcal{F}_{t - \mathfrak{t}_0} \big( E_2 (\alpha); \alpha \big) \Big) \partial_{\alpha} E_2 (\alpha). \end{align} $$
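The passage to Equation (7.27) is the Leibniz rule for differentiating an integral whose endpoints depend on the parameter. As an informal sanity check of that rule (with arbitrary polynomial stand-ins, not the functions appearing in Equation (7.27)), one can verify it symbolically:

```python
import sympy as sp

x, a = sp.symbols('x alpha')

# Stand-in integrand and alpha-dependent endpoints; these are hypothetical
# choices made only to exercise the differentiation rule.
g = a * x**2 + x
E1, E2 = a, a**2 + 1

# d/da of the integral with moving endpoints ...
lhs = sp.diff(sp.integrate(g, (x, E1, E2)), a)
# ... equals the integral of d/da g plus the two boundary terms.
rhs = (sp.integrate(sp.diff(g, a), (x, E1, E2))
       + g.subs(x, E2) * sp.diff(E2, a)
       - g.subs(x, E1) * sp.diff(E1, a))

assert sp.simplify(lhs - rhs) == 0
```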
Letting $|\alpha - 1|$ tend to $0$ in the first statement of Lemma 7.2 gives
$$ \begin{align*} \partial_{\alpha} \log \mathcal{F}_{t - \mathfrak{t}_0} (x; \alpha) = (t - \mathfrak{t}_0) \partial_x \bigg( \displaystyle\frac{\mathcal{F}_{t - \mathfrak{t}_0} (x; \alpha)}{\mathcal{F}_{t - \mathfrak{t}_0} (x; \alpha) + 1} \bigg) + 1. \end{align*} $$
Inserting this into Equation (7.27), using the fact that $\mathcal {F}_{t - \mathfrak {t}_0} \big ( E_i (\alpha ); \alpha \big ) \in \mathbb {R}$, using the equality (following from Equation (7.21) and the continuity of $\nabla H_{\alpha }$ almost everywhere in $\mathfrak {L}_{\alpha } + (0, \mathfrak {t}_0)$)
$$ \begin{align*} \pi^{-1} \operatorname{\mathrm{Im}} \log \mathcal{F}_{t - \mathfrak{t}_0} \big( E_i (\alpha); \alpha \big) = - \partial_x H_{\alpha} \big( E_i (\alpha), t \big) = - \partial_x H^* (E_i, t) \end{align*} $$
and using the fact that
$$ \begin{align*} \operatorname{\mathrm{Im}} \displaystyle\int_{E_1 (\alpha)}^{E_2 (\alpha)} & \Bigg( (t - \mathfrak{t}_0) \partial_x \bigg( \displaystyle\frac{\mathcal{F}_{t - \mathfrak{t}_0} (x; \alpha)}{\mathcal{F}_{t - \mathfrak{t}_0} (x; \alpha) + 1} \bigg) + 1 \Bigg) \mathrm{d}x \\ & = (t - \mathfrak{t}_0) \operatorname{\mathrm{Im}} \Bigg( \displaystyle\frac{\mathcal{F}_{t - \mathfrak{t}_0} \big( E_2 (\alpha) \big)}{\mathcal{F}_{t - \mathfrak{t}_0} \big( E_2 (\alpha) \big) + 1} - \displaystyle\frac{\mathcal{F}_{t - \mathfrak{t}_0} \big( E_1 (\alpha) \big)}{\mathcal{F}_{t - \mathfrak{t}_0} \big( E_1 (\alpha) \big) + 1} + E_2 (\alpha) - E_1 (\alpha) \Bigg) = 0, \end{align*} $$
then yields Equation (7.26). As mentioned above, this implies Equation (7.25) and hence that $\widehat {H}^* (x, t) = H^* (x, t)$ for $(x, t) \in \overline {\mathfrak {D}}$ with $(x, t) \notin \mathfrak {L} \cup \widehat {\mathfrak {L}}$. This verifies that $\widehat {H}^*$ satisfies the second statement of the proposition.
To show that $\widehat {H}^*$ satisfies the third statement, we must verify Equation (5.5) for sufficiently small $\Delta $. Let us assume in what follows that $(x, t)$ and $(\widehat {x}, t)$ are right endpoints of $\mathfrak {L}$ and $\widehat {\mathfrak {L}}$, respectively, as the proof in the case when they are left endpoints is entirely analogous. Then we may assume that $\Delta \leq 0$, for otherwise the fact that $\partial _x H^* (x, t) = \partial _x \widehat {H}^* (\widehat {x}, t)$ (which holds by the almost everywhere continuity of $\nabla H_{\alpha }$ in $\alpha $, together with the facts that $(x, t) \notin \mathfrak {L}$ and $(\widehat {x}, t) \notin \widehat {\mathfrak {L}}$) implies $H^* (x + \Delta , t) - H^* (x, t) = \Delta \partial _x H^* (x, t) = \Delta \partial _x \widehat {H}^* (\widehat {x}, t) = \widehat {H}^* (\widehat {x} + \Delta , t) - \widehat {H}^* (\widehat {x}, t)$.
So we must show Equation (5.5) for $\Delta \leq 0$. Since it holds at $\Delta = 0$ and since $|\alpha _0 - 1| = \mathcal {O} \big ( |\xi _1| + |\xi _2| \big )$, it suffices to show that
$$ \begin{align*} \partial_x \widehat{H}^* (\widehat{x} + \Delta, t) = \partial_x H^* (x + \Delta, t) + \mathcal{O} \big( |\Delta|^{1/2} |\alpha_0 - 1| + |\Delta| \big), \end{align*} $$
for sufficiently small $\Delta $. In view of Equation (7.21), this is equivalent to showing that (7.28)
$$ \begin{align} \bigg| \operatorname{\mathrm{Im}} \displaystyle\frac{\mathcal{F}_{t - \mathfrak{t}_0} (\widehat{x} + \Delta; \alpha_0)}{ \mathcal{F}_{t - \mathfrak{t}_0} (\widehat{x}; \alpha_0)} - \operatorname{\mathrm{Im}} \displaystyle\frac{\mathcal{F}_{t - \mathfrak{t}_0} (x + \Delta; 1)}{\mathcal{F}_{t - \mathfrak{t}_0} (x; 1)} \bigg| = \mathcal{O} \big( |\Delta|^{1/2} |\alpha_0 - 1| + |\Delta| \big). \end{align} $$
To that end, observe from Equation (3.6) that (7.29)
$$ \begin{align} \mathcal{F}_{t - \mathfrak{t}_0} (x + \Delta; 1) - \mathcal{F}_{t - \mathfrak{t}_0} (x; 1) & = \bigg( \displaystyle\frac{2 \big( \mathcal{F}_{t - \mathfrak{t}_0} (x; 1) + 1\big)}{\mathcal{Q}_{0; 1}^{\prime \prime } \big( \mathcal{F}_{t - \mathfrak{t}_0} (x; 1) \big)} \bigg)^{1/2} |\Delta|^{1/2} + \mathcal{O} \big( |\Delta|^{3/2} \big); \nonumber\\ \mathcal{F}_{t - \mathfrak{t}_0} (\widehat{x} + \Delta; \alpha_0) - \mathcal{F}_{t - \mathfrak{t}_0} (\widehat{x}; \alpha_0) & = \bigg( \displaystyle\frac{2 \big( \mathcal{F}_{t - \mathfrak{t}_0} (\widehat{x}; \alpha_0) + 1\big)}{\mathcal{Q}_{0; \alpha_0}^{\prime \prime } \big( \mathcal{F}_{t - \mathfrak{t}_0} (\widehat{x}; \alpha_0) \big)} \bigg)^{1/2} |\Delta|^{1/2} + \mathcal{O} \big( |\Delta|^{3/2} \big). \end{align} $$
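The square-root expansions (7.29) reflect a generic fact: if $\mathcal{F}$ solves an equation $G(\mathcal{F}) = x$ and $x$ is moved by $\Delta$ away from a simple critical value of $G$ (where $G'$ vanishes but $G''$ does not), then $\mathcal{F}$ moves by an amount of order $|\Delta|^{1/2}$ with coefficient $(2/G'')^{1/2}$. The snippet below illustrates this with a hypothetical cubic standing in for the true equation defining $\mathcal{F}_{t - \mathfrak{t}_0}$; it is only a toy model of the local behavior, not a computation with $\mathcal{Q}_{0;1}$.

```python
import numpy as np

# Toy illustration of the square-root expansion behind Equation (7.29).
# G is a stand-in with a critical point at F0 = 0: G'(0) = 0, G''(0) = 6.
G = np.poly1d([1.0, 3.0, 0.0, 1.0])           # G(F) = F^3 + 3 F^2 + 1
F0 = 0.0
x0 = G(F0)

for delta in (1e-2, 1e-4, 1e-6):
    roots = (G - (x0 + delta)).roots          # solve G(F) = x0 + delta
    F = min(roots, key=lambda r: abs(r - F0)).real   # solution branch near F0 (real here)
    predicted = np.sqrt(2.0 * delta / np.polyder(G, 2)(F0))
    # The observed shift F - F0 matches the predicted square-root law.
    print(delta, F - F0, predicted)
```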
Now, observe that, for uniformly bounded $u \in \mathbb {C}$, we have (7.30)
$$ \begin{align} \big| \mathcal{F}_{t - \mathfrak{t}_0} (\widehat{x}; \alpha_0) - \mathcal{F}_{t - \mathfrak{t}_0} (x; 1) \big| = \mathcal{O} \big( |\alpha_0 - 1| \big); \qquad \big| \mathcal{Q}_{0; \alpha_0}^{\prime \prime } (u) - \mathcal{Q}_{0; 1}^{\prime \prime } (u) \big| = \mathcal{O} \big( |\alpha_0 - 1| \big), \end{align} $$
where the former estimate follows from Equation (7.20) and the latter from Equation (7.6). Applying the two approximations in Equations (7.29) and (7.30) (with the fact that $\mathcal {F}_{t - \mathfrak {t}_0} (x; 1)$ and $\mathcal {Q}_{0; 1}^{\prime \prime } \big ( \mathcal {F}_{t - \mathfrak {t}_0} (x; 1) \big )$ are bounded away from $0$) then yields Equation (7.28). As mentioned above, this implies Equation (5.5), thereby verifying that $\widehat {H}^*$ satisfies the third statement of the proposition.
We next show that $\widehat {H}^*$ satisfies the first statement of the proposition. To that end, observe from the second and third statements of the proposition (together with Remark A.2) that it suffices to show, for any $(x, t), (x', t) \in \mathfrak {L} \cap \widehat {\mathfrak {L}}$ such that $(z, t) \in \mathfrak {L} \cap \widehat {\mathfrak {L}}$ for each $z \in [x, x']$, that (7.31)
$$ \begin{align} \widehat{H}^* (x', t) - \widehat{H}^* (x, t) = H^* (x', t) - H^* (x, t) + \omega (t) \big( \Omega_t (x') - \Omega_t (x) \big) + \mathcal{O} \big( |\alpha_0 - 1|^{3/2} \big). \end{align} $$
This will follow from a suitable application of Lemma 7.2. Indeed, we have
$$ \begin{align*} \big( \widehat{H}^* (x', t) - H^* & (x', t) \big) - \big( \widehat{H}^* (x, t) - H^* (x, t) \big) \\ & = \displaystyle\int_x^{x'} \big( \partial_z \widehat{H}^* (z, t) - \partial_z H^* (z, t) \big) \mathrm{d} z \\ & = \pi^{-1} \displaystyle\int_x^{x'} \operatorname{\mathrm{Im}} \big( \log \mathcal{F}_{t - \mathfrak{t}_0} (z; 1) - \log \mathcal{F}_{t - \mathfrak{t}_0} (z; \alpha_0) \big) \mathrm{d} z \\ & = \pi^{-1} (t - \mathfrak{t}_0) (\alpha_0 - 1) \operatorname{\mathrm{Im}} \displaystyle\int_x^{x'} \partial_z \bigg( \displaystyle\frac{\mathcal{F}_{t - \mathfrak{t}_0} (z; 1)}{\mathcal{F}_{t - \mathfrak{t}_0} (z; 1) + 1} \bigg) \mathrm{d} z + \mathcal{O} \big( |\alpha_0 -1|^{3/2} \big) \\ & = \pi^{-1} \omega (t) \operatorname{\mathrm{Im}} \bigg( \displaystyle\frac{f_t ({x'})}{f_t ({x'}) + 1} - \displaystyle\frac{f_t (x)}{f_t (x) + 1} \bigg) + \mathcal{O} \big( |\alpha_0 - 1|^{3/2} \big) \\ & = \omega (t) \big( \Omega_t ({x'}) - \Omega_t (x) \big) + \mathcal{O} \big( |\alpha_0 - 1|^{3/2} \big), \end{align*} $$
where the second equality follows from Equation (7.21), the third from Lemma 7.2, the fourth from Equations (7.3) and (7.23) and the fifth from Equation (5.1). This confirms Equation (7.31) and thus the first statement of the proposition.
To establish the fourth statement of the proposition, we must verify that $\mathfrak {D}$ satisfies the five assumptions listed in Assumption 4.1 with respect to $\widehat {H}^*$. We have already verified the first, namely, that $\widehat {H}^*$ is constant along $\partial _{\operatorname {\mathrm {ea}}} (\mathfrak {D})$ and $\partial _{\operatorname {\mathrm {we}}} (\mathfrak {D})$. That the third holds follows from Lemma 7.3, together with the fact that $\mathfrak {D}$ satisfies the third assumption in Assumption 4.1 with respect to $H^*$ (by Assumption 5.2). That the fourth holds follows from the continuity of $\mathcal {F}$ in $\alpha $, together with the fact that $\mathfrak {D}$ satisfies the fourth assumption in Assumption 4.1 with respect to $H^*$. The fifth follows from the fact that $\mathcal {F}_t (x; \alpha _0)$ was defined to satisfy Equation (7.5). To verify the second, first suppose that $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D}) \cap \mathfrak {L}$ is connected and nonempty. Then, $\mathfrak {L}$ admits an extension beyond the north boundary of $\mathfrak {D}$, and so $H^*$ admits an extension to some time $\mathfrak {t}'> \mathfrak {t}_2$. That $\widehat {H}^*$ does as well then follows from Lemma 7.3 and the continuity of $H_{\alpha }$ in $\alpha $.
If instead $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$ is packed with respect to $h$, then we must show that $\widehat {h} (v) = H^* (v)$ for each $v \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$. To that end, observe that, since the northwest corner $\big ( \mathfrak {a} (\mathfrak {t}_2), \mathfrak {t}_2 \big )$ and northeast corner $\big ( \mathfrak {b} (\mathfrak {t}_2), \mathfrak {t}_2 \big )$ of $\overline {\mathfrak {D}}$ are outside of (and thus bounded away from) $\overline {\mathfrak {L}}$, it follows from the second statement of the proposition that $H^*$ and $\widehat {H}^*$ coincide on them. Since $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$ is packed with respect to $h$, we must have that $\partial _x H^* (v) = 1$ for each $v \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$. Since $\widehat {H}^*$ is $1$-Lipschitz and agrees with $H^*$ at these two corners, it is squeezed between the two linear bounds emanating from them, both of which equal $H^*$ along $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$; it follows that $\widehat {H}^* (v) = H^* (v)$ for each $v \in \partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$, implying that $\partial _{\operatorname {\mathrm {no}}} (\mathfrak {D})$ is packed with respect to $\widehat {h}$. This shows that $\widehat {H}^*$ satisfies the four properties listed by the proposition.
It remains to show that $\widehat {H}^* \in \operatorname {\mathrm {Adm}} (\mathfrak {D}; \widehat {h})$ is a maximizer of $\mathcal {E}$. This follows from checking the frozen star ray property in [Reference Astala, Duse, Prause and ZhongADPZ20, Definition 8.2], from which the claim follows by [Reference Astala, Duse, Prause and ZhongADPZ20, Remark 8.6] or [Reference Astala, Duse, Prause and ZhongADPZ20, Theorem 8.3]. We only briefly outline this verification here, as it is very similar to what was done in [Reference Astala, Duse, Prause and ZhongADPZ20]. Items i), ii) and iii) of the frozen star ray property in [Reference Astala, Duse, Prause and ZhongADPZ20, Definition 8.2] are quickly verified in our setting since the domain $\mathfrak {D}$ satisfies Assumption 4.1. Item iv) in [Reference Astala, Duse, Prause and ZhongADPZ20, Definition 8.2] may fail to hold in our setting since, if we parametrize the leftmost and rightmost arctic boundaries of $\widehat H^*$ by $\widehat E_1(t)=\inf \big \{x: (x,t)\in \widehat {\mathfrak L} \big \}$ and $\widehat E_2(t)=\sup \big \{x: (x,t)\in \widehat {\mathfrak L} \big \}$, then the two frozen regions $\big \{(x,t): \mathfrak a(t)\leq x \leq \widehat E_1(t) \big \}$ and $\big \{(x,t): \widehat E_2(t)\leq x \leq \mathfrak b(t)\big \}$ may not be covered by the family of rays as in [Reference Astala, Duse, Prause and ZhongADPZ20, Definition 8.2]. However, in this case the proof of [Reference Astala, Duse, Prause and ZhongADPZ20, Theorem 8.3] still applies. Indeed, in the region not covered by those rays, one can extend the function $\Phi (z)$ appearing in [Reference Astala, Duse, Prause and ZhongADPZ20, Proposition 8.1] continuously to be a constant function, and the proofs of [Reference Astala, Duse, Prause and ZhongADPZ20, Proposition 8.1, Theorem 8.3, and Remark 8.6] continue to go through. This idea was already applied in a completely analogous way in the short proof of [Reference Astala, Duse, Prause and ZhongADPZ20, Theorem 8.4] (where the same phenomenon arose), so we refer there for a more detailed discussion.
Appendix A Complex Burgers equation
In this section, we collect some quantitative results on the complex Burgers equation and establish Lemma 3.7 and Lemma 7.1. We recall the polygonal domain $\mathfrak P$ from Definition 2.2, the associated boundary height function $h: \partial \mathfrak P\rightarrow \mathbb R$, the maximizer $H^* \in \operatorname {\mathrm {Adm}} (\mathfrak {P}; h)$ of $\mathcal {E}$ from Equation (2.4), the complex slope $f_t (x)$ associated with $H^*$ through Equation (3.1) and the liquid region $\mathfrak {L} = \mathfrak {L} (\mathfrak {P}, h)$ and arctic boundary $\mathfrak {A} = \mathfrak {A} (\mathfrak {P}, h)$ from Equations (2.5) and (2.6). We further recall that there exists an analytic function $Q_0$ satisfying Equation (3.3). If $(x, t) \in \mathfrak {A}$ and $f_t (x)$ is a triple root of Equation (3.3), then $(x, t)$ is a cusp of $\mathfrak {A}$. If the tangent line through a point $u \in \mathfrak {A}$ to $\mathfrak {A}$ has slope in the set $\{ 0, 1, \infty \}$, then u is a tangency location of $\mathfrak {A}$; in this case, the slope of u is the slope of this tangent line.
Given this notation, the following result provides the behavior of the complex slope $f_t (x)$ along generic points of $\mathfrak {A}$. It follows quickly from [Reference Kenyon and OkounkovKO07, Section 1.6] and [Reference Astala, Duse, Prause and ZhongADPZ20, Theorem 1.8(c)], together with the analyticity of $f_t (x)$, and it can also be deduced directly from Equation (3.3) by a Taylor expansion, so its proof is omitted.
Lemma A.1 [Reference Kenyon and OkounkovKO07, Reference Astala, Duse, Prause and ZhongADPZ20].
For any fixed $(x_0, t) \in \mathfrak {A}$ which is not a tangency location or cusp of $\mathfrak {A}$, there exist a constant $C = C (x_0, t, \mathfrak {P})> 0$ and a neighborhood $\mathfrak {U} \subset \overline {\mathfrak {P}}$ of $(x_0, t)$ such that the following holds. For $(x, t) \in \mathfrak {U} \cap \mathfrak {L}$, we have
$$ \begin{align*} f_t (x) = f_t (x_0) + (C x - C x_0)^{1/2} + \mathcal{O} \big( |x - x_0|^{3/2} \big), \end{align*} $$
where the implicit constant in the error only depends on $\mathfrak {P}$ and the distance from $(x, t)$ to a tangency location or cusp of $\mathfrak {A}$. Here, the branch of the square root is chosen so that it lies in $\mathbb {H}^-$.
Remark A.2. Suppose $u = (x, t) \in \mathfrak {L}$ is bounded away from a cusp or tangency location of $\mathfrak {A}$; let $d_u = \inf \big \{ |x - x_0| : (x_0, t) \in \mathfrak {A} \big \}$. Then, the fact that $f_t (x_0) \in \mathbb {R}$ if $(x_0, t) \in \mathfrak {A}$ (from Equation (3.1)), Lemma A.1 and Equation (3.1) together imply that there exists a small constant $\mathfrak {c}> 0$ such that $\mathfrak {c} d_u^{1/2} < \big | \partial _x H^* (x, t) - \partial _x H^* (x_0, t) \big | < \mathfrak {c}^{-1} d_u^{1/2}$.
Now, let us show Lemma 3.7.
Proof of Lemma 3.7.
Throughout this proof, let $(\widetilde {x}, \widetilde {t}) \in \mathfrak {A}$ be some point on the arctic boundary $\mathfrak {A} = \mathfrak {A} (\mathfrak {P})$ near $(x, t)$. Abbreviating $f = f_t (x)$ and $\widetilde {f} = f_{\widetilde {t}} (\widetilde {x})$, Equation (3.3) and the third part of Proposition 3.3 together imply
$$ \begin{align} Q_0 (f) = x (f + 1) - tf; \quad Q_0' ( f )= x - t; \quad Q_0 (\widetilde{f}) = \widetilde{x} (\widetilde{f} + 1) - \widetilde{t} \widetilde{f}; \quad Q_0' (\widetilde{f}) = \widetilde{x} - \widetilde{t}. \end{align} $$
From this, we deduce
$$ \begin{align*} Q_0(f) = (f + 1) Q_0' (f) + t; \qquad Q_0(\widetilde{f}) = (\widetilde{f} + 1) Q_0' (\widetilde{f}) + \widetilde{t}, \end{align*} $$
which together imply
$$ \begin{align} Q_0(\widetilde{f}) - Q_0(f) = (f + 1) \big( Q_0' (\widetilde{f}) - Q_0' (f) \big) + (\widetilde{f} - f) Q_0' (\widetilde{f}) + \widetilde{t} - t. \end{align} $$
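Indeed, the first identity in the display preceding Equation (A.2) follows from Equation (A.1), since
$$ \begin{align*} (f + 1) Q_0' (f) + t = (f + 1) (x - t) + t = x (f + 1) - t f = Q_0 (f), \end{align*} $$
and the identity for $\widetilde{f}$ follows in the same way from the relations at $(\widetilde{x}, \widetilde{t})$; subtracting the two identities and regrouping then yields Equation (A.2).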
We will first use Equation (A.2) to approximately solve for $\widetilde {f}$ in terms of $(f, \widetilde {t}, t)$ and then use Equation (A.1) to solve for $\widetilde {x}$ in terms of $(x, f, \widetilde {t}, t)$. To that end, applying a Taylor expansion in Equation (A.2) (and using the fact that $(\widetilde {x}, \widetilde {t})$ is close to $(x, t)$, which implies that $\widetilde {f}$ is close to $f$ by the analyticity of $f_t (x)$ for $(x, t) \in \mathfrak {L} (\mathfrak {P})$) gives
$$ \begin{align*} & (\widetilde{f} - f) Q_0' (f) + \displaystyle\frac{(\widetilde{f} - f)^2}{2} Q_0'' (f) + \mathcal{O} \big( |\widetilde{f} - f|^3 \big) \\ & \qquad \qquad \quad = (f + 1) \Big( (\widetilde{f} - f) Q_0'' (f) + \displaystyle\frac{Q_0''' (f)}{2} (\widetilde{f} - f)^2 + \mathcal{O} \big( |\widetilde{f} - f|^3 \big) \Big) \\ & \qquad \qquad \qquad \quad + (\widetilde{f} - f) Q_0' (f) + (\widetilde{f} - f)^2 Q_0'' (f) + \mathcal{O} \big( |\widetilde{f} - f|^3 \big) + \widetilde{t} - t. \end{align*} $$
This implies
$$ \begin{align*} (\widetilde{f} - f) (f + 1) Q_0'' (f) + \displaystyle\frac{(\widetilde{f} - f)^2}{2} \big( Q_0'' (f) + (f + 1) Q_0''' (f) \big) + \mathcal{O} \big( |\widetilde{f} - f|^3 \big)= t - \widetilde{t}. \end{align*} $$
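Indeed, this follows from the preceding Taylor expansion upon cancelling the common term $(\widetilde{f} - f) Q_0' (f)$ from both sides and collecting the quadratic terms, whose coefficient is
$$ \begin{align*} Q_0'' (f) - \frac{1}{2} Q_0'' (f) + \frac{1}{2} (f + 1) Q_0''' (f) = \frac{1}{2} \big( Q_0'' (f) + (f + 1) Q_0''' (f) \big). \end{align*} $$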
In particular, it follows that $\widetilde {f} - f = \mathcal {O} \big ( |\widetilde {t} - t| \big )$ and, more precisely, that
$$ \begin{align} \widetilde{f} - f = \displaystyle\frac{t - \widetilde{t}}{(f + 1) Q_0'' (f)} - \displaystyle\frac{(t - \widetilde{t})^2}{2 (f + 1)^3 Q_0'' (f)^3} \big( Q_0'' (f) + (f + 1) Q_0''' (f) \big) + \mathcal{O} \big( |t - \widetilde{t}|^3 \big). \end{align} $$
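To see the quadratic correction in Equation (A.3), one may abbreviate $A = (f + 1) Q_0'' (f)$ and $B = Q_0'' (f) + (f + 1) Q_0''' (f)$, so that the relation above reads $(\widetilde{f} - f) A + \frac{1}{2} (\widetilde{f} - f)^2 B + \mathcal{O} \big( |\widetilde{f} - f|^3 \big) = t - \widetilde{t}$; substituting the leading-order approximation $\widetilde{f} - f = A^{-1} (t - \widetilde{t}) + \mathcal{O} \big( |t - \widetilde{t}|^2 \big)$ back into this relation and solving for $\widetilde{f} - f$ gives
$$ \begin{align*} \widetilde{f} - f = \frac{t - \widetilde{t}}{A} - \frac{B}{2 A} (\widetilde{f} - f)^2 + \mathcal{O} \big( |\widetilde{f} - f|^3 \big) = \frac{t - \widetilde{t}}{A} - \frac{B (t - \widetilde{t})^2}{2 A^3} + \mathcal{O} \big( |t - \widetilde{t}|^3 \big), \end{align*} $$
in agreement with Equation (A.3).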
Since Equation (A.1) indicates that
$$ \begin{align*} x = Q_0' (f) + t; \qquad \widetilde{x} = Q_0' (\widetilde{f}) + \widetilde{t}, \end{align*} $$
subtracting, applying Equation (A.3) and Taylor expanding again gives
$$ \begin{align*} \widetilde{x} - x = Q_0' (\widetilde{f}) - Q_0' (f) + \widetilde{t} - t & = (\widetilde{f} - f) Q_0'' (f) + (\widetilde{f} - f)^2 \displaystyle\frac{Q_0''' (f)}{2} + \widetilde{t} - t + \mathcal{O} \big( |\widetilde{f} - f|^3 \big) \\ & = \displaystyle\frac{f (\widetilde{t} - t)}{f + 1} - \displaystyle\frac{(\widetilde{t} - t)^2}{2 (f + 1)^3 Q_0'' (f)} + \mathcal{O} \big( |\widetilde{t} - t|^3 \big), \end{align*} $$
which implies the lemma upon matching with Equation (2.7).
Next, we establish Lemma 7.1.
Proof of Lemma 7.1.
It suffices to establish Equation (7.9), as Equation (7.10) follows from it. Since the derivations of both statements of Equation (7.9) are similar, we only establish the former. To that end, let $(x', t)$ be some point close to $(x, t)$, and set $\mathcal {F} = \mathcal {F}_t (x)$ and $\mathcal {F}' = \mathcal {F}_t (x')$. Then, Equation (7.8) implies
$$ \begin{align*} \mathcal{Q}_0 (\mathcal{F}) = x (\mathcal{F} + 1) - t \mathcal{F}; \qquad \mathcal{Q}_0 (\mathcal{F}') = x' (\mathcal{F}' + 1) - t \mathcal{F}', \end{align*} $$
from which it follows that
$$ \begin{align*} \mathcal{Q}_0 (\mathcal{F}') - \mathcal{Q}_0 (\mathcal{F}) = (\mathcal{F}' - \mathcal{F}) (x - t) + (x' - x) (\mathcal{F}' + 1). \end{align*} $$
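Indeed, writing $x' = x + (x' - x)$ in the second relation of the previous display and subtracting the first relation gives
$$ \begin{align*} \mathcal{Q}_0 (\mathcal{F}') - \mathcal{Q}_0 (\mathcal{F}) = x (\mathcal{F}' + 1) - x (\mathcal{F} + 1) - t (\mathcal{F}' - \mathcal{F}) + (x' - x) (\mathcal{F}' + 1), \end{align*} $$
and the first three terms on the right-hand side combine into $(\mathcal{F}' - \mathcal{F}) (x - t)$.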
From a Taylor expansion, we deduce
$$ \begin{align*} (\mathcal{F}' - \mathcal{F}) \big( \mathcal{Q}_0' (\mathcal{F}) - x + t \big) = (x' - x) (\mathcal{F} + 1) + \mathcal{O} \big( |\mathcal{F}' - \mathcal{F}|^2 + |x' - x| |\mathcal{F}' - \mathcal{F}| \big). \end{align*} $$
Letting $|x' - x|$ tend to $0$, it follows that
$$ \begin{align*} \partial_x \mathcal{F}_t (x) = \displaystyle\lim_{x' \rightarrow x} \displaystyle\frac{\mathcal{F}' - \mathcal{F}}{x' - x} = \displaystyle\frac{\mathcal{F} + 1}{\mathcal{Q}_0' (\mathcal{F}) - x + t}, \end{align*} $$
which yields the first statement of Equation (7.9).
Appendix B Proof of Proposition 2.4
In this section, we establish Proposition 2.4; throughout, we adopt the notation from that proposition. Let us begin by recalling some results from [Reference De Silva and SavinDSS10, Reference Astala, Duse, Prause and ZhongADPZ20]. There exist two functions $m, M\in \operatorname {\mathrm {Adm}} (\mathfrak {P}; h)$ (sometimes called obstacles) such that
$$ \begin{align*} m(z) \le H(z) \le M(z), \qquad \text{for any}\ H\in \operatorname{\mathrm{Adm}} (\mathfrak{P})\ \text{and}\ z \in \mathfrak P. \end{align*} $$
They are explicitly given by (see [Reference Astala, Duse, Prause and ZhongADPZ20, Equation (2.23)])
$$ \begin{align} m(v)=\max_{u\in \partial \mathfrak{P}} \bigg(-\max_{p\in \overline {\mathcal T}}\langle p, u-v\rangle+h(u) \bigg),\quad M(v)=\min_{u\in \partial \mathfrak{P}} \bigg( \max_{p\in \overline {\mathcal T}}\langle p, v-u\rangle+h(u) \bigg), \end{align} $$
where we recall the triangle $\mathcal T$ from Equation (2.1).
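Since the maximum of a linear function over the compact triangle $\overline {\mathcal T}$ is attained at one of its vertices $(0,0)$, $(1,0)$ and $(1,-1)$, the support term appearing in Equation (B.1) can be written explicitly as
$$ \begin{align*} \max_{p \in \overline{\mathcal T}} \langle p, w \rangle = \max \big\{ 0, \, w_1, \, w_1 - w_2 \big\}, \qquad \text{for any}\ w = (w_1, w_2) \in \mathbb{R}^2. \end{align*} $$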
The following result, due to [Reference De Silva and SavinDSS10], indicates continuity properties for the gradient $\nabla H^*$ of the maximizer $H^*$ of $\mathcal {E}$ on $\mathfrak {P}$, as well as convexity properties for its arctic boundary. In what follows, for any direction $\omega \in \mathbb {R}^2 \setminus \big \{ (0, 0) \big \}$, the graph $G \subset \mathbb {R}^2$ of a (possibly discontinuous) function (whose domain is possibly disconnected or empty) is said to be convex (or concave) in the $\omega $ direction if the following holds. Let $\rho _{\omega } : \mathbb {R}^2 \rightarrow \mathbb {R}^2$ denote the rotation such that $\rho _{\omega } (\omega ) \in \mathbb {R}_{> 0} \cdot (0, 1)$ points vertically upwards. Then, each connected component of $\rho _{\omega } (G)$ is convex (or concave, respectively).
Proposition B.1 [Reference De Silva and SavinDSS10, Theorems 1.3 and 4.2].
The following two statements hold.
- 
(1) The gradient $\nabla H^*$ exists and is continuous on the set $\big \{ z \in \mathfrak P: m(z)<H^*(z)<M(z) \big \}$.
- 
(2) Fix a real number $c \in \mathbb {R}$; a vertex $p_0 \in \big \{ (0, 0), (1, 0), (1, -1) \big \}$ of $\overline {\mathcal {T}}$; and a direction $\omega \in \mathbb {R}^2 \setminus \big \{ (0, 0) \big \}$ such that
$$ \begin{align} \omega\cdot(p-p_0)>0, \quad \text{for all } p \in \overline{\mathcal T}\setminus \{p_0\}. \end{align} $$
Let S denote the interior of the set $\{ z \in \mathfrak P: H^*(z)=c+p_0\cdot z\}$; assume that $S\neq \emptyset $. Then $\partial S\cap \mathfrak P$ consists of the union of a convex graph (from above) and a concave graph (from below) in the $\omega $ direction.
The following lemma, which is a quick consequence of Definition 2.2, provides integrality properties for the boundary height function $h : \partial \mathfrak {P} \rightarrow \mathbb {R}$.
Lemma B.2. The following three statements hold.
- 
(1) For any $(x, t) \in (n^{-1} \cdot \mathbb {Z})^2$, we have $h(x, t) \in n^{-1} \cdot \mathbb {Z}$.
- 
(2) Along any edge of $\partial \mathfrak {P}$ of slope $0$, we have $\partial _x h = 1$.
- 
(3) Along any edge of $\partial \mathfrak {P}$ of slope $\infty $ or $1$, the function h is constant, and its value lies in $n^{-1} \cdot \mathbb {Z}$.
Proof. Since (by Definition 2.2) $\mathfrak {P}$ is a polygonal domain and $(0, 0) \in \partial \mathfrak {P}$, each corner vertex of the polygon $\mathfrak {P}$ lies on $(n^{-1} \cdot \mathbb {Z})^2$. The fact that h is constant along any edge of $\partial \mathfrak {P}$ of slope $\infty $ or $1$, and that $\partial _x h = 1$ along any edge of $\partial \mathfrak {P}$ of slope $0$, follows from the fact that $\mathfrak {P}$ is polygonal (together with the conventions relating tilings to height functions described in Section 2.1); this confirms Item (2) and the first part of Item (3) of the lemma. Using this, with the facts that $h(0,0) = 0$ and that each corner vertex of $\mathfrak {P}$ lies on $(n^{-1} \cdot \mathbb {Z})^2$, it follows that $h (x, t) \in n^{-1} \cdot \mathbb {Z}$ whenever $(x, t) \in (n^{-1} \cdot \mathbb {Z})^2$. This confirms Item (1) of the lemma. Then the second part of Item (3) of the lemma follows from the fact that h is constant along each edge of $\partial \mathfrak {P}$ with slope $\infty $ or $1$, and again the fact that each corner vertex of $\mathfrak {P}$ lies on $(n^{-1} \cdot \mathbb {Z})^2$.
The following lemma states that, at lattice points $v \in (n^{-1} \cdot \mathbb {Z})^2$, the values $n \cdot m(v)$ and $n \cdot M(v)$ are integers.
Lemma B.3. For any point $v\in \overline {\mathfrak P}\cap (n^{-1} \cdot {\mathbb Z})^2$, we have $n \cdot m(v) \in \mathbb {Z}$ and $n \cdot M(v)\in {\mathbb Z}$.
Proof. The proof follows from analyzing the two formulas in Equation (B.1). Since the arguments for m and M are the same, we only show that $n \cdot M(v)\in {\mathbb Z}$. By Equation (B.1),
$$ \begin{align} M(v)=\min_{u\in \partial \mathfrak{P}} \bigg( \max_{p\in \overline {\mathcal T}}\langle p, v-u\rangle+h(u) \bigg). \end{align} $$
Let the minimum over $u \in \partial \mathfrak {P}$ in Equation (B.3) be attained at $u^* \in \partial \mathfrak {P}$ (if there are multiple such $u^*$, then we select one arbitrarily). Further observe that the maximum over ${p\in \overline {\mathcal T}}$ in Equation (B.3) is attained at a vertex $p^* \in \big \{ (0, 0), (1, 0), (1, -1) \big \}$ of the triangle $\mathcal T$. Without loss of generality (as the proofs in the remaining cases are entirely analogous), we assume that
$$ \begin{align*} p^*=(0,0), \qquad \text{so that} \qquad M(v) = h(u^*). \end{align*} $$
Then,
$$ \begin{align*} \displaystyle\max \Big\{ \big\langle (1,0), v-u^* \big\rangle, \big\langle (1,-1), v-u^* \big\rangle \Big\} \le \langle p^*, v-u^* \rangle = 0, \end{align*} $$
so that
$$ \begin{align} v-u^* \in \big\{ (x, y) \in \mathbb{R}_{\le 0} \times \mathbb{R} : x - y \le 0 \big\}. \end{align} $$
Now, let us consider several cases, depending on the side $\ell = \ell (u^*)$ of $\partial \mathfrak {P}$ that $u^*$ lies on. If the slope of $\ell $ is either $1$ or $\infty $, then Item (3) of Lemma B.2 yields $M(v)=h(u^*)\in n^{-1} \cdot {\mathbb Z}$. Otherwise, $\ell $ has slope $0$ (that is, $\ell $ is a horizontal edge of $\partial \mathfrak {P}$), and $u^*$ is not a corner vertex of $\mathfrak {P}$. Observe in this case that $u^* \cdot (0, 1) \in n^{-1} \cdot \mathbb {Z}$, that is, the second coordinate of $u^*$ is in $n^{-1} \cdot \mathbb {Z}$ (as the second coordinate of any horizontal edge of $\partial \mathfrak {P}$ is in $n^{-1} \cdot \mathbb {Z}$).
We claim that either $v - u^* \in \mathbb {R}_{\ge 0} \cdot (0, 1)$ or $v-u^* \in \mathbb {R}_{\ge 0} \cdot (-1, 1)$. Otherwise, by Equation (B.4), there would exist some $u' \in \big \{ u^* + \mathbb {R}_{< 0} \cdot (1,0) \big \} \cap \ell $ (that is, on $\ell $ and to the left of $u^*$) such that $v - u' \in \big \{ (x, y) \in \mathbb {R}_{\le 0} \times \mathbb {R} : x-y \le 0 \big \}$. Since $\big \langle p, (-1, 0) \big \rangle \le 0$ for each $p \in \overline {\mathcal {T}}$, it follows that $\max _{p \in \overline {\mathcal {T}}} \langle p, v-u' \rangle \le \max _{p \in \overline {\mathcal {T}}} \langle p, v-u^* \rangle = 0$. Together with the fact that $h(u') = h(u^*) + (1, 0) \cdot (u' - u^*) < h(u^*)$ (by Item (2) of Lemma B.2), this yields
$$ \begin{align*} \max_{p \in \overline{\mathcal{T}}} \langle p, v-u' \rangle + h(u') < \max_{p \in \overline{\mathcal{T}}} \langle p, v-u^* \rangle + h(u^*), \end{align*} $$
which is a contradiction. Hence, $v - u^* \in \mathbb {R}_{\ge 0} \cdot (0, 1)$ or $v - u^* \in \mathbb {R}_{\ge 0} \cdot (-1, 1)$.
In the first situation $v-u^* \in \mathbb {R}_{\ge 0} \cdot (0, 1)$, the first coordinates of v and $u^*$ coincide. Since $v \in (n^{-1} \cdot \mathbb {Z})^2$, the first coordinate of v is in $n^{-1} \cdot \mathbb {Z}$; the same therefore holds for $u^*$. Together with the fact that $u^* \cdot (0, 1) \in n^{-1} \cdot \mathbb {Z}$, it follows that $u^* \in (n^{-1} \cdot \mathbb {Z})^2$, from which Item (1) of Lemma B.2 yields $M(v) = h(u^*) \in n^{-1} \cdot {\mathbb Z}$. In the second situation $v - u^* \in \mathbb {R}_{\ge 0} \cdot (-1, 1)$, we have $u^* \cdot (1, 1) = v \cdot (1, 1) \in n^{-1} \cdot \mathbb {Z}$. Again together with the fact that $u^* \cdot (0, 1) \in n^{-1} \cdot \mathbb {Z}$, this yields $u^* \in (n^{-1} \cdot \mathbb {Z})^2$, and so Item (1) of Lemma B.2 gives $M(v) = h(u^*) \in n^{-1} \cdot \mathbb {Z}$. This finishes the proof of Lemma B.3.
Now, we can establish Proposition 2.4.
Proof of Proposition 2.4.
We begin by establishing the first statement of the proposition; observe that it suffices to address the case when $(x,t) \in \mathfrak P\setminus \overline {\mathfrak L}$, for then the extension to the case $(x,t) \in \partial \mathfrak {L}$ would follow from the continuity of $H^*$. To that end, recall from Lemma 2.3 that $\nabla H^* (x, t) \in \big \{ (0,0), (1,0), (1,-1) \big \}$, as $(x, t) \in \mathfrak {P} \setminus \overline {\mathfrak {L}}$. Since $\nabla H^*$ is continuous at $(x, t)$, there exists a small neighborhood $\mathcal N(x, t) \subset \mathfrak {P}$ of $(x,t)$ such that $\nabla H^*$ is constant on $\mathcal N(x, t)$, on which it takes one of the values $\big \{ (0,0), (1,0), (1,-1) \big \}$. We assume in what follows that $\nabla H^*(u)=(0,0)$ for each $u\in \mathcal N(x, t)$, for the other cases can be proven in an entirely analogous way.
Let S denote the interior of the connected component of $\big \{u: H^*(u)=H^*(x, t) \big \}$ containing $(x, t)$; then S is nonempty, and we may assume (after taking a subset of $\mathcal {N}(x,t)$ if necessary) that $\mathcal N(x, t)\subseteq S$. Take any direction $\omega \in \mathbb {R}^2$ such that
$$ \begin{align} \omega \cdot p>0, \quad \text{for all } p \in \overline{\mathcal T}\setminus \big\{ (0,0) \big\}. \end{align} $$
This is equivalent to the argument of $\omega $ being in $(-\pi /2, \pi /4)$; we may assume in what follows that the argument of $\omega $ is in a small neighborhood of $-\pi /8$.
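Indeed, since every $p \in \overline {\mathcal T} \setminus \big \{ (0,0) \big \}$ is a nonzero nonnegative combination of $(1, 0)$ and $(1, -1)$, writing $\omega = (\cos \theta , \sin \theta )$ shows that the above condition is equivalent to
$$ \begin{align*} \omega \cdot (1, 0) = \cos \theta> 0 \qquad \text{and} \qquad \omega \cdot (1, -1) = \cos \theta - \sin \theta> 0, \end{align*} $$
that is, to $\theta \in (-\pi /2, \pi /2) \cap (-3 \pi /4, \pi /4) = (-\pi /2, \pi /4)$.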
Then, $H^*$ is nondecreasing in the $\omega $ direction, and S is bounded (in the $\omega $ direction) between two Lipschitz curves $\mathcal C_{\mathrm {top}}$ and $\mathcal C_{\mathrm {btm}}$. We can assume that the left and right boundaries of S (in the $\omega $ direction) are given by points l and r, respectively (for, if they were given by segments in the $\omega $ direction, we could slightly perturb $\omega $ to avoid such a nongeneric situation). In this way, the two curves $\mathcal C_{\mathrm {top}}$ and $\mathcal C_{\mathrm {btm}}$ run from l to r.
First, observe that $\mathcal C_{\mathrm {top}}, \mathcal C_{\mathrm {btm}} \not \subset \mathfrak P$. Indeed, the second statement of Proposition B.1 would otherwise imply that $\mathcal C_{\mathrm {top}}$ is convex and $\mathcal C_{\mathrm {btm}}$ is concave in the $\omega $ direction. Hence, $\mathcal {C}_{\mathrm {top}}$ would lie below the line connecting l to r in the $\omega $ direction, while $\mathcal {C}_{\mathrm {btm}}$ would lie above this line, which is impossible since $\mathcal {C}_{\mathrm {top}}$ bounds S from above and $\mathcal {C}_{\mathrm {btm}}$ bounds S from below in the $\omega $ direction.
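Here, we are using the elementary fact that, in the rotated coordinates given by $\rho _{\omega }$, a convex graph lies weakly below the chord joining its endpoints: if such a graph is that of a convex function g on an interval $[a, b]$, then
$$ \begin{align*} g \big( \lambda a + (1 - \lambda) b \big) \le \lambda g(a) + (1 - \lambda) g(b), \qquad \text{for all}\ \lambda \in [0, 1], \end{align*} $$
while a concave graph, such as $\rho _{\omega } (\mathcal C_{\mathrm {btm}})$, satisfies the reverse inequality and therefore lies weakly above the corresponding chord.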
Thus, we must instead have $\mathcal C_{\mathrm {top}} \cup \mathcal C_{\mathrm {btm}}\not \subset \mathfrak P$, meaning that there exists some point $q \in (\mathcal C_{\mathrm {top}}\cup \mathcal C_{\mathrm {btm}})\cap \partial \mathfrak P$ with $q \notin \mathcal {C}_{\mathrm {top}} \cap \mathcal {C}_{\mathrm {btm}}$. If q belongs to a vertical boundary edge or a boundary edge with slope $1$, then $H^*(x, t)=h(q)\in n^{-1} \cdot {\mathbb Z}$ by Item (3) of Lemma B.2.
Otherwise, q belongs to a horizontal edge $\ell $ of $\partial \mathfrak P$, but it is not a corner vertex of $\mathfrak P$. Without loss of generality, we assume (by rotating $\mathfrak {P}$ if necessary) that $q\in \mathcal C_{\mathrm {btm}}$, meaning that $q \notin \mathcal {C}_{\mathrm {top}}$. We can take a small $\varepsilon>0$ such that the short interval $\big [q-(\varepsilon ,0), q+(\varepsilon ,0) \big ]$ also belongs to the same edge $\ell $ of $\partial \mathfrak P$. Since $q \in \mathcal {C}_{\mathrm {btm}} \setminus (\mathcal {C}_{\mathrm {btm}} \cap \mathcal {C}_{\mathrm {top}})$, the point q is in the interior of $\mathcal {C}_{\mathrm {btm}}$, and so we can take $\varepsilon $ small enough so that there exists some $r\geq 0$ such that $q+(\varepsilon ,0)+r\omega \in S$; see Figure A1 for a depiction. This yields a contradiction, because
$$ \begin{align*} H^* \big(q+(\varepsilon,0)+r\omega \big) \geq H^* \big(q+(\varepsilon,0) \big)=h \big(q+(\varepsilon,0) \big)>h(q)=H^* \big(q+(\varepsilon,0)+r\omega \big), \end{align*} $$
where in the first statement we used the fact that, in the $\omega $ direction, $H^*$ is nondecreasing; in the second, we used that $\big [ q-(\varepsilon ,0), q+(\varepsilon ,0) \big ]$ belongs to a horizontal boundary edge of $\mathfrak P$; in the third, we used Item (2) of Lemma B.2; and in the fourth, we used that $q, q+(\varepsilon ,0)+r\omega \in \overline {S}$. This confirms that $H^* (x, t) \in n^{-1} \cdot \mathbb {Z}$ and finishes the proof of the first statement in Proposition 2.4.

Figure A1. If $q\in \mathcal C_{\mathrm {btm}}\cap \partial {\mathfrak P}$ belongs to a horizontal boundary edge of ${\mathfrak P}$, then we can take $\varepsilon $ small enough so that there exists some $r\geq 0$ such that $q+(\varepsilon ,0)+r\omega \in S$.
To establish the second statement, we set $\Lambda = \big \{u\in \overline {\mathfrak P}: H^*(u)=m(u) \text { or } H^*(u)=M(u) \big \}$. If $(x,t) \in \Lambda $, then it follows from Lemma B.3 that $n \cdot H^*(x,t)\in {\mathbb Z}$. Otherwise, $(x,t)\in \mathfrak P\setminus (\overline {\mathfrak L}\cup \Lambda )$, and so the first statement in Proposition B.1 implies that $\nabla H^*$ is continuous at $(x,t)$. Setting $\nabla H^* (x,t) = (s, r) \in \big \{ (0, 0), (1, 0), (1, -1) \big \}$, the first statement of Proposition 2.4 implies that $n (H^*(x,t)-sx-rt)\in {\mathbb Z}$. By our assumption, $(x,t) \in (n^{-1} \cdot \mathbb {Z})^2$, so $n(sx+rt)\in {\mathbb Z}$ and we conclude that $n \cdot H^*(x,t)\in {\mathbb Z}$.
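Explicitly, since $(s, r) \in \mathbb {Z}^2$ and $nx, nt \in \mathbb {Z}$, the last step combines the two integrality statements as
$$ \begin{align*} n \cdot H^*(x,t) = n \big( H^*(x,t)-sx-rt \big) + s (n x) + r (n t) \in \mathbb{Z}. \end{align*} $$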
This finishes the second statement of Proposition 2.4.
Acknowledgements
The work of Amol Aggarwal was partially supported by a Clay Research Fellowship. The work of Jiaoyang Huang was partially supported by the Simons Foundation as a Junior Fellow at the Simons Society of Fellows and NSF grant DMS-2054835. The authors heartily thank Shirshendu Ganguly, Vadim Gorin and Lingfu Zhang for very helpful comments on this paper, as well as Erik Duse for highly enlightening discussions on [Reference Kenyon and OkounkovKO07, Reference Astala, Duse, Prause and ZhongADPZ20].
Competing interests
The authors have no competing interest to declare.