## Introduction

### Terms & Two properties

First, an $xD/yD$-DP means that there are $O(N^x)$ subproblems, each of which depends on $O(N^y)$ other subproblems. For example, a DP with $N$ states where each state is computed from all earlier states is a $1D/1D$-DP.

Then, we define two kinds of property for a two-variable function $f(i, j)$:

|       | $c$       | $d$       |
|-------|-----------|-----------|
| $a$   | $f(a, c)$ | $f(a, d)$ |
| $b$   | $f(b, c)$ | $f(b, d)$ |
- Monge condition: $\forall a\lt b\le c\lt d,$
  - Convex Monge condition: $f(a, c) + f(b, d) \le f(a, d) + f(b, c)$
  - Concave Monge condition: $f(a, c) + f(b, d) \ge f(a, d) + f(b, c)$
- Totally monotone: $\forall a\lt b\le c\lt d,$
  - Convex totally monotone: $f(a, c) \ge f(b, c) \implies f(a, d) \ge f(b, d)$
  - Concave totally monotone: $f(a, c) \le f(b, c) \implies f(a, d) \le f(b, d)$

Total monotonicity can be derived from the Monge condition, i.e.

- Convex Monge condition $\implies$ Convex totally monotone
- Concave Monge condition $\implies$ Concave totally monotone

This means we can establish total monotonicity (usually what we want) by checking the Monge condition, which is usually easier to verify. Note that the converse does not hold: total monotonicity does not imply the Monge condition.

### 1D/1D

This type of DP looks like the following: $$dp_j = \min_{0\le i\lt j} f(i, j), \quad i, j \in \mathbb{N}_0$$ Usually $f(i, j)$ is something like $dp_i + cost(i+1, j)$ or $cost(i, j)$. Computed naively, this type of DP takes $O(N^2)$ time.

#### Convex

From convex total monotonicity of $f$ we can derive the following: $$H_j=\mathop{\arg\min}_{0\le i\lt j} f(i, j) \implies H_{j} \le H_{j+1}$$ $H_j$ can be viewed as the best transition point for $dp_j$. In other words, the property above implies that the best transition point for $dp_{j+1}$ is never less than that of $dp_j$.

The property above implies that if we write down the best transition point for $j=1, 2, \dots$, it is non-decreasing. For example, it may look like this: $$11111222222222233444444446666\dots$$ Since every $i$ is the best transition point on a contiguous interval of $j$, we can maintain these intervals and update them each time a new DP value is computed, i.e. we can maintain something like this: $$\text{After calculation of }dp_1: 11111111111111111111111111\dots \\ \text{After calculation of }dp_2: 11111111112222222222222222\dots \\ \text{After calculation of }dp_3: 11111111112222222233333333\dots \\ \vdots$$ This can be done with a deque and binary search. The total time complexity is $O(N\log N)$.

```cpp
struct Node {
    ll p, l, r; // p is the best transition point for dp[l], dp[l+1], ..., dp[r]
};

deque<Node> dq;
dp[0] = 0;
dq.push_back({0, 1, n});
for(int i = 1 ; i <= n ; ++i) {
    dp[i] = f(dq.front().p, i);
    // r == i implies that this Node is useless later, so pop it
    if(dq.front().r == i) dq.pop_front();
    // otherwise shrink it: dp[i] is done, so the interval now starts at i + 1
    else dq.front().l++;

    // find the interval [l, r] on which i is the best transition point
    // f(i, dq.back().l) < f(dq.back().p, dq.back().l) implies the last Node in the deque is useless
    while(!dq.empty() && f(i, dq.back().l) < f(dq.back().p, dq.back().l)) dq.pop_back();
    // i's interval always extends to n, so we only need to find its left end
    // the deque is empty, so l = i + 1
    if(dq.empty()) dq.push_back({i, i + 1, n});
    // find l by binary search: l is the last position where the old point still wins
    else {
        ll l = dq.back().l, r = dq.back().r;
        while(l < r) {
            ll mid = r - (r - l) / 2;
            if(f(i, mid) < f(dq.back().p, mid)) r = mid - 1;
            else l = mid;
        }
        dq.back().r = l;
        // l == n means that i is never the best transition point
        if(l != n) dq.push_back({i, l + 1, n});
    }
}
```


#### Concave

Similar to the Convex case above, we can derive the following from concave total monotonicity of $f$: $$H_j=\mathop{\arg\min}_{0\le i\lt j} f(i, j) \implies H_{j} \ge H_{j+1}$$ In other words, the best transition point for $dp_{j+1}$ is never greater than that of $dp_j$. As above, we can maintain the best transition points and update them. For example, $$\text{After calculation of }dp_1: 11111111111111111111111111\dots \\ \text{After calculation of }dp_2: 12222222222222222211111111\dots \\ \text{After calculation of }dp_3: 12333333333322222211111111\dots \\ \vdots$$ This can be done with a vector (used as a stack) and binary search. The total time complexity is $O(N\log N)$.

```cpp
struct Node {
    ll p, l, r; // p is the best transition point for dp[l], dp[l+1], ..., dp[r]
};

vector<Node> v;
dp[0] = 0;
v.push_back({0, 1, n});
for(int i = 1 ; i <= n ; ++i) {
    dp[i] = f(v.back().p, i);
    // r == i implies that this Node is useless later, so pop it
    if(v.back().r == i) v.pop_back();
    // otherwise shrink it: dp[i] is done, so the interval now starts at i + 1
    else v.back().l++;

    // find the interval [l, r] on which i is the best transition point
    // f(i, v.back().r) < f(v.back().p, v.back().r) implies the last Node in the vector is useless
    while(!v.empty() && f(i, v.back().r) < f(v.back().p, v.back().r)) v.pop_back();
    // i's interval always starts at i + 1, so we only need to find its right end
    // the vector is empty, so r = n
    if(v.empty()) v.push_back({i, i + 1, n});
    // find r by binary search: l becomes the first position where the old point still wins
    else {
        ll l = v.back().l, r = v.back().r;
        while(l < r) {
            ll mid = l + (r - l) / 2;
            if(f(i, mid) < f(v.back().p, mid)) l = mid + 1;
            else r = mid;
        }
        v.back().l = l;
        // l == i + 1 means that i is never the best transition point
        if(l != i + 1) v.push_back({i, i + 1, l - 1});
    }
}
```


### 2D/1D

This type of DP looks like the following: $$dp_{i, j}=\min_{i\lt k\lt j} f(i, j, dp_{i, k}, dp_{k, j}), \quad i, j\in \mathbb{N}_0, i\lt j$$ $f(i, j, dp_{i, k}, dp_{k, j})$ usually looks like $dp_{i, k} + dp_{k, j} + cost(i, j)$. Computed naively, this type of DP takes $O(N^3)$ time.

#### Convex

From the Convex Monge condition of the DP table we can derive the following: $$H_{i, j}=\mathop{\arg\min}_{i\lt k\lt j} f(i, j, dp_{i, k}, dp_{k, j}) \implies H_{i,j-1} \le H_{i, j} \le H_{i+1,j}$$ Thus, if we calculate the DP table in order of increasing $j-i$, state $(i, j)$ only needs to scan $k \in [H_{i,j-1}, H_{i+1,j}]$; summing $H_{i+1,j}-H_{i,j-1}+1$ over a fixed $j-i$ telescopes, so the total time complexity drops to $O(N^2)$! This technique is usually known as Knuth's optimization.

#### Concave

From the Concave Monge condition of the DP table we can derive the following: $$Nothing :p$$ Yes, we can derive nothing :\ However, we can still reduce it to $O(N)$ 1D/1D problems. Fixing $i$ and substituting $dp_j^{i} = dp_{i, i+j}$ (so $k' = i + k$ ranges over the transition points) gives $$dp_{i, i+j} = dp_j^{i} = \min_{0\lt k\lt j} f(i, i+j, dp_k^i, dp_{j-k}^{i+k})=\min_{0\lt k\lt j} f'(k, j)$$ For every $i$ we get a 1D/1D problem $dp_j^{i}=\min_{0\lt k\lt j} f'(k, j)$. If $f'$ is concave/convex totally monotone, we can enumerate $i$ from large to small (so that the required $dp^{i+k}$ values are already known) and improve the time complexity to $O(N^2\log N)$.