Given a cell (parallelepiped) $E= [a_1,b_1] \times\cdots\times [a_n,b_n]$
in $\mathbb{R}^n$, a partition $P$ of $E$ consists of a set of
partitions $P_1,\dots,P_n$ for the intervals $[a_j,b_j]$
($j=1,\dots,n$) satisfying
$$
P_j=\big\{ (x_{j,i})_{i=0,\dots,m_j}\, \big |\, a_j=x_{j,0}
<x_{j,1}<\dots<x_{j,m_j} = b_j\big\} .
$$
Definitions (Multiple integrals)
(i) The maximal interval size of a partition $P$ is
given by
$$
\Delta(P)=\max\big\{ |x_{j,i} - x_{j,i+1}|\, :\, i=0,1,\dots,m_j-1\,
,\: 1\leq j\le n\big\}.
$$
(ii) Given a bounded function $f:E\to \mathbb{R}$ we define
\begin{gather*}
M_{j_1,\dots,j_n} = \sup\big\{ f(x)\, :\, x\in [x_{1, j_1-1}, x_{1,
j_1}]\times\cdots\times [x_{n,j_n-1}, x_{n ,j_n}]\big\}\\
m_{j_1,j_2,\dots,j_n} = \inf\big\{f(x)\, :\, x\in
[x_{1,j_1-1},x_{1,j_1}]\times\cdots\times [x_{n,j_n-1}, x_{n,j_n}]
\big\}
\end{gather*}
(iii) Lower and upper Riemann sums for $f$ are given by
\begin{gather*}
L(f,P) =\sum_{j_1=1}^{m_1}\cdots\sum_{j_n=1}^{m_n}
m_{j_1,\dots,j_n}\prod_{i=1}^n(x_{i,j_i} - x_{i,j_i-1})\\
U(f,P) =\sum_{j_1=1}^{m_1}\cdots\sum_{j_n=1}^{m_n}
M_{j_1,\dots,j_n}\prod_{i=1}^n(x_{i,j_i}-x_{i,j_i-1})
\end{gather*}
(iv) Lower and upper Riemann integrals are given by
\begin{gather*}
L(f) = \lim_{\Delta(P)\to 0} L(f,P)\text{ and }U(f) =
\lim_{\Delta(P)\to 0} U(f,P)
\end{gather*}
(v) The (bounded) function $f$ is called Riemann
integrable (on $E$) iff it holds that $L(f) = U(f)$.
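The lower and upper sums above are easy to sketch numerically. The following is a minimal illustration (not part of the notes): a uniform partition of a $2$-cell, with the sup/inf over each subcell approximated by corner values, which is exact for functions monotone in each variable such as $f(x,y)=xy$.

```python
# Minimal numerical sketch (an illustration, not from the notes): lower and
# upper Riemann sums L(f,P), U(f,P) on a 2-cell E = [a1,b1] x [a2,b2] with
# a uniform partition of m subintervals per axis.  The sup/inf on each
# subcell is taken over the corner values, which is exact for functions
# monotone in each variable, such as f(x, y) = x*y.

def riemann_sums(f, a1, b1, a2, b2, m):
    """Return (L(f,P), U(f,P)) for the uniform partition of mesh size (b-a)/m."""
    h1, h2 = (b1 - a1) / m, (b2 - a2) / m
    lower = upper = 0.0
    for i in range(m):
        for j in range(m):
            x0, x1 = a1 + i * h1, a1 + (i + 1) * h1
            y0, y1 = a2 + j * h2, a2 + (j + 1) * h2
            corners = [f(x, y) for x in (x0, x1) for y in (y0, y1)]
            lower += min(corners) * h1 * h2   # m_{i,j} * volume of subcell
            upper += max(corners) * h1 * h2   # M_{i,j} * volume of subcell
    return lower, upper

L_sum, U_sum = riemann_sums(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0, 100)
# L(f,P) <= 1/4 <= U(f,P) for every partition
```

As $\Delta(P)\to 0$ both sums approach the common value $\frac14$, illustrating that $f(x,y)=xy$ is Riemann integrable on $[0,1]^2$.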
Fubini's Theorem
Let
$$
f:E=[a_1,b_1]\times\cdots\times [a_n,b_n]\to \mathbb{R}
$$
be integrable. Then
$$
\int_E f(x)\, dx =\int_{a_1}^{b_1} \cdots \int_{a_n}^{b_n}
f(x_1,...,x_n)\,dx_n\cdots dx_1,
$$
where $dx$ is the positively oriented volume element of
$\mathbb{R}^n$.
Definitions (Integrability)
Let $D$ be a bounded domain on $\mathbb{R}^n$ and let $f:D\to
\mathbb{R}$ be bounded. $f$ is said to be integrable (Riemann
integrable) on $D$ if and only if
$$
F(x) =\begin{cases}
f(x),&x\in D\\0,&x\not\in D
\end{cases}$$
is integrable on $[-M,M]^n$ for one (and thus any) $M>0$ such that $D
\subset [-M,M]^n$.
Let $D=\{ (x,y)\in \mathbb{R}^2\, |\, \sqrt{y}\leq x\leq 1,\: 0\leq
y\leq 1\}$ and observe that
\begin{multline*}
\int_0^1 \int_{\sqrt{y}}^1 e^{x^3}\, dxdy= \int_D e^{x^3} dxdy =
\int_0^1 \int_0^{x^2}e^{x^3} dy dx = \int_0^1 e^{x^3} x^2 dx =
\frac{1}{3} e^{x^3}\Big | _{x=0}^1 = \frac{1}{3}(e-1).
\end{multline*}
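A numeric sanity check of this computation (a sketch, not from the notes): both iterated integrals, evaluated with a plain composite midpoint rule, agree with $(e-1)/3$.

```python
# Numeric check (illustrative sketch) of
#   int_0^1 int_{sqrt(y)}^1 e^{x^3} dx dy = int_0^1 x^2 e^{x^3} dx = (e-1)/3,
# using a plain composite midpoint rule for both iterated integrals.
import math

def midpoint(g, a, b, n=400):
    """Composite midpoint rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

# original order: inner integral in x over [sqrt(y), 1], outer in y
hard_order = midpoint(lambda y: midpoint(lambda x: math.exp(x ** 3),
                                         math.sqrt(y), 1.0), 0.0, 1.0)
# after swapping the order of integration (Fubini)
easy_order = midpoint(lambda x: x ** 2 * math.exp(x ** 3), 0.0, 1.0)
exact = (math.e - 1) / 3
```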
Change of Variables Formula
Theorem (Change of Variables)
Let $\Phi: D\to G\subset\mathbb{R}^n$ be a differentiable function
which is one-to-one and onto. Assume that $\det\Phi'(x)\ne0$ for all
$x\in D$. If $f$ is
integrable on $G$, then $(f \circ\Phi)\,|\det \Phi '|$ is
integrable on $D$ and
$$
\int_D f\bigl(\Phi(x)\bigr)\big |\text{det}\Phi'(x)\big |\, dx =
\int_G f(y)\, dy.
$$
Remark
This is a very useful theorem since simplification can often be obtained
by introduction of suitable new coordinates.
Example (Polar Coordinates)
As an example consider polar coordinates in
$\mathbb{R}^2$, which are given by
$$
(x,y)=\bigl(r\cos\theta, r\sin\theta\bigr)=:\Phi(r,\theta)\, ,
$$
so that
$$
d(x,y)=dxdy=|\det \Phi'(r,\theta) | dr d\theta=r dr d\theta.
$$
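As a quick sanity check (a sketch, not from the notes), the area of the unit disk computed with the polar volume element $r\,dr\,d\theta$ comes out to $\pi$:

```python
# Numeric sketch: area of the unit disk via int_0^{2pi} int_0^1 r dr dtheta,
# using midpoint samples in r (the integrand does not depend on theta).
import math

n = 500
dr = 1.0 / n
dtheta = 2 * math.pi / n
area = 0.0
for i in range(n):
    r = (i + 0.5) * dr          # midpoint in the radial direction
    for _ in range(n):          # angular direction contributes a factor 2*pi
        area += r * dr * dtheta
```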
For $a, b, c>0$ define
$$
D(a,b,c) =\big\{(x,y,z)\, :\, \frac{x^2}{a^2} + \frac{y^2}{b^2}+
\frac{z^2}{c^2}<1\big\}.
$$
Prove that the volume of $D(a,b,c)$ amounts to $\frac{4 \pi}{3} abc$.
Set
$$
(x, y, z)=\Phi(\bar{x},\bar{y},\bar{z})=( a\bar{x}, b\bar{y},
c\bar{z})
$$
and observe that $\det \Phi'= abc$. It follows that
\begin{eqnarray*}
\text{Vol}\big[D(a,b,c)\big]=\int_{D(a,b,c)}\,
d(x,y,z)&=&\int_{\mathbb{B}(0,1)}|\det \Phi'|\,
d(\bar{x},\bar{y},\bar{z}) \\ &=&\int_{\mathbb{B} (0,1)} abc\,
d(\bar{x},\bar{y},\bar{z})=abc\,
\text{Vol}\big[\mathbb{B}(0,1)\big]=\frac{4\pi}{3} abc.
\end{eqnarray*}
Example (Cylindrical coordinates)
In $\mathbb{R}^3$ one can often make use of so-called cylindrical
coordinates $(r, \theta, z)$. They are defined through
$$
x = r \cos \theta\, ,\: y= r \sin \theta\, ,\: z=z\, .
$$
in terms of the standard "rectangular" coordinates $(x,y,z)\in
\mathbb{R}^3$. In this case one has that
$$
d(x,y,z)=dxdydz=r\, drd\theta dz\, ,
$$
as follows from
$$
\det\Phi'(r, \theta, z)=
\begin{vmatrix}
\cos\theta & -r \sin \theta & 0 \\
\sin \theta & r \cos \theta & 0 \\
0 & 0 &1
\end{vmatrix}=
\begin{vmatrix}
\cos\theta & - r \sin \theta \\
\sin\theta &r \cos \theta
\end{vmatrix}=r
$$
Example (Spherical coordinates)
Spherical coordinates $(r,\varphi,\theta)$ are given by
$$
(x,y,z)=r\, \bigl( \sin(\varphi)\cos(\theta),\sin(\varphi)
\sin(\theta),\cos(\varphi)\bigr).
$$
In this case it holds that
$$
d(x,y,z)=r^2\sin(\varphi)\, drd\varphi d\theta,
$$
as is seen computing the determinant of
$$
\begin{bmatrix}\frac{\partial x }{\partial r} &
\frac{\partial x}{\partial \varphi} &\frac{\partial x}{\partial \theta} \\
\frac{\partial y }{\partial r}&\frac{\partial y }{\partial \varphi} &\frac{\partial
y }{\partial \theta}\\\frac{\partial z}{\partial r} & \frac{\partial z
}{\partial \varphi} & \frac{\partial z}{\partial \theta}\end{bmatrix}
=\begin{bmatrix}\sin \varphi \cos \theta & r \cos \varphi \cos \theta & -r \sin \varphi
\sin \theta\\ \sin \varphi \sin \theta & r \cos \varphi \sin \theta & r \sin \varphi
\cos \theta \\ \cos \varphi & -r \sin \varphi & 0\end{bmatrix}
$$
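The determinant can be checked at a sample point. The following sketch (the sample values are arbitrary assumptions) expands the $3\times 3$ determinant along the first row and compares it with $r^2\sin\varphi$:

```python
# Sketch: evaluate the spherical-coordinate Jacobian above at a sample
# point and check that its determinant equals r^2 sin(phi).
import math

def spherical_jacobian_det(r, phi, theta):
    J = [
        [math.sin(phi) * math.cos(theta), r * math.cos(phi) * math.cos(theta), -r * math.sin(phi) * math.sin(theta)],
        [math.sin(phi) * math.sin(theta), r * math.cos(phi) * math.sin(theta),  r * math.sin(phi) * math.cos(theta)],
        [math.cos(phi),                  -r * math.sin(phi),                    0.0],
    ]
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = J
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

r, phi, theta = 2.0, 0.7, 1.3      # arbitrary sample point
det = spherical_jacobian_det(r, phi, theta)
```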
Line integrals
Definition (Line Integrals)
We will also need to be able to integrate along lines. Towards that end, let
$\gamma:[a,b]\to \mathbb{R} ^n$ be a (piecewise smooth)
parametrization of a piecewise $C^1$ curve $\Gamma$,
i.e. $\Gamma=\gamma\bigl([a,b]\bigr)$. We then define the line
integral of a vector field $f:\Gamma\to \mathbb{R}^n$ along
the curve $\Gamma$ by
$$
\int_\Gamma f(\gamma)\cdot d\gamma = \int_a^b
\Big[f_1\bigl(\gamma(t)\bigr)\gamma _1'(t)
+\dots+ f_n\bigl(\gamma(t)\bigr) \gamma _n'(t)\Big]\, dt.
$$
For a real-valued function $f:\Gamma\to \mathbb{R}$ we define the
corresponding line integral by
$$
\int _\Gamma f\, d\sigma _\Gamma=\int _a^b
f\bigl(\gamma(t)\bigr)\big |\gamma '(t)\big |\, dt,
$$
so that, e.g., the length of $\Gamma$ amounts to $\int_a^b
\big |\gamma'(t)\big | dt$.
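For instance (a sketch with an assumed parametrization, not from the notes), the length of the unit circle $\gamma(t)=(\cos t,\sin t)$, $t\in[0,2\pi]$, computed via $\int_a^b|\gamma'(t)|\,dt$:

```python
# Sketch: arc length of the unit circle via int_0^{2pi} |gamma'(t)| dt = 2*pi.
import math

n = 2000
a, b = 0.0, 2 * math.pi
h = (b - a) / n
length = 0.0
for k in range(n):
    t = a + (k + 0.5) * h                 # midpoint in t
    dx, dy = -math.sin(t), math.cos(t)    # gamma'(t)
    length += math.hypot(dx, dy) * h      # |gamma'(t)| dt
```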
Exercise
Prove that the integrals defined above do not depend on the choice of
parametrization for the curve $\Gamma$.
Surface integrals
More generally, we can replace lines by surfaces and introduce corresponding
surface integrals. Let $S$ be a smooth hypersurface in $\mathbb{R}^n$
and let $f:S\to \mathbb{R}$ be a continuous function.
Question
How can we compute/define $\int_S f(x) d\sigma _S$?
Consider first the case of a surface which happens to be the graph of
a function $\varphi:D\subset \mathbb{R} ^{n-1}\to \mathbb{R}$ on a
domain $D$. In this case
$$
S=\Big\{ \bigl( x_1,\dots,x_{n-1},\varphi(x_1,\dots,x_{n-1})\bigr)\, :\,
(x_1,\dots,x_{n-1})\in D\Big\}
$$
and we can define
\begin{multline*}
\int_S f(x) d\sigma(x):=\int_D f\bigl( x_1,\dots,x_{n-1}
,\varphi(x_1,\dots,x_{n-1})\bigr)\,\sqrt{1+|\nabla\varphi(x)|^2}\,
dx_1\cdots
dx_{n-1}\, .
\end{multline*}
More generally, if one has that $S =\Phi(D)$ for some
$$
\Phi= (\phi_1,\dots,\phi_{n}): D\subset\mathbb{R} ^{n-1}\to
\mathbb{R}^n,
$$
then one can define
$$
d\sigma _S(x)=d\sigma_S(x_1,\dots,x_{n-1})=\sqrt{
\sum_{j=1}^n\big[\det (\hat{\phi}_j)'(x)\big]^2}\, dx_1\cdots
dx_{n-1}
$$
where $\hat{\phi}_j = (\phi_1,\dots,\phi_{j-1},
\phi_{j+1},\dots,\phi_n)$.
Exercise
Make sense of the first definition above by providing a geometric
rationale for it. Then, with that insight in hand, do the same for the
second.
Example
Find the area of the portion of the unit sphere: $x^2 + y^2 + z^2 = 1$
in $\mathbb{R}^3$ which lies above $z=\sqrt{3}/2$.
We first perform the computation in spherical coordinates
$$
(x,y,z)=\Phi(\varphi,\theta)=(\sin \varphi \cos \theta, \sin \varphi \sin \theta, \cos
\varphi),
$$
in which case
$$
z=\frac{\sqrt{3}}{2}\Leftrightarrow\cos\varphi
=\frac{\sqrt{3}}{2}\Leftrightarrow\varphi=\frac{\pi}{6}\, .
$$
Substituting in the above formula yields
$$
d\sigma(\varphi, \theta) = \sqrt{\big |
\frac{\partial (x,y)}{\partial(\varphi,\theta)}\big |^2
+ \big |\frac{\partial (y,z)}{\partial(\varphi,\theta)}
\big |^2+\big|\frac{\partial(x,z)}{\partial(\varphi,\theta)}
\big |^2 }\, d \varphi d\theta = \sin\varphi \, d \varphi d \theta
$$
and, therefore, that the area amounts to
$$
\int_0^{2\pi} \int_0 ^ {\pi/6} \sin \varphi \,d\varphi d\theta =-2
\pi \cos\varphi \big |_0^{\pi/6}=2 \pi[1- \frac{\sqrt{3}}{2}]\, .
$$
On the other hand, using rectangular coordinates $(x, y, z)$, we have
that
$$
\big\{(x, y, z): x^2 + y^2 +z^2=1 \text{ and }
z\ge\frac{\sqrt{3}}{2}\big\}=\big\{\bigl(x,y,\sqrt{1-x^2 -
y^2}\bigr)\, :\, x^2 + y^2 \leq\frac{1}{4}\big\}.
$$
In this case the area computes to
\begin{eqnarray*}
\int_{\mathbb{B}(0,\frac{1}{2})}\sqrt{1+
|\frac{\partial z}{\partial x}|^2+|
\frac{\partial z}{\partial y}|^2}\, dxdy
&=& \int_{\mathbb{B}(0,\frac{1}{2})} \sqrt{1+ \frac{x^2}{1-x^2-y^2}+\frac{y^2}{1-x^2-y^2}}\,
dxdy=\int_{\mathbb{B}(0,\frac{1}{2})} \sqrt{\frac{1}{1-x^2 - y^2}}\, dxdy\\
&=&\int_0^{2\pi}\int_0^{1/2}\frac{r}{\sqrt{1-r^2}}\, drd\theta
=\pi \int_0^{1/4} \frac{1}{\sqrt{1-t}} \, dt=-\pi\int_1^{3/4}
\frac{1}{\sqrt{t}}\, dt\\ &=& \pi \int_{3/4}^1
\frac{1}{\sqrt{t}}\, dt= 2\pi \sqrt{t}\,\Big | _{\frac{3}{4}}^1=
2\pi(1-\frac{\sqrt{3}}{2}).
\end{eqnarray*}
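Both computations can be confirmed numerically (a sketch, not from the notes), integrating the surface element $\frac{r}{\sqrt{1-r^2}}$ in polar coordinates:

```python
# Numeric check of the cap area 2*pi*(1 - sqrt(3)/2) via the graph formula,
# written in polar coordinates: int_0^{2pi} int_0^{1/2} r/sqrt(1-r^2) dr dtheta.
import math

n = 2000
dr = 0.5 / n
radial = 0.0
for i in range(n):
    r = (i + 0.5) * dr                      # midpoint in r
    radial += r / math.sqrt(1 - r * r) * dr
area = 2 * math.pi * radial                 # theta-integral contributes 2*pi
exact = 2 * math.pi * (1 - math.sqrt(3) / 2)
```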
Example
Let $\mathbb{B}^n(0,R)\subset \mathbb{R}^n$ be the ball of radius $R$
centered at $0$. Then
$$
\int_{\mathbb{B}^n(0,R)} f(x)\, dx= \int_0^R \int_{\mathbb{S}^{n-1}}
f(ry)d\sigma_{\mathbb{S}^{n-1}}(y)r^{n-1}\, dr
$$
where
$\mathbb{S}^{n-1} =\big\{x\in \mathbb{R}^n: |x| = 1\big\}$ and
$d\sigma_{\mathbb{S}^{n-1}}$ is the corresponding surface element. In
particular it follows that
$$
\text{Vol}\bigl(\mathbb{B}^n(0,R)\bigr)=\int_0^R r^{n-1}
\int_{\mathbb{S}^{n-1}}\, d\sigma_{\mathbb{S}^{n-1}}\, dr
= \sigma_{\mathbb{S}^{n-1}}(\mathbb{S}^{n-1})\int_0^R r^{n-1}\, dr
= \sigma_{\mathbb{S}^{n-1}}(\mathbb{S}^{n-1})\frac{R^n}{n}.
$$
When $n=3$, Area$(\mathbb{S}^2) = 4\pi$ so that
Vol$\big[\mathbb{B}^3(0,R)\big]=4\pi\frac{R^3}{3}$.
Differential Forms
In this section we introduce a powerful coordinate independent
language that will allow for an elementary proof of Stokes'
theorem. We are talking about forms.
A $0$-form on $\mathbb{R}^n$ is simply a function $f:\mathbb{R} ^n\to
\mathbb{R}$. It is clearly an object that can be evaluated at
points. Next we define $1$-forms.
Definition ($1$-form)
A $1$-form $\omega$ on $\mathbb{R}^n$ is a map
\begin{align*}
\omega :\, &\mathbb{R} ^n\to \mathcal{L}(\mathbb{R} ^n,\mathbb{R})\,
,\\ &x\mapsto\omega(x):\mathbb{R}^n\to \mathbb{R} \, ,\:v\mapsto
\langle\omega(x),v\rangle.
\end{align*}
Remark
Notice that at any $x$, $\omega(x)$ is a linear map defined on vectors
$v\in\mathbb{R}^n$. In this sense, it does very much resemble a
derivative $Df(x)$ of a function $f$, which is, at any point $x$, the
linear map $Df(x):\mathbb{R}^n\to \mathbb{R}$ which appears in the
best affine approximation to the function. In short
\begin{align*}
Df:\, &\mathbb{R}^n\to \mathcal{L}(\mathbb{R}^n,\mathbb{R})\, ,\\
&x\mapsto Df(x):\mathbb{R}^n\to \mathbb{R}\, ,\: v\mapsto
Df(x)v=\partial _vf(x).
\end{align*}
We shall see that $1$-forms can naturally be integrated along
lines. Let now
$$
\frac{\partial}{\partial {x^1}},\dots,
\frac{\partial}{\partial{x^n}}\text{ and }dx^1,\dots,dx^n\, ,
$$
be the standard basis of $\mathbb{R}^n$ and its dual basis,
respectively,
i.e.
$$
\langle dx^i,\frac{\partial}{\partial {x^j}}\rangle=\delta ^i_j\text{
for }i,j=1,\dots,n\, .
$$
Then for any $x\in \mathbb{R}^n$ and any vector $v\in \mathbb{R}^n$ we
have
$$
\langle\omega(x),v\rangle=\langle\omega(x),v^i
\frac{\partial}{\partial{x^i}}\rangle=\langle \omega(x),
\frac{\partial}{\partial{x^i}}\rangle dx^i(v)=:
\big[\omega_i(x)dx^i\big](v),
$$
since $v=v^i\frac{\partial}{\partial {x^i}}$ and $v^i=dx^i(v)$. It
follows that
$$
\omega = \omega _i\, dx^i\, ,
$$
for functions $\omega _i:\mathbb{R}^n\to \mathbb{R}$.
Remark
Here and in the sequel we shall frequently make use of the so-called
summation convention, whereby indices appearing twice,
once as a superscript and once as a subscript, are to be summed over.
Definitions (Differential)
If $f:\mathbb{R}^n\to \mathbb{R}$ is a differentiable function, then
define its differential by the $1$-form
$$
df=\frac{\partial f}{\partial {x^j}}\, dx^j
$$
and observe that, for any fixed $x\in \mathbb{R}^n$, it holds that
$$
\langle df(x),v\rangle=Df(x)v=\partial _vf(x)\, ,\: v\in
\mathbb{R}^n\, .
$$
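This pairing is easy to check by finite differences. The following sketch uses an assumed example $f(x,y)=x^2y+y^3$ (not from the notes):

```python
# Sketch: check that <df(x), v> = d_v f(x) for f(x, y) = x^2*y + y^3 by
# comparing the 1-form pairing with a central finite difference.
def f(x, y):
    return x * x * y + y ** 3

def df_components(x, y):
    """(df/dx, df/dy): components of the 1-form df = f_x dx + f_y dy."""
    return (2 * x * y, x * x + 3 * y * y)

x, y = 1.2, -0.7          # sample point (arbitrary)
v = (0.3, 0.9)            # sample direction (arbitrary)
fx, fy = df_components(x, y)
pairing = fx * v[0] + fy * v[1]                     # <df(x), v>
h = 1e-6
directional = (f(x + h * v[0], y + h * v[1])
               - f(x - h * v[0], y - h * v[1])) / (2 * h)
```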
Remark
It is natural to expect that $\omega$ be independent of the coordinates used to
describe it as it is, at each point, a linear form defined on vectors, which are
themselves coordinate independent objects.
Let $y^1,\dots,y^n$ be another coordinate system (not necessarily linearly related to
the already given $x^1,\dots,x^n$). Then
\begin{equation*}
\langle \omega,\frac{\partial}{\partial {x^i}}\rangle\,
dx^i=\langle \omega,\frac{\partial y^k}{\partial {x^i}}\frac{\partial}{\partial
{y^k}}\rangle \frac{\partial x^i}{\partial {y^l}}\, dy^l=\frac{\partial
y^k}{\partial {x^i}}\frac{\partial x^i}{\partial {y^l}}
\langle\omega,\frac{\partial }{\partial {y^k}}\rangle \, dy^l=\delta
^k_l\langle\omega,\frac{\partial }{\partial {y^k}}\rangle \, dy^l =\langle\omega,
\frac{\partial }{\partial {y^k}}\rangle \, dy^k\, .
\end{equation*}
Here we thought of $y$ and $x$ as functions of one another, as any $y$ has its
$x$-coordinates and each $x$ its $y$-coordinates. As such these functions are
clearly inverses of one another and thus $y'\circ x'=\operatorname{id}_n$ or,
equivalently $\frac{\partial y^k}{\partial {x^i}}\frac{\partial x^i}{\partial
{y^l}}=\delta ^k_l$.
Next we would like to integrate a $1$-form $\omega$ along a curve
$\Gamma$ to obtain its integral $\int _\Gamma \omega$.
Definition (Integral of a $1$-Form)
Working in the coordinates $x^1,\dots, x^n$ we describe the curve $\Gamma$ by
$$
\Gamma=\big\{ \gamma (t)=\bigl( \gamma^1(t),\dots,\gamma^n(t)\bigr)\,
:\, t\in [a,b]\big\},
$$
and then set
$$
\int _\Gamma \omega =\int _a^b\omega_i\bigl(\gamma(t)\bigr)
\underset{d\gamma ^i(t)}{\underbrace{\dot\gamma ^i(t)\, dt}}\, .
$$
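As an illustration (the form and the curve are assumptions, not from the notes): integrating the $1$-form with components $(\omega_1,\omega_2)=(-x^2,x^1)$ (superscripts are coordinate indices, not powers) along the unit circle gives $2\pi$.

```python
# Sketch: integral of the 1-form with components (w_1, w_2) = (-x^2, x^1)
# along gamma(t) = (cos t, sin t); the integrand w_i(gamma(t)) gammadot^i(t)
# is identically 1, so the integral is 2*pi.
import math

n = 2000
h = 2 * math.pi / n
integral = 0.0
for k in range(n):
    t = (k + 0.5) * h
    x1, x2 = math.cos(t), math.sin(t)        # gamma(t)
    d1, d2 = -math.sin(t), math.cos(t)       # gammadot(t)
    integral += (-x2 * d1 + x1 * d2) * h     # w_i(gamma(t)) gammadot^i(t) dt
```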
Exercise
Prove that the integral just defined remains unchanged if an
orientation preserving reparametrization of $\Gamma$ is used and if
the underlying coordinate system chosen for $\mathbb{R}^n$ is changed.
Theorem (Stokes' Theorem 1)
Notice that it always holds that
$$
f(Q)-f(P)=\int _\Gamma df\, ,
$$
if $P$ and $Q$ are the beginning and end point of $\Gamma$,
respectively. You may think of this as the fundamental theorem of
calculus "along a (curved) line". It is also a special case of Stokes'
theorem.
Next we would like to generalize the concept of form to include objects
that can be integrated on surfaces and higher dimensional sets.
Definitions ($2$-forms )
A $2$-form is a map
\begin{align*}
\omega :\, &\mathbb{R} ^n\to \mathcal{L}^2_a(\mathbb{R}
^n,\mathbb{R})\, ,\\ &x\mapsto\omega(x):\mathbb{R}^n\times
\mathbb{R}^n\to \mathbb{R} \, ,\:
(v,w)\mapsto\omega(x)\bigl[v,w\bigr]\, ,
\end{align*}
where $\mathcal{L}^2_a(\mathbb{R} ^n,\mathbb{R})$ denotes the vector
space of alternating bilinear forms, i.e. of forms $a$ satisfying
\begin{multline*}
a[v+\lambda\bar v,w]=a[v,w]+\lambda\, a[\bar v,w]
\text{ and }a[w,v]=-a[v,w],\quad v,w\in \mathbb{R}^n\, ,\: \lambda\in
\mathbb{R}.
\end{multline*}
As we did with $1$-forms, we observe that
\begin{eqnarray*}
\omega(x)[v,w]&=&\omega(x)\big[v^{j_1}\frac{\partial}{\partial
x^{j_1}},w^{j_2}\frac{\partial}{\partial x^{j_2}}\big]=\omega
_{j_1,j_2}(x)v^{j_1}w^{j_2}\\
&=&\sum _{j_1=1}^n\sum _{j_2=j_1+1}^n\omega _{j_1,j_2}(x)\bigl(
v^{j_1}w^{j_2}-v^{j_2}w^{j_1}\bigr)=\sum _{j_1< j_2}\omega
_{j_1,j_2}(x)dx^{j_1}\wedge dx^{j_2}[v,w]\, ,
\end{eqnarray*}
and thus $\omega=\sum _{j_1 < j_2} \omega _{j_1,j_2}\, dx^{j_1}\wedge
dx^{j_2}$, where $\omega_{j_1,j_2}(x)=\omega(x)\big[\frac{\partial}{\partial
x^{j_1}},\frac{\partial}{\partial x^{j_2}}\big]$. Here we
implicitly introduced the notation $dx^{j_1}\wedge dx^{j_2}$ for the
alternating $2$-form given by
$$
dx^{j_1}\wedge dx^{j_2}[v,w]=v^{j_1}w^{j_2}-v^{j_2}w^{j_1}\, ,\: v,w\in
\mathbb{R}^n\, .
$$
A $2$-form is also independent of the chosen coordinate system.
Definition
Two-forms can be integrated over $2$-dimensional surfaces in a manner
analogous to how $1$-forms were integrated along lines. If
$\Sigma\subset \mathbb{R}^n$ is a two dimensional surface given by
$$
\Sigma=\big\{ \sigma(t,s)\, :\, t\in[a,b]\, ,\: s\in[c,d]\big\}\, ,
$$
then we define
$$
\int _\Sigma \omega=\int _a^b\int _c^d \sum_{j_1 < j_2}\omega_{j_1,j_2}\bigl(
\sigma(t,s)\bigr)\Big[\frac{\partial\sigma ^{j_1}}{\partial t}\frac{\partial\sigma
^{j_2}}{\partial s}-\frac{\partial\sigma ^{j_1}}{\partial s}\frac{\partial\sigma
^{j_2}}{\partial t}\Big]\, dt\wedge ds\, .
$$
The formula follows from
\begin{gather*}
d\sigma ^{j_1}(t,s)=\frac{\partial\sigma ^{j_1}}{\partial t}\,
dt+\frac{\partial\sigma ^{j_1}}{\partial s}\, ds\, ,\\
d\sigma ^{j_2}(t,s)=\frac{\partial\sigma ^{j_2}}{\partial t}\,
dt+\frac{\partial\sigma ^{j_2}}{\partial s}\, ds\text{ and}\\
d\sigma ^{j_1}(t,s)\wedge d\sigma ^{j_2}(t,s)=\Big[\frac{\partial\sigma
^{j_1}}{\partial t}\frac{\partial\sigma^{j_2}}{\partial s}-\frac{\partial\sigma
^{j_1}}{\partial s}\frac{\partial\sigma ^{j_2}}{\partial t}\Big]\, dt\wedge ds\, .
\end{gather*}
Remark
This definition is independent of the chosen coordinates and of the
(orientation preserving) parametrization. Also observe that the $d$
operation can be extended to $1$-forms by
\begin{multline*}
d\omega=d\bigl( \omega_i\, dx^i\bigr)=d\omega _i\wedge
dx^i=\Big[\frac{\partial \omega _i}{\partial x^j} \, dx^j\big]\wedge
dx^i =\sum _{i,j}\frac{\partial \omega _i}{\partial x^j} \, dx^j
\wedge dx^i=\sum _{i < j}\big[
\frac{\partial \omega _j}{\partial x^i}-
\frac{\partial \omega _i}{\partial x^j} \big]\, dx^i\wedge dx^j\, .
\end{multline*}
Continuing along these lines we can introduce $k$-forms up to order
$n$ which can be integrated over $k$-dimensional subsets (submanifolds
to be precise) of $\mathbb{R}^n$. In a coordinate representation they
would take the form
$$
\omega=\omega _{j_1,j_2,\dots,j_k}\, dx^{j_1}\wedge\cdots\wedge dx^{j_k}\, .
$$
The Theorems of Vector Calculus
We now have enough language to formulate and prove Stokes' theorem.
Theorem (Stokes)
Let $\Omega\subset \mathbb{R}^n$ be a $k$-dimensional manifold with
$(k-1)$-dimensional smooth boundary $\partial\Omega$ and let $\omega$
be any $(k-1)$-form. Then it holds that
$$
\int _\Omega d\omega=\int _{\partial\Omega}\omega\, .
$$
Remark
One obtains special cases when $k=n$ and $\Omega\subset \mathbb{R}^n$
is open and bounded with smooth boundary and when $k=0$ and $\Omega$
is a curve.
Step 1. We first prove the result when $\Omega$ is a
rectangle. The same idea yields a proof for a rectangular body of any
dimension modulo issues of orientation that are not readily apparent
in the two dimensional case. Let
$\Omega=R=[a,b]\times[c,d]$ and take a $1$-form $\omega=\omega_1\,
dx^1+\omega _2\, dx^2$. Then
\begin{eqnarray*}
\int _{\partial R}\omega &=&\int _a^b \omega_1(x^1,c)\, dx^1+\int
_c^d\omega_2(b,x^2)\, dx^2+\int _b^a\omega_1(x^1,d)\, dx^1+\int
_d^c\omega_2(a,x^2)\, dx^2\\ &=& \int
_a^b\big[\omega _1(x^1,c)-\omega_1(x^1,d)\big]\, dx^1+\int
_c^d\big[\omega_2(b,x^2)-\omega_2(a,x^2)\big]\, dx^2\\
&=&\int _a^b\int _c^d\Big[
-\frac{\partial }{\partial x^2}\omega_1(x^1,x^2)+\frac{\partial }{\partial
x^1}\omega_2(x^1,x^2)\Big]\, dx^2dx^1=\int _R d\omega\, ,
\end{eqnarray*}
which is seen by using the obvious parametrization of $\partial R$ and
observing that
$$
d\bigl( \omega_1\, dx^1+\omega_2\, dx^2\bigr)=\bigl( -\frac{\partial }{\partial
x^2}\omega_1+ \frac{\partial }{\partial x^1}\omega_2\bigr)\, dx^1\wedge dx^2\, .
$$
While we considered the rectangle to be in $\mathbb{R} ^2$ it can be
easily seen that the claim remains true even if the (flat) rectangle
were inside a higher dimensional space.
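The rectangle computation can be verified numerically. Here is a sketch with an assumed $1$-form $\omega=x^1x^2\,dx^1+(x^1)^2\,dx^2$ on $R=[0,1]\times[0,2]$ (neither is from the notes), for which $d\omega=x^1\,dx^1\wedge dx^2$:

```python
# Numeric check (the 1-form and rectangle are illustrative assumptions) of
# int_{dR} w = int_R dw on R = [0,1] x [0,2], where
# w = x1*x2 dx1 + x1^2 dx2, so dw = (d1 w2 - d2 w1) dx1^dx2 = x1 dx1^dx2.

def midpoint(g, a, b, n=1000):
    """Composite midpoint rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

a, b, c, d = 0.0, 1.0, 0.0, 2.0
w1 = lambda x, y: x * y       # omega_1
w2 = lambda x, y: x * x       # omega_2

# boundary integral, traversed counterclockwise as in the proof above
boundary = (midpoint(lambda x: w1(x, c) - w1(x, d), a, b)
            + midpoint(lambda y: w2(b, y) - w2(a, y), c, d))
# interior integral of d1 w2 - d2 w1 = 2x - x = x (independent of y)
interior = (d - c) * midpoint(lambda x: x, a, b)
```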
Step 2. Next we show that the rectangle can be deformed without consequences.
Take a deformed rectangle and call it $D$. If you call the deformation $\varphi: R\to
D$, you can assume $\varphi(\partial R)=\partial D$ and you can work in the
coordinates $y$ given by
$$
(x^1,x^2)=\varphi(y^1,y^2)\, ,\: (y^1,y^2)\in R\, .
$$
In these coordinates
\begin{multline*}
\int _{\partial D}\omega=\int _{\partial R}\langle\omega(y),\frac{\partial }{\partial
y^1}\rangle\, dy^1+\langle\omega(y),\frac{\partial }{\partial
y^2}\rangle\, dy^2=\int _R\Big[ -\frac{\partial }{\partial
y^2}\omega_1(y)+\frac{\partial }{\partial y^1}\omega_2(y)\Big]\, dy^1\wedge
dy^2= \int _D d\omega\, .
\end{multline*}
Again we observe that the deformation $\varphi$ does not need to have values in
$\mathbb{R}^2$, that is, it does not need to preserve flatness.
Step 3. For a general domain $\Omega$, just decompose it into finitely many
smaller pieces $D_j$ of deformed rectangular shape satisfying
$\operatorname{int}(D_i)\cap\operatorname{int}(D_j)=\emptyset$ for $i\neq j$
and which occasionally share pieces of their
boundary. You will notice that boundary pieces are always shared by at most two
neighboring deformed rectangles and are run through (during the boundary
integration) twice
in opposite directions. They therefore cancel out when a sum is taken over all the
pieces and only integration over the boundary of $\Omega$ survives. In short
$$
\int _\Omega d\omega=\sum _{j=1}^N\int _{D_j}d\omega=\sum _{j=1}^N\int _{\partial
D_j}\omega= \int _{\partial \Omega}\omega \, ,
$$
where we of course used step 2 of this proof to obtain the identity in the middle.
Divergence Theorem
Let $\Omega\subset \mathbb{R}^n$ be a bounded set with (piecewise)
smooth boundary $\partial\Omega$ and $f:\Omega\to \mathbb{R}^n$ be a
smooth map (vector field). We denote by $\nu(x)$ the unit outer normal
to $\partial\Omega$ at the point $x\in \partial\Omega$. Then it holds that
$$
\int _\Omega \operatorname{div}(f)\, dx=\int _{\partial \Omega}f\cdot\nu\,
d\sigma_{\partial \Omega}(x)\, .
$$
Define
\begin{equation*}
\omega=\sum_{j=1}^n(-1)^{j+1}f^jdx^1\wedge\cdots\wedge
\widehat{dx^j}\wedge\cdots\wedge dx^n\, ,
\end{equation*}
where the hat means that the particular term is not included in the
wedge product. Then observe that
$$
d\omega=\frac{\partial f^j}{\partial x^j}dx^1\wedge\cdots\wedge
dx^n=\operatorname{div}(f)dx^1\wedge\cdots\wedge dx^n\, .
$$
It can be further verified that
$$
\int _{\partial \Omega}\omega=\int _{\partial \Omega}f\cdot\nu \,
d\sigma_{\partial \Omega}(x)\, ,
$$
in general but we shall only do it for $n=3$. Let
$\Sigma\subset\partial\Omega$ be a piece of the boundary parametrized by
$s(t,u)$ with parameter domain $D$. Then
\begin{multline*}
\int _\Sigma \omega=\int _D f\cdot (s_t\times s_u)\, dtdu=\int _D f\cdot
\underset{=\nu}{\underbrace{\frac{s_t\times s_u}{|s_t\times s_u|}}}
\underset{=d\sigma _{\Sigma}}{\underbrace{|s_t\times s_u|\, dtdu}}
=\int _\Sigma f\cdot\nu\, d\sigma _\Sigma(x)\, .
\end{multline*}
The details behind the first identity are left as an exercise.
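A planar analogue of the theorem is easy to check numerically. The following sketch (the vector field and domain are assumptions, not from the notes) verifies the divergence theorem for $f(x,y)=(x,y)$ on the unit disk, where both sides equal $2\pi$:

```python
# Sketch: divergence theorem for f(x, y) = (x, y) on the unit disk.
# div f = 2, so the volume side is 2 * pi; on the boundary,
# nu = (cos t, sin t) and f.nu = cos^2 t + sin^2 t = 1, so the flux is 2*pi.
import math

n = 2000
dr = 1.0 / n
dt = 2 * math.pi / n
# volume side in polar coordinates: int_0^{2pi} int_0^1 2 * r dr dtheta
volume_side = 2 * math.pi * sum(2 * ((i + 0.5) * dr) * dr for i in range(n))
# flux side: parametrize the boundary by t -> (cos t, sin t), dsigma = dt
flux_side = sum((math.cos((k + 0.5) * dt) ** 2
                 + math.sin((k + 0.5) * dt) ** 2) * dt for k in range(n))
```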
Example
Consider the ellipsoid $D=D(a,b,c)$ from the earlier example, whose boundary
$\partial D$ is the level set $\{r(x)=1\}$ of
$r(x)=\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}$.
First observe that
$$
\nabla r(x)=2\bigl(\frac{x_1}{a^2}, \frac{x_2}{b^2},
\frac{x_3}{c^2}\bigr).
$$
It follows that
$$
\nu(x)=\frac{\nabla
r(x)}{|\nabla r(x)|}=\frac{1}{\sqrt{ \frac{x_1^2}{a^4}+
\frac{x_2^2}{b^4}+\frac{x_3^2}{c^4}}}
\bigl(\frac{x_1}{a^2}, \frac{x_2}{b^2}, \frac{x_3}{c^2}\bigr)
$$
Setting $f(x)=(a^2 x_1, 0, 0)$ yields
$$
\text{div}(f)(x)=a^2 \text{ and } (f\cdot\nu)(x)=
\frac{x_1^2}{\sqrt{\frac{x_1^2}{a^4}+\frac{x_2^2}{b^4}+
\frac{x_3^2}{c^4}}}
$$
Applying the Divergence Theorem we then obtain
$$
\int_{\partial D}\frac{x_1^2}{\sqrt{ \frac{x_1^2}{a^4}+ \frac{x_2^2}{b^4}+
\frac{x_3^2}{c^4}}}\, d\sigma _{\partial
D}(x)=\int_{D}a^2dx=\frac{4\pi}{3}a^3 bc.
$$
Green's Theorem
Let the same assumptions as in the divergence theorem be met and let
$u,v:\Omega\to\mathbb{R}$ be any two smooth functions. Then
$$
\int _\Omega\big[ u\triangle v-v\triangle u\big]\, dx=\int _{\partial
\Omega}\big[ u \partial _\nu v-v \partial _\nu u\big]\,
d\sigma_{\partial \Omega}(x)\, .
$$