Lecture 11. Vector Calculus

Motivation

Definition of Multiple integrals

Given a cell (parallelepiped) $E= [a_1,b_1] \times\cdots\times [a_n,b_n]$ in $\mathbb{R}^n$, a partition $P$ of $E$ consists of partitions $P_1,\dots,P_n$ for the intervals $[a_j,b_j]$ ($j=1,\dots,n$) satisfying $$ P_j=\big\{ (x_{j,i})_{i=0,\dots,m_j}\, \big |\, a_j=x_{j,0} <x_{j,1}<\dots<x_{j,m_j} = b_j\big\} . $$

Definitions (Multiple integrals)

(i) The maximal interval size of a partition $P$ is given by $$ \Delta(P)=\max\big\{ |x_{j,i+1} - x_{j,i}|\, :\, i=0,1,\dots,m_j-1\, ,\: 1\leq j\le n\big\}. $$ (ii) Given a bounded function $f:E\to \mathbb{R}$ we define \begin{gather*} M_{j_1,\dots,j_n} = \sup\big\{ f(x)\, :\, x\in [x_{1, j_1-1}, x_{1, j_1}]\times\cdots\times [x_{n,j_n-1}, x_{n ,j_n}]\big\}\\ m_{j_1,\dots,j_n} = \inf\big\{f(x)\, :\, x\in [x_{1,j_1-1}, x_{1,j_1}]\times\cdots\times [x_{n,j_n-1}, x_{n,j_n}] \big\} \end{gather*} (iii) Lower and upper Riemann sums for $f$ are given by \begin{gather*} L(f,P) =\sum_{j_1=1}^{m_1}\cdots\sum_{j_n=1}^{m_n} m_{j_1,\dots,j_n}\prod_{i=1}^n(x_{i,j_i} - x_{i,j_i-1})\\ U(f,P) =\sum_{j_1=1}^{m_1}\cdots\sum_{j_n=1}^{m_n} M_{j_1,\dots,j_n}\prod_{i=1}^n(x_{i,j_i}-x_{i,j_i-1}) \end{gather*} (iv) Lower and upper Riemann integrals are given by \begin{gather*} L(f) = \lim_{\Delta(P)\to 0} L(f,P)\text{ and }U(f) = \lim_{\Delta(P)\to 0} U(f,P) \end{gather*} (v) The (bounded) function $f$ is called Riemann integrable (on $E$) if and only if $L(f) = U(f)$.

Fubini's Theorem

Let $$ f:E=[a_1,b_1]\times\cdots\times [a_n,b_n]\to \mathbb{R} $$ be integrable. Then $$ \int_E f(x)\, dx =\int_{a_1}^{b_1} \cdots \int_{a_n}^{b_n} f(x_1,...,x_n)\,dx_n\cdots dx_1, $$ where $dx$ is the positively oriented volume element of $\mathbb{R}^n$.

Definitions (Integrability)

Let $D$ be a bounded domain in $\mathbb{R}^n$ and let $f:D\to \mathbb{R}$ be bounded. $f$ is said to be integrable (Riemann integrable) on $D$ if and only if $$ F(x) =\begin{cases} f(x),&x\in D\\0,&x\not\in D \end{cases}$$ is integrable on $[-M,M]^n$ for one (and thus any) $M>0$ such that $D \subset [-M,M]^n$.

Example

Compute $\int_0^1 \int_{\sqrt{y}}^1 e^{x^3} dx dy$.

Discussion
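One way to proceed: $e^{x^3}$ has no elementary antiderivative in $x$, but the region $\{0\le y\le 1,\ \sqrt{y}\le x\le 1\}$ equals $\{0\le x\le 1,\ 0\le y\le x^2\}$, so reversing the order of integration gives $\int_0^1 x^2 e^{x^3}\,dx = \frac{e-1}{3}$. A minimal numeric sanity check of this value (the grid size is an arbitrary choice):

```python
import math

# Midpoint Riemann sum of the swapped-order iterated integral
#   int_0^1 int_0^{x^2} e^{x^3} dy dx.
# The inner integrand does not depend on y, so the inner integral is x^2 * e^{x^3}.
N = 500
h = 1.0 / N
total = 0.0
for i in range(N):
    x = (i + 0.5) * h            # midpoint in x
    total += x**2 * math.exp(x**3) * h

exact = (math.e - 1) / 3
print(total, exact)              # both approximately 0.5728
```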

Change of Variables Formula

Theorem (Change of Variables)

Let $\Phi: D\to G\subset\mathbb{R}^n$ be a differentiable function which is one-to-one and onto, and assume that $\det\Phi'(x)\ne0$ for all $x\in D$. If $f$ is integrable on $G$, then $(f \circ\Phi)\, |\det \Phi '|$ is integrable on $D$ and $$ \int_D f\bigl(\Phi(x)\bigr)\big |\det\Phi'(x)\big |\, dx = \int_G f(y)\, dy. $$

Remark

This is a very useful theorem, since a substantial simplification can often be achieved by introducing suitable new coordinates.

Example (Polar Coordinates)

As an example consider polar coordinates in $\mathbb{R}^2$, which are given by $$ (x,y)=\bigl(r\cos\theta, r\sin\theta\bigr)=:\Phi(r,\theta)\, , $$ so that $$ d(x,y)=dxdy=|\det \Phi'(r,\theta) | dr d\theta=r dr d\theta. $$

Example

Prove $\int_{-\infty}^\infty e^{-x^2} dx = \sqrt{\pi}$.

Discussion
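The classical argument squares the integral and switches to polar coordinates: $\bigl(\int_{-\infty}^\infty e^{-x^2}dx\bigr)^2 = \int_0^{2\pi}\int_0^\infty e^{-r^2}\, r\, dr\, d\theta = \pi$. A numeric sanity check of the resulting value (the truncation point $L$ and grid size are arbitrary choices):

```python
import math

# Numeric check that int_{-inf}^{inf} e^{-x^2} dx = sqrt(pi).
# Truncate to [-L, L]; the tail beyond L = 6 is far below the grid error.
L, N = 6.0, 4000
h = 2 * L / N
total = sum(math.exp(-((-L + (i + 0.5) * h) ** 2)) for i in range(N)) * h

print(total, math.sqrt(math.pi))   # both approximately 1.77245
```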

Example

For $a, b, c>0$ define $$ D(a,b,c) =\big\{(x,y,z)\, :\, \frac{x^2}{a^2} + \frac{y^2}{b^2}+ \frac{z^2}{c^2}<1\big\}. $$ Prove that the volume of $D(a,b,c)$ amounts to $\frac{4 \pi}{3} abc$.

Discussion
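The change of variables $(x,y,z)=(au,bv,cw)$ has Jacobian determinant $abc$ and maps the unit ball onto $D(a,b,c)$, so the volume is $abc$ times the unit-ball volume $\frac{4\pi}{3}$. A Monte Carlo sketch confirming this (the semi-axes, seed, and sample count are arbitrary choices):

```python
import math
import random

# Monte Carlo estimate of Vol{ x^2/a^2 + y^2/b^2 + z^2/c^2 < 1 },
# sampling uniformly in the bounding box [-a,a] x [-b,b] x [-c,c].
a, b, c = 1.0, 2.0, 3.0            # arbitrary test values
random.seed(0)
N = 200_000
hits = 0
for _ in range(N):
    x, y, z = random.uniform(-a, a), random.uniform(-b, b), random.uniform(-c, c)
    if x * x / a**2 + y * y / b**2 + z * z / c**2 < 1:
        hits += 1

vol_mc = 8 * a * b * c * hits / N   # box volume times hit fraction
vol_exact = 4 * math.pi / 3 * a * b * c
print(vol_mc, vol_exact)
```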

Example (Cylindrical coordinates)

In $\mathbb{R}^3$ one can often make use of so-called cylindrical coordinates $(r, \theta, z)$. They are defined through $$ x = r \cos \theta\, ,\: y= r \sin \theta\, ,\: z=z $$ in terms of the standard "rectangular" coordinates $(x,y,z)\in \mathbb{R}^3$. In this case one has that $$ d(x,y,z)=dxdydz=r\, drd\theta dz\, , $$ as follows from $$ \det \Phi'(r, \theta, z)= \begin{vmatrix} \cos\theta & -r \sin \theta & 0 \\ \sin \theta & r \cos \theta & 0 \\ 0 & 0 &1 \end{vmatrix}= \begin{vmatrix} \cos\theta & - r \sin \theta \\ \sin\theta &r \cos \theta \end{vmatrix}=r $$

Example (Spherical coordinates)

Spherical coordinates $(r,\varphi,\theta)$ are given by $$ (x,y,z)=r\, \bigl( \sin(\varphi)\cos(\theta),\sin(\varphi) \sin(\theta),\cos(\varphi)\bigr). $$ In this case it holds that $$ d(x,y,z)=r^2\sin(\varphi)\, drd\varphi d\theta, $$ as is seen by computing the determinant of the Jacobian $$ \begin{bmatrix}\frac{\partial x }{\partial r} & \frac{\partial x}{\partial \varphi} &\frac{\partial x}{\partial \theta} \\ \frac{\partial y }{\partial r}&\frac{\partial y }{\partial \varphi} &\frac{\partial y }{\partial \theta}\\\frac{\partial z}{\partial r} & \frac{\partial z }{\partial \varphi} & \frac{\partial z}{\partial \theta}\end{bmatrix} =\begin{bmatrix}\sin \varphi \cos \theta & r \cos \varphi \cos \theta & -r \sin \varphi \sin \theta\\ \sin \varphi \sin \theta & r \cos \varphi \sin \theta & r \sin \varphi \cos \theta \\ \cos \varphi & -r \sin \varphi & 0\end{bmatrix} $$
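Both Jacobian determinants can be sanity-checked numerically. The sketch below uses central finite differences at an arbitrarily chosen point; the helper `jac_det` is ad hoc, not part of the lecture:

```python
import math

def jac_det(phi, p, h=1e-6):
    """Finite-difference Jacobian determinant of a map phi: R^3 -> R^3 at p."""
    n = 3
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        q_plus = list(p); q_plus[j] += h
        q_minus = list(p); q_minus[j] -= h
        fp, fm = phi(q_plus), phi(q_minus)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    # 3x3 determinant by cofactor expansion along the first row
    return (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
          - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
          + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))

def cylindrical(p):
    r, theta, z = p
    return (r * math.cos(theta), r * math.sin(theta), z)

def spherical(p):
    r, phi_, theta = p
    return (r * math.sin(phi_) * math.cos(theta),
            r * math.sin(phi_) * math.sin(theta),
            r * math.cos(phi_))

print(jac_det(cylindrical, [2.0, 1.3, 0.5]))   # approximately r = 2.0
print(jac_det(spherical, [2.0, 0.7, 1.3]))     # approximately r^2 sin(phi) = 4 sin(0.7)
```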

Line integrals

Definition (Line Integrals)

We will also need to be able to integrate along lines. Towards that end, let $\gamma:[a,b]\to \mathbb{R} ^n$ be a (piecewise smooth) parametrization of a piecewise $C^1$ curve $\Gamma$, i.e. $\Gamma=\gamma\bigl([a,b]\bigr)$. We then define the line integral of a vector field $f:\Gamma\to \mathbb{R}^n$ along the curve $\Gamma$ by $$ \int_\Gamma f(\gamma)\cdot d\gamma = \int_a^b \Big[f_1\bigl(\gamma(t)\bigr)\gamma _1'(t) +\dots+ f_n\bigl(\gamma(t)\bigr) \gamma _n'(t)\Big]\, dt. $$ For a real-valued function $f:\Gamma\to \mathbb{R}$ we define the corresponding line integral by $$ \int _\Gamma f\, d\sigma _\Gamma=\int _a^b f\bigl(\gamma(t)\bigr)\big |\gamma '(t)\big |\, dt, $$ so that, e.g., the length of $\Gamma$ amounts to $\int_a^b \big |\gamma'(t)\big | dt$.
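A direct discretization of both definitions, for the (arbitrarily chosen) circle $\gamma(t)=(R\cos t, R\sin t)$, $t\in[0,2\pi]$, and vector field $f(x,y)=(-y,x)$: the vector line integral evaluates to $2\pi R^2$, and the scalar line integral with $f\equiv 1$ gives the length $2\pi R$.

```python
import math

# Numeric line integrals along gamma(t) = (R cos t, R sin t), t in [0, 2*pi]:
#   vector field f(x, y) = (-y, x):  int_Gamma f . dgamma = 2*pi*R^2
#   scalar f = 1:                    length of Gamma      = 2*pi*R
R, N = 1.5, 2000
h = 2 * math.pi / N
work = 0.0
length = 0.0
for i in range(N):
    t = (i + 0.5) * h
    x, y = R * math.cos(t), R * math.sin(t)
    dx, dy = -R * math.sin(t), R * math.cos(t)   # gamma'(t)
    work += (-y * dx + x * dy) * h               # f_1 gamma_1' + f_2 gamma_2'
    length += math.hypot(dx, dy) * h             # |gamma'(t)|

print(work, 2 * math.pi * R**2)
print(length, 2 * math.pi * R)
```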

Exercise

Prove that the integrals defined above do not depend on the choice of parametrization for the curve $\Gamma$.

Surface integrals

More generally, we can replace lines by surfaces and introduce corresponding surface integrals. Let $S$ be a smooth hypersurface in $\mathbb{R}^n$ and let $f:S\to \mathbb{R}$ be a continuous function.

Question

How can we compute/define $\int_S f(x) d\sigma _S$?
Consider first the case of a surface which happens to be the graph of a function $\varphi:D\subset \mathbb{R} ^{n-1}\to \mathbb{R}$ on a domain $D$. In this case $$ S=\Big\{ \bigl( x_1,\dots,x_{n-1},\varphi(x_1,\dots,x_{n-1})\bigr)\, :\, (x_1,\dots,x_{n-1})\in D\Big\} $$ and we can define \begin{multline*} \int_S f(x) d\sigma(x):=\int_D f\bigl( x_1,\dots,x_{n-1} ,\varphi(x_1,\dots,x_{n-1})\bigr)\,\sqrt{1+|\nabla\varphi(x_1,\dots,x_{n-1})|^2}\, dx_1\cdots dx_{n-1}\, . \end{multline*}
More generally, if one has that $S =\Phi(D)$ for some $$ \Phi= (\phi_1,\dots,\phi_{n}): D\subset\mathbb{R} ^{n-1}\to \mathbb{R}^n, $$ then one can define $$ d\sigma _S(x)=d\sigma_S(x_1,\dots,x_{n-1})=\sqrt{ \sum_{j=1}^n\big[\det (\hat{\phi}_j)'(x)\big]^2}\, dx_1\cdots dx_{n-1} $$ where $\hat{\phi}_j = (\phi_1,\dots,\phi_{j-1}, \phi_{j+1},\dots,\phi_n)$.
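As a sanity check of the second formula, the sketch below parametrizes the unit sphere by $\Phi(\varphi,\theta)=(\sin\varphi\cos\theta,\sin\varphi\sin\theta,\cos\varphi)$ and recovers its area $4\pi$; for this map one can verify by hand that $\sqrt{\sum_j\bigl[\det(\hat{\phi}_j)'\bigr]^2}=\sin\varphi$.

```python
import math

# Area of the unit sphere via the parametrized-surface element
#   d(sigma) = sqrt( sum_j det(J_j)^2 ) dphi dtheta,
# where J_j drops the j-th row of the 3x2 Jacobian of
# Phi(phi, theta) = (sin phi cos theta, sin phi sin theta, cos phi).
def surface_element(phi, theta):
    # rows of the Jacobian: (d/dphi, d/dtheta) of each component of Phi
    J = [(math.cos(phi) * math.cos(theta), -math.sin(phi) * math.sin(theta)),
         (math.cos(phi) * math.sin(theta),  math.sin(phi) * math.cos(theta)),
         (-math.sin(phi),                   0.0)]
    s = 0.0
    for j in range(3):
        rows = [J[i] for i in range(3) if i != j]
        det = rows[0][0] * rows[1][1] - rows[0][1] * rows[1][0]
        s += det * det
    return math.sqrt(s)          # equals sin(phi) for the sphere

N = 400
hp, ht = math.pi / N, 2 * math.pi / N
area = sum(surface_element((i + 0.5) * hp, (k + 0.5) * ht) * hp * ht
           for i in range(N) for k in range(N))
print(area, 4 * math.pi)         # both approximately 12.566
```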

Exercise

Make sense of the first definition above by providing a geometric rationale for it. Then, with that insight in hand, do the same for the second.

Example

Find the area of the portion of the unit sphere $x^2 + y^2 + z^2 = 1$ in $\mathbb{R}^3$ that lies above the plane $z=\sqrt{3}/2$.

Discussion
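One possible route: in spherical coordinates $z=\cos\varphi$, so the condition $z\ge\sqrt{3}/2$ becomes $\varphi\le\pi/6$, and the cap area is $\int_0^{2\pi}\int_0^{\pi/6}\sin\varphi\, d\varphi\, d\theta = 2\pi\bigl(1-\tfrac{\sqrt3}{2}\bigr)$. A numeric check of this value (grid size arbitrary):

```python
import math

# Spherical cap above z = sqrt(3)/2 on the unit sphere: phi ranges over [0, pi/6],
# theta over [0, 2*pi], with surface element sin(phi) dphi dtheta.
N = 2000
h = (math.pi / 6) / N
area = 2 * math.pi * sum(math.sin((i + 0.5) * h) for i in range(N)) * h

print(area, 2 * math.pi * (1 - math.sqrt(3) / 2))   # both approximately 0.8418
```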

Example

Let $\mathbb{B}^n(0,R)\subset \mathbb{R}^n$ be the ball of radius $R$ centered at $0$. Then $$ \int_{\mathbb{B}^n(0,R)} f(x)\, dx= \int_0^R \int_{\mathbb{S}^{n-1}} f(ry)d\sigma_{\mathbb{S}^{n-1}}(y)r^{n-1}\, dr $$ where $\mathbb{S}^{n-1} =\big\{x\in \mathbb{R}^n: |x| = 1\big\}$ and $d\sigma_{\mathbb{S}^{n-1}}$ is the corresponding surface element. In particular it follows that $$ \text{Vol}\bigl(\mathbb{B}^n(0,R)\bigr)=\int_0^R r^{n-1} \int_{\mathbb{S}^{n-1}}\, d\sigma_{\mathbb{S}^{n-1}}\, dr = \sigma_{\mathbb{S}^{n-1}}(\mathbb{S}^{n-1})\int_0^R r^{n-1}\, dr = \sigma_{\mathbb{S}^{n-1}}(\mathbb{S}^{n-1})\frac{R^n}{n}. $$ When $n=3$, $\text{Area}(\mathbb{S}^2) = 4\pi$, so that $\text{Vol}\bigl(\mathbb{B}^3(0,R)\bigr)=\frac{4\pi R^3}{3}$.

Differential Forms

In this section we introduce a powerful coordinate-independent language that will allow for an elementary proof of Stokes' theorem: the language of differential forms. A $0$-form on $\mathbb{R}^n$ is simply a function $f:\mathbb{R} ^n\to \mathbb{R}$. It is clearly an object that can be evaluated at points. Next we define $1$-forms.

Definition ($1$-form)

A $1$-form $\omega$ on $\mathbb{R}^n$ is a map \begin{align*} \omega :\, &\mathbb{R} ^n\to \mathcal{L}(\mathbb{R} ^n,\mathbb{R})\, ,\\ &x\mapsto\omega(x):\mathbb{R}^n\to \mathbb{R} \, ,\:v\mapsto \langle\omega(x),v\rangle. \end{align*}

Remark

Notice that at any $x$, $\omega(x)$ is a linear map defined on vectors $v\in\mathbb{R}^n$. In this sense, it does very much resemble a derivative $Df(x)$ of a function $f$, which is, at any point $x$, the linear map $Df(x):\mathbb{R}^n\to \mathbb{R}$ which appears in the best affine approximation to the function. In short \begin{align*} Df:\, &\mathbb{R}^n\to \mathcal{L}(\mathbb{R}^n,\mathbb{R})\, ,\\ &x\mapsto Df(x):\mathbb{R}^n\to \mathbb{R}\, ,\: v\mapsto Df(x)v=\partial _vf(x). \end{align*}
We shall see that $1$-forms can naturally be integrated along lines. Let now $$ \frac{\partial}{\partial {x^1}},\dots, \frac{\partial}{\partial{x^n}}\text{ and }dx^1,\dots,dx^n\, , $$ be the standard basis of $\mathbb{R}^n$ and its dual basis, respectively, i.e. $$ \langle dx^i,\frac{\partial}{\partial {x^j}}\rangle=\delta ^i_j\text{ for }i,j=1,\dots,n\, . $$ Then for any $x\in \mathbb{R}^n$ and any vector $v\in \mathbb{R}^n$ we have $$ \langle\omega(x),v\rangle=\langle\omega(x),v^i \frac{\partial}{\partial{x^i}}\rangle=\langle \omega(x), \frac{\partial}{\partial{x^i}}\rangle dx^i(v)=: \big[\omega_i(x)dx^i\big](v), $$ since $v=v^i\frac{\partial}{\partial {x^i}}$ and $v^i=dx^i(v)$. It follows that $$ \omega = \omega _i\, dx^i\, , $$ for functions $\omega _i:\mathbb{R}^n\to \mathbb{R}$.

Remark

Here and in the sequel we shall frequently make use of the so-called summation convention, whereby indices appearing twice, once as a superscript and once as a subscript, are to be summed over.

Definitions (Differential)

If $f:\mathbb{R}^n\to \mathbb{R}$ is a differentiable function, then we define its differential to be the $1$-form $$ df=\frac{\partial f}{\partial {x^j}}\, dx^j $$ and observe that, for any fixed $x\in \mathbb{R}^n$, it holds that $$ \langle df(x),v\rangle=Df(x)v=\partial _vf(x)\, ,\: v\in \mathbb{R}^n\, . $$

Remark

It is natural to expect that $\omega$ be independent of the coordinates used to describe it as it is, at each point, a linear form defined on vectors, which are themselves coordinate independent objects.

Discussion


Next we would like to integrate a $1$-form $\omega$ along a curve $\Gamma$ to obtain its integral $\int _\Gamma \omega$.

Definition (Integral of a $1$-Form)

Working in the coordinates $x^1,\dots, x^n$ we describe the curve $\Gamma$ by $$ \Gamma=\big\{ \gamma (t)=\bigl( \gamma^1(t),\dots,\gamma^n(t)\bigr)\, :\, t\in [a,b]\big\}, $$ and then set $$ \int _\Gamma \omega =\int _a^b\omega_i\bigl(\gamma(t)\bigr) \underset{d\gamma ^i(t)}{\underbrace{\dot\gamma ^i(t)\, dt}} $$

Exercise

Prove that the integral just defined remains unchanged if an orientation preserving reparametrization of $\Gamma$ is used and if the underlying coordinate system chosen for $\mathbb{R}^n$ is changed.

Theorem (Stokes' Theorem 1)

Notice that it always holds that $$ f(Q)-f(P)=\int _\Gamma df\, , $$ if $P$ and $Q$ are the initial and final points of $\Gamma$, respectively. You may think of this as the fundamental theorem of calculus "along a (curved) line". It is also a special case of Stokes' theorem.
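A quick numeric illustration for the (arbitrarily chosen) function $f(x,y)=x^2y$ and curve $\gamma(t)=(t,t^2)$, $t\in[0,1]$: here $df=2xy\,dx+x^2\,dy$, and the line integral should equal $f(1,1)-f(0,0)=1$.

```python
import math

# Check f(Q) - f(P) = int_Gamma df for f(x, y) = x^2 * y along
# gamma(t) = (t, t^2), t in [0, 1]; df = 2xy dx + x^2 dy.
N = 1000
h = 1.0 / N
total = 0.0
for i in range(N):
    t = (i + 0.5) * h
    x, y = t, t * t
    xdot, ydot = 1.0, 2 * t                      # gamma'(t)
    total += (2 * x * y * xdot + x * x * ydot) * h

f = lambda x, y: x * x * y
print(total, f(1, 1) - f(0, 0))                  # both approximately 1.0
```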

Next we would like to generalize the concept of form to include objects that can be integrated on surfaces and higher dimensional sets.

Definitions ($2$-forms)

A $2$-form is a map \begin{align*} \omega :\, &\mathbb{R} ^n\to \mathcal{L}^2_a(\mathbb{R} ^n,\mathbb{R})\, ,\\ &x\mapsto\omega(x):\mathbb{R}^n\times \mathbb{R}^n\to \mathbb{R} \, ,\: (v,w)\mapsto\omega(x)\big[v,w\bigr]\, , \end{align*} where $\mathcal{L}^2_a(\mathbb{R} ^n,\mathbb{R})$ denotes the vector space of alternating bilinear forms, i.e. of forms $a$ satisfying \begin{multline*} a[v+\lambda\bar v,w]=a[v,w]+\lambda\, a[\bar v,w] \text{ and }a[w,v]=-a[v,w],\quad v,w\in \mathbb{R}^n\, ,\: \lambda\in \mathbb{R}. \end{multline*}

As we did with $1$-forms, we observe that \begin{eqnarray*} \omega(x)[v,w]&=&\omega(x)\big[v^{j_1}\frac{\partial}{\partial x^{j_1}},w^{j_2}\frac{\partial}{\partial x^{j_2}}\big]=\omega _{j_1,j_2}(x)v^{j_1}w^{j_2}\\ &=&\sum _{j_1=1}^n\sum _{j_2=j_1+1}^n\omega _{j_1,j_2}(x)\bigl( v^{j_1}w^{j_2}-v^{j_2}w^{j_1}\bigr)=\sum _{j_1< j_2}\omega _{j_1,j_2}(x)dx^{j_1}\wedge dx^{j_2}[v,w]\, , \end{eqnarray*} and thus $\omega=\sum _{j_1 < j_2} \omega _{j_1,j_2}\, dx^{j_1}\wedge dx^{j_2}$ with $\omega _{j_1,j_2}(x)=\omega(x)\big[\frac{\partial}{\partial x^{j_1}},\frac{\partial}{\partial x^{j_2}}\big]$. Here we implicitly introduced the notation $dx^{j_1}\wedge dx^{j_2}$ for the alternating $2$-form given by $$ dx^{j_1}\wedge dx^{j_2}[v,w]=v^{j_1}w^{j_2}-v^{j_2}w^{j_1}\, ,\: v,w\in \mathbb{R}^n\, . $$ A $2$-form is also independent of the chosen coordinate system.

Definition

Two-forms can be integrated over $2$-dimensional surfaces in a manner analogous to how $1$-forms were integrated along lines. If $\Sigma\subset \mathbb{R}^n$ is a two-dimensional surface given by $$ \Sigma=\big\{ \sigma(t,s)\, :\, t\in[a,b]\, ,\: s\in[c,d]\big\}\, , $$ then we define $$ \int _\Sigma \omega=\int _a^b\int _c^d \sum_{j_1 < j_2}\omega_{j_1,j_2}\bigl( \sigma(t,s)\bigr)\Big[\frac{\partial\sigma ^{j_1}}{\partial t}\frac{\partial\sigma ^{j_2}}{\partial s}-\frac{\partial\sigma ^{j_1}}{\partial s}\frac{\partial\sigma ^{j_2}}{\partial t}\Big]\, dt\wedge ds\, . $$ The formula follows from \begin{gather*} d\sigma ^{j_1}(t,s)=\frac{\partial\sigma ^{j_1}}{\partial t}\, dt+\frac{\partial\sigma ^{j_1}}{\partial s}\, ds\, ,\\ d\sigma ^{j_2}(t,s)=\frac{\partial\sigma ^{j_2}}{\partial t}\, dt+\frac{\partial\sigma ^{j_2}}{\partial s}\, ds\text{ and}\\ d\sigma ^{j_1}(t,s)\wedge d\sigma ^{j_2}(t,s)=\Big[\frac{\partial\sigma ^{j_1}}{\partial t}\frac{\partial\sigma^{j_2}}{\partial s}-\frac{\partial\sigma ^{j_1}}{\partial s}\frac{\partial\sigma ^{j_2}}{\partial t}\Big]\, dt\wedge ds\, . \end{gather*}
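A sketch of the definition in action: integrating the $2$-form $\omega=z\,dx\wedge dy$ (an arbitrary choice) over the unit sphere with the spherical parametrization $\sigma(t,s)=(\sin t\cos s,\sin t\sin s,\cos t)$. The bracket reduces to $\sin t\cos t$, so the integral is $2\pi\int_0^\pi\sin t\cos^2 t\, dt = \frac{4\pi}{3}$.

```python
import math

# Integral of the 2-form omega = z dx ^ dy over the unit sphere, parametrized by
# sigma(t, s) = (sin t cos s, sin t sin s, cos t), t in [0, pi], s in [0, 2*pi].
# Only the (j1, j2) = (1, 2) coefficient is nonzero.
N = 600
ht, hs = math.pi / N, 2 * math.pi / N
total = 0.0
for i in range(N):
    t = (i + 0.5) * ht
    for k in range(N):
        s = (k + 0.5) * hs
        d1_dt = math.cos(t) * math.cos(s)    # d sigma^1 / dt
        d1_ds = -math.sin(t) * math.sin(s)   # d sigma^1 / ds
        d2_dt = math.cos(t) * math.sin(s)    # d sigma^2 / dt
        d2_ds = math.sin(t) * math.cos(s)    # d sigma^2 / ds
        z = math.cos(t)                      # omega_{1,2} evaluated on sigma
        total += z * (d1_dt * d2_ds - d1_ds * d2_dt) * ht * hs

print(total, 4 * math.pi / 3)                # both approximately 4.18879
```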

Remark

This definition is independent of the chosen coordinates and of the (orientation preserving) parametrization. Also observe that the $d$ operation can be extended to $1$-forms by \begin{multline*} d\omega=d\bigl( \omega_i\, dx^i\bigr)=d\omega _i\wedge dx^i=\Big[\frac{\partial \omega _i}{\partial x^j} \, dx^j\Big]\wedge dx^i =\sum _{i,j}\frac{\partial \omega _i}{\partial x^j} \, dx^j \wedge dx^i=\sum _{i < j}\Big[ \frac{\partial \omega _j}{\partial x^i}- \frac{\partial \omega _i}{\partial x^j} \Big]\, dx^i\wedge dx^j\, . \end{multline*} Continuing along these lines we can introduce $k$-forms up to order $n$ which can be integrated over $k$-dimensional subsets (submanifolds, to be precise) of $\mathbb{R}^n$. In a coordinate representation they take the form $$ \omega=\omega _{j_1,j_2,\dots,j_k}\, dx^{j_1}\wedge\cdots\wedge dx^{j_k}\, . $$

The Theorems of Vector Calculus

We now have enough language to formulate and prove Stokes' theorem.

Theorem (Stokes)

Let $\Omega\subset \mathbb{R}^n$ be a $k$-dimensional manifold with $(k-1)$-dimensional smooth boundary $\partial\Omega$ and let $\omega$ be any $(k-1)$-form. Then it holds $$ \int _\Omega d\omega=\int _{\partial\Omega}\omega\, . $$

Remark

One obtains special cases when $k=n$ and $\Omega\subset \mathbb{R}^n$ is open and bounded with smooth boundary, and when $k=1$ and $\Omega$ is a curve, whose boundary consists of its two endpoints.
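A numeric illustration of the case $k=n=2$ with the (arbitrarily chosen) $1$-form $\omega=x\,dy$ on the unit disk: $d\omega=dx\wedge dy$, so both sides of Stokes' theorem should equal the disk's area $\pi$.

```python
import math

# Stokes' theorem for n = k = 2 with omega = x dy on the unit disk:
# d(omega) = dx ^ dy, so the interior integral is the disk's area, pi,
# and the boundary integral over the unit circle must match it.
N = 2000
h = 2 * math.pi / N
boundary = sum(math.cos((i + 0.5) * h) ** 2 * h
               for i in range(N))    # x(t) * y'(t) dt with x = cos t, y = sin t

# interior side in polar coordinates: int int 1 * r dr dtheta over the disk
Nr = 1000
hr = 1.0 / Nr
interior = 2 * math.pi * sum(((j + 0.5) * hr) * hr for j in range(Nr))

print(boundary, interior, math.pi)   # all approximately 3.14159
```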

"Proof" of Stokes' Theorem

Divergence Theorem

Let $\Omega\subset \mathbb{R}^n$ be a bounded set with (piecewise) smooth boundary $\partial\Omega$ and $f:\Omega\to \mathbb{R}^n$ be a smooth map (vector field). We denote by $\nu(x)$ the unit outer normal to $\partial\Omega$ at the point $x\in \partial\Omega$. Then it holds that $$ \int _\Omega \operatorname{div}(f)\, dx=\int _{\partial \Omega}f\cdot\nu\, d\sigma_{\partial \Omega}(x)\, . $$

Proof

Example

Let $D:=D(a,b,c)=\big\{x\in \mathbb{R}^3\, :\, r(x):=\frac{x_1^2}{a^2} + \frac{x_2^2}{b^2} + \frac{x_3^2}{c^2} -1 <0\big\}$ and evaluate $$ \int_{\partial D} \frac{x_1^2}{\sqrt{ \frac{x_1^2}{a^4}+ \frac{x_2^2}{b^4}+\frac{x_3^2}{c^4}}}\, d\sigma _{\partial D}(x) $$

Discussion

Green's Theorem

Let the same assumptions as in the divergence theorem be met and let $u,v:\Omega\to\mathbb{R}$ be any two smooth functions. Then $$ \int _\Omega\big[ u\triangle v-v\triangle u\big]\, dx=\int _{\partial \Omega}\big[ u \partial _\nu v-v \partial _\nu u\big]\, d\sigma_{\partial \Omega}(x)\, . $$
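A numeric sketch on the unit disk with the (arbitrarily chosen) functions $u=e^x$ and $v\equiv1$: then $\triangle v=0$ and $\triangle u=e^x$, so the identity reduces to $-\int_\Omega e^x\, dx = -\int_{\partial \Omega}\partial_\nu e^x\, d\sigma$.

```python
import math

# Green's identity on the unit disk with u(x, y) = e^x and v = 1:
# the left side becomes -int_D e^x dA (computed in polar coordinates),
# the right side -int_{|x|=1} d/dnu (e^x) dsigma.
Nr, Nt = 800, 800
hr, ht = 1.0 / Nr, 2 * math.pi / Nt

lhs = 0.0
for j in range(Nr):
    r = (j + 0.5) * hr
    for i in range(Nt):
        th = (i + 0.5) * ht
        lhs -= math.exp(r * math.cos(th)) * r * hr * ht   # -e^x with area element r dr dtheta

rhs = 0.0
for i in range(Nt):
    th = (i + 0.5) * ht
    # outward normal on the unit circle is (cos th, sin th);
    # grad(e^x) = (e^x, 0), so the normal derivative is e^{cos th} * cos th
    rhs -= math.exp(math.cos(th)) * math.cos(th) * ht

print(lhs, rhs)    # both approximately -3.55
```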

Discussion