Stokes' theorem

Theorem in vector calculus

An illustration of Stokes' theorem, with surface $\Sigma$, its boundary $\partial\Sigma$ and the normal vector $\mathbf{n}$. The direction of positive circulation of the bounding contour $\partial\Sigma$ and the direction $\mathbf{n}$ of positive flux through the surface $\Sigma$ are related by the right-hand rule: when the fingers of the right hand curl along $\partial\Sigma$, the thumb points along $\mathbf{n}$.

Stokes' theorem,[1] also known as the Kelvin–Stokes theorem[2][3] after Lord Kelvin and George Stokes, the fundamental theorem for curls, or simply the curl theorem,[4] is a theorem in vector calculus on $\mathbb{R}^3$. Given a vector field, the theorem relates the integral of the curl of the vector field over some surface to the line integral of the vector field around the boundary of the surface. The classical theorem of Stokes can be stated in one sentence:

The line integral of a vector field over a loop is equal to the surface integral of its curl over the enclosed surface.

Stokes' theorem is a special case of the generalized Stokes theorem.[5][6] In particular, a vector field on $\mathbb{R}^3$ can be considered as a 1-form, in which case its curl is its exterior derivative, a 2-form.

Let $\Sigma$ be a smooth oriented surface in $\mathbb{R}^3$ with boundary $\partial\Sigma \equiv \Gamma$. If a vector field $\mathbf{F}(x,y,z) = (F_x(x,y,z), F_y(x,y,z), F_z(x,y,z))$ is defined and has continuous first-order partial derivatives in a region containing $\Sigma$, then

$$\iint_\Sigma (\nabla\times\mathbf{F})\cdot\mathrm{d}\mathbf{\Sigma} = \oint_{\partial\Sigma} \mathbf{F}\cdot\mathrm{d}\mathbf{\Gamma}.$$

More explicitly, the equality says that

$$\begin{aligned}
&\iint_\Sigma \left( \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\mathrm{d}y\,\mathrm{d}z + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right)\mathrm{d}z\,\mathrm{d}x + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\mathrm{d}x\,\mathrm{d}y \right) \\
&\quad = \oint_{\partial\Sigma} \bigl( F_x\,\mathrm{d}x + F_y\,\mathrm{d}y + F_z\,\mathrm{d}z \bigr).
\end{aligned}$$
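As a quick numerical sanity check (an illustration added here, not drawn from the article's sources), the following NumPy sketch evaluates both sides of the identity for the hypothetical choice $\mathbf{F} = (y, z, x)$ over the upper unit hemisphere, whose boundary is the unit circle in the plane $z = 0$ traversed counterclockwise as seen from above. For this field $\nabla\times\mathbf{F} = (-1, -1, -1)$, and both integrals come out to approximately $-\pi$.

```python
import numpy as np

# Flux of curl F through the upper unit hemisphere, parametrized by
# psi(u, v) = (sin u cos v, sin u sin v, cos u) on [0, pi/2] x [0, 2*pi].
# For F = (y, z, x), curl F = (-1, -1, -1).
n = 500
du, dv = (np.pi / 2) / n, (2 * np.pi) / n
u = (np.arange(n) + 0.5) * du                  # midpoint rule in u
v = (np.arange(n) + 0.5) * dv                  # midpoint rule in v
U, V = np.meshgrid(u, v, indexing="ij")
psi_u = np.stack([np.cos(U) * np.cos(V), np.cos(U) * np.sin(V), -np.sin(U)])
psi_v = np.stack([-np.sin(U) * np.sin(V), np.sin(U) * np.cos(V), np.zeros_like(U)])
normal = np.cross(psi_u, psi_v, axis=0)        # psi_u x psi_v, the outward normal
curl_F = np.array([-1.0, -1.0, -1.0])
flux = np.tensordot(curl_F, normal, axes=1).sum() * du * dv

# Circulation of F around the boundary circle (cos t, sin t, 0), where
# F . dr = y dx = sin(t) * (-sin(t)) dt.
m = 100_000
t = (np.arange(m) + 0.5) * (2 * np.pi / m)
circulation = np.sum(-np.sin(t) ** 2) * (2 * np.pi / m)

print(flux, circulation)                       # both approximately -pi
```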

The main challenge in a precise statement of Stokes' theorem is in defining the notion of a boundary. Surfaces such as the Koch snowflake, for example, are well known not to exhibit a Riemann-integrable boundary, and the notion of surface measure in Lebesgue theory cannot be defined for a non-Lipschitz surface. One (advanced) technique is to pass to a weak formulation and then apply the machinery of geometric measure theory; for that approach see the coarea formula. In this article, we instead use a more elementary definition, based on the fact that a boundary can be discerned for full-dimensional subsets of $\mathbb{R}^2$.

A more detailed statement will be given for subsequent discussions. Let $\gamma: [a,b] \to \mathbb{R}^2$ be a piecewise smooth Jordan plane curve. The Jordan curve theorem implies that $\gamma$ divides $\mathbb{R}^2$ into two components, a compact one and another that is non-compact. Let $D$ denote the compact part; then $D$ is bounded by $\gamma$. It now suffices to transfer this notion of boundary along a continuous map to our surface in $\mathbb{R}^3$. But we already have such a map: the parametrization of $\Sigma$.

Suppose $\psi: D \to \mathbb{R}^3$ is piecewise smooth in a neighborhood of $D$, with $\Sigma = \psi(D)$.[note 1] If $\Gamma$ is the space curve defined by $\Gamma(t) = \psi(\gamma(t))$,[note 2] then we call $\Gamma$ the boundary of $\Sigma$, written $\partial\Sigma$.

With the above notation, if $\mathbf{F}$ is any smooth vector field on $\mathbb{R}^3$, then[7][8]

$$\oint_{\partial\Sigma} \mathbf{F}\cdot\mathrm{d}\mathbf{\Gamma} = \iint_\Sigma \nabla\times\mathbf{F}\cdot\mathrm{d}\mathbf{\Sigma}.$$

Here, the "$\cdot$" represents the dot product in $\mathbb{R}^3$.

Special case of a more general theorem

Stokes' theorem can be viewed as a special case of the following identity:[9]

$$\oint_{\partial\Sigma} (\mathbf{F}\cdot\mathrm{d}\mathbf{\Gamma})\,\mathbf{g} = \iint_\Sigma \left[ \mathrm{d}\mathbf{\Sigma} \cdot \left( \nabla\times\mathbf{F} - \mathbf{F}\times\nabla \right) \right] \mathbf{g},$$

where $\mathbf{g}$ is any smooth vector or scalar field in $\mathbb{R}^3$. When $\mathbf{g}$ is a uniform scalar field, the standard Stokes' theorem is recovered.
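For a scalar field $g$, this identity can be checked by applying the classical theorem to $g\mathbf{F}$ (a short derivation added here for clarity; it is a sketch, not the cited source's argument): since $\nabla\times(g\mathbf{F}) = g\,\nabla\times\mathbf{F} + \nabla g\times\mathbf{F}$,

$$\oint_{\partial\Sigma}(\mathbf{F}\cdot\mathrm{d}\mathbf{\Gamma})\,g = \iint_\Sigma \nabla\times(g\mathbf{F})\cdot\mathrm{d}\mathbf{\Sigma} = \iint_\Sigma \bigl[\mathrm{d}\mathbf{\Sigma}\cdot(\nabla\times\mathbf{F} - \mathbf{F}\times\nabla)\bigr]\,g,$$

using $\mathrm{d}\mathbf{\Sigma}\cdot(\nabla g\times\mathbf{F}) = -\,\mathrm{d}\mathbf{\Sigma}\cdot(\mathbf{F}\times\nabla g)$. For constant $g$ the gradient term vanishes and Stokes' theorem is recovered.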

The proof of the theorem consists of four steps. We assume Green's theorem, so what remains is to reduce the complicated three-dimensional problem (Stokes' theorem) to a more elementary two-dimensional one (Green's theorem).[10] When proving this theorem, mathematicians normally deduce it as a special case of a more general result, stated in terms of differential forms and proved using more sophisticated machinery. While powerful, these techniques require substantial background, so the proof below avoids them and does not presuppose any knowledge beyond familiarity with basic vector calculus and linear algebra.[8] At the end of this section, a short alternative proof of Stokes' theorem is given, as a corollary of the generalized Stokes theorem.

First step of the elementary proof (parametrization of the integral)

As in § Theorem, we reduce the dimension by using the natural parametrization of the surface. Let $\psi$ and $\gamma$ be as in that section, and note that by change of variables

$$\oint_{\partial\Sigma} \mathbf{F}(\mathbf{x})\cdot\mathrm{d}\mathbf{\Gamma} = \oint_\gamma \mathbf{F}(\psi(\gamma))\cdot\mathrm{d}\psi(\gamma) = \oint_\gamma \mathbf{F}(\psi(\mathbf{y}))\cdot J_\mathbf{y}(\psi)\,\mathrm{d}\gamma,$$

where $J_\mathbf{y}(\psi)$ stands for the Jacobian matrix of $\psi$ at $\mathbf{y} = \gamma(t)$.

Now let $\{\mathbf{e}_u, \mathbf{e}_v\}$ be an orthonormal basis in the coordinate directions of $\mathbb{R}^2$.[note 3]

Recognizing that the columns of $J_\mathbf{y}(\psi)$ are precisely the partial derivatives of $\psi$ at $\mathbf{y}$, we can expand the previous equation in coordinates as

$$\begin{aligned}
\oint_{\partial\Sigma} \mathbf{F}(\mathbf{x})\cdot\mathrm{d}\mathbf{\Gamma}
&= \oint_\gamma \mathbf{F}(\psi(\mathbf{y})) J_\mathbf{y}(\psi)\mathbf{e}_u(\mathbf{e}_u\cdot\mathrm{d}\mathbf{y}) + \mathbf{F}(\psi(\mathbf{y})) J_\mathbf{y}(\psi)\mathbf{e}_v(\mathbf{e}_v\cdot\mathrm{d}\mathbf{y}) \\
&= \oint_\gamma \left( \left( \mathbf{F}(\psi(\mathbf{y}))\cdot\frac{\partial\psi}{\partial u}(\mathbf{y}) \right)\mathbf{e}_u + \left( \mathbf{F}(\psi(\mathbf{y}))\cdot\frac{\partial\psi}{\partial v}(\mathbf{y}) \right)\mathbf{e}_v \right)\cdot\mathrm{d}\mathbf{y}
\end{aligned}$$

Second step of the elementary proof (defining the pullback)

The previous step suggests we define the function

$$\mathbf{P}(u,v) = \left( \mathbf{F}(\psi(u,v))\cdot\frac{\partial\psi}{\partial u}(u,v) \right)\mathbf{e}_u + \left( \mathbf{F}(\psi(u,v))\cdot\frac{\partial\psi}{\partial v}(u,v) \right)\mathbf{e}_v.$$

Now, if the scalar-valued functions $P_u$ and $P_v$ are defined as follows,

$$P_u(u,v) = \mathbf{F}(\psi(u,v))\cdot\frac{\partial\psi}{\partial u}(u,v), \qquad P_v(u,v) = \mathbf{F}(\psi(u,v))\cdot\frac{\partial\psi}{\partial v}(u,v),$$

then

$$\mathbf{P}(u,v) = P_u(u,v)\,\mathbf{e}_u + P_v(u,v)\,\mathbf{e}_v.$$

This is the pullback of $\mathbf{F}$ along $\psi$, and, by the above, it satisfies

$$\oint_{\partial\Sigma} \mathbf{F}(\mathbf{x})\cdot\mathrm{d}\mathbf{l} = \oint_\gamma \mathbf{P}(\mathbf{y})\cdot\mathrm{d}\mathbf{l} = \oint_\gamma \bigl( P_u(u,v)\,\mathbf{e}_u + P_v(u,v)\,\mathbf{e}_v \bigr)\cdot\mathrm{d}\mathbf{l}.$$
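For a concrete feel for the pullback, the following SymPy sketch (added here; the paraboloid patch $\psi(u,v) = (u, v, u^2+v^2)$ and the field $\mathbf{F} = (y, z, x)$ are hypothetical choices for illustration) computes $P_u$ and $P_v$ symbolically:

```python
import sympy as sp

u, v = sp.symbols("u v", real=True)
psi = sp.Matrix([u, v, u**2 + v**2])        # a paraboloid patch psi(u, v)
x, y, z = psi                               # coordinates along the surface
F = sp.Matrix([y, z, x])                    # F = (y, z, x) evaluated at psi(u, v)

P_u = sp.expand(F.dot(sp.diff(psi, u)))     # F(psi) . dpsi/du
P_v = sp.expand(F.dot(sp.diff(psi, v)))     # F(psi) . dpsi/dv
print(P_u)                                  # 2*u**2 + v
print(P_v)                                  # u**2 + 2*u*v + v**2
```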

We have successfully reduced one side of Stokes' theorem to a 2-dimensional formula; we now turn to the other side.

Third step of the elementary proof (second equation)

First, calculate the partial derivatives appearing in Green's theorem, via the product rule:

$$\begin{aligned}
\frac{\partial P_u}{\partial v} &= \frac{\partial (\mathbf{F}\circ\psi)}{\partial v}\cdot\frac{\partial\psi}{\partial u} + (\mathbf{F}\circ\psi)\cdot\frac{\partial^2\psi}{\partial v\,\partial u} \\
\frac{\partial P_v}{\partial u} &= \frac{\partial (\mathbf{F}\circ\psi)}{\partial u}\cdot\frac{\partial\psi}{\partial v} + (\mathbf{F}\circ\psi)\cdot\frac{\partial^2\psi}{\partial u\,\partial v}
\end{aligned}$$

Conveniently, the second term vanishes in the difference, by equality of mixed partials. So,[note 4]

$$\begin{aligned}
\frac{\partial P_v}{\partial u} - \frac{\partial P_u}{\partial v}
&= \frac{\partial (\mathbf{F}\circ\psi)}{\partial u}\cdot\frac{\partial\psi}{\partial v} - \frac{\partial (\mathbf{F}\circ\psi)}{\partial v}\cdot\frac{\partial\psi}{\partial u} \\
&= \frac{\partial\psi}{\partial v}\cdot(J_{\psi(u,v)}\mathbf{F})\frac{\partial\psi}{\partial u} - \frac{\partial\psi}{\partial u}\cdot(J_{\psi(u,v)}\mathbf{F})\frac{\partial\psi}{\partial v} && \text{(chain rule)} \\
&= \frac{\partial\psi}{\partial v}\cdot\left( J_{\psi(u,v)}\mathbf{F} - (J_{\psi(u,v)}\mathbf{F})^{\mathsf{T}} \right)\frac{\partial\psi}{\partial u}
\end{aligned}$$

But now consider the matrix in that quadratic form, that is, $J_{\psi(u,v)}\mathbf{F} - (J_{\psi(u,v)}\mathbf{F})^{\mathsf{T}}$. We claim this matrix in fact describes a cross product. Here the superscript "$\mathsf{T}$" denotes the transpose of a matrix.

To be precise, let $A = (A_{ij})_{ij}$ be an arbitrary $3\times 3$ matrix and let

$$\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} A_{32}-A_{23} \\ A_{13}-A_{31} \\ A_{21}-A_{12} \end{bmatrix}.$$

Note that $\mathbf{x} \mapsto \mathbf{a}\times\mathbf{x}$ is linear, so it is determined by its action on basis elements. But by direct calculation

$$\begin{aligned}
(A - A^{\mathsf{T}})\mathbf{e}_1 &= \begin{bmatrix} 0 \\ a_3 \\ -a_2 \end{bmatrix} = \mathbf{a}\times\mathbf{e}_1 \\
(A - A^{\mathsf{T}})\mathbf{e}_2 &= \begin{bmatrix} -a_3 \\ 0 \\ a_1 \end{bmatrix} = \mathbf{a}\times\mathbf{e}_2 \\
(A - A^{\mathsf{T}})\mathbf{e}_3 &= \begin{bmatrix} a_2 \\ -a_1 \\ 0 \end{bmatrix} = \mathbf{a}\times\mathbf{e}_3
\end{aligned}$$

Here, $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ represents an orthonormal basis in the coordinate directions of $\mathbb{R}^3$.[note 5]

Thus $(A - A^{\mathsf{T}})\mathbf{x} = \mathbf{a}\times\mathbf{x}$ for any $\mathbf{x}$.
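A quick numerical spot check of this identity (an added illustration with arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))            # an arbitrary 3 x 3 matrix
x = rng.standard_normal(3)                 # an arbitrary vector

a = np.array([A[2, 1] - A[1, 2],           # a1 = A32 - A23
              A[0, 2] - A[2, 0],           # a2 = A13 - A31
              A[1, 0] - A[0, 1]])          # a3 = A21 - A12

# The skew-symmetric part of A acts as the cross product with a.
assert np.allclose((A - A.T) @ x, np.cross(a, x))
```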

Substituting $J_{\psi(u,v)}\mathbf{F}$ for $A$, we obtain

$$\left( J_{\psi(u,v)}\mathbf{F} - (J_{\psi(u,v)}\mathbf{F})^{\mathsf{T}} \right)\mathbf{x} = (\nabla\times\mathbf{F})\times\mathbf{x}, \quad \text{for all } \mathbf{x}\in\mathbb{R}^3.$$

We can now recognize the difference of partials as a (scalar) triple product:

$$\frac{\partial P_v}{\partial u} - \frac{\partial P_u}{\partial v} = \frac{\partial\psi}{\partial v}\cdot(\nabla\times\mathbf{F})\times\frac{\partial\psi}{\partial u} = (\nabla\times\mathbf{F})\cdot\frac{\partial\psi}{\partial u}\times\frac{\partial\psi}{\partial v}$$

On the other hand, the definition of a surface integral also includes a triple product, the very same one:

$$\iint_\Sigma (\nabla\times\mathbf{F})\cdot\mathrm{d}\mathbf{\Sigma} = \iint_D (\nabla\times\mathbf{F})(\psi(u,v))\cdot\frac{\partial\psi}{\partial u}(u,v)\times\frac{\partial\psi}{\partial v}(u,v)\,\mathrm{d}u\,\mathrm{d}v$$

So, we obtain

$$\iint_\Sigma (\nabla\times\mathbf{F})\cdot\mathrm{d}\mathbf{\Sigma} = \iint_D \left( \frac{\partial P_v}{\partial u} - \frac{\partial P_u}{\partial v} \right)\mathrm{d}u\,\mathrm{d}v$$
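This equality can also be verified symbolically; continuing the SymPy sketch from the second step (same hypothetical patch and field, added here as an illustration), the Green's-theorem integrand agrees with the Stokes integrand:

```python
import sympy as sp

u, v = sp.symbols("u v", real=True)
X, Y, Z = sp.symbols("x y z", real=True)
psi = sp.Matrix([u, v, u**2 + v**2])                    # paraboloid patch
F = sp.Matrix([Y, Z, X])                                # F(x, y, z) = (y, z, x)

curl_F = sp.Matrix([sp.diff(F[2], Y) - sp.diff(F[1], Z),
                    sp.diff(F[0], Z) - sp.diff(F[2], X),
                    sp.diff(F[1], X) - sp.diff(F[0], Y)])

on_psi = dict(zip((X, Y, Z), psi))                      # restrict to the surface
F_psi = F.subs(on_psi)
P_u = F_psi.dot(sp.diff(psi, u))                        # pullback components
P_v = F_psi.dot(sp.diff(psi, v))

lhs = sp.simplify(sp.diff(P_v, u) - sp.diff(P_u, v))    # Green integrand
rhs = sp.simplify(curl_F.subs(on_psi)
                        .dot(sp.diff(psi, u).cross(sp.diff(psi, v))))
assert sp.simplify(lhs - rhs) == 0                      # the integrands agree
```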

Fourth step of the elementary proof (reduction to Green's theorem)

Combining the second and third steps, and then applying Green's theorem completes the proof. Green's theorem asserts the following: for any region $D$ bounded by a closed Jordan curve $\gamma$ and any two scalar-valued smooth functions $P_u(u,v)$ and $P_v(u,v)$ defined on $D$,

$$\oint_\gamma \bigl( P_u(u,v)\,\mathbf{e}_u + P_v(u,v)\,\mathbf{e}_v \bigr)\cdot\mathrm{d}\mathbf{l} = \iint_D \left( \frac{\partial P_v}{\partial u} - \frac{\partial P_u}{\partial v} \right)\mathrm{d}u\,\mathrm{d}v$$

We can substitute the conclusion of the second step into the left-hand side of Green's theorem above, and the conclusion of the third step into the right-hand side. Q.E.D.

Proof via differential forms

The functions $\mathbb{R}^3 \to \mathbb{R}^3$ can be identified with the differential 1-forms on $\mathbb{R}^3$ via the map

$$F_x\,\mathbf{e}_1 + F_y\,\mathbf{e}_2 + F_z\,\mathbf{e}_3 \mapsto F_x\,\mathrm{d}x + F_y\,\mathrm{d}y + F_z\,\mathrm{d}z.$$

Write the differential 1-form associated to a function $\mathbf{F}$ as $\omega_\mathbf{F}$. Then one can calculate that

$$\star\omega_{\nabla\times\mathbf{F}} = \mathrm{d}\omega_\mathbf{F},$$

where $\star$ is the Hodge star and $\mathrm{d}$ is the exterior derivative. Thus, by the generalized Stokes' theorem,[11]

$$\oint_{\partial\Sigma} \mathbf{F}\cdot\mathrm{d}\gamma = \oint_{\partial\Sigma} \omega_\mathbf{F} = \int_\Sigma \mathrm{d}\omega_\mathbf{F} = \int_\Sigma \star\omega_{\nabla\times\mathbf{F}} = \iint_\Sigma \nabla\times\mathbf{F}\cdot\mathrm{d}\mathbf{\Sigma}$$
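For completeness, here is the coordinate computation behind $\star\omega_{\nabla\times\mathbf{F}} = \mathrm{d}\omega_\mathbf{F}$ (a standard exercise, sketched here):

$$\begin{aligned}
\mathrm{d}\omega_\mathbf{F} &= \mathrm{d}F_x\wedge\mathrm{d}x + \mathrm{d}F_y\wedge\mathrm{d}y + \mathrm{d}F_z\wedge\mathrm{d}z \\
&= \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\mathrm{d}y\wedge\mathrm{d}z + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right)\mathrm{d}z\wedge\mathrm{d}x + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\mathrm{d}x\wedge\mathrm{d}y,
\end{aligned}$$

which is exactly $\star\omega_{\nabla\times\mathbf{F}}$, since the Hodge star on $\mathbb{R}^3$ sends $\mathrm{d}x \mapsto \mathrm{d}y\wedge\mathrm{d}z$, $\mathrm{d}y \mapsto \mathrm{d}z\wedge\mathrm{d}x$, and $\mathrm{d}z \mapsto \mathrm{d}x\wedge\mathrm{d}y$.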

Irrotational fields

In this section, we discuss irrotational fields (lamellar vector fields) in light of Stokes' theorem.

Definition 2-1 (irrotational field). A smooth vector field $\mathbf{F}$ on an open $U \subseteq \mathbb{R}^3$ is irrotational (a lamellar vector field) if $\nabla\times\mathbf{F} = \mathbf{0}$.

This concept is fundamental in mechanics; as we prove later, if $\mathbf{F}$ is irrotational and the domain of $\mathbf{F}$ is simply connected, then $\mathbf{F}$ is a conservative vector field.
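The simple-connectedness hypothesis cannot be dropped. A standard counterexample (added here as an illustration; it is not part of the cited sources) is $\mathbf{F} = \left(\frac{-y}{x^2+y^2}, \frac{x}{x^2+y^2}, 0\right)$ on $\mathbb{R}^3$ minus the $z$-axis: it is irrotational, yet its circulation around any loop encircling the axis is $2\pi \neq 0$, so it is not conservative. A quick NumPy check:

```python
import numpy as np

# F = (-y, x, 0) / (x**2 + y**2) has zero curl on R^3 minus the z-axis,
# but its circulation around the unit circle is 2*pi, so it cannot be
# conservative: the punctured domain is not simply connected.
m = 100_000
t = (np.arange(m) + 0.5) * (2 * np.pi / m)
x, y = np.cos(t), np.sin(t)                 # unit circle around the z-axis
Fx, Fy = -y / (x**2 + y**2), x / (x**2 + y**2)
# dr = (-sin t, cos t, 0) dt along the circle.
circulation = np.sum(Fx * -np.sin(t) + Fy * np.cos(t)) * (2 * np.pi / m)
print(circulation)                          # approximately 2*pi, not 0
```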

Helmholtz's theorem

In this section, we introduce a theorem that is derived from Stokes' theorem and characterizes vortex-free vector fields. In classical mechanics and fluid dynamics it is called Helmholtz's theorem.

Theorem 2-1 (Helmholtz's theorem in fluid dynamics).[5][3]: 142  Let $U \subseteq \mathbb{R}^3$ be an open subset with a lamellar vector field $\mathbf{F}$, and let $c_0, c_1: [0,1] \to U$ be piecewise smooth loops. If there is a function $H: [0,1] \times [0,1] \to U$ such that

[TLH0] $H$ is piecewise smooth,
[TLH1] $H(t, 0) = c_0(t)$ for all $t \in [0,1]$,
[TLH2] $H(t, 1) = c_1(t)$ for all $t \in [0,1]$,
[TLH3] $H(0, s) = H(1, s)$ for all $s \in [0,1]$.

Then,

$$\int_{c_0} \mathbf{F}\,\mathrm{d}c_0 = \int_{c_1} \mathbf{F}\,\mathrm{d}c_1$$

Some textbooks such as Lawrence[5] call the relationship between $c_0$ and $c_1$ stated in Theorem 2-1 "homotopic" and the function $H: [0,1] \times [0,1] \to U$ a "homotopy between $c_0$ and $c_1$". However, "homotopic" and "homotopy" in the above sense are different from (stronger than) the typical definitions of "homotopic" and "homotopy", which omit condition [TLH3]. So from now on we refer to a homotopy (homotope) in the sense of Theorem 2-1 as a tubular homotopy (resp. tubular-homotopic).[note 6]

Proof of Helmholtz's theorem

The definitions of $\gamma_1, \ldots, \gamma_4$

In what follows, we abuse notation and use "$\oplus$" for concatenation of paths in the fundamental groupoid and "$\ominus$" for reversing the orientation of a path.

Let $D = [0,1] \times [0,1]$, and split $\partial D$ into four line segments $\gamma_j$:

$$\begin{aligned}
\gamma_1: [0,1] \to D; \quad & \gamma_1(t) = (t, 0) \\
\gamma_2: [0,1] \to D; \quad & \gamma_2(s) = (1, s) \\
\gamma_3: [0,1] \to D; \quad & \gamma_3(t) = (1-t, 1) \\
\gamma_4: [0,1] \to D; \quad & \gamma_4(s) = (0, 1-s)
\end{aligned}$$

so that $\partial D = \gamma_1 \oplus \gamma_2 \oplus \gamma_3 \oplus \gamma_4$.

By our assumption that $c_0$ and $c_1$ are tubular-homotopic, there is a piecewise smooth tubular homotopy $H: D \to U$; define

$$\begin{aligned}
\Gamma_i(t) &= H(\gamma_i(t)), && i = 1, 2, 3, 4, \\
\Gamma(t) &= H(\gamma(t)) = (\Gamma_1 \oplus \Gamma_2 \oplus \Gamma_3 \oplus \Gamma_4)(t).
\end{aligned}$$

Let $S$ be the image of $D$ under $H$. That

$$\iint_S \nabla\times\mathbf{F}\,\mathrm{d}S = \oint_\Gamma \mathbf{F}\,\mathrm{d}\Gamma$$

follows immediately from Stokes' theorem. $\mathbf{F}$ is lamellar, so the left side vanishes, i.e.

$$0 = \oint_\Gamma \mathbf{F}\,\mathrm{d}\Gamma = \sum_{i=1}^4 \oint_{\Gamma_i} \mathbf{F}\,\mathrm{d}\Gamma$$

As $H$ is tubular (satisfying [TLH3]), $\Gamma_2 = \ominus\Gamma_4$. Thus the line integrals along $\Gamma_2(s)$ and $\Gamma_4(s)$ cancel, leaving

$$0 = \oint_{\Gamma_1} \mathbf{F}\,\mathrm{d}\Gamma + \oint_{\Gamma_3} \mathbf{F}\,\mathrm{d}\Gamma$$

On the other hand, $c_0 = \Gamma_1$ and $c_1 = \ominus\Gamma_3$, so that the desired equality follows almost immediately.
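As a numerical illustration of Theorem 2-1 (added here; the field and loops are hypothetical choices), take the lamellar field $\mathbf{F} = \left(\frac{-y}{x^2+y^2}, \frac{x}{x^2+y^2}, 0\right)$ on $\mathbb{R}^3$ minus the $z$-axis and two tubular-homotopic loops, each winding once around the axis; their line integrals agree, both approximately $2\pi$:

```python
import numpy as np

def line_integral(curve, m=200_000):
    """Line integral of F = (-y, x, 0)/(x**2 + y**2) over a closed curve,
    where `curve` maps t in [0, 2*pi) to points (x, y, z); dr is
    approximated by central differences on a uniform, periodic grid."""
    t = np.arange(m) * (2 * np.pi / m)
    p = curve(t)                                               # shape (3, m)
    dp = (np.roll(p, -1, axis=1) - np.roll(p, 1, axis=1)) / 2  # ~ r'(t) dt
    x, y = p[0], p[1]
    F = np.stack([-y, x, np.zeros_like(x)]) / (x**2 + y**2)
    return np.sum(F * dp)

circle = lambda t: np.stack([np.cos(t), np.sin(t), np.zeros_like(t)])
wobbly = lambda t: np.stack([(2 + np.cos(3 * t)) * np.cos(t),
                             (2 + np.cos(3 * t)) * np.sin(t),
                             np.sin(2 * t)])  # another loop around the axis

print(line_integral(circle), line_integral(wobbly))  # both about 2*pi
```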

Conservative forces

Helmholtz's theorem above explains why the work done by a conservative force in changing an object's position is path independent. First, we introduce Lemma 2-2, which is a corollary of, and a special case of, Helmholtz's theorem.

Lemma 2-2.[5][6] Let $U \subseteq \mathbb{R}^3$ be an open subset, with a lamellar vector field $\mathbf{F}$ and a piecewise smooth loop $c_0: [0,1] \to U$. Fix a point $p \in U$. If there is a homotopy $H: [0,1] \times [0,1] \to U$ such that

[SC0] $H$ is piecewise smooth,
[SC1] $H(t, 0) = c_0(t)$ for all $t \in [0,1]$,
[SC2] $H(t, 1) = p$ for all $t \in [0,1]$,
[SC3] $H(0, s) = H(1, s)$ for all $s \in [0,1]$.

Then,

$$\int_{c_0} \mathbf{F}\,\mathrm{d}c_0 = 0$$

Lemma 2-2 above follows from Theorem 2-1. In Lemma 2-2, the existence of $H$ satisfying [SC0] to [SC3] is crucial; the question is whether such a homotopy can be taken for arbitrary loops. If $U$ is simply connected, such an $H$ exists. The definition of a simply connected space follows:

Definition 2-2 (simply connected space).[5][6] Let $M \subseteq \mathbb{R}^n$ be non-empty and path-connected. $M$ is called simply connected if and only if for any continuous loop $c: [0,1] \to M$ there exists a continuous tubular homotopy $H: [0,1] \times [0,1] \to M$ from $c$ to a fixed point $p \in c$; that is,

[SC0'] $H$ is continuous,
[SC1] $H(t, 0) = c(t)$ for all $t \in [0,1]$,
[SC2] $H(t, 1) = p$ for all $t \in [0,1]$,
[SC3] $H(0, s) = H(1, s)$ for all $s \in [0,1]$.

The claim that "for a conservative force, the work done in changing an object's position is path independent" might seem to follow immediately if $M$ is simply connected. However, recall that simple-connectedness only guarantees the existence of a continuous homotopy satisfying [SC1]–[SC3]; we instead seek a piecewise smooth homotopy satisfying those conditions.

Fortunately, the gap in regularity is resolved by Whitney's approximation theorem.[6]: 136, 421 [12] In other words, the possibility of finding a continuous homotopy but being unable to integrate over it is eliminated with the benefit of higher mathematics. We thus obtain the following theorem.

Theorem 2-2.[5][6] Let $U \subseteq \mathbb{R}^3$ be open and simply connected with an irrotational vector field $\mathbf{F}$. For all piecewise smooth loops $c: [0,1] \to U$,

$$\int_c \mathbf{F}\,\mathrm{d}c = 0$$

Maxwell's equations

In the physics of electromagnetism, Stokes' theorem provides the justification for the equivalence of the differential forms of the Maxwell–Faraday equation and the Maxwell–Ampère equation with the integral forms of these equations. For Faraday's law, Stokes' theorem is applied to the electric field $\mathbf{E}$:

$$\oint_{\partial\Sigma} \mathbf{E}\cdot\mathrm{d}\mathbf{l} = \iint_\Sigma \nabla\times\mathbf{E}\cdot\mathrm{d}\mathbf{S}.$$

For Ampère's law, Stokes' theorem is applied to the magnetic field $\mathbf{B}$:

$$\oint_{\partial\Sigma} \mathbf{B}\cdot\mathrm{d}\mathbf{l} = \iint_\Sigma \nabla\times\mathbf{B}\cdot\mathrm{d}\mathbf{S}.$$
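For instance (a standard step, sketched here for completeness), combining the first identity with the differential Maxwell–Faraday equation $\nabla\times\mathbf{E} = -\partial\mathbf{B}/\partial t$ yields the integral form of Faraday's law for a fixed surface $\Sigma$:

$$\oint_{\partial\Sigma} \mathbf{E}\cdot\mathrm{d}\mathbf{l} = \iint_\Sigma \nabla\times\mathbf{E}\cdot\mathrm{d}\mathbf{S} = -\frac{\mathrm{d}}{\mathrm{d}t} \iint_\Sigma \mathbf{B}\cdot\mathrm{d}\mathbf{S}.$$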

  1. ^ $\Sigma = \psi(D)$ represents the image set of $D$ under $\psi$.

  2. ^ $\Gamma$ may not be a Jordan curve if the loop $\gamma$ interacts poorly with $\psi$. Nonetheless, $\Gamma$ is always a loop, and topologically a connected sum of countably many Jordan curves, so that the integrals are well-defined.

  3. ^ In this article, $\mathbf{e}_u = \begin{bmatrix}1\\0\end{bmatrix}, \mathbf{e}_v = \begin{bmatrix}0\\1\end{bmatrix}.$ Note that, in some textbooks on vector analysis, these symbols are assigned to different things; for example, $\{\mathbf{e}_u, \mathbf{e}_v\}$ can denote the following $\{\mathbf{t}_u, \mathbf{t}_v\}$ respectively. In this article, however, these are two completely different things: $\mathbf{t}_u = \frac{1}{h_u}\frac{\partial\varphi}{\partial u}$, $\mathbf{t}_v = \frac{1}{h_v}\frac{\partial\varphi}{\partial v}$, where $h_u = \left\|\frac{\partial\varphi}{\partial u}\right\|$, $h_v = \left\|\frac{\partial\varphi}{\partial v}\right\|$, and "$\|\cdot\|$" represents the Euclidean norm.

  4. ^ For all $\mathbf{a}, \mathbf{b} \in \mathbb{R}^n$ and every $n \times n$ square matrix $A$, we have $\mathbf{a} \cdot A\mathbf{b} = \mathbf{a}^{\mathsf{T}}A\mathbf{b}$ and therefore $\mathbf{a} \cdot A\mathbf{b} = \mathbf{b} \cdot A^{\mathsf{T}}\mathbf{a}$.

  5. ^ In this article, $\mathbf{e}_1 = \begin{bmatrix}1\\0\\0\end{bmatrix}, \mathbf{e}_2 = \begin{bmatrix}0\\1\\0\end{bmatrix}, \mathbf{e}_3 = \begin{bmatrix}0\\0\\1\end{bmatrix}.$ Note that, in some textbooks on vector analysis, these symbols are assigned to different things.

  6. ^ There do exist textbooks that use the terms "homotopy" and "homotopic" in the sense of Theorem 2-1.[5] Indeed, this is very convenient for the specific problem of conservative forces. However, both uses of homotopy appear sufficiently frequently that some sort of terminology is necessary to disambiguate, and the term "tubular homotopy" adopted here serves well enough for that end.

  7. ^ Stewart, James (2012). Calculus – Early Transcendentals (PDF) (7th ed.). Brooks/Cole. p. 1122. ISBN 978-0-538-49790-9.

  8. ^ Nagayoshi, Iwahori (1983). 微分積分学 (Bibun sekibungaku) [Differential and Integral Calculus] (in Japanese). Shokabo. ISBN 978-4-7853-1039-4. OCLC 673475347.

  9. ^ Atsuo, Fujimoto (1979). 現代数学レクチャーズ. C 1, ベクトル解析 (Gendai sūgaku rekuchāzu. C(1), Bekutoru kaiseki) [Lectures on Modern Mathematics. C-1, Vector Analysis] (in Japanese). Baifukan. OCLC 674186011.

  10. ^ Griffiths, David J. (2013). Introduction to Electrodynamics (PDF). Always Learning (4th ed.). Boston: Pearson. p. 34. ISBN 978-0-321-85656-2.

  11. ^ Conlon, Lawrence (2008). Differentiable Manifolds. Modern Birkhäuser Classics (2nd ed.). Boston; Berlin: Birkhäuser. ISBN 978-0-8176-4766-7.

  12. ^ Lee, John M. (2012). Introduction to Smooth Manifolds. Graduate Texts in Mathematics. Vol. 218 (2nd ed.). New York; London: Springer. ISBN 978-1-4419-9982-5.

  13. ^ Stewart, James (2010). Essential Calculus: Early Transcendentals. Australia; United States: Brooks/Cole. ISBN 978-0-538-49739-8.

  14. ^ Robert Scheichl, lecture notes for University of Bath mathematics course

  15. ^ Pérez-Garrido, A. (2024-05-01). "Recovering seldom-used theorems of vector calculus and their application to problems of electromagnetism". American Journal of Physics. 92 (5): 354–359. arXiv:2312.17268. Bibcode:2024AmJPh..92e.354P. doi:10.1119/5.0182191. ISSN 0002-9505.

  16. ^ Colley, Susan Jane (2012). Vector calculus (PDF) (4th ed.). Boston: Pearson. pp. 500–3. ISBN 978-0-321-78065-2. OCLC 732967769.

  17. ^ Edwards, Harold M. (1994). Advanced calculus: a differential forms approach (3rd ed.). Boston: Birkhäuser. ISBN 978-0-8176-3707-1.

  18. ^ Pontryagin, L. S. (1959). "Smooth manifolds and their applications in homotopy theory" (PDF). American Mathematical Society Translations. Series 2. 11. Translated by Hilton, P. J. Providence, Rhode Island: American Mathematical Society: 1–114. doi:10.1090/trans2/011/01. ISBN 978-0-8218-1711-7. MR 0115178. See theorems 7 & 8.