Supposing <math> q_i(t) </math> solves the EL equations for <math> s </math> degrees of freedom, we can analyze properties of the integral across finite time <math> S = \int_{t_1}^{t_2} L(q_1(t), \cdots, q_s(t), \dot{q}_1(t), \cdots, \dot{q}_s(t), t) \, dt </math>, since substituting the trajectory gives a strict function of time. Integrals/primitives are often viewed as functions of the bounds of integration; the perspective in mechanics, however, is to make <math> S </math> a function of the trajectory. This means <math> S(q(-)) </math>, with <math> t </math> suppressed to indicate that it does not depend on time, is a scalar function on an infinite-dimensional space of paths.

We want to understand its derivative, and thus when it has a minimum. There is an easy way to do this without thinking too hard about how to formulate what these spaces are, which we do in a later section to enhance this explanation. Also note that being a function of the trajectory (a functional) means that the underlying set of points of the trajectory in physical configuration space does not describe the motion completely: the same path can be traced out by motion at different velocities, which is why the parametrization by time is needed. Parametrization-dependence is exploited further in differential geometry when defining tangent vectors on abstract manifolds.
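The idea that <math> S </math> eats a whole sampled trajectory and returns one number can be sketched numerically. The code below is illustrative only: the free-particle Lagrangian <math> L = \dot{q}^2/2 </math>, the function names, and the perturbed path are all assumptions, not from the text.

```python
import numpy as np

def action(q, t):
    """Discretized S = integral of L dt along the sampled path q(t),
    for the (assumed, illustrative) free-particle Lagrangian L = qdot^2 / 2."""
    qdot = np.gradient(q, t)                             # finite-difference velocity
    L = 0.5 * qdot ** 2
    return np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(t))   # trapezoid rule

t = np.linspace(0.0, 1.0, 1001)
straight = t                             # uniform motion, the EL solution
wiggly = t + 0.1 * np.sin(np.pi * t)     # same endpoints, traversed differently

# S depends on the whole trajectory, not just its endpoints:
# the EL solution gives the smaller value of the two.
print(action(straight, t), action(wiggly, t))
```

Note that the input is the full array of samples <code>q</code>, a finite-dimensional stand-in for a point in the infinite-dimensional space of paths.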
In two dimensions, rather than infinitely many, the minimum of a function can be described by a condition equivalent to the derivative being 0. Let <math> F:\mathbb{R}^2\rightarrow \mathbb{R} </math>. Typically we would check the condition <math> \frac{\partial F(x_0,y_0)}{\partial x}=\frac{\partial F(x_0,y_0)}{\partial y}=0 </math> at some point <math> (x_0,y_0)\in \mathbb{R}^2 </math>.

Rather than differentiating, we can analyze the finite difference, treating the input as a vector: <math> F(\mathbf{x}+\mathbf{h})-F(\mathbf{x}) = G(\mathbf{h}) </math>, and look at the linear part of <math> G </math>. If <math> F </math> were already affine, then computing its derivative comes simply: <math> F(x,y)=ax+by+c \rightarrow G(h_1, h_2)=ah_1+bh_2 </math>. Note the linear dependence on <math> \mathbf{h} </math>, which remains even when <math> F </math> has higher-order terms: <math> G=ah_1+bh_2+ch_1^2+dh_1h_2+\cdots </math>. The functions in finite dimensions we are used to have derivatives, so their derivatives can be described via the linear part of <math> G(\mathbf{h})=L(\mathbf{h})+R(\mathbf{h}) </math>, where <math> L(\mathbf{h}+\mathbf{h}')=L(\mathbf{h})+L(\mathbf{h}') </math>. In infinite dimensions, we may not always have explicit methods of differentiating, but we can look for the linear part of the difference at shifted inputs. We also have to be sure that the entire linear part is in <math> L </math>, so this puts a condition on <math> R </math>: it must vanish faster than <math> |\mathbf{h}| </math> as <math> \mathbf{h}\rightarrow 0 </math>.
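The decomposition <math> G = L + R </math> can be checked numerically. In this sketch the quadratic <math> F </math>, the base point <math> (1,1) </math>, and the shrinking step sizes are all chosen purely for illustration; the key observation is that the remainder shrinks faster than <math> |\mathbf{h}| </math>.

```python
import numpy as np

# Assumed example function (not from the text): affine part plus
# higher-order terms, mirroring F(x, y) = ax + by + c + ...
def F(x, y):
    return 2.0 * x + 3.0 * y + x ** 2 + x * y

def G(h1, h2, x=1.0, y=1.0):
    """Finite difference G(h) = F(x + h) - F(x) at the base point (1, 1)."""
    return F(x + h1, y + h2) - F(x, y)

def lin_part(h1, h2):
    """Linear part at (1, 1): dF/dx = 2 + 2x + y = 5, dF/dy = 3 + x = 4."""
    return 5.0 * h1 + 4.0 * h2

# The remainder R = G - L must vanish faster than |h|: R(h)/|h| -> 0.
ratios = []
for s in (1e-1, 1e-2, 1e-3):
    h1, h2 = s, 2.0 * s
    R = G(h1, h2) - lin_part(h1, h2)
    ratios.append(R / np.hypot(h1, h2))
print(ratios)   # shrinking toward 0 as the step size shrinks
```

Any candidate linear map with the wrong coefficients would leave a remainder of the same order as <math> |\mathbf{h}| </math>, so the ratio would not shrink; that is how the condition on <math> R </math> pins down <math> L </math> uniquely.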
[[File:Gaudi hanging strings.jpg|thumb|right|Hanging strings and weights used by Gaudi to model the shape of La Sagrada Familia. [http://dataphys.org/list/gaudis-hanging-chain-models/ source]]]