Exercise 3.3.15
b) This matrix equals its own conjugate transpose: \[ {\left\lbrack \begin{matrix} 0 & 2 + {3i} \\ 2 - {3i} & 4 \end{matrix}\right\rbrack }^{ * } = \left\lbrack \begin{matrix} 0 & 2 + {3i} \\ 2 - {3i} & 4 \end{matrix}\right\rbrack . \]
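As a quick numerical check (a minimal Python sketch; the helper `conj_transpose` is ours, not from the text):

```python
# Verify that the matrix equals its own conjugate transpose (is Hermitian).
A = [[0, 2 + 3j], [2 - 3j, 4]]

def conj_transpose(M):
    """Conjugate transpose of a small complex matrix given as nested lists."""
    rows, cols = len(M), len(M[0])
    return [[M[j][i].conjugate() for j in range(rows)] for i in range(cols)]

print(conj_transpose(A) == A)  # True: the matrix is Hermitian
```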
No
Exercise 8.28
Exercise 8.28 Let \( \mathbb{T} = ( - \infty ,0\rbrack \cup \mathbb{N} \), where \( ( - \infty ,0\rbrack \) is the real line interval. Find \( l\left( \Gamma \right) \), where \[ \Gamma = \left\{ \begin{array}{l} {x}_{1} = {t}^{3} \\ {x}_{2} = {t}^{2},\;t \in \left\lbrack {-1,0}\right\rbrack \cup \{ 1,2,3\} . \end{array}\right. \] Solution \( \frac{1}{27}\left( {8 - {13}\sqrt{13}}\right) + \sqrt{2} + \sqrt{58} + \sqrt{386} \) .
Yes
Problem 7.4
Problem 7.4. Let \( A \) be an \( N \times N \) matrix with eigenvalues \( {\rho }_{1} \geq {\rho }_{2} \geq \cdots \geq {\rho }_{N} \). Consider a \( d \)-dimensional parallelepiped with the vectors \( {\xi }_{0}^{\left( 1\right) },\ldots ,{\xi }_{0}^{\left( d\right) } \) as its sides, let \( {V}_{0}^{\left( d\right) } \) be the volume of this parallelepiped, and let \( {V}_{n}^{\left( d\right) } \) be the volume of its image under \( {A}^{n} \). We know that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{1}{n}\ln {V}_{n}^{\left( d\right) } = \ln {\rho }_{1} + \cdots + \ln {\rho }_{d} \). How does the quantity \[ \left| {\frac{1}{n}\ln {V}_{n}^{\left( d\right) } - \mathop{\sum }\limits_{{i = 1}}^{d}\ln {\rho }_{i}}\right| \] behave as \( n \rightarrow \infty \)?
Yes
Example 8.9
Example 8.9. We will now solve the wave equation on a disk and compare the fundamental frequency of a circular drum to that of a square drum. We consider the IBVP \[ \frac{{\partial }^{2}u}{\partial {t}^{2}} - {c}^{2}{\Delta }_{p}u = 0,\left( {r,\theta }\right) \in \Omega, t > 0, \tag{8.49} \] \[ u\left( {r,\theta ,0}\right) = \phi \left( {r,\theta }\right) ,\left( {r,\theta }\right) \in \Omega , \tag{8.50} \] \[ \frac{\partial u}{\partial t}\left( {r,\theta ,0}\right) = \gamma \left( {r,\theta }\right) ,\left( {r,\theta }\right) \in \Omega , \tag{8.51} \] \[ u\left( {A,\theta, t}\right) = 0, - \pi \leq \theta < \pi, t > 0, \] where \( \Omega \) is the disk of radius \( A \), centered at the origin. We write the solution as \[ u\left( {r,\theta, t}\right) = \mathop{\sum }\limits_{{m = 1}}^{\infty }{a}_{m0}\left( t\right) {J}_{0}\left( {{\alpha }_{m0}r}\right) \] \[ + \mathop{\sum }\limits_{{m = 1}}^{\infty }\mathop{\sum }\limits_{{n = 1}}^{\infty }\left( {{a}_{mn}\left( t\right) \cos \left( {n\theta }\right) + {b}_{mn}\left( t\right) \sin \left( {n\theta }\right) }\right) {J}_{n}\left( {{\alpha }_{mn}r}\right) . \] Then, by the usual argument, \[ \frac{{\partial }^{2}u}{\partial {t}^{2}}\left( {r,\theta, t}\right) - {c}^{2}{\Delta }_{p}u\left( {r,\theta, t}\right) = \mathop{\sum }\limits_{{m = 1}}^{\infty }\left( {\frac{{d}^{2}{a}_{m0}}{d{t}^{2}}\left( t\right) + {c}^{2}{\lambda }_{m0}{a}_{m0}\left( t\right) }\right) {J}_{0}\left( {{\alpha }_{m0}r}\right) \] \[ + \mathop{\sum }\limits_{{m = 1}}^{\infty }\mathop{\sum }\limits_{{n = 1}}^{\infty }\left( {\left( {\frac{{d}^{2}{a}_{mn}}{d{t}^{2}}\left( t\right) + {c}^{2}{\lambda }_{mn}{a}_{mn}\left( t\right) }\right) \cos \left( {n\theta }\right) }\right. \] \[ + \left. {\left( {\frac{{d}^{2}{b}_{mn}}{d{t}^{2}}\left( t\right) + {c}^{2}{\lambda }_{mn}{b}_{mn}\left( t\right) }\right) \sin \left( {n\theta }\right) }\right) {J}_{n}\left( {{\alpha }_{mn}r}\right) . 
\] The wave equation then implies the following ODEs: \[ \frac{{d}^{2}{a}_{mn}}{d{t}^{2}} + {c}^{2}{\lambda }_{mn}{a}_{mn} = 0, m = 1,2,3,\ldots, n = 0,1,2,\ldots , \] \[ \frac{{d}^{2}{b}_{mn}}{d{t}^{2}} + {c}^{2}{\lambda }_{mn}{b}_{mn} = 0, m, n = 1,2,3,\ldots \] From the initial conditions for the PDE, we obtain \[ {a}_{mn}\left( 0\right) = {c}_{mn}, \] \[ \frac{d{a}_{mn}}{dt}\left( 0\right) = {d}_{mn}, m = 1,2,3,\ldots, n = 0,1,2,\ldots , \] \[ {b}_{mn}\left( 0\right) = {e}_{mn}, \] \[ \frac{d{b}_{mn}}{dt}\left( 0\right) = {f}_{mn}, m, n = 1,2,3,\ldots , \] where \( {c}_{mn},{d}_{mn} \) are the (generalized) Fourier coefficients for \( \phi \), and \( {e}_{mn},{f}_{mn} \) are the (generalized) Fourier coefficients for \( \gamma \) . By applying the results of Section 4.7 and using the fact that \( {\lambda }_{mn} = {\alpha }_{mn}^{2} \), we obtain the following formulas for the coefficients of the solution: \[ {a}_{mn}\left( t\right) = {c}_{mn}\cos \left( {c{\alpha }_{mn}t}\right) + \frac{{d}_{mn}}{c{\alpha }_{mn}}\sin \left( {c{\alpha }_{mn}t}\right), m = 1,2,3,\ldots, n = 0,1,2,\ldots , \] \[ {b}_{mn}\left( t\right) = {e}_{mn}\cos \left( {c{\alpha }_{mn}t}\right) + \frac{{f}_{mn}}{c{\alpha }_{mn}}\sin \left( {c{\alpha }_{mn}t}\right), m, n = 1,2,3,\ldots \] The smallest period of any of these coefficients is that of \( {a}_{10} \) : \[ {T}_{10} = \frac{2\pi }{c{\alpha }_{10}} = \frac{2\pi }{c{s}_{01}/A} = \frac{2A\pi }{c{s}_{01}}. \] The corresponding frequency, which is the fundamental frequency of this circular drum, is \[ {F}_{10} = \frac{c{s}_{01}}{2A\pi } = \frac{c}{A}\frac{{s}_{01}}{2\pi } \doteq {0.382740}\frac{c}{A}. \] It would be reasonable to compare a circular drum of radius \( A \) (diameter \( {2A} \) ) with a square drum of side length \( {2A} \) . (Another possibility is to compare the circular drum with a square drum of the same area. This is Exercise 9.) 
The fundamental frequency of such a square drum is \[ \frac{c\sqrt{\frac{{\pi }^{2}}{{\left( 2A\right) }^{2}} + \frac{{\pi }^{2}}{{\left( 2A\right) }^{2}}}}{2\pi } = \frac{c}{A}\frac{1}{2\sqrt{2}} \doteq {0.353553}\frac{c}{A}. \] The square drum sounds a lower frequency than the circular drum.
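The two numerical constants above are easy to reproduce. The sketch below (assuming only the power series of \( J_0 \); the variable names are ours) locates the first Bessel zero \( s_{01} \) by bisection and evaluates both fundamental frequencies in units of \( c/A \):

```python
import math

def J0(x):
    """Bessel function J0 via its power series (adequate for small x)."""
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x / 2) ** 2 / k ** 2   # ratio of consecutive series terms
        total += term
    return total

# First positive zero s01 of J0 by bisection (J0(2) > 0 > J0(3)).
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if J0(lo) * J0(mid) <= 0:
        hi = mid
    else:
        lo = mid
s01 = (lo + hi) / 2

F_circle = s01 / (2 * math.pi)      # circular drum, in units of c/A
F_square = 1 / (2 * math.sqrt(2))   # square drum of side 2A, in units of c/A

print(round(F_circle, 6), round(F_square, 6))  # 0.38274  0.353553
```

The printed values match the two frequencies computed above, confirming that the square drum sounds the lower fundamental.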
Yes
Example 3.3.11
Example 3.3.11. Suppose that \( G \) is a group of order 8 that contains an element \( x \) of order 4. Let \( y \) be another element in \( G \) that is distinct from any power of \( x \). With these criteria, we know so far that \( G \) contains the distinct elements \( e, x,{x}^{2},{x}^{3}, y \). The element \( {xy} \) cannot be \( e\; \) because that would imply \( y = {x}^{3} \) ; \( x\; \) because that would imply \( y = e \) ; \( {x}^{2}\; \) because that would imply \( y = x \) ; \( {x}^{3}\; \) because that would imply \( y = {x}^{2} \) ; \( y\; \) because that would imply \( x = e \). So \( {xy} \) is a new element of \( G \). By similar reasoning, which we leave to the reader, the elements \( {x}^{2}y \) and \( {x}^{3}y \) are distinct from all the others. Hence, \( G \) must contain the 8 distinct elements \[ \left\{ {e, x,{x}^{2},{x}^{3}, y,{xy},{x}^{2}y,{x}^{3}y}\right\} . \tag{3.4} \] Now let us assume that \( \left| y\right| = 2 \). Consider the question of the value of \( {yx} \). By the identical reasoning by cases provided above, \( {yx} \) cannot be \( e, x,{x}^{2},{x}^{3} \), or \( y \). Thus, there are three cases: (1) \( {yx} = {x}^{3}y \) ; (2) \( {yx} = {xy} \) ; and (3) \( {yx} = {x}^{2}y \). Case 1. If \( {yx} = {x}^{3}y \), then the group is in fact \( {D}_{4} \), the dihedral group of the square, where \( x \) serves the role of \( r \) and \( y \) serves the role of \( s \). Case 2. If \( {yx} = {xy} \), then \( {x}^{s}{y}^{t} = {y}^{t}{x}^{s} \) for all integers \( s, t \), and so \( G \) is abelian. We leave it up to the reader to show that this group is \( {Z}_{4} \oplus {Z}_{2} \). Case 3. If \( {yx} = {x}^{2}y \) then \( {yxy} = {x}^{2} \). Hence, \[ x = {y}^{2}x{y}^{2} = y\left( {yxy}\right) y = y{x}^{2}y = {yx}{y}^{2}{xy} = \left( {yxy}\right) \left( {yxy}\right) = {x}^{4} = e. \] We conclude that \( x = e \), which contradicts the assumption that \( x \) has order 4.
Hence, there exists no group of order 8 with an element \( x \) of order 4 and an element \( y \) of order 2 with \( {yx} = {x}^{2}y \). Assume now that \( \left| y\right| = 3 \). Consider the element \( {y}^{2} \) in \( G \). A quick proof by cases shows that \( {y}^{2} \) cannot be any of the eight distinct elements listed in (3.4). Hence, there exists no group of order 8 containing an element of order 4 and one of order 3. Assume now that \( \left| y\right| = 4 \). Again, we consider the possible value of \( {y}^{2} \). If there exists a group with all the conditions we have so far, then \( {y}^{2} \) must be equal to an element in (3.4). Now \( \left| {y}^{2}\right| = 4/2 = 2 \) so \( {y}^{2} \) cannot be \( e, x,{x}^{3} \), or \( y \), which have orders \( 1,4,4,4 \), respectively. Furthermore, \( {y}^{2} \) cannot be equal to \( {xy} \) (respectively \( {x}^{2}y \) or \( {x}^{3}y \) ) because that would imply \( x = y \) (respectively \( {x}^{2} = y \) or \( {x}^{3} = y \) ), which is against the assumptions on \( x \) and \( y \). We have not ruled out the possibility that \( {y}^{2} = {x}^{2} \). We focus on this latter possibility, namely a group \( G \) containing \( x \) and \( y \) with \( \left| x\right| = 4,\left| y\right| = 4 \), \( y \notin \left\{ {e, x,{x}^{2},{x}^{3}}\right\} \) and \( {x}^{2} = {y}^{2} \). If we now consider possible values of the element \( {yx} \), we can quickly eliminate all possibilities except \( {xy} \) and \( {x}^{3}y \). If \( G = {Z}_{4} \oplus {Z}_{2} = \left\{ {\left( {z, w}\right) \mid {z}^{4} = e}\right. \) and \( \left. {{w}^{2} = e}\right\} \), then setting \( x = \left( {z, e}\right) \) and \( y = \left( {z, w}\right) \), it is easy to check that \( {Z}_{4} \oplus {Z}_{2} \) satisfies \( {x}^{2} = {y}^{2} \) and \( {yx} = {xy} \). On the other hand, if \( {yx} = x{y}^{3} \), then \( G \) is a nonabelian group in which \( x,{x}^{3}, y,{y}^{3} = {x}^{2}y \) are elements of order 4.
However, \( {D}_{4} \) is the only nonabelian group of order 8 that we have encountered so far, and in \( {D}_{4} \) only \( r \) and \( {r}^{3} \) have order 4. Hence, \( G \) must be a new group. We now introduce the new group identified in this example, using the symbols traditionally associated with it.
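As a sanity check, the relations of the final case (\( x^4 = e \), \( y^2 = x^2 \), \( yx = x^3y \)) can be verified concretely. The \( 2 \times 2 \) complex matrices below are one conventional realization of this group, not taken from the text:

```python
# Concrete check of the relations x^4 = e, y^2 = x^2, yx = x^3 y
# using 2x2 complex matrices represented as nested tuples.
def mul(A, B):
    """2x2 matrix product."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
x = ((1j, 0), (0, -1j))   # an element of order 4
y = ((0, 1), (-1, 0))     # satisfies y^2 = x^2 = -I

x2, x3 = mul(x, x), mul(mul(x, x), x)
assert mul(x3, x) == I          # x^4 = e
assert mul(y, y) == x2          # y^2 = x^2
assert mul(y, x) == mul(x3, y)  # yx = x^3 y

# The eight elements of (3.4) are pairwise distinct in this realization.
elems = [I, x, x2, x3, y, mul(x, y), mul(x2, y), mul(x3, y)]
assert len(set(elems)) == 8
print("relations verified")
```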
No
Example 5.5
Example 5.5 (See [7, Example 2.7]) Let \( x \) be an indeterminate over \( {Z}_{p}, k \mathrel{\text{:=}} \) \( {Z}_{p}\left( {x}^{p}\right) \), and \( K \mathrel{\text{:=}} L \mathrel{\text{:=}} {Z}_{p}\left( x\right) \). Then, \( K{ \otimes }_{k}L \) is Noetherian, and therefore it is locally complete intersection by Lemma 4.8. Since \( K \cap L \neq k, K{ \otimes }_{k}L \) is not regular by Theorem 5.1(5).
No
Example 3.3
Example 3.3 Let us prove eqn (3.17) by induction. Because of linearity, it is sufficient to prove this equation for the case where \( {A}_{\left\lbrack p\right\rbrack } \) is a simple \( p \)-vector. First, we slightly change the notation to make it suitable for this purpose. Let us denote a simple \( p \)-vector by \( {A}_{\left\lbrack 1\cdots p\right\rbrack } \), that is, \[ {A}_{\left\lbrack 1\cdots p\right\rbrack } = {\mathbf{v}}_{1} \land \cdots \land {\mathbf{v}}_{p}. \] In addition, we introduce the notation \[ {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack } = {\mathbf{v}}_{1} \land \cdots \land {\mathbf{v}}_{i - 1} \land {\mathbf{v}}_{i + 1} \land \cdots \land {\mathbf{v}}_{p} \] to denote the \( \left( {p - 1}\right) \)-vector obtained when the vector \( {\mathbf{v}}_{i} \) is taken out of the product. Thus, the 'hat' here does not denote the grade involution but rather the omission of the corresponding vector in the wedge product. Using this notation and the definition of the wedge product, we can write \[ \mathbf{u} \land {A}_{\left\lbrack 1\cdots p\right\rbrack } = \frac{1}{p + 1}\left\lbrack {\mathbf{u} \otimes {A}_{\left\lbrack 1\cdots p\right\rbrack } + \mathop{\sum }\limits_{{i = 1}}^{p}{\left( -1\right) }^{i}{\mathbf{v}}_{i} \otimes \left( {\mathbf{u} \land {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack }}\right) }\right\rbrack . \] Assuming that the inductive hypothesis holds for \( \left( {p - 1}\right) \)-vectors, this equation implies that \[ \mathbf{u} \land {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack } \sim \mathbf{u} \otimes {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack } - {\mathbf{u}}_{\flat }\rfloor {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack }. 
\] Hence, we have \[ {\mathbf{v}}_{i} \otimes \left( {\mathbf{u} \land {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack }}\right) \sim {\mathbf{v}}_{i} \otimes \mathbf{u} \otimes {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack } - {\mathbf{v}}_{i} \otimes \left( {{\mathbf{u}}_{\flat }\rfloor {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack }}\right) \] \[ \sim {2g}\left( {{\mathbf{v}}_{i},\mathbf{u}}\right) {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack } - \mathbf{u} \otimes {\mathbf{v}}_{i} \otimes {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack } - {\mathbf{v}}_{i} \otimes \left( {{\mathbf{u}}_{\flat }\rfloor {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack }}\right) . \] Now, keeping in mind the definition of the wedge product, we obtain \[ \mathop{\sum }\limits_{{i = 1}}^{p}{\left( -1\right) }^{i + 1}{\mathbf{v}}_{i} \otimes {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack } = p{A}_{\left\lbrack 1\cdots p\right\rbrack }, \] which yields \[ \mathbf{u} \land {A}_{\left\lbrack 1\cdots p\right\rbrack } \sim \frac{1}{p + 1}\left\lbrack {\mathbf{u} \otimes {A}_{\left\lbrack 1\cdots p\right\rbrack } + \mathbf{u} \otimes \left( {p{A}_{\left\lbrack 1\cdots p\right\rbrack }}\right) }\right. \] \[ \left. {-\mathop{\sum }\limits_{{i = 1}}^{p}{\left( -1\right) }^{i + 1}{2g}\left( {{\mathbf{v}}_{i},\mathbf{u}}\right) {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack } - \mathop{\sum }\limits_{{i = 1}}^{p}{\left( -1\right) }^{i}{\mathbf{v}}_{i} \otimes \left( {{\mathbf{u}}_{\flat }\rfloor {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack }}\right) }\right\rbrack . 
\] However, we can also write \[ \mathop{\sum }\limits_{{i = 1}}^{p}{\left( -1\right) }^{i}{\mathbf{v}}_{i} \otimes \left( {{\mathbf{u}}_{\flat }\rfloor {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots p\right\rbrack }}\right) \] \[ = \mathop{\sum }\limits_{{i = 1}}^{p}{\left( -1\right) }^{i}{\mathbf{v}}_{i} \otimes \left\lbrack {\mathop{\sum }\limits_{{j = 1}}^{{i - 1}}{\left( -1\right) }^{j + 1}g\left( {\mathbf{u},{\mathbf{v}}_{j}}\right) {A}_{\left\lbrack 1\cdots \widehat{\jmath }\cdots \widehat{\imath }\cdots p\right\rbrack } + \mathop{\sum }\limits_{{j = i + 1}}^{p}{\left( -1\right) }^{j}g\left( {\mathbf{u},{\mathbf{v}}_{j}}\right) {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots \widehat{\jmath }\cdots p\right\rbrack }}\right\rbrack \] \[ = \mathop{\sum }\limits_{{j = 1}}^{p}\mathop{\sum }\limits_{{i = j + 1}}^{p}{\left( -1\right) }^{i}{\left( -1\right) }^{j + 1}g\left( {\mathbf{u},{\mathbf{v}}_{j}}\right) {\mathbf{v}}_{i} \otimes {A}_{\left\lbrack 1\cdots \widehat{\jmath }\cdots \widehat{\imath }\cdots p\right\rbrack } \] \[ + \mathop{\sum }\limits_{{j = 1}}^{p}\mathop{\sum }\limits_{{i = 1}}^{{j - 1}}{\left( -1\right) }^{i}{\left( -1\right) }^{j}g\left( {\mathbf{u},{\mathbf{v}}_{j}}\right) {\mathbf{v}}_{i} \otimes {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots \widehat{\jmath }\cdots p\right\rbrack } \] \[ = \mathop{\sum }\limits_{{j = 1}}^{p}{\left( -1\right) }^{j + 1}g\left( {\mathbf{u},{\mathbf{v}}_{j}}\right) \left\lbrack {\mathop{\sum }\limits_{{i = 1}}^{{j - 1}}{\left( -1\right) }^{i + 1}{\mathbf{v}}_{i} \otimes {A}_{\left\lbrack 1\cdots \widehat{\imath }\cdots \widehat{\jmath }\cdots p\right\rbrack } + \mathop{\sum }\limits_{{i = j + 1}}^{p}{\left( -1\right) }^{i}{\mathbf{v}}_{i} \otimes {A}_{\left\lbrack 1\cdots \widehat{\jmath }\cdots \widehat{\imath }\cdots p\right\rbrack }}\right\rbrack \] \[ = \mathop{\sum }\limits_{{j = 1}}^{p}{\left( -1\right) }^{j + 1}g\left( {\mathbf{u},{\mathbf{v}}_{j}}\right) \left( {p - 1}\right) {A}_{\left\lbrack 1\cdots \widehat{\jmath }\cdots p\right\rbrack }, \] where in the last equality we have again used the definition of the exterior product. Substituting this into the bracket above, the \( {2g} \) terms and the last sum combine into \( -\left( {p + 1}\right) {\mathbf{u}}_{\flat }\rfloor {A}_{\left\lbrack 1\cdots p\right\rbrack } \), and we finally obtain \[ \mathbf{u} \land {A}_{\left\lbrack 1\cdots p\right\rbrack } \sim \mathbf{u} \otimes {A}_{\left\lbrack 1\cdots p\right\rbrack } - {\mathbf{u}}_{\flat }\rfloor {A}_{\left\lbrack 1\cdots p\right\rbrack }, \] which is eqn (3.17) for a simple \( p \)-vector, completing the induction.
No
Exercise 7.2.5
Exercise 7.2.5 Let \( X \) be a spectral domain and let \( L \) be its lattice of compact open subsets. Prove that \( \mathcal{J}{\left( L\right) }^{\text{op }} \) is isomorphic to \( \mathrm{K}\left( X\right) \) . Hint. You can describe an isomorphism directly: Send \( p \in \mathrm{K}\left( X\right) \) to the join-prime element \( \uparrow p \) of \( L \) .
No
Exercise 2.7
Exercise 2.7. Let \( \{ B\left( t\right) : t \geq 0\} \) be a standard Brownian motion on the line, and \( T \) be a stopping time with \( \mathbb{E}\left\lbrack T\right\rbrack < \infty \) . Define an increasing sequence of stopping times by \( {T}_{1} = T \) and \( {T}_{n} = T\left( {B}_{n}\right) + {T}_{n - 1} \) where the stopping time \( T\left( {B}_{n}\right) \) is the same function as \( T \), but associated with the Brownian motion \( \left\{ {{B}_{n}\left( t\right) : t \geq 0}\right\} \) given by \[ {B}_{n}\left( t\right) = B\left( {t + {T}_{n - 1}}\right) - B\left( {T}_{n - 1}\right) . \] (a) Show that, almost surely, \[ \mathop{\lim }\limits_{{n \uparrow \infty }}\frac{B\left( {T}_{n}\right) }{n} = 0 \] (b) Show that \( B\left( T\right) \) is integrable. (c) Show that, almost surely, \[ \mathop{\lim }\limits_{{n \uparrow \infty }}\frac{B\left( {T}_{n}\right) }{n} = \mathbb{E}\left\lbrack {B\left( T\right) }\right\rbrack \] Combining (a) and (c) implies that \( \mathbb{E}\left\lbrack {B\left( T\right) }\right\rbrack = 0 \), which is Wald’s lemma.
No
Exercise 2.5
Exercise 2.5 Imagine two ways other than changing the size of the points (as in Section 2.7.2) to introduce a third variable in the plot.
No
Example 5.6.39
Example 5.6.39 Consider the second-degree equation \[ \frac{3}{2}{x}^{2} + {y}^{2} + \frac{3}{2}{z}^{2} - {xz} + x - 1 = 0. \] The matrix of the quadratic form associated to this equation is the matrix \( A \) of Example 5.5.19. Its eigenvalues are \( 1,1,2 \). If we transform the equation using the orthogonal transformation \( O \) (see Example 5.5.19), the equation is transformed to \[ {x}^{\prime 2} + {y}^{\prime 2} + 2{z}^{\prime 2} + \frac{1}{\sqrt{2}}{x}^{\prime } - \frac{1}{\sqrt{2}}{y}^{\prime } = 1. \] Completing the squares reduces it to \[ {\left( {x}^{\prime } + \frac{1}{2\sqrt{2}}\right) }^{2} + {\left( {y}^{\prime } - \frac{1}{2\sqrt{2}}\right) }^{2} + 2{z}^{\prime 2} = \frac{5}{4}. \] This represents an ellipsoid with semi-axes \( \frac{\sqrt{5}}{2},\frac{\sqrt{5}}{2},\frac{\sqrt{10}}{4} \). The center is given by \( {x}^{\prime } = -\frac{1}{2\sqrt{2}},{y}^{\prime } = \frac{1}{2\sqrt{2}} \), and \( {z}^{\prime } = 0 \). Since \( X = O{X}^{\prime } \), substituting these values we get \( x = - \frac{1}{4}, y = - \frac{1}{2\sqrt{2}}, z = - \frac{1}{4} \). The principal planes are given by \( {x}^{\prime } = -\frac{1}{2\sqrt{2}},{y}^{\prime } = \frac{1}{2\sqrt{2}} \), and \( {z}^{\prime } = 0 \). Using the transformation \( {X}^{\prime } = {O}^{t}X \), we see that the principal planes are \( \frac{x + z}{\sqrt{2}} = \frac{1}{2\sqrt{2}}, y = \frac{1}{2\sqrt{2}} \), and \( \frac{-x + z}{\sqrt{2}} = 0 \).
No
Problem 4.56
Problem 4.56 Make a full report with a sketch for each of the following functions. Include diagonal asymptotes if any. 1. \( f\left( x\right) = \frac{{x}^{2} + {2x} + 1}{x - 1} \) 2. \( g\left( x\right) = \frac{2{x}^{2} + {3x} + 1}{x} \) 3. \( h\left( x\right) = {5x} - 3 + \frac{1}{{x}^{2}} \) 4. \( r\left( x\right) = \frac{{x}^{3}}{{x}^{2} + 2} \) 5. \( s\left( x\right) = \frac{{x}^{2}}{{x}^{2} + 1} \) 6. \( q\left( x\right) = \frac{{x}^{2}}{{x}^{2} - 4} \)
No
Example 5.1
Plaintext: MEET ME TODAY. Ciphertext: PHHW PH WRGDB \( \left( {k = 3}\right) \).
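A minimal sketch of this shift cipher (the function name `caesar` is ours, not from the text):

```python
def caesar(text, k):
    """Shift each letter k places forward in the alphabet, wrapping Z -> A."""
    out = []
    for ch in text:
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + k) % 26 + ord('A')))
        else:
            out.append(ch)  # leave spaces (word boundaries) untouched
    return ''.join(out)

print(caesar("MEET ME TODAY", 3))  # PHHW PH WRGDB
```

Decryption is the same function with shift \( -k \).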
No
Exercise 2.6.9
Exercise 2.6.9. Suppose \( E{X}_{i} = 0 \) . Show that if \( \epsilon > 0 \) then \[ \mathop{\liminf }\limits_{{n \rightarrow \infty }}P\left( {{S}_{n} \geq {na}}\right) /{nP}\left( {{X}_{1} \geq n\left( {a + \epsilon }\right) }\right) \geq 1 \] Hint: Let \( {F}_{n} = \left\{ {{X}_{i} \geq n\left( {a + \epsilon }\right) }\right. \) for exactly one \( \left. {i \leq n}\right\} \) .
No
Problem 19
Problem 19. Prove that if a sequence \( 0 \rightarrow A\xrightarrow[]{\varphi }B\xrightarrow[]{\psi }C \) is exact, then so is the sequence \[ 0 \rightarrow \operatorname{Hom}\left( {G, A}\right) \overset{\widetilde{\varphi }}{ \rightarrow }\operatorname{Hom}\left( {G, B}\right) \overset{\widetilde{\psi }}{ \rightarrow }\operatorname{Hom}\left( {G, C}\right) \] for any Abelian group \( G \) . An Abelian group \( G \) is said to be divisible if for any positive integer \( n \) , the map \( G \rightarrow G \) defined by \( g \mapsto {ng} \) is an epimorphism.
No
Example 10.11
Example 10.11 (Inner faithfulness for groups and Lie algebras). If \( H = \mathbb{k}G \) is a group algebra, then Hopf ideals of \( \mathbb{k}G \) are exactly the ideals of the form \( (g - 1 \mid g \in \) \( N \) ), where \( N \) is a normal subgroup of \( G \) (Exercise 9.3.4). It follows that \( \mathcal{H}I \) is the ideal of \( \mathbb{k}G \) that is generated by the set \( I \cap \{ g - 1 \mid g \in G\} \) . Thus, a representation \( V \in \operatorname{Rep}\mathbb{k}G \) is \( G \) -faithful if and only if \( \mathcal{H}\operatorname{Ker}V = 0 \), whereas faithfulness in the general sense of Section 1.2 means that \( \operatorname{Ker}V = 0 \) . Thus, inner faithfulness for \( \mathbb{k}G \) is the same as \( G \) -faithfulness. Similarly, for an ideal \( I \) of the enveloping algebra \( H = \mathrm{U}\mathfrak{g} \), the intersection \( I \cap \mathfrak{g} \) generates \( \mathcal{H}I \) (Exercise 9.3.5). Hence, \( \mathfrak{g} \) -faithfulness of \( V \in \operatorname{Rep}\mathrm{U}\mathfrak{g} \) is equivalent to inner faithfulness for \( \mathrm{U}\mathfrak{g} \) .
No
Example 2
7 For this joint probability matrix with \( \operatorname{Prob}\left( {{x}_{1},{y}_{2}}\right) = {0.3} \), find \( \operatorname{Prob}\left( {{y}_{2} \mid {x}_{1}}\right) \) and \( \operatorname{Prob}\left( {x}_{1}\right) \). \[ P = \left\lbrack \begin{array}{ll} {p}_{11} & {p}_{12} \\ {p}_{21} & {p}_{22} \end{array}\right\rbrack = \left\lbrack \begin{array}{ll} {0.1} & {0.3} \\ {0.2} & {0.4} \end{array}\right\rbrack \;\begin{array}{l} \text{ The entries }{p}_{ij}\text{ add to }1. \\ \text{ Some }i, j\text{ must happen. } \end{array} \]
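The two requested probabilities follow directly from the matrix: \( \operatorname{Prob}\left( {x}_{1}\right) \) is the first row sum, and \( \operatorname{Prob}\left( {{y}_{2} \mid {x}_{1}}\right) \) is \( {p}_{12} \) divided by that sum. A short check (variable names are ours):

```python
# Marginal and conditional probabilities from the joint matrix P.
P = [[0.1, 0.3],
     [0.2, 0.4]]            # rows: x1, x2; columns: y1, y2

prob_x1 = sum(P[0])                    # P(x1) = p11 + p12
prob_y2_given_x1 = P[0][1] / prob_x1   # P(y2 | x1) = p12 / P(x1)

print(round(prob_x1, 10), round(prob_y2_given_x1, 10))  # 0.4 0.75
```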
Yes
Example 1
Example 1 (a) If \( n \) is 3 and \( m \) is 16, then \( {16} = 5\left( 3\right) + 1 \) so \( q \) is 5 and \( r \) is 1. (b) If \( n \) is 10 and \( m \) is 3, then \( 3 = 0\left( {10}\right) + 3 \) so \( q \) is 0 and \( r \) is 3. (c) If \( n \) is 5 and \( m \) is \( -{11} \), then \( -{11} = -3\left( 5\right) + 4 \) so \( q \) is \( -3 \) and \( r \) is 4.
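Python's built-in `divmod` follows the same convention (remainder \( 0 \leq r < n \) for \( n > 0 \)), so it reproduces all three parts, including the negative case:

```python
# For each pair (n, m), write m = q*n + r with 0 <= r < n.
for n, m in [(3, 16), (10, 3), (5, -11)]:
    q, r = divmod(m, n)
    print(f"{m} = {q}({n}) + {r}")
# 16 = 5(3) + 1
# 3 = 0(10) + 3
# -11 = -3(5) + 4
```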
Yes
Exercise 3.1
Exercise 3.1. Prove the theorem via a direct verification of the Anscombe condition (3.2). For the law of large numbers it was sufficient that \( N\left( t\right) \overset{a.s.}{ \rightarrow } + \infty \) as \( t \rightarrow \infty \) . That this is not enough for a "random-sum central limit theorem" can be seen as follows.
No
Problem 1.275
Problem 1.275 Let \( X \) be a non-empty set, and let \( \mathcal{M} \) be a \( \sigma \) -algebra in \( X \) that is an infinite set. Show that a. \( X \) is an infinite set, b. there exists a sequence \( \left\{ {{B}_{1},{B}_{2},\ldots }\right\} \) of members of \( \mathcal{M} \) such that \( \varnothing \subsetneq \cdots \subsetneq {B}_{2} \subsetneq {B}_{1} \subsetneq X \) and, for each \( n \), \( {\left\{ A \cap {B}_{n}\right\} }_{A \in \mathcal{M}} \) is infinite. (Here, \( \subsetneq \) stands for 'is a proper subset of'.)
No
Example 2
4. The following are ten measurements of \( {\mu }^{\prime } \) and eight measurements of \( \mu \) : \[ {\mu }^{\prime } : {17.3},{17.1},{18.2},{17.5},{15.8},{16.9},{17.0},{17.5},{17.8},{17.1} \] \[ \mu : {3.2},{3.2},{3.9},{3.3},{2.7},{3.4},{4.0},{2.9}\text{.} \] Obtain an estimate of \( {\mu }^{\prime } - \mu \) .
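A natural point estimate of \( {\mu }^{\prime } - \mu \) is the difference of the sample means, \( {17.22} - {3.325} = {13.895} \). A quick check:

```python
from statistics import mean

mu_prime = [17.3, 17.1, 18.2, 17.5, 15.8, 16.9, 17.0, 17.5, 17.8, 17.1]
mu       = [3.2, 3.2, 3.9, 3.3, 2.7, 3.4, 4.0, 2.9]

# Estimate mu' - mu by the difference of the sample means.
estimate = mean(mu_prime) - mean(mu)
print(round(estimate, 3))  # 13.895
```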
Yes
Exercise 10.3
Exercise 10.3 Find a rectangular block (not a cube) and label the sides. Determine values of \( {a}_{1},{a}_{2},\ldots ,{a}_{6} \) that represent your prior probability concerning each side coming up when you throw the block. 1. What is your probability of each side coming up on the first throw? 2. Throw the block 20 times. Compute your probability of each side coming up on the next throw.
No
Example 2.13
Example 2.13 \( {L}^{2}\left( \left\lbrack {-a, a}\right\rbrack \right) \) We know from experience in quantum mechanics that all square integrable functions on an interval \( \left\lbrack {-a, a}\right\rbrack \) have an expansion \( {}^{10} \) \[ f = \mathop{\sum }\limits_{{m = - \infty }}^{\infty }{c}_{m}{e}^{i\frac{m\pi x}{a}} \] in terms of the ’basis’ \( {\left\{ \exp \left( i\frac{m\pi x}{a}\right) \right\} }_{m \in \mathbb{Z}} \) . This expansion is known as the Fourier series of \( f \), and we see that the \( {c}_{m} \), commonly known as the Fourier coefficients, are nothing but the components of the vector \( f \) in the basis \( {\left\{ {e}^{i\frac{m\pi x}{a}}\right\} }_{m \in \mathbb{Z}} \) . \( {}^{10} \) This fact is proved in most real analysis books, see Rudin [13].
No
Example 6.1
Measuring the distance \( d \) between the two points on the plane involves measuring the angle \( \phi \). This angle is estimated by averaging \( n = 7 \) readings, each containing error drawn independently from a single normal distribution with mean zero. The sample standard deviation is found from (6.8) to be \( s = {0.0108} \) radian. The original model of the generation of the error in the estimate of the angle, which is (6.7) and which involves the nuisance parameter \( {\sigma }^{2} \), is now replaced by the model \[ {e}_{\mathrm{d}} \leftarrow {0.00408}{\mathrm{t}}_{6}, \tag{6.11} \] which involves no unknown parameters. This distribution is shown in Figure 6.2, along with two possibilities for the unknown actual parent distribution \( \mathrm{N}\left( 0,{\sigma }^{2}/7\right) \). Figure 6.2 The parent distribution attributed to \( {e}_{\mathrm{d}} \), which is the known distribution \( {0.00408}{t}_{6} \) (solid line), and two possibilities for the unknown parent distribution of \( {e}_{\mathrm{d}} \) (dashed lines).
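The scale factor \( {0.00408} \) is simply \( s/\sqrt{n} = {0.0108}/\sqrt{7} \); a one-line check:

```python
import math

s, n = 0.0108, 7            # sample standard deviation and number of readings
scale = s / math.sqrt(n)    # scale factor of the t_{n-1} error distribution
print(round(scale, 5))      # 0.00408
```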
No
Example 5.6
Example 5.6 Let \( {\mathbb{T}}_{1} = \mathbb{Z},{\mathbb{T}}_{2} = {2}^{{\mathbb{N}}_{0}},\left\lbrack {{a}_{1},{b}_{1}}\right\rbrack = \left\lbrack {-1,1}\right\rbrack ,\left\lbrack {{a}_{2},{b}_{2}}\right\rbrack = \left\lbrack {1,4}\right\rbrack \) . We consider \[ I\left( {t}_{2}\right) = {\int }_{-1}^{1}\left( {{t}_{1}^{2} - 2{t}_{1}{t}_{2} + {t}_{2}^{2}}\right) {\Delta }_{1}{t}_{1}. \] We have \[ I\left( {t}_{2}\right) = {\int }_{-1}^{1}\left( {\frac{{t}_{1}^{2} + {t}_{1}{\sigma }_{1}\left( {t}_{1}\right) + {\left( {\sigma }_{1}\left( {t}_{1}\right) \right) }^{2}}{3} - \frac{\left( {1 + 2{t}_{2}}\right) \left( {{t}_{1} + {\sigma }_{1}\left( {t}_{1}\right) }\right) }{2} + \frac{1}{6} + {t}_{2} + {t}_{2}^{2}}\right) {\Delta }_{1}{t}_{1} \] \[ = {\left. \frac{1}{3}{t}_{1}^{3}\right| }_{{t}_{1} = - 1}^{{t}_{1} = 1} - {\left. \frac{1}{2}\left( 1 + 2{t}_{2}\right) {t}_{1}^{2}\right| }_{{t}_{1} = - 1}^{{t}_{1} = 1} + {\left. \left( \frac{1}{6} + {t}_{2} + {t}_{2}^{2}\right) {t}_{1}\right| }_{{t}_{1} = - 1}^{{t}_{1} = 1} \] \[ = \frac{2}{3} + \frac{1}{3} + 2{t}_{2} + 2{t}_{2}^{2} \] \[ = 1 + 2{t}_{2} + 2{t}_{2}^{2} \] Hence, \[ {I}^{{\Delta }_{2}}\left( {t}_{2}\right) = 2 + 2\left( {{\sigma }_{2}\left( {t}_{2}\right) + {t}_{2}}\right) \] \[ = 2 + 2\left( {2{t}_{2} + {t}_{2}}\right) \] \[ = 2 + 6{t}_{2}\text{.} \] On the other hand, using Theorem 5.4, we get \[ {I}^{{\Delta }_{2}}\left( {t}_{2}\right) = {\int }_{-1}^{1}\left( {-2{t}_{1} + {\sigma }_{2}\left( {t}_{2}\right) + {t}_{2}}\right) {\Delta }_{1}{t}_{1} \] \[ = {\int }_{-1}^{1}\left( {-2{t}_{1} + 3{t}_{2}}\right) {\Delta }_{1}{t}_{1} \] \[ = {\int }_{-1}^{1}\left( {-\left( {{t}_{1} + {\sigma }_{1}\left( {t}_{1}\right) }\right) + 1 + 3{t}_{2}}\right) {\Delta }_{1}{t}_{1} \] \[ = - {\left. {t}_{1}^{2}\right| }_{{t}_{1} = - 1}^{{t}_{1} = 1} + {\left. \left( 1 + 3{t}_{2}\right) {t}_{1}\right| }_{{t}_{1} = - 1}^{{t}_{1} = 1} \] \[ = 2 + 6{t}_{2}\text{.} \]
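Both closed forms can be checked directly from the definitions: on \( {\mathbb{T}}_{1} = \mathbb{Z} \) the \( \Delta \)-integral over \( \left\lbrack {-1,1}\right\rbrack \) is the sum \( f\left( {-1}\right) + f\left( 0\right) \), and on \( {\mathbb{T}}_{2} = {2}^{{\mathbb{N}}_{0}} \), where \( {\sigma }_{2}\left( {t}_{2}\right) = 2{t}_{2} \), the \( \Delta \)-derivative is a difference quotient. A sketch (the sample points are ours):

```python
# On T1 = Z, the Delta-integral of f over [-1, 1] reduces to f(-1) + f(0).
def I(t2):
    f = lambda t1: t1**2 - 2*t1*t2 + t2**2
    return f(-1) + f(0)

for t2 in [1, 2, 4, 8]:                  # sample points of T2 = 2^{N_0}
    assert I(t2) == 1 + 2*t2 + 2*t2**2   # closed form obtained above
    # On T2, sigma_2(t2) = 2*t2, so the Delta-derivative is the
    # difference quotient over [t2, 2*t2]:
    deriv = (I(2*t2) - I(t2)) / (2*t2 - t2)
    assert deriv == 2 + 6*t2             # matches both computations above
print("closed forms verified")
```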
No
Example 5.2
5.2 Prove relations (5.22).
No
Example 6.2
The \( {C}^{1} \) path \( \gamma : \mathbb{R} \rightarrow \mathbb{H} \) defined by \( \gamma \left( t\right) = i{e}^{t} \) travels along the geodesic \( \{ z \in \mathbb{H} : \operatorname{Re}z = 0\} \) . Moreover, \[ {\left| {\gamma }^{\prime }\left( t\right) \right| }_{\gamma \left( t\right) } = \frac{{\left\langle i{e}^{t}, i{e}^{t}\right\rangle }^{1/2}}{\operatorname{Im}\left( {i{e}^{t}}\right) } \] \[ = \frac{{\left\langle \left( 0,{e}^{t}\right) ,\left( 0,{e}^{t}\right) \right\rangle }^{1/2}}{{e}^{t}} = 1 \] Thus, we obtain a path \[ \mathbb{R} \ni t \mapsto \left( {\gamma \left( t\right) ,{\gamma }^{\prime }\left( t\right) }\right) \in S\mathbb{H} \] in the unit tangent bundle with \( \left( {\gamma \left( 0\right) ,{\gamma }^{\prime }\left( 0\right) }\right) = \left( {i, i}\right) \) . Now let us take \( \left( {z, v}\right) \in S\mathbb{H} \) . One can show that there exists a unique Möbius transformation \( T \) such that (see Fig. 6.7) \[ T\left( i\right) = z\;\text{ and }\;{T}^{\prime }\left( i\right) i = v, \] which thus takes the geodesic \( i{e}^{t} \) traversing the positive part of the imaginary axis to the geodesic \( \gamma \left( t\right) \) passing through \( z \) with direction \( v \) at this point. More precisely, let \( x, y \in \mathbb{R} \cup \{ \infty \} \) be, respectively, the limits \( \gamma \left( {-\infty }\right) \) and \( \gamma \left( {+\infty }\right) \) . We consider four cases: 1. when \( x, y \in \mathbb{R} \) and \( x < y \), we have \[ T\left( w\right) = \frac{{\alpha yw} + x}{{\alpha w} + 1},\;\text{ where }\alpha = \left| \frac{z - x}{z - y}\right| ; \] 2. when \( x, y \in \mathbb{R} \) and \( x > y \), we have \[ T\left( w\right) = \frac{{yw} - {\alpha x}}{w - \alpha },\;\text{ where }\alpha = \left| \frac{z - y}{z - x}\right| ; \] 3. when \( x \in \mathbb{R} \) and \( y = \infty \), we have \[ T\left( w\right) = {\alpha w} + x,\;\text{ where }\alpha = \operatorname{Im}z; \] 4. 
when \( x = \infty \) and \( y \in \mathbb{R} \), we have \[ T\left( w\right) = - \alpha /w + y,\;\text{ where }\alpha = \operatorname{Im}z. \] Now we use the map \( T \) (that depends on \( z \) and \( v \) ) to introduce the geodesic flow. Definition 6.7 The geodesic flow \( {\varphi }_{t} : S\mathbb{H} \rightarrow S\mathbb{H} \) is defined by \[ {\varphi }_{t}\left( {z, v}\right) = \left( {\gamma \left( t\right) ,{\gamma }^{\prime }\left( t\right) }\right) \] where \( \gamma \left( t\right) = T\left( {i{e}^{t}}\right) \) . We verify that \( {\varphi }_{t} \) is indeed a flow.
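As an illustration of case 1, take \( x = -1, y = 1 \) and the point \( z = i \) with direction \( v = 1 \) on the unit-semicircle geodesic; then \( \alpha = 1 \) and \( T\left( w\right) = \left( {w - 1}\right) /\left( {w + 1}\right) \), which indeed satisfies \( T\left( i\right) = z \) and \( {T}^{\prime }\left( i\right) i = v \). A numerical check (the sample values are ours):

```python
# Case 1 with x = -1, y = 1, z = i, v = 1 (unit semicircle geodesic).
x, y, z, v = -1.0, 1.0, 1j, 1.0
alpha = abs((z - x) / (z - y))          # here |i - (-1)| / |i - 1| = 1

T  = lambda w: (alpha * y * w + x) / (alpha * w + 1)
dT = lambda w, h=1e-6: (T(w + h) - T(w - h)) / (2 * h)  # central difference

print(abs(T(1j) - z) < 1e-12)       # True: T(i) = z
print(abs(dT(1j) * 1j - v) < 1e-6)  # True: T'(i) i = v
```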
No
Exercise 2.6.1
Exercise 2.6.1. Compute the topological entropy of an expanding endomorphism \( {E}_{m} : {S}^{1} \rightarrow {S}^{1} \) .
Yes
Example 9.25
Example 9.25. Let a signal be a digital image generated by the following 4- dimensional commutative linear representation system \( \sigma = \left( {\left( {{K}^{4},{F}_{\alpha },{F}_{\beta }}\right) ,{x}^{0}, h}\right) \) with a vector index \( \nu = \left( 4\right) \) , where \( {F}_{\alpha } = \left\lbrack \begin{array}{llll} 0 & 0 & 0 & {43} \\ 1 & 0 & 0 & {12} \\ 0 & 1 & 0 & {24} \\ 0 & 0 & 1 & {43} \end{array}\right\rbrack ,{F}_{\beta } = \left\lbrack \begin{matrix} {32} & {145} & {11} & {225} \\ {31} & {235} & {64} & {85} \\ {29} & {196} & {73} & {212} \\ {37} & {174} & {207} & {57} \end{matrix}\right\rbrack \) , \( {x}^{0} = {\left\lbrack 1,0,0,0\right\rbrack }^{T}, h = \left\lbrack {{12},9,{15},3}\right\rbrack \) . Then the approximate realization problem is solved by the following algorithm: <table><thead><tr><th>characteristic polynomial for</th><th colspan="6">values of variable and polynomial</th></tr></thead><tr><td rowspan="2">\( {H}_{\left( \left( 0,0\right) ,\left( 0,1\right) \right) } \)</td><td>4</td><td>5</td><td>226</td><td>231</td><td></td><td></td></tr><tr><td>7</td><td>23</td><td>6</td><td>228</td><td></td><td></td></tr><tr><td rowspan="2">\( {H}_{\left( \left( 0,0\right) ,\left( 0,1\right) ,\left( 0,2\right) \right) } \)</td><td>4</td><td>14</td><td>16</td><td>229</td><td>233</td><td>240</td></tr><tr><td>1</td><td>2</td><td>229</td><td>10</td><td>10</td><td>237</td></tr><tr><td rowspan="2">\( {H}_{\left( \left( 0,0\right) ,\left( 0,1\right) ,\left( 0,2\right) ,\left( 0,3\right) \right) } \)</td><td>15</td><td>239</td><td></td><td></td><td></td><td></td></tr><tr><td>232</td><td>230</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="2">\( {H}_{\left( \left( 0,0\right) ,\left( 0,1\right) ,\left( 0,2\right) ,\left( 0,3\right) ,\left( 0,4\right) \right) } \)</td><td>0</td><td>3</td><td>17</td><td>223</td><td>235</td><td>240</td></tr><tr><td>0</td><td>237</td><td>6</td><td>226</td><td>2</td><td>224</td></tr><tr><td rowspan="2">\( 
{H}_{\left( \left( 0,0\right) ,\left( 0,1\right) ,\left( 0,2\right) ,\left( 1,0\right) \right) } \)</td><td>2</td><td>14</td><td>229</td><td>230</td><td>234</td><td></td></tr><tr><td>223</td><td>1</td><td>2</td><td>4</td><td>4</td><td></td></tr></table> Numbers in the upper row denote the values of the variable in the characteristic polynomial, and numbers in the lower row denote the corresponding polynomial values. Fig. 9.6 In Example 9.25, the left is a \( {50} \times {51} \) sized image of the original image \( {a}_{\sigma } \) for the given 4-dimensional commutative linear representation system \( \sigma \) . The right is a \( {50} \times {51} \) sized image of an approximate system for the given \( \sigma \) . 1) Since there is relatively little change near zero of the characteristic polynomial from \( {H}_{\left( \left( 0,0\right) ,\left( 0,1\right) \right) } \) to \( {H}_{\left( \left( 0,0\right) ,\left( 0,1\right) ,\left( 0,2\right) \right) } \), we determine the number \( {\nu }_{1} \) of dimensions, which is 3. 2) Since there is relatively little change near zero of the characteristic polynomial from \( {H}_{\left( \left( 0,0\right) ,\left( 0,1\right) \right) } \) to \( {H}_{\left( \left( 0,0\right) ,\left( 0,1\right) ,\left( 1,0\right) \right) } \), we determine the number \( {\nu }_{2} \) of dimensions, which is 0. Then the non-linear integer programming for digital images produces a commutative linear representation system \( {\sigma }_{s} = \left( {{K}^{3},{F}_{\alpha s},{F}_{\beta s},{x}_{s}^{0},{h}_{s}}\right) \) with a vector index \( \nu = \left( 3\right) \) which has the least mean value 76.0 for the absolute value of the difference in the range. 
Hence, the quasi-reachable standard system \( {\sigma }_{s} = \left( {{K}^{3},{F}_{\alpha s},{F}_{\beta s},{x}_{s}^{0},{h}_{s}}\right) \) with a vector index \( \nu = \left( 3\right) \) can be obtained as follows: \[ {F}_{\alpha s} = \left\lbrack \begin{array}{lll} 0 & 0 & {214} \\ 1 & 0 & {31} \\ 0 & 1 & {122} \end{array}\right\rbrack ,{F}_{\beta s} = \left\lbrack \begin{matrix} {169} & {146} & {41} \\ {78} & {46} & {90} \\ {66} & {177} & {191} \end{matrix}\right\rbrack ,{x}^{0} = {\left\lbrack 1,0,0\right\rbrack }^{T}, h = \left\lbrack {{10},{12},{15}}\right\rbrack . \] Based on Fig. 9.6, it is felt that the image generated by the approximate system shows some patterns characteristic of the original image.
Yes
Example 7.4
Example 7.4 The stationary covariance function of \( X\left( t\right) \) in Example 7.3 with \( k = 2 \) has the expression \[ c\left( {t, s}\right) = \frac{2}{\rho \left( {\rho + {2\alpha }}\right) }{e}^{-\rho \left( {t - s}\right) } + \frac{2}{{\rho }^{2} - 4{\alpha }^{2}}\left( {{e}^{-{2\alpha }\left( {t - s}\right) } - {e}^{-\rho \left( {t - s}\right) }}\right) ,\;t > s, \tag{7.12} \] and can be obtained by an extension of the mean and covariance equations in (7.6) to the case in which the driving noise is colored or by direct calculations. \( \diamondsuit \)
Yes
Example 4.22
Example 4.22. Let \( A \) be the matrix \( \mathrm{A} = \) <table><tr><td>1</td><td>2</td><td>3</td></tr><tr><td>4</td><td>5</td><td>6</td></tr></table> To create a \( 3 \times 2 \) tiling using \( A \) as a tile, we write \( B = \operatorname{repmat}\left( {A,3,2}\right) \), which results in \( \mathrm{B} = \) <table><tr><td>1</td><td>2</td><td>3</td><td>1</td><td>2</td><td>3</td></tr><tr><td>4</td><td>5</td><td>6</td><td>4</td><td>5</td><td>6</td></tr><tr><td>1</td><td>2</td><td>3</td><td>1</td><td>2</td><td>3</td></tr><tr><td>4</td><td>5</td><td>6</td><td>4</td><td>5</td><td>6</td></tr><tr><td>1</td><td>2</td><td>3</td><td>1</td><td>2</td><td>3</td></tr><tr><td>4</td><td>5</td><td>6</td><td>4</td><td>5</td><td>6</td></tr></table>
No
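The tiling in Example 4.22 can be reproduced outside MATLAB; below is a minimal Python sketch of a `repmat`-style helper for 2-D lists (the function name and list representation are our own, not from the text):

```python
def repmat(A, m, n):
    # Tile the 2-D list A in an m-by-n grid of copies, like MATLAB's repmat.
    # Note: list repetition shares row references, which is fine for reading.
    return [row * n for row in A] * m

A = [[1, 2, 3],
     [4, 5, 6]]
B = repmat(A, 3, 2)   # a 6x6 tiling, matching the table above
```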
Exercise 1.3.11
Exercise 1.3.11. ([28], Proposition 3.4) Let \( M \) be an \( R \) -module, and \( S = \) \( \{ I \subseteq R \mid I = \operatorname{ann}\left( m\right) \), some \( m \in M\} \) . Prove that a maximal element of \( S \) is prime. \( \diamond \)
No
Exercise 4.4.5
Exercise 4.4.5. Let \( {A}_{t} = t - {T}_{N\left( t\right) - 1} \) be the "age" at time \( t \), i.e., the amount of time since the last renewal. If we fix \( x > 0 \) then \( H\left( t\right) = P\left( {{A}_{t} > x}\right) \) satisfies the renewal equation \[ H\left( t\right) = \left( {1 - F\left( t\right) }\right) \cdot {1}_{\left( x,\infty \right) }\left( t\right) + {\int }_{0}^{t}H\left( {t - s}\right) {dF}\left( s\right) \] so \( P\left( {{A}_{t} > x}\right) \rightarrow \frac{1}{\mu }{\int }_{\left( x,\infty \right) }\left( {1 - F\left( t\right) }\right) {dt} \), which is the limit distribution for the residual lifetime \( {B}_{t} = {T}_{N\left( t\right) } - t \) . Remark. The last result can be derived from Example 4.4.4 by noting that if \( t > x \) then \( P\left( {{A}_{t} \geq x}\right) = P\left( {{B}_{t - x} > x}\right) = P\left( {\text{no renewal in}\left( {t - x, t}\right\rbrack }\right) \) . To check the placement of the strict inequality, recall \( {N}_{t} = \inf \left\{ {k : {T}_{k} > t}\right\} \) so we always have \( {A}_{s} \geq 0 \) and \( {B}_{s} > 0 \) .
No
Exercise 7.1.4
Exercise 7.1.4. By taking the product of two of the three topologies \( {\mathbb{R}}_{ \leftrightarrow },{\mathbb{R}}_{ \rightarrow },{\mathbb{R}}_{ \leftarrow } \), we get three topologies on \( {\mathbb{R}}^{2} \) . Which of the following subspaces are Hausdorff? 1. \( \{ \left( {x, y}\right) : x + y \in \mathbb{Z}\} \) . 2. \( \{ \left( {x, y}\right) : {xy} \in \mathbb{Z}\} \) . 3. \( \left\{ {\left( {x, y}\right) : {x}^{2} + {y}^{2} \leq 1}\right\} \) .
No
Example 5.3.8
Example 5.3.8 It is easy to check that for \( q = 1 \) the formula in Theorem 5.3.7 becomes the formula in Theorem 5.3.2. For \( q = 2 \), the formula in Theorem 5.3.7 becomes \[ {\left\lbrack n\right\rbrack }_{m + 2}{\widehat{\chi }}_{m,2,{1}^{n - m - 2}}^{\lambda } = \left\lbrack {{c}_{3}^{\lambda }\left( 2\right) - 3{c}_{2}^{\lambda }\left( 2\right) }\right\rbrack \mathop{\sum }\limits_{{j = 2}}^{{m + 1}}s\left( {m + 1, j}\right) {c}_{j}^{\lambda }\left( m\right) \] \[ - m\mathop{\sum }\limits_{{j = 2}}^{{m + 2}}{c}_{j}^{\lambda }\left( m\right) \left\lbrack {{2s}\left( {m + 1, j - 1}\right) - \left( {m + 1}\right) s\left( {m + 1, j}\right) }\right\rbrack . \] In particular, for \( m = q = 2 \) we get \[ {\left\lbrack n\right\rbrack }_{m + 2}{\widehat{\chi }}_{2,2,{1}^{n - 4}}^{\lambda } = {\left\lbrack {c}_{3}^{\lambda }\left( 2\right) - 3{c}_{2}^{\lambda }\left( 2\right) \right\rbrack }^{2} - {26}{c}_{2}^{\lambda }\left( 2\right) + {18}{c}_{3}^{\lambda }\left( 2\right) - 4{c}_{4}^{\lambda }\left( 2\right) \] \[ = 4{d}_{1}{\left( \lambda \right) }^{2} - {12}{d}_{2}\left( \lambda \right) + {4n}\left( {n - 1}\right) . \]
Yes
Example 15
Example 15. Let \( V \) be the space of all polynomial functions from \( R \) into \( R \) of the form \[ f\left( x\right) = {c}_{0} + {c}_{1}x + {c}_{2}{x}^{2} + {c}_{3}{x}^{3} \] that is, the space of polynomial functions of degree three or less. The differentiation operator \( D \) of Example 2 maps \( V \) into \( V \), since \( D \) is 'degree decreasing.' Let \( \mathcal{B} \) be the ordered basis for \( V \) consisting of the four functions \( {f}_{1},{f}_{2},{f}_{3},{f}_{4} \) defined by \( {f}_{j}\left( x\right) = {x}^{j - 1} \) . Then \[ \left( {D{f}_{1}}\right) \left( x\right) = 0,\;D{f}_{1} = 0{f}_{1} + 0{f}_{2} + 0{f}_{3} + 0{f}_{4} \] \[ \left( {D{f}_{2}}\right) \left( x\right) = 1,\;D{f}_{2} = 1{f}_{1} + 0{f}_{2} + 0{f}_{3} + 0{f}_{4} \] \[ \left( {D{f}_{3}}\right) \left( x\right) = {2x},\;D{f}_{3} = 0{f}_{1} + 2{f}_{2} + 0{f}_{3} + 0{f}_{4} \] \[ \left( {D{f}_{4}}\right) \left( x\right) = 3{x}^{2},\;D{f}_{4} = 0{f}_{1} + 0{f}_{2} + 3{f}_{3} + 0{f}_{4} \] so that the matrix of \( D \) in the ordered basis \( \mathcal{B} \) is \[ \left\lbrack D\right\rbrack = \left\lbrack \begin{matrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{matrix}\right\rbrack . \]
No
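The matrix of \( D \) above can be generated mechanically from the relations \( D{x}^{j - 1} = \left( {j - 1}\right) {x}^{j - 2} \); here is a small Python sketch (the helper name is our own) that builds the matrix of differentiation on the basis \( 1, x,\ldots ,{x}^{n - 1} \):

```python
def diff_matrix(n):
    # Matrix of the differentiation operator D on the ordered basis
    # 1, x, ..., x^(n-1): column j (0-indexed) holds the coordinates of
    # D(x^j) = j x^(j-1), so the only nonzero entry is M[j-1][j] = j.
    M = [[0] * n for _ in range(n)]
    for j in range(1, n):
        M[j - 1][j] = j
    return M
```

For n = 4 this reproduces the 4×4 matrix displayed in Example 15.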
Problem 99
Problem 99. Prove that \( {o}_{{2j} + 1} = {\beta }^{ * }{o}_{2j} \) .
No
Example 7.4
Example 7.4. Consider the finite-state automaton shown in Figure 7.2, whose associated alphabet is \( \{ a, b\} \) . The vertices represented by double circles are the accept states, and the start state is indicated by the letter S. The language for this FSA consists of all words in \( \{ a, b{\} }^{ * } \) that contain an even number of \( b \) ’s.
No
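The FSA of Example 7.4 only needs to track the parity of the \( b \) 's seen so far; a Python sketch of the acceptor (the two-state encoding is our own, chosen to match "even number of \( b \) 's"):

```python
def accepts(word):
    # Two-state DFA over {a, b}: state 0 = even number of b's seen so far
    # (the start state and the only accept state), state 1 = odd number.
    state = 0
    for ch in word:
        if ch == 'b':
            state ^= 1   # reading 'b' flips the parity; 'a' leaves it alone
    return state == 0
```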
Example 6
Example 6 (Non-elementary substructure). In the language of rings, \( \mathbb{R} \) is a substructure of \( \mathbb{C} \), since it is a subfield. On the other hand, in the notation of Example 4, \( \varphi \left( \mathbb{R}\right) = {\mathbb{R}}_{0}^{ + } \) and \( \varphi \left( \mathbb{C}\right) = \mathbb{C} \), so \( \varphi \left( \mathbb{R}\right) \neq \varphi \left( \mathbb{C}\right) \cap \mathbb{R} \) and therefore \( \mathbb{R} \npreceq \mathbb{C} \).
No
Exercise 4.4.32
Show that \( {\int }_{0}^{t}\operatorname{sgn}\left( {B\left( s\right) }\right) {dB}\left( s\right) \) is a Brownian motion.
No
Example 5.4.6
Example 5.4.6. Consider the operator \( D : {C}^{1}\left( {\left\lbrack {a, b}\right\rbrack ,\mathbb{R}}\right) \rightarrow {C}^{0}\left( {\left\lbrack {a, b}\right\rbrack ,\mathbb{R}}\right) \) defined by taking the derivative \( D\left( f\right) = {f}^{\prime } \) . This is not a ring homomorphism. It is true that \( D\left( {f + g}\right) = D\left( f\right) + D\left( g\right) \) but the product rule is \( D\left( {fg}\right) = D\left( f\right) g + {fD}\left( g\right) \), which in general is not equal to \( D\left( f\right) D\left( g\right) \) .
No
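The failure of multiplicativity in Example 5.4.6 is easy to see numerically; as a sketch, we use a central-difference quotient as a stand-in for \( D \) (the step size and helper name are our own choices):

```python
def deriv(f, x, h=1e-6):
    # Central-difference approximation to f'(x), standing in for D.
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x              # D(f) = 1
g = lambda x: x              # D(g) = 1
fg = lambda x: f(x) * g(x)   # (fg)(x) = x^2, so D(fg)(x) = 2x
# At x = 3 the product rule gives D(fg)(3) = 1*3 + 3*1 = 6,
# while D(f)(3) * D(g)(3) = 1, so D is additive but not multiplicative.
```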
Example 1.4.6
Example 1.4.6 Standard Matrices of Reflections --- Find the standard matrices of the linear transformations that reflect through the lines in the direction of the following vectors, and depict these reflections geometrically. a) \( \mathbf{u} = \left( {0,1}\right) \in {\mathbb{R}}^{2} \), and b) \( \mathbf{w} = \left( {1,1,1}\right) \in {\mathbb{R}}^{3} \) . ## Solutions: a) Since \( \mathbf{u} \) is a unit vector, the standard matrix of \( {F}_{\mathbf{u}} \) is simply \[ \left\lbrack {F}_{\mathbf{u}}\right\rbrack = 2{\mathbf{{uu}}}^{T} - I = 2\left\lbrack \begin{array}{l} 0 \\ 1 \end{array}\right\rbrack \left\lbrack \begin{array}{ll} 0 & 1 \end{array}\right\rbrack - \left\lbrack \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right\rbrack = \left\lbrack \begin{matrix} - 1 & 0 \\ 0 & 1 \end{matrix}\right\rbrack . \] Geometrically, this matrix acts as a reflection through the \( y \) -axis: ![0191acc4-e192-718e-b434-ae444166e348_60_759761.jpg](images/0191acc4-e192-718e-b434-ae444166e348_60_759761.jpg) We can verify our computation of \( \left\lbrack {F}_{\mathbf{u}}\right\rbrack \) by noting that multiplication by it indeed just flips the sign of the \( x \) -entry of the input vector: \[ \left\lbrack {F}_{\mathbf{u}}\right\rbrack \mathbf{v} = \left\lbrack \begin{matrix} - 1 & 0 \\ 0 & 1 \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {v}_{1} \\ {v}_{2} \end{array}\right\rbrack = \left\lbrack \begin{matrix} - {v}_{1} \\ {v}_{2} \end{matrix}\right\rbrack . \] b) Since \( \mathbf{w} \) is not a unit vector, we have to first normalize it (i.e., divide it by its length) to turn it into a unit vector pointing in the same direction. Well, \( \parallel \mathbf{w}\parallel = \sqrt{{1}^{2} + {1}^{2} + {1}^{2}} = \sqrt{3} \), so we let \( \mathbf{u} = \mathbf{w}/\parallel \mathbf{w}\parallel = \) \( \left( {1,1,1}\right) /\sqrt{3} \) . 
The standard matrix of \( {F}_{\mathbf{u}} \) is then \[ \left\lbrack {F}_{\mathbf{u}}\right\rbrack = 2{\mathbf{{uu}}}^{T} - I = 2\left\lbrack \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right\rbrack \left\lbrack \begin{array}{lll} 1 & 1 & 1 \end{array}\right\rbrack /3 - \left\lbrack \begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right\rbrack \] \[ = \frac{1}{3}\left\lbrack \begin{matrix} - 1 & 2 & 2 \\ 2 & - 1 & 2 \\ 2 & 2 & - 1 \end{matrix}\right\rbrack . \] This reflection is a bit more difficult to visualize since it acts on \( {\mathbb{R}}^{3} \) instead of \( {\mathbb{R}}^{2} \), but we can at least try: ![0191acc4-e192-718e-b434-ae444166e348_61_517428.jpg](images/0191acc4-e192-718e-b434-ae444166e348_61_517428.jpg) As a bit of a sanity check, we can verify that this matrix leaves w (and thus \( \mathbf{u} \) ) unchanged, as the reflection \( {F}_{\mathbf{u}} \) should: \[ \left\lbrack {F}_{\mathbf{u}}\right\rbrack \mathbf{w} = \frac{1}{3}\left\lbrack \begin{matrix} - 1 & 2 & 2 \\ 2 & - 1 & 2 \\ 2 & 2 & - 1 \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right\rbrack = \left\lbrack \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right\rbrack = \mathbf{w}. \] --- In general, if \( \mathbf{u} \in {\mathbb{R}}^{n} \) is a unit vector then \( {F}_{\mathbf{u}}\left( \mathbf{u}\right) = \mathbf{u} : {F}_{\mathbf{u}} \) leaves the entire line in the direction of \( \mathbf{u} \) alone. --- In general, to project or reflect a vector we should start by finding the standard matrix of the corresponding linear transformation. Once we have that matrix, all we have to do is multiply it by the starting vector in order to find where it ends up after the linear transformation is applied to it. We illustrate this technique with the following example. ---
No
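Both standard matrices in Example 1.4.6 come from the single formula \( \left\lbrack {F}_{\mathbf{u}}\right\rbrack = 2\mathbf{u}{\mathbf{u}}^{T} - I \); a Python sketch (pure lists, no linear-algebra library, helper name ours) that normalizes \( \mathbf{w} \) and builds the matrix:

```python
def reflection_matrix(w):
    # Standard matrix [F_u] = 2 u u^T - I of the reflection through the
    # line spanned by w, with u = w / ||w||; since u u^T = w w^T / ||w||^2,
    # no square root is needed.
    n = len(w)
    norm2 = sum(wi * wi for wi in w)
    return [[2 * w[i] * w[j] / norm2 - (1 if i == j else 0)
             for j in range(n)] for i in range(n)]
```

`reflection_matrix([0, 1])` recovers the 2×2 matrix of part (a), and `reflection_matrix([1, 1, 1])` the 3×3 matrix of part (b).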
Example 3
Let \( X \) and \( Y \) have the CDFs \[ F\left( x\right) = \frac{{x}^{2}}{4},0 < x < 2\text{ and }G\left( x\right) = \exp \left\{ {\frac{1}{2} - \frac{1}{x}}\right\} ,0 < x < 2, \] respectively. The \( {\mathrm{{WDRD}}}_{\alpha } \) between \( X \) and \( Y \) is plotted in Fig. 1. This shows that the \( {\mathrm{{WDRD}}}_{\alpha } \) is not monotone.
No
Example 5.1
Example 5.1 Choose primitive polynomial \( p\left( x\right) = {x}^{4} + x + 1 \in {Z}_{2}\left\lbrack x\right\rbrack \) . The nonzero elements in \( F = {Z}_{2}\left\lbrack x\right\rbrack /\left( {p\left( x\right) }\right) \) are listed in the table in Example 4.3. Using this field \( F \), we obtain the following generator polynomial \( g\left( x\right) \) for an \( {RS}\left( {{15},2}\right) \) code \( C \) . \[ g\left( x\right) = \left( {x - a}\right) \left( {x - {a}^{2}}\right) \left( {x - {a}^{3}}\right) \left( {x - {a}^{4}}\right) \] \[ = \cdots \] \[ = {x}^{4} + {a}^{13}{x}^{3} + {a}^{6}{x}^{2} + {a}^{3}x + {a}^{10} \] To construct one of the codewords in \( C \), consider \( b\left( x\right) = {a}^{10}{x}^{9} \in F\left\lbrack x\right\rbrack \) . Then \[ b\left( x\right) g\left( x\right) = {a}^{10}{x}^{13} + {a}^{8}{x}^{12} + a{x}^{11} + {a}^{13}{x}^{10} + {a}^{5}{x}^{9} \] is one of the codewords in \( C \) .
No
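The expansion of \( g\left( x\right) \) elided by the dots in Example 5.1 can be checked by machine; below is a sketch of \( \mathrm{{GF}}\left( {2}^{4}\right) \) arithmetic with \( p\left( x\right) = {x}^{4} + x + 1 \) (the bit-level encoding, low bit = constant term, is our own convention):

```python
def gf_mul(u, v, poly=0b10011):
    # Multiply in GF(2^4) = Z2[x]/(x^4 + x + 1); elements are 4-bit ints.
    r = 0
    while v:
        if v & 1:
            r ^= u
        u <<= 1
        if u & 0b10000:
            u ^= poly     # reduce modulo p(x)
        v >>= 1
    return r

a_pow = [1]               # a_pow[i] = a^i, where a is the class of x (0b0010)
for _ in range(15):
    a_pow.append(gf_mul(a_pow[-1], 0b0010))

g = [1]                   # g(x) = (x - a)(x - a^2)(x - a^3)(x - a^4);
for i in range(1, 5):     # in characteristic 2, -a^i = a^i
    root, new = a_pow[i], [0] * (len(g) + 1)
    for k, c in enumerate(g):      # multiply g by (x + a^i)
        new[k + 1] ^= c
        new[k] ^= gf_mul(c, root)
    g = new
# g, lowest degree first: [a^10, a^3, a^6, a^13, 1], matching the example
```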
Example 2.14
During some construction, a network blackout occurs on Monday with probability 0.7 and on Tuesday with probability 0.5 . Then, does it appear on Monday or Tuesday with probability \( {0.7} + {0.5} = {1.2} \) ? Obviously not, because probability should always be between 0 and 1 ! Probabilities are not additive here because blackouts on Monday and Tuesday are not mutually exclusive. In other words, it is not impossible to see blackouts on both days.
No
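The correct combination uses inclusion–exclusion, \( P\left( {A \cup B}\right) = P\left( A\right) + P\left( B\right) - P\left( {A \cap B}\right) \). As a sketch, if the two blackouts were independent (an assumption we add purely for illustration, not stated in the example):

```python
p_mon, p_tue = 0.7, 0.5
p_both = p_mon * p_tue              # 0.35, valid only under independence
p_either = p_mon + p_tue - p_both   # inclusion-exclusion gives 0.85
```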
Exercise 6.18
Show that if \( \Lambda \) is a hyperbolic set for a flow \( \Phi \), then the stable and unstable subspaces \( {E}^{s}\left( x\right) \) and \( {E}^{u}\left( x\right) \) vary continuously with \( x \in \Lambda \) .
No
Exercise 8.7.3
Exercise 8.7.3. Model the problem of finding a nontrivial factor of a given integer as a nonlinear integer optimization problem of the form (8.1). Then explain why the algorithm of this chapter does not imply a polynomial-time algorithm for factoring.
No
Exercise 7.1.3
Exercise 7.1.3. Which subspaces of the line with two origins in Example 5.5.2 are Hausdorff?
No
Exercise 10
Exercise 10 (Tangents to graphs)
No
Example 2
We next consider a problem for the set \( G \) in Figure 5.6, but now the reflection direction is the unit vector in the direction \( \left( {2,3}\right) \) . See Figures 5.7a, b. Here, several possibilities are of interest. Clearly, this direction cannot be achieved as a convex combination of the vectors \( \left( {1,0}\right) \) and \( \left( {1,1}\right) \) . There are several alternative constructions. For example, \( \left( {2,3}\right) \) is in the convex cone generated by the vectors \( \left( {1,1}\right) \) and \( \left( {1,2}\right) \), and in fact we can take \( {p}^{h}\left( {x, x + \left( {h, h}\right) }\right) = {p}^{h}\left( {x, x + \left( {h,{2h}}\right) }\right) = 1/2 \), as in Figure 5.7a. An alternative is to exploit the possibility of transitions between states in \( \partial {G}_{h}^{ + } \) . For example, we can take \( {p}^{h}\left( {x, x + \left( {h, h}\right) }\right) = 2/3 \) and \( {p}^{h}\left( {x, x + \left( {0, h}\right) }\right) = 1/3 \) , as in Figure 5.7b.
No
Example 4.7
Given \( a > 4 \), consider the map \( f : \left\lbrack {0,1}\right\rbrack \rightarrow \mathbb{R} \) defined by \[ f\left( x\right) = {ax}\left( {1 - x}\right) . \] We have \[ f\left( \left\lbrack {\frac{1}{a},\frac{1}{2}}\right\rbrack \right) = \left\lbrack {1 - \frac{1}{a},\frac{a}{4}}\right\rbrack \supset \left\lbrack {1 - \frac{1}{a},1}\right\rbrack \] and \[ f\left( \left\lbrack {1 - \frac{1}{a},1}\right\rbrack \right) = \left\lbrack {0,1 - \frac{1}{a}}\right\rbrack \supset \left\lbrack {\frac{1}{a},\frac{1}{2}}\right\rbrack . \] Since \[ \left\lbrack {\frac{1}{a},\frac{1}{2}}\right\rbrack \cap \left\lbrack {1 - \frac{1}{a},1}\right\rbrack = \varnothing \] it follows from Proposition 4.2 that \( f \) has a periodic point in \( \left\lbrack {1/a,1/2}\right\rbrack \) with period 2. The criterion in Proposition 4.2 can be used to establish the following particular case of Sharkovsky's theorem (Theorem 4.9).
No
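The period-2 point promised by Proposition 4.2 can be located numerically; a sketch for \( a = 5 \) using bisection on \( f\left( {f\left( x\right) }\right) - x \) over \( \left\lbrack {1/a,1/2}\right\rbrack \) (the bisection helper and tolerance are our own):

```python
def f(x, a=5.0):
    # The quadratic map f(x) = a x (1 - x).
    return a * x * (1 - x)

def bisect(g, lo, hi, tol=1e-12):
    # Plain bisection; assumes g(lo) and g(hi) have opposite signs.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (g(lo) > 0) == (g(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x2 = bisect(lambda x: f(f(x)) - x, 1 / 5, 1 / 2)
# x2 satisfies f(f(x2)) = x2 but f(x2) != x2, so its period is exactly 2
```

For \( a = 5 \) the 2-cycle can also be found in closed form; the point in \( \left\lbrack {1/5,1/2}\right\rbrack \) is \( \left( {3 - \sqrt{3}}\right) /5 \approx {0.2536} \).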
Example 5.3.2
Consider \( X \sim P\left( I\right) \left( {\sigma ,\alpha }\right) \) a classical Pareto distribution. The sequences of absolute Gini indices are \[ {m}_{n} = \frac{\alpha n\sigma }{{\alpha n} - 1},\;n = 1,2,\ldots \] if \( {\alpha n} > 1 \), since the distribution of the minimum is again of the Pareto form, i.e., \( {X}_{1 : n} \sim \mathcal{P}\left( {{\alpha n},\sigma }\right) \) . The sequences of relative Gini indices are \[ {G}_{n} = \frac{n - 1}{{\alpha n} - 1},\;n = 2,3,\ldots \] if \( {\alpha n} > 1 \) and \[ {\widetilde{G}}_{n} = \frac{1}{n\left( {{\alpha n} + \alpha - 1}\right) },\;n = 1,2,\ldots \] if \( \alpha \left( {n + 1}\right) > 1 \) .
Yes
Example 2.3.15
Example 2.3.15 For the lexicographical ordering \( > \) in the variables \( {x}_{1},{x}_{2} \), the set \( {D}_{ > } \) is plotted in Figure 2.6. The figure also shows the line \( {w\delta } = 0 \), where \( w \) is a weight vector representing \( {lp} \) on all monomials of degree \( \leq 4 \) . For more examples see Exercise 2.9.
No
Example 5.5.1
Example 5.5.1 What is the distribution of \( W\left( t\right) + W\left( \tau \right) \), where \( t \leq \tau \) ? Solution. We can write that \[ W\left( t\right) + W\left( \tau \right) = {2W}\left( t\right) + \left\lbrack {W\left( \tau \right) - W\left( t\right) }\right\rbrack \mathrel{\text{:=}} Y + Z, \] where \( Y \) and \( Z \) are independent random variables having Gaussian distributions with zero means and variances given by \[ \operatorname{VAR}\left\lbrack Y\right\rbrack = 4\operatorname{VAR}\left\lbrack {W\left( t\right) }\right\rbrack = 4{\sigma }^{2}t\;\text{ and }\;\operatorname{VAR}\left\lbrack Z\right\rbrack = {\sigma }^{2}\left( {\tau - t}\right) , \] because the Wiener process has stationary increments. It follows that \[ W\left( t\right) + W\left( \tau \right) \sim \mathrm{N}\left( {0,{\sigma }^{2}\left( {{3t} + \tau }\right) }\right) . \]
Yes
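The variance in Example 5.5.1 can be cross-checked from the covariance of the Wiener process, \( \operatorname{Cov}\left( {W\left( t\right), W\left( s\right) }\right) = {\sigma }^{2}\min \left( {t, s}\right) \); a small deterministic sketch (the helper name and default \( {\sigma }^{2} = 1 \) are ours):

```python
def var_sum(t, tau, sigma2=1.0):
    # Var[W(t) + W(tau)] = Var W(t) + Var W(tau) + 2 Cov(W(t), W(tau))
    #                    = sigma2*t + sigma2*tau + 2*sigma2*min(t, tau).
    return sigma2 * t + sigma2 * tau + 2 * sigma2 * min(t, tau)
```

For \( t \leq \tau \) this equals \( {\sigma }^{2}\left( {{3t} + \tau }\right) \), in agreement with the example.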
Example 9.28
Example 9.28 ( \( E \) -related transformation and Concurrency Theorem). We use the construction in the proof of Fact 5.29 to construct an \( E \) -dependency relation and an \( E \) -concurrent production for the sequentially dependent transformations \( A{G}_{1}\overset{{addClass},{m}_{2}}{ \Rightarrow }A{G}_{2}\overset{{addParameter},{m}_{3}}{ \Rightarrow }A{G}_{3} \) from Example 9.6. First we construct the \( {\mathcal{E}}^{\prime } - {\mathcal{M}}^{\prime } \) pair factorization of the comatch \( {n}_{2} \) and the match \( {m}_{3} \) . The corresponding typed attributed graph \( E \) is an \( E \) -dependency relation, because the pushouts (1) and (2) exist: ![0191b1a6-efd3-7405-aac5-cea51e91ba7c_206_575614.jpg](images/0191b1a6-efd3-7405-aac5-cea51e91ba7c_206_575614.jpg) Now we construct the pushouts (3) and (4) with the pushout objects \( {L}^{ * } \) and \( {R}^{ * } \) and construct the pullback over \( {C}_{1} \rightarrow E \leftarrow {C}_{2} \) with the pullback object \( {K}^{ * } \), and obtain the following \( E \) -concurrent production \( {addClass}{ * }_{E}{addParameter} = \left( {{L}^{ * } \leftarrow {K}^{ * } \rightarrow {R}^{ * }}\right) \), where \( {K}^{ * } \) (not shown explicitly) consists of a node of type Method without attributes. This construction makes sure that the transformation \[ A{G}_{1}\overset{{addClass},{m}_{2}}{ \rightarrow }A{G}_{2}\overset{{addParameter},{m}_{3}}{ \rightarrow }A{G}_{3} \] is \( E \) -related. Applying Theorem 9.26, we obtain a direct transformation \( A{G}_{1} \Rightarrow A{G}_{3} \) using the constructed \( E \) -concurrent production \( {addClass}{ * }_{E}{addParameter} \) .
No
Example 5
Example 5 For any real numbers \( {a}_{1},{a}_{2},{a}_{3},{b}_{1},{b}_{2} \), and \( {b}_{3} \), show that \[ \left| {{a}_{1}{b}_{1} + {a}_{2}{b}_{2} + {a}_{3}{b}_{3}}\right| \leq \sqrt{{a}_{1}^{2} + {a}_{2}^{2} + {a}_{3}^{2}}\sqrt{{b}_{1}^{2} + {b}_{2}^{2} + {b}_{3}^{2}}. \]
No
Example 3.8
Example 3.8. The ends of a uniform string of linear density \( \rho \) and length \( l \) are fixed, and all external forces are neglected. Displace the string from equilibrium by shifting the point \( x = {x}_{0} \) by a distance \( h \) at time \( t = 0 \) and then release it with zero initial speed. Find the displacements \( u\left( {x, t}\right) \) of the string for times \( t > 0 \) .
No
Example 3.18
The spectral distribution function of the weakly stationary real-valued process in Example 3.14 is \( S\left( v\right) = \alpha + \mathop{\sum }\limits_{{k = 1}}^{n}\left( {{\sigma }_{k}^{2}/2}\right) \left\lbrack {1\left( {v \geq - {v}_{k}}\right) + 1\left( {v \geq {v}_{k}}\right) }\right\rbrack \) , where \( \alpha \) is an arbitrary constant. This expression of \( S \) can be checked by direct calculations using (3.11). Although \( S \) is not absolutely continuous, it is common in applications to define its spectral densities by \[ s\left( v\right) = \frac{1}{2}\mathop{\sum }\limits_{{k = 1}}^{n}{\sigma }_{k}^{2}\left\lbrack {\delta \left( {v - {v}_{k}}\right) + \delta \left( {v + {v}_{k}}\right) }\right\rbrack \text{ and }g\left( v\right) = \mathop{\sum }\limits_{{k = 1}}^{n}{\sigma }_{k}^{2}\delta \left( {v - {v}_{k}}\right) , \tag{3.14} \] where \( \delta \left( \cdot \right) \) is the Dirac delta function. The representations in (3.14) show that the energy of \( X \) is concentrated at a finite number of frequencies. \( \diamondsuit \)
No
Example 1.7
Use Gauss's lemma to evaluate the Legendre symbol \( \left( \frac{6}{11}\right) \) . By Gauss's lemma, \( \left( \frac{6}{11}\right) = {\left( -1\right) }^{\omega } \), where \( \omega \) is the number of integers in the set \[ \{ 1 \cdot 6,2 \cdot 6,3 \cdot 6,4 \cdot 6,5 \cdot 6\} \] whose least residues modulo 11 are negative. Clearly, \[ \left( {6,{12},{18},{24},{30}}\right) {\;\operatorname{mod}\;{11}} \equiv \left( {6,1,7,2,8}\right) \equiv \left( {-5,1, - 4,2, - 3}\right) \left( {\;\operatorname{mod}\;{11}}\right) . \] So there are 3 least residues that are negative. Thus, \( \omega = 3 \) . Therefore, \( \left( \frac{6}{11}\right) = {\left( -1\right) }^{3} = - 1 \) . Consequently, the quadratic congruence \( {x}^{2} \equiv 6\left( {\;\operatorname{mod}\;{11}}\right) \) is not solvable.
Yes
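Gauss's lemma translates directly into a short computation; a Python sketch (the function name is ours) that counts the "negative" least residues, i.e. those exceeding \( p/2 \):

```python
def legendre_gauss(a, p):
    # Gauss's lemma: (a/p) = (-1)^omega, where omega counts how many of
    # a, 2a, ..., ((p-1)/2)*a have least residue mod p greater than p/2
    # (equivalently, a negative least residue in (-p/2, p/2)).
    omega = sum(1 for k in range(1, (p - 1) // 2 + 1)
                if (k * a) % p > p // 2)
    return (-1) ** omega
```

`legendre_gauss(6, 11)` returns \(-1\), matching \( \omega = 3 \) above.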
Example 3.1.2
Example 3.1.2 Computing a Coordinate Vector in the Range of a Matrix --- Find a basis \( B \) of the range of the following matrix \( A \) and then compute the coordinate vector \( {\left\lbrack \mathbf{v}\right\rbrack }_{B} \) of the vector \( \mathbf{v} = \left( {2,1, - 3,1,2}\right) \in \operatorname{range}\left( A\right) \) : \[ A = \left\lbrack \begin{matrix} 1 & 0 & 1 \\ 2 & 1 & 1 \\ 0 & 1 & - 1 \\ 2 & 1 & 1 \\ 1 & 0 & 1 \end{matrix}\right\rbrack \] ## Solution: Recall from Example 2.4.6 that one way to find the basis of the range of \( A \) is to take the columns of \( A \) that are leading in one of its row echelon forms: \[ \left\lbrack \begin{matrix} 1 & 0 & 1 \\ 2 & 1 & 1 \\ 0 & 1 & - 1 \\ 2 & 1 & 1 \\ 1 & 0 & 1 \end{matrix}\right\rbrack \xrightarrow[]{\begin{matrix} {R}_{2} - 2{R}_{1} \\ {R}_{4} - 2{R}_{1} \\ {R}_{5} - {R}_{1} \end{matrix}}\left\lbrack \begin{matrix} 1 & 0 & 1 \\ 0 & 1 & - 1 \\ 0 & 1 & - 1 \\ 0 & 1 & - 1 \\ 0 & 0 & 0 \end{matrix}\right\rbrack \xrightarrow[]{\begin{matrix} {R}_{3} - {R}_{2} \\ {R}_{4} - {R}_{2} \end{matrix}}\left\lbrack \begin{matrix} 1 & 0 & 1 \\ 0 & 1 & - 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack . \] --- Be careful here –remember that we take the leading columns from \( A \) itself, not from its row echelon form. --- Since the first two columns of this row echelon form are leading, we choose \( B \) to consist of the first two columns of \( A \) : \[ B = \{ \left( {1,2,0,2,1}\right) ,\left( {0,1,1,1,0}\right) \} . 
\] To compute \( {\left\lbrack \mathbf{v}\right\rbrack }_{B} \), we then solve the linear system \[ \left( {2,1, - 3,1,2}\right) = {c}_{1}\left( {1,2,0,2,1}\right) + {c}_{2}\left( {0,1,1,1,0}\right) \] for \( {c}_{1} \) and \( {c}_{2} \) : \[ \left\lbrack \begin{matrix} 1 & 0 & 2 \\ 2 & 1 & 1 \\ 0 & 1 & - 3 \\ 2 & 1 & 1 \\ 1 & 0 & 2 \end{matrix}\right\rbrack \xrightarrow[]{\begin{matrix} {R}_{2} - 2{R}_{1} \\ {R}_{4} - 2{R}_{1} \\ {R}_{5} - {R}_{1} \end{matrix}}\left\lbrack \begin{matrix} 1 & 0 & 2 \\ 0 & 1 & - 3 \\ 0 & 1 & - 3 \\ 0 & 1 & - 3 \\ 0 & 0 & 0 \end{matrix}\right\rbrack \xrightarrow[]{\begin{matrix} {R}_{3} - {R}_{2} \\ {R}_{4} - {R}_{2} \end{matrix}}\left\lbrack \begin{matrix} 1 & 0 & 2 \\ 0 & 1 & - 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack . \] It follows that \( {c}_{1} = 2 \) and \( {c}_{2} = - 3 \), so \( {\left\lbrack \mathbf{v}\right\rbrack }_{B} = \left( {2, - 3}\right) \) . One way to think about the previous example is that the range of \( A \) is just 2-dimensional (i.e., its rank is 2), so representing the vector \( \mathbf{v} = \left( {2,1, - 3,1,2}\right) \) in its range via 5 coordinates is somewhat wasteful; we can fix a basis \( B \) of the range and then represent \( \mathbf{v} \) via just two coordinates, as in \( {\left\lbrack \mathbf{v}\right\rbrack }_{B} = \left( {2, - 3}\right) \) . --- The following theorem essentially says that the function that sends \( \mathbf{v} \) to \( {\left\lbrack \mathbf{v}\right\rbrack }_{B} \) is a linear transformation. --- Furthermore, once we have represented vectors more compactly via coordinate vectors, we can work with them naïvely and still get correct answers. That is, a coordinate vector \( {\left\lbrack \mathbf{v}\right\rbrack }_{B} \) can be manipulated (i.e., added and scalar multiplied) in the exact same way as the underlying vector \( \mathbf{v} \) that it represents: ---
Yes
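The coordinate vector found in Example 3.1.2 can be verified by recombining the basis vectors; a short Python sketch (the helper name is ours):

```python
def lin_comb(coeffs, basis):
    # Form c_1 b_1 + ... + c_k b_k for vectors represented as Python lists.
    return [sum(c * b[i] for c, b in zip(coeffs, basis))
            for i in range(len(basis[0]))]

B = [[1, 2, 0, 2, 1], [0, 1, 1, 1, 0]]   # the basis of range(A)
v = lin_comb([2, -3], B)                 # should reproduce (2, 1, -3, 1, 2)
```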
Example 1
Let \( A \) be a fuzzy set and \( R = \left( {Z,+, \cdot }\right) \) be the ring of all integers. Define a mapping \( f : A \rightarrow F\left( {{NR}\left( Z\right) }\right) \) where, for any \( a \in A \) and \( x \in Z \) , \[ {A}_{f}\left( x\right) = \left\{ \begin{array}{l} 0\text{ if }x\text{ is odd } \\ \frac{1}{a}\text{ if }x\text{ is even } \end{array}\right. \] The corresponding \( t \) -norm \( \left( *\right) \) and \( t \) -conorm \( \left( \diamond \right) \) are defined as \( a * b = \min \{ a, b\}, a\diamond b = \max \{ a, b\} \) ; then \( A \) is a fuzzy set as well as a fuzzy normed ring over \( \left\lbrack {\left( {Z,+, \cdot }\right), A}\right\rbrack \) .
No
Example 4.24
Example 4.24 General Simple Random Walk. Now consider the general simple random walk with the following transition probabilities: \[ {p}_{i, i + 1} = {p}_{i}\text{ for }i \geq 0, \] \[ {p}_{i, i - 1} = {q}_{i} = 1 - {p}_{i}\text{ for }i \geq 1, \] \[ {p}_{0,0} = 1 - {p}_{0} \] Assume that \( 0 < {p}_{i} < 1 \) for all \( i \geq 0 \), so that the DTMC is irreducible and aperiodic. The balance equations for this DTMC are: \[ {\pi }_{0} = \left( {1 - {p}_{0}}\right) {\pi }_{0} + {q}_{1}{\pi }_{1} \] \[ {\pi }_{j} = {p}_{j - 1}{\pi }_{j - 1} + {q}_{j + 1}{\pi }_{j + 1},\;j \geq 1. \] It is relatively straightforward to prove by induction that the solution is given by (see Conceptual Exercise 4.16). \[ {\pi }_{i} = {\rho }_{i}{\pi }_{0},\;i \geq 0 \tag{4.40} \] where \[ {\rho }_{0} = 1,\;{\rho }_{i} = \frac{{p}_{0}{p}_{1}\cdots {p}_{i - 1}}{{q}_{1}{q}_{2}\cdots {q}_{i}},\;i \geq 1. \tag{4.41} \] The normalizing equation yields \[ 1 = \mathop{\sum }\limits_{{i = 0}}^{\infty }{\pi }_{i} = {\pi }_{0}\left( {\mathop{\sum }\limits_{{i = 0}}^{\infty }{\rho }_{i}}\right) \] Thus, if \( \sum {\rho }_{i} \) converges, we have \[ {\pi }_{0} = {\left( \mathop{\sum }\limits_{{i = 0}}^{\infty }{\rho }_{i}\right) }^{-1}. \] Thus, from Theorem 4.21, we see that the general simple random walk is positive recurrent if and only if \( \sum {\rho }_{i} \) converges. This is consistent with the results of Example 4.16. When the DTMC is positive recurrent, the limiting distribution is given by \[ {\pi }_{i} = \frac{{\rho }_{i}}{\mathop{\sum }\limits_{{j = 0}}^{\infty }{\rho }_{j}},\;i \geq 0. \tag{4.42} \] This is also the stationary distribution and the limiting occupancy distribution of the DTMC. Note that the DTMC is periodic with period 2 if \( {p}_{0} = 1 \) . In this case the expressions for \( {\pi }_{i} \) remain valid, but now \( \left\{ {{\pi }_{i}, i \geq 0}\right\} \) is not a limiting distribution. 
Special Case 1: Suppose \( {p}_{N} = 0 \), and \( {p}_{N, N} = 1 - {q}_{N} \), for a given \( N \geq 0 \) . In this case we can restrict our attention to the irreducible DTMC over \( \{ 0,1,2,\cdots, N\} \) . In this case \( {\rho }_{i} = 0 \) for \( i > N \) and the above results reduce to \[ {\pi }_{i} = \frac{{\rho }_{i}}{\mathop{\sum }\limits_{{j = 0}}^{N}{\rho }_{j}},\;0 \leq i \leq N. \] Special Case 2. Suppose \( {p}_{i} = p \) for all \( i \geq 0 \), and \( 0 < p < 1 \), and let \( q = 1 - p \) . In this case the DTMC is irreducible and aperiodic, and we have \[ {\rho }_{i} = {\rho }^{i},\;i \geq 0, \] where \( \rho = p/q \) . Hence \( \sum {\rho }_{i} \) converges if \( p < q \) and diverges if \( p \geq q \) . Combining this with results from Example 4.16, we see that this random walk is (i) positive recurrent if \( p < q \) , (ii) null recurrent if \( p = q \) , (iii) transient if \( p > q \) . In case \( p < q \), the limiting distribution is given by \[ {\pi }_{i} = {\rho }^{i}\left( {1 - \rho }\right) ,\;i \geq 0. \] Thus, in the limit, \( {X}_{n} \) is a modified geometric random variable with parameter \( 1 - \rho \) . ∎
No
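The formulas (4.40)–(4.42) are easy to evaluate for a finite chain as in Special Case 1; a Python sketch (the function and its argument convention are ours, not from the text):

```python
def stationary(p, q):
    # Birth-death chain on {0, ..., N}: p[i] = P(i -> i+1) for 0 <= i < N,
    # q[i] = P(i+1 -> i) for 0 <= i < N.  Computes rho_0 = 1,
    # rho_i = (p_0 ... p_{i-1}) / (q_1 ... q_i), then pi_i = rho_i / sum(rho),
    # as in equations (4.40)-(4.41).
    rho = [1.0]
    for up, down in zip(p, q):
        rho.append(rho[-1] * up / down)
    total = sum(rho)
    return [r / total for r in rho]
```

With constant \( p = {0.25}, q = {0.75} \) we get \( \rho = 1/3 \) and a truncated geometric distribution, as in Special Case 2.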
Example 24.9
Example 24.9 Let \( f \in {\Gamma }_{0}\left( \mathcal{H}\right) \) . Then \( \partial f \) is \( {3}^{ * } \) monotone.
No
Problem 6.10.5
Problem 6.10.5. Consider the Lie group \[ G = \left\{ {\left( \begin{array}{ll} 1 & 0 \\ x & y \end{array}\right) : x, y \in \mathbb{R}, y > 0}\right\} . \] (1) Prove that its Lie algebra is \( \mathfrak{g} = \langle y\partial /\partial x, y\partial /\partial y\rangle \) . (2) Write the left-invariant metric on \( G \) built with the dual basis to that in (1). (3) Determine the Levi-Civita connection \( \nabla \) of \( g \) . (4) Is \( \left( {G, g}\right) \) a space of constant curvature? (5) Prove (without using (4)) that \( \left( {G, g}\right) \) is an Einstein manifold.
No
Example 6.5
\[ Y\left( t\right) = \left\lbrack \begin{array}{l} s + t \\ X\left( t\right) \end{array}\right\rbrack \in {\mathbb{R}}^{2};Y\left( 0\right) = y = \left( {s, x}\right) \tag{6.2.17} \] where \( X\left( t\right) = x + B\left( t\right) + {\int }_{0}^{t}{\int }_{\mathbb{R}}z\widetilde{N}\left( {{ds},{dz}}\right) \) and \( B\left( 0\right) = 0 \) . Suppose that we are only allowed to give the system impulses \( \zeta \) with values in \( \mathcal{Z} \mathrel{\text{:=}} \left( {0,\infty }\right) \) and that if we apply an impulse control \( v = \left( {{\tau }_{1},{\tau }_{2},\ldots ;{\zeta }_{1},{\zeta }_{2},\ldots }\right) \) to \( Y\left( t\right) \) it gets the form \[ {Y}^{\left( v\right) }\left( t\right) = \left\lbrack \begin{matrix} s + t \\ X\left( t\right) - \mathop{\sum }\limits_{{{\tau }_{k} \leq t}}{\zeta }_{k} \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} s + t \\ {X}^{\left( v\right) }\left( t\right) \end{matrix}\right\rbrack . \tag{6.2.18} \] Suppose that the cost rate \( f\left( {t,\xi }\right) \) if \( {X}^{\left( v\right) }\left( t\right) = \xi \) at time \( t \) is given by \[ f\left( {t,\xi }\right) = {e}^{-{\rho t}}{\xi }^{2} \tag{6.2.19} \] where \( \rho > 0 \) is constant. In an effort to reduce the cost one can apply the impulse control \( v \) in order to reduce the value of \( {X}^{\left( v\right) }\left( t\right) \) . However, suppose the cost of an intervention of size \( \zeta > 0 \) at time \( t \) is \[ K\left( {t,\xi ,\zeta }\right) = K\left( \zeta \right) = c + {\lambda \zeta }, \tag{6.2.20} \] where \( c > 0,\lambda \geq 0 \) are constants. 
Then the expected total discounted cost associated to a given impulse control is \[ {J}^{\left( v\right) }\left( {s, x}\right) = {E}^{x}\left\lbrack {{\int }_{0}^{\infty }{e}^{-\rho \left( {s + t}\right) }{\left( {X}^{\left( v\right) }\left( t\right) \right) }^{2}{dt} + \mathop{\sum }\limits_{{k = 1}}^{N}{e}^{-\rho \left( {s + {\tau }_{k}}\right) }\left( {c + \lambda {\zeta }_{k}}\right) }\right\rbrack . \tag{6.2.21} \] We seek \( \Phi \left( {s, x}\right) \) and \( {v}^{ * } = \left( {{\tau }_{1}^{ * },{\tau }_{2}^{ * },\ldots ;{\zeta }_{1}^{ * },{\zeta }_{2}^{ * },\ldots }\right) \) such that \[ \Phi \left( {s, x}\right) = \mathop{\inf }\limits_{v}{J}^{\left( v\right) }\left( {s, x}\right) = {J}^{\left( {v}^{ * }\right) }\left( {s, x}\right) . \tag{6.2.22} \] This is an impulse control problem of the type described above, except that it is a minimum problem rather than a maximum problem. Theorem 6.2 still applies, with the corresponding changes. Note that it is not optimal to move \( X\left( t\right) \) downwards if \( X\left( t\right) \) is already below 0. Hence we may restrict ourselves to consider impulse controls \( v = \left( {{\tau }_{1},{\tau }_{2},\ldots ;{\zeta }_{1},{\zeta }_{2},\ldots }\right) \) such that \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\zeta }_{j} \leq X\left( {\tau }_{k}\right) \text{ for all }k. \tag{6.2.23} \] We let \( \mathcal{V} \) denote the set of such impulse controls. We guess that the optimal strategy is to wait until the level of \( X\left( t\right) \) reaches an (unknown) value \( {x}^{ * } > 0 \) . At this time, \( {\tau }_{1} \), we intervene and give \( X\left( t\right) \) an impulse \( {\zeta }_{1} \), which brings it down to a lower value \( \widehat{x} > 0 \) . Then we do nothing until the next time, \( {\tau }_{2} \), that \( X\left( t\right) \) reaches the level \( {x}^{ * } \), and so on.
This suggests that the continuation region \( D \) in Theorem 6.2 has the form \[ D = \left\{ {\left( {s, x}\right) ;x < {x}^{ * }}\right\} . \tag{6.2.24} \] See Figure 6.3. Let us try a value function \( \varphi \) of the form \[ \varphi \left( {s, x}\right) = {e}^{-{\rho s}}\psi \left( x\right) \tag{6.2.25} \] where \( \psi \) remains to be determined. ![0191b12c-c5c3-7310-8aa8-224971eece06_97_146986.jpg](images/0191b12c-c5c3-7310-8aa8-224971eece06_97_146986.jpg) Fig. 6.3. The optimal impulse control of Example 6.5 Condition (x) of Theorem 6.2 gives that for \( x < {x}^{ * } \) we should have \[ {A\varphi } + f = {e}^{-{\rho s}}\left( {-{\rho \psi }\left( x\right) + \frac{1}{2}{\psi }^{\prime \prime }\left( x\right) + {\int }_{\mathbb{R}}\left\{ {\psi \left( {x + z}\right) - \psi \left( x\right) - z{\psi }^{\prime }\left( x\right) }\right\} \nu \left( {dz}\right) }\right) + {e}^{-{\rho s}}{x}^{2} = 0. \] So for \( x < {x}^{ * } \) we let \( \psi \) be a solution \( h\left( x\right) \) of the equation \[ {\int }_{\mathbb{R}}\left\{ {h\left( {x + z}\right) - h\left( x\right) - z{h}^{\prime }\left( x\right) }\right\} \nu \left( {dz}\right) + \frac{1}{2}{h}^{\prime \prime }\left( x\right) - {\rho h}\left( x\right) + {x}^{2} = 0. \tag{6.2.26} \] We see that any function \( h\left( x\right) \) of the form \[ h\left( x\right) = {C}_{1}{e}^{{r}_{1}x} + {C}_{2}{e}^{{r}_{2}x} + \frac{1}{\rho }{x}^{2} + \frac{1 + {\int }_{\mathbb{R}}{z}^{2}\nu \left( {dz}\right) }{{\rho }^{2}} \tag{6.2.27} \] where \( {C}_{1},{C}_{2} \) are arbitrary constants, is a solution of (6.2.26), provided that \( {r}_{1} > 0,{r}_{2} < 0 \) are roots of the equation \[ K\left( r\right) \mathrel{\text{:=}} {\int }_{\mathbb{R}}\left\{ {{e}^{rz} - 1 - {rz}}\right\} \nu \left( {dz}\right) + \frac{1}{2}{r}^{2} - \rho = 0. \]
No
Example 4.3.1
Example 4.3.1. Solve the quadratic equation \[ 2{x}^{2} - {5x} + 1 = 0 \] by completing the square. Divide through by 2 to make the quadratic monic giving \[ {x}^{2} - \frac{5}{2}x + \frac{1}{2} = 0 \] We now want to write \[ {x}^{2} - \frac{5}{2}x \] as a perfect square plus a number. We get \[ {x}^{2} - \frac{5}{2}x = {\left( x - \frac{5}{4}\right) }^{2} - \frac{25}{16}. \] Thus our quadratic becomes \[ {\left( x - \frac{5}{4}\right) }^{2} - \frac{25}{16} + \frac{1}{2} = 0 \] Rearranging and taking roots gives us \[ x = \frac{5 \pm \sqrt{17}}{4}. \]
Yes
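As a quick numerical sanity check of the completing-the-square computation (an illustration, not part of the original example), the following Python sketch verifies that the completed-square form agrees with the monic quadratic and that the roots \( x = (5 \pm \sqrt{17})/4 \) satisfy the original equation:

```python
import math

def monic_form(x):
    # x^2 - (5/2)x + 1/2, the monic quadratic from the example
    return x**2 - 2.5*x + 0.5

def completed_square(x):
    # (x - 5/4)^2 - 25/16 + 1/2, the completed-square rewrite
    return (x - 1.25)**2 - 25/16 + 0.5

# The two forms agree everywhere:
for x in [-3.0, 0.0, 1.7, 42.0]:
    assert abs(monic_form(x) - completed_square(x)) < 1e-12

# The roots x = (5 +/- sqrt(17))/4 satisfy the original equation 2x^2 - 5x + 1 = 0:
for root in [(5 + math.sqrt(17)) / 4, (5 - math.sqrt(17)) / 4]:
    assert abs(2 * root**2 - 5 * root + 1) < 1e-12
```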
Problem 1
Problem 1. Does there exist a separable (complete separable, \( \sigma \) -compact, compact) metric space \( X \) such that \[ {\operatorname{DIM}}_{\mathrm{{PC}}}X = \operatorname{ind}X - 1\text{?} \tag{8} \] It is clear that if \( X \) satisfies (8), then \( X \) cannot be a finite-dimensional manifold since \( {\operatorname{DIM}}_{\mathrm{{PC}}}{\mathbb{R}}^{n} = \operatorname{ind}{\mathbb{R}}^{n} \) . Moreover, such a space must be at least two-dimensional. Indeed, if ind \( X = 0 \), then \( X \neq \varnothing \) so \( {\operatorname{DIM}}_{\mathcal{F}}X \geq 0 \) for every \( \mathcal{F} \subseteq {\mathbb{R}}^{X} \) . If ind \( X = 1 \), then there is an \( x \in X \) and an open neighborhood \( W \) of \( x \) such that bd \( U \neq \varnothing \) for every open \( U \) with \( x \in U \subseteq W \) . If \( f : X \rightarrow \mathbb{R} \) is the characteristic function of the singleton \( \{ x\} \), then \( f \) is not peripherally continuous implying that \( {\operatorname{DIM}}_{\mathrm{{PC}}}X \geq 1 \) .
No
Example 3.66
Example 3.66 (intersection and product machine-Sakarovitch [12]) The recognizer \( M \) of the strings that contain as substrings both digrams \( {ab} \) and \( {ba} \) is naturally specified through the intersection of languages \( {L}^{\prime } \) and \( {L}^{\prime \prime } \) : \[ {L}^{\prime } = {\left( a \mid b\right) }^{ * }{ab}{\left( a \mid b\right) }^{ * }\;\text{ and }\;{L}^{\prime \prime } = {\left( a \mid b\right) }^{ * }{ba}{\left( a \mid b\right) }^{ * }. \] The cartesian product \( M \) of the recognizers \( {M}^{\prime } \) and \( {M}^{\prime \prime } \) of the component languages is in Fig. 3.31. As usual, the state pairs of the cartesian product that are not accessible from the initial state pair can be discarded.
No
Example 2
[The language \( {L}_{{fc} = {lc}} \) (see Example 1) of pictures \( p \) whose first column is equal to the last one, is in UREC. Indeed, we can define a tiling system as done before, and this system is unambiguous. This is because there is only one possible counter-image for the first column of a picture \( p \), and there is a unique way to build, from this, the counter-image for the second column of \( p \), and so on up to the last column of \( p \) .]
No
Example 2
Example 2. We know by the CLT that \( {Y}_{n} = \bar{X} \) is \( \mathrm{{AN}}\left( {\mu ,{\sigma }^{2}/n}\right) \) . Suppose \( g\left( \bar{X}\right) = \bar{X}\left( {1 - \bar{X}}\right) \) where \( \bar{X} \) is the sample mean in random sampling from a population with mean \( \mu \) and variance \( {\sigma }^{2} \) . Since \( {g}^{\prime }\left( \mu \right) = 1 - {2\mu } \neq 0 \) for \( \mu \neq 1/2 \), it follows that for \( \mu \neq 1/2 \) and \( {\sigma }^{2} < \infty ,\bar{X}\left( {1 - \bar{X}}\right) \) is \( \operatorname{AN}\left( {\mu \left( {1 - \mu }\right) ,{\left( 1 - 2\mu \right) }^{2}{\sigma }^{2}/n}\right) \) . Thus \[ P\left( {\bar{X}\left( {1 - \bar{X}}\right) \leq y}\right) = P\left( {\frac{\bar{X}\left( {1 - \bar{X}}\right) - \mu \left( {1 - \mu }\right) }{\left| {1 - {2\mu }}\right| \sigma /\sqrt{n}} \leq \frac{y - \mu \left( {1 - \mu }\right) }{\left| {1 - {2\mu }}\right| \sigma /\sqrt{n}}}\right) \approx \Phi \left( \frac{y - \mu \left( {1 - \mu }\right) }{\left| {1 - {2\mu }}\right| \sigma /\sqrt{n}}\right) \] for large \( n \) .
No
Example 6
Example 6. Let \( F\left( {X, Y}\right) = p\left( X\right) {Y}^{d} + {Y}^{d - 1} + q\left( X\right) {Y}^{2} + r\left( X\right) \), with \( \deg \left( p\right) = m \geq 1,\deg \left( q\right) = d + m - 1,\deg \left( r\right) = d + m + 1, d \geq 5 \) . We have \[ \frac{\deg \left( {P}_{1}\right) - \deg \left( {P}_{0}\right) }{1} = \frac{0 - m}{1} = - m, \] \[ \frac{\deg \left( {P}_{d - 2}\right) - \deg \left( {P}_{0}\right) }{d - 2} = \frac{d + m - 1 - m}{d - 2} = \frac{d - 1}{d - 2}, \] \[ \frac{\deg \left( {P}_{d}\right) - \deg \left( {P}_{0}\right) }{d} = \frac{d + m + 1 - m}{d} = \frac{d + 1}{d}. \] We obtain \( e\left( F\right) = \frac{d - 1}{d - 2} \), and \( d - 1 \) and \( d - 2 \) are coprime. On the other hand, \[ e\left( F\right) - \frac{\deg \left( r\right) }{d} = \frac{d - 1}{d - 2} - \frac{d + 1}{d} = \frac{2}{d\left( {d - 2}\right) } \] and we can apply Theorem 2. There are three possible cases: i. The polynomial \( F \) is irreducible in \( k\left\lbrack {X, Y}\right\rbrack \) . ii. The polynomial \( F \) has a divisor whose degree with respect to \( Y \) is a multiple of \( d - 2 \), so this divisor is of degree \( d - 2 \) with respect to \( Y \) . Therefore \( F \) could have a quadratic divisor with respect to \( Y \) . iii. There exists a factorization \( F = {F}_{1}{F}_{2} \) and the difference of their degrees is a multiple of \( d - 2 \) . If we suppose \( 1 \leq {d}_{1} \leq {d}_{2} \leq d - 1 \) we obtain \( 0 \leq {d}_{2} - {d}_{1} \leq d - 2 \) . It follows that we have \[ {d}_{1} = {d}_{2}\text{ or }{d}_{2} - {d}_{1} = d - 2. \] The last condition is satisfied only if \( {d}_{1} = 1 \) and \( {d}_{2} = d - 1 \) . We conclude that the polynomial \( F \) is irreducible if it does not have quadratic divisors with respect to \( Y \) and satisfies one of the two conditions: a. Its degree \( d \) is odd. b. It does not have linear divisors with respect to \( Y \) .
No
Exercise 6.8.10
[Exercise 6.8.10. Let \( {V}_{n} \) be an armap (not necessarily smooth or simple) with \( \theta < 1 \) and \( E{\log }^{ + }\left| {\xi }_{n}\right| < \infty \) . Show that \( \mathop{\sum }\limits_{{m \geq 0}}{\theta }^{m}{\xi }_{m} \) converges a.s. and defines a stationary distribution for \( {V}_{n} \) .]
No
Example 7
Example 7. Marcinkiewicz-Jackson-de La Vallée-Poussin summation. Let \[ {\theta }_{8}\left( t\right) = \left\{ \begin{array}{ll} 1 - 3{t}^{2}/2 + 3{\left| t\right| }^{3}/4 & \text{ if }\left| t\right| \leq 1 \\ {\left( 2 - \left| t\right| \right) }^{3}/4 & \text{ if }1 < \left| t\right| \leq 2 \\ 0 & \text{ if }\left| t\right| > 2 \end{array}\right. \]
No
Example 4.2
John Slow is driving from Boston to the New York area, a distance of 180 miles, at a constant speed whose value is uniformly distributed between 30 and 60 miles per hour. What is the PDF of the duration of the trip? Let \( X \) be the speed and let \( Y = g\left( X\right) \) be the trip duration: \[ g\left( X\right) = \frac{180}{X}. \] To find the CDF of \( Y \), we must calculate \[ \mathbf{P}\left( {Y \leq y}\right) = \mathbf{P}\left( {\frac{180}{X} \leq y}\right) = \mathbf{P}\left( {\frac{180}{y} \leq X}\right) . \] We use the given uniform PDF of \( X \), which is \[ {f}_{X}\left( x\right) = \left\{ \begin{array}{ll} 1/{30}, & \text{ if }{30} \leq x \leq {60}, \\ 0, & \text{ otherwise } \end{array}\right. \] and the corresponding CDF, which is \[ {F}_{X}\left( x\right) = \left\{ \begin{array}{ll} 0, & \text{ if }x \leq {30} \\ \left( {x - {30}}\right) /{30}, & \text{ if }{30} \leq x \leq {60} \\ 1, & \text{ if }{60} \leq x \end{array}\right. \] Thus, \[ {F}_{Y}\left( y\right) = \mathbf{P}\left( {\frac{180}{y} \leq X}\right) \] \[ = 1 - {F}_{X}\left( \frac{180}{y}\right) \] \[ = \left\{ \begin{array}{ll} 0, & \text{ if }y \leq {180}/{60} \\ 1 - \frac{\frac{180}{y} - {30}}{30}, & \text{ if }{180}/{60} \leq y \leq {180}/{30} \\ 1, & \text{ if }{180}/{30} \leq y \end{array}\right. \] \[ = \left\{ \begin{array}{ll} 0, & \text{ if }y \leq 3 \\ 2 - \left( {6/y}\right) , & \text{ if }3 \leq y \leq 6 \\ 1, & \text{ if }6 \leq y \end{array}\right. \] (see Fig. 4.1). Differentiating this expression, we obtain the PDF of \( Y \) : \[ {f}_{Y}\left( y\right) = \left\{ \begin{array}{ll} 0, & \text{ if }y < 3, \\ 6/{y}^{2}, & \text{ if }3 < y < 6, \\ 0, & \text{ if }6 < y. \end{array}\right. \]
No
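The CDF and PDF just derived can be spot-checked numerically. The sketch below (an illustration, not part of the original example) verifies \( F_Y(4) = 2 - 6/4 = 1/2 \) and approximates \( f_Y(4) = 6/16 \) by a central difference:

```python
# Spot-check of F_Y and f_Y at the sample point y = 4, where 3 <= y <= 6.
def F_X(x):
    # CDF of the uniform speed on [30, 60]
    if x <= 30:
        return 0.0
    if x >= 60:
        return 1.0
    return (x - 30) / 30

def F_Y(y):
    # P(180/X <= y) = 1 - F_X(180/y)
    return 1 - F_X(180 / y)

assert abs(F_Y(4) - 0.5) < 1e-12          # 2 - 6/4 = 0.5

# a central difference of F_Y approximates f_Y(y) = 6/y^2
h = 1e-6
deriv = (F_Y(4 + h) - F_Y(4 - h)) / (2 * h)
assert abs(deriv - 6 / 16) < 1e-4          # f_Y(4) = 0.375
```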
Example 14.59
Example 14.59. Let \( G \) be the metrizable compact connected abelian group \( {S}^{\mathbb{N}} \) and \( X \) an arbitrary subset of \( \mathbb{N} \) . We form \( {H}_{X} = {S}_{a}^{X} \times {S}^{\mathbb{N} \smallsetminus X} \) considered in the obvious way as a subgroup of \( G = {S}^{\mathbb{N}} \) . Properties. Then \( {H}_{X} \) is an analytic subgroup with Lie algebra \( \mathfrak{g} = {\mathfrak{s}}^{\mathbb{N}} \cong {\mathbb{R}}^{\mathbb{N}} \) . Citation. \( {9.8} \) (iv). Comment. \( G \) has \( {2}^{{\aleph }_{0}} \) different analytic subgroups with the same Lie algebra as the whole group. If we adhere to our definition of an analytic subgroup of a pro-Lie group we have to accept that many analytic subgroups may have the same Lie algebra. It is only the minimal and, wherever they exist, the maximal analytic subgroups with a given Lie algebra that are uniquely determined by their Lie algebra.
No
Exercise 5.7
Exercise 5.7. (i) Suppose a multidimensional market model as described in Section 5.4.2 has an arbitrage. In other words, suppose there is a portfolio value process \( {X}_{1}\left( t\right) \) satisfying \( {X}_{1}\left( 0\right) = 0 \) and \[ \mathbb{P}\left\{ {{X}_{1}\left( T\right) \geq 0}\right\} = 1,\;\mathbb{P}\left\{ {{X}_{1}\left( T\right) > 0}\right\} > 0, \] \( \left( {5.4.23}\right) \) for some positive \( T \) . Show that if \( {X}_{2}\left( 0\right) \) is positive, then there exists a portfolio value process \( {X}_{2}\left( t\right) \) starting at \( {X}_{2}\left( 0\right) \) and satisfying \[ \mathbb{P}\left\{ {{X}_{2}\left( T\right) \geq \frac{{X}_{2}\left( 0\right) }{D\left( T\right) }}\right\} = 1,\;\mathbb{P}\left\{ {{X}_{2}\left( T\right) > \frac{{X}_{2}\left( 0\right) }{D\left( T\right) }}\right\} > 0. \] \( \left( {5.4.24}\right) \) (ii) Show that if a multidimensional market model has a portfolio value process \( {X}_{2}\left( t\right) \) such that \( {X}_{2}\left( 0\right) \) is positive and (5.4.24) holds, then the model has a portfolio value process \( {X}_{1}\left( t\right) \) such that \( {X}_{1}\left( 0\right) = 0 \) and (5.4.23) holds.
No
Problem 3
[Problem 3. Is it true that the cohomotopical dimension \( \pi - \dim X = \dim X \) for every \( X \) ?]
No
Example 2.6.4
Example 2.6.4. We determine all proper representations of 1 by \( f = {X}^{2} + {Y}^{2} \) . The discriminant of \( f \) is \( \Delta \left( f\right) = - 4 \) . So we compute all \( \Gamma \) -orbits of forms \( \left( {1, B, C}\right) \) of discriminant -4 . By (2.14) any such \( \Gamma \) -orbit contains a form \( \left( {1, B, C}\right) \) with \( 0 \leq B \leq 1 \) . Hence we must find all pairs \( \left( {B, C}\right) \in \{ 0,1\} \times \mathbb{Z} \) such that \( {B}^{2} - {4C} = - 4 \) . Any such \( B \) must be even. Hence, \( B = 0 \) and \( C = 1 \) . So \( \left( {1,0,1}\right) \Gamma = {f\Gamma } \) is the only \( \Gamma \) -orbit of forms \( \left( {1, B, C}\right) \) . The form \( f = f{I}_{2} \) itself belongs to that \( \Gamma \) -orbit. The first column of \( {I}_{2} \) is \( \left( {1,0}\right) \) . So \( \left( {1,0}\right) \) is a proper representation of 1 by \( f \) . By Example 2.5.8 the automorphism group of \( f \) is \( \operatorname{Aut}\left( f\right) = \left\{ {{I}_{2}, S,{S}^{2},{S}^{3}}\right\} \) . The \( \operatorname{Aut}\left( f\right) \) -orbit of \( \left( {1,0}\right) \) is \( \{ \left( {1,0}\right) ,\left( {0,1}\right) ,\left( {-1,0}\right) ,\left( {0, - 1}\right) \} \) . These are all proper representations of 1 by \( f \) .
Yes
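The four proper representations can also be found by brute force. The following Python sketch (illustrative only) enumerates the integer pairs with \( x^2 + y^2 = 1 \) and \( \gcd(x, y) = 1 \):

```python
from math import gcd

# Enumerate all proper (gcd = 1) integer representations of 1 by x^2 + y^2.
# A small search window suffices since x^2 + y^2 = 1 forces |x|, |y| <= 1.
reps = [(x, y) for x in range(-3, 4) for y in range(-3, 4)
        if x * x + y * y == 1 and gcd(x, y) == 1]

assert sorted(reps) == [(-1, 0), (0, -1), (0, 1), (1, 0)]
```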
Exercise 9.30
Exercise 9.30 Check this, and explicitly describe the (co)equalizers in the categories \( \mathcal{S}et,\mathcal{T}op,\mathcal{A}b,{\mathcal{M}od}_{K}, R \) - \( \mathcal{M}od,\mathcal{M}od \) - \( R,\mathcal{G}rp,\mathcal{C}mr \) . Intuitively, the existence of equalizers allows one to define "subobjects" by means of equations, whereas the coequalizers allow one to define "quotient objects" by imposing relations. For example, the (co)kernel of a homomorphism of abelian groups \( f : A \rightarrow B \) can be described as the (co)equalizer of \( f \) and the zero homomorphism in the category \( \mathcal{A}b \) .
No
Problem 13.2.18
Problem 13.2.18 (Optimal stochastic control problem with average cost for a finite stochastic control system with complete observations) Consider the stochastic control system of Definition 13.2.17. The optimal stochastic control problem with average cost is to determine a control law \( {g}^{ * } \in G \) such that \[ {J}_{ac}^{ * } = \mathop{\inf }\limits_{{g \in G}}{J}_{ac}\left( g\right) = {J}_{ac}\left( {g}^{ * }\right) . \] Is an optimal control law time-invariant or time-varying? An example was constructed by S. Ross.
No
Example 7.13
Previously, we stated that the exact value of the integral (7.5) is \( {\int }_{0}^{1}{x}^{2}\sqrt{1 - {x}^{2}}\mathrm{\;d}x = \frac{\pi }{16} \) . If we use the substitution \( x = \sin t,0 \leq t \leq \frac{\pi }{2} \), we will see why: \[ {\int }_{0}^{1}{x}^{2}\sqrt{1 - {x}^{2}}\mathrm{\;d}x = {\int }_{0}^{\frac{\pi }{2}}{\sin }^{2}t\sqrt{1 - {\sin }^{2}t}\left( {\cos t}\right) \mathrm{d}t = {\int }_{0}^{\frac{\pi }{2}}{\sin }^{2}t\left( {1 - {\sin }^{2}t}\right) \mathrm{d}t \] \[ = {\int }_{0}^{\frac{\pi }{2}}{\sin }^{2}t\mathrm{\;d}t - {\int }_{0}^{\frac{\pi }{2}}{\sin }^{4}t\mathrm{\;d}t = {W}_{2} - {W}_{4}, \] where the \( {W}_{n} \) are defined in Sect. 7.1d. Since \( {W}_{2} = \frac{\pi }{4} \) and \( {W}_{4} = \frac{3}{4}{W}_{2} \), we conclude that \[ {\int }_{0}^{1}{x}^{2}\sqrt{1 - {x}^{2}}\mathrm{\;d}x = \frac{\pi }{4} - \frac{3}{4}\frac{\pi }{4} = \frac{\pi }{16}. \]
Yes
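The value \( \pi/16 \) can be confirmed numerically by applying composite Simpson's rule to the substituted (smooth) integrand \( \sin^2 t \cos^2 t \) on \( [0, \pi/2] \). The sketch below is illustrative, not part of the text:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# integral of sin^2 t * cos^2 t over [0, pi/2] equals pi/16
approx = simpson(lambda t: math.sin(t) ** 2 * math.cos(t) ** 2,
                 0.0, math.pi / 2)
assert abs(approx - math.pi / 16) < 1e-10
```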
Example 1.3
Here are some examples of functions on finite sets - one which is surjective, and one which is not. ![0191b05f-39d9-7158-8173-95092112665e_18_802723.jpg](images/0191b05f-39d9-7158-8173-95092112665e_18_802723.jpg) A surjection from \( \{ 1,2,3,4\} \) to \( \{ a, b, c\} \) ![0191b05f-39d9-7158-8173-95092112665e_18_568670.jpg](images/0191b05f-39d9-7158-8173-95092112665e_18_568670.jpg) Not a surjection from \( \{ 1,2,3 \), Miguel \( \} \) to \( \{ \pi ,\mathrm{b}, \odot \} \)
No
Example 6.2
Example 6.2. Consider the Laplacian on the unit disk \( \mathbb{D} \subset {\mathbb{R}}^{2} \) . In polar coordinates, \[ \Delta = \frac{1}{r}\frac{\partial }{\partial r}\left( {r\frac{\partial }{\partial r}}\right) + \frac{1}{{r}^{2}}\frac{{\partial }^{2}}{\partial {\theta }^{2}}. \] If we substitute \( \phi \left( {r,\theta }\right) = h\left( r\right) {e}^{ik\theta } \) into the eigenvalue equation \( - {\Delta \phi } = {\lambda \phi } \), then the equation for the radial factor is \[ {\left( r\frac{\partial }{\partial r}\right) }^{2}h + \left( {\lambda {r}^{2} - {k}^{2}}\right) h = 0. \] The solutions which are regular as \( r \rightarrow 0 \) are given by \( h\left( r\right) = {J}_{k}\left( {\sqrt{\lambda }r}\right) \), where \( {J}_{k} \) is the standard Bessel function. To satisfy \( h\left( 1\right) = 0 \), we set \( \sqrt{\lambda } = {j}_{k, m} \), where \( \left\{ {0 < {j}_{k,1} < {j}_{k,2} < \ldots }\right\} \) denotes the sequence of zeros of \( {J}_{k} \) . This gives a set of eigenfunctions \[ {\phi }_{k, m}\left( {r,\theta }\right) = {J}_{k}\left( {{j}_{k, m}r}\right) {e}^{ik\theta }. \] An example is shown in Figure 6.2. Fig. 6.2 Dirichlet ![0191afa2-89ba-77d1-9192-1b785a7a8b86_133_553450.jpg](images/0191afa2-89ba-77d1-9192-1b785a7a8b86_133_553450.jpg) eigenfunction on the unit disk This set of eigenfunctions yields a basis for \( {L}^{2}\left( \mathbb{D}\right) \), so that \[ \sigma \left( {-\Delta }\right) = \left\{ {{j}_{k, m}^{2} : k \in \mathbb{Z}, m \in \mathbb{N}}\right\} . \] To prove this, one can use Fubini's theorem and the Fourier basis theorem to reduce the argument to the fact that \( {\left\{ \sqrt{r}{J}_{k}\left( {j}_{k, m}r\right) \right\} }_{k \in \mathbb{Z}, m \in \mathbb{N}} \) is a basis for \( {L}^{2}\left( {0,1}\right) \) . This result is well known from special function theory, although the proof is not exactly elementary.
No
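The zeros \( j_{k,m} \) have no closed form, but \( j_{0,1} \) is easy to approximate. The sketch below (an illustration, using only the power series \( J_0(x) = \sum_m (-1)^m (x/2)^{2m}/(m!)^2 \) and bisection) recovers \( j_{0,1} \approx 2.40483 \), the square root of the fundamental Dirichlet eigenvalue of the unit disk:

```python
def J0(x):
    # partial sum of the power series of the Bessel function J_0;
    # each term is the previous one times -(x^2/4)/m^2
    term, total = 1.0, 1.0
    for m in range(1, 40):
        term *= -(x * x / 4) / (m * m)
        total += term
    return total

# bisection on [2, 3], where J0 changes sign (J0(2) > 0 > J0(3))
a, b = 2.0, 3.0
for _ in range(60):
    mid = (a + b) / 2
    if J0(a) * J0(mid) <= 0:
        b = mid
    else:
        a = mid
j01 = (a + b) / 2

assert abs(j01 - 2.404825557695773) < 1e-9   # tabulated value of j_{0,1}
```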
Example 1.3
Let \( k \) be a commutative ring, \( G \) a finite group, and \( A \) a \( G \) - module algebra. Let \( \mathcal{C} = { \oplus }_{\sigma \in G}A{v}_{\sigma } \) be the left free \( A \) -module with basis indexed by \( G \), and let \( {p}_{\sigma } : \mathcal{C} \rightarrow A \) be the projection onto the free component \( A{v}_{\sigma } \) . We make \( \mathcal{C} \) into a right \( A \) -module by putting \[ {v}_{\sigma }a = \sigma \left( a\right) {v}_{\sigma } \] A comultiplication and counit on \( \mathcal{C} \) are defined by putting \[ {\Delta }_{\mathcal{C}}\left( {a{v}_{\sigma }}\right) = \mathop{\sum }\limits_{{\tau \in G}}a{v}_{\tau }{ \otimes }_{A}{v}_{{\tau }^{-1}\sigma }\text{ and }{\varepsilon }_{\mathcal{C}} = {p}_{e}, \] where \( e \) is the unit element of \( G \) . It is straightforward to verify that \( \mathcal{C} \) is an \( A \) - coring. Notice that, in the case where \( A \) is commutative, we have an example of an \( A \) -coring, which is not an \( A \) -coalgebra, since the left and right \( A \) -action on \( \mathcal{C} \) do not coincide. Let us give a description of the right \( \mathcal{C} \) -comodules. Assume that \( M = \left( {M,\rho }\right) \) is a right \( \mathcal{C} \) -comodule. For every \( m \in M \) and \( \sigma \in G \), let \( \bar{\sigma }\left( m\right) = {m}_{\sigma } = \left( {{I}_{M} \otimes }\right. \) A \( \left. {p}_{\sigma }\right) \left( {\rho \left( m\right) }\right) \) . Then we have \[ \rho \left( m\right) = \mathop{\sum }\limits_{{\sigma \in G}}{m}_{\sigma }{ \otimes }_{A}{v}_{\sigma } \] \( \bar{e} \) is the identity, since \( m = \left( {{I}_{M}{ \otimes }_{A}{\varepsilon }_{\mathcal{C}}}\right) \circ \rho \left( m\right) = {m}_{e} \) . 
Using the coassociativity of the comultiplication, we find \[ \mathop{\sum }\limits_{{\sigma \in G}}\rho \left( {m}_{\sigma }\right) \otimes {v}_{\sigma } = \mathop{\sum }\limits_{{\sigma ,\tau \in G}}{m}_{\sigma }{ \otimes }_{A}{v}_{\tau }{ \otimes }_{A}{v}_{{\tau }^{-1}\sigma } = \mathop{\sum }\limits_{{\rho ,\tau \in G}}{m}_{\tau \rho }{ \otimes }_{A}{v}_{\tau }{ \otimes }_{A}{v}_{\rho }, \] hence \( \rho \left( {m}_{\sigma }\right) = \mathop{\sum }\limits_{{\tau \in G}}{m}_{\tau \sigma }{ \otimes }_{A}{v}_{\tau } \) and \( \overline{\tau }\left( {\overline{\sigma }\left( m\right) }\right) = {m}_{\tau \sigma } = \overline{\tau \sigma }\left( m\right) \), so \( G \) acts as a group of \( k \) -automorphisms on \( M \) . Moreover, since \( \rho \) is right \( A \) -linear, we have that \[ \rho \left( {ma}\right) = \mathop{\sum }\limits_{{\sigma \in G}}\bar{\sigma }\left( {ma}\right) { \otimes }_{A}{v}_{\sigma } = \mathop{\sum }\limits_{{\sigma \in G}}\bar{\sigma }\left( m\right) { \otimes }_{A}{v}_{\sigma }a = \mathop{\sum }\limits_{{\sigma \in G}}\bar{\sigma }\left( m\right) \sigma \left( a\right) { \otimes }_{A}{v}_{\sigma } \] so \( \bar{\sigma } \) is \( A \) -semilinear (cf. [29, p. 55]): \( \bar{\sigma }\left( {ma}\right) = \bar{\sigma }\left( m\right) \sigma \left( a\right) \), for all \( m \in M \) and \( a \in A \) . Conversely, if \( G \) acts as a group of right \( A \) -semilinear automorphisms on \( M \), then the formula \[ \rho \left( m\right) = \mathop{\sum }\limits_{{\sigma \in G}}\bar{\sigma }\left( m\right) { \otimes }_{A}{v}_{\sigma } \] defines a right \( \mathcal{C} \) -comodule structure on \( M \) .
No
Exercise 1.16
Exercise 1.16 Let \( {\left( {x}_{n}\right) }_{n \in \mathbb{N}} \) be a sequence in a complete metric space \( \left( {\mathcal{X}, d}\right) \) such that \( \mathop{\sum }\limits_{{n \in \mathbb{N}}}d\left( {{x}_{n},{x}_{n + 1}}\right) < + \infty \) . Show that \( {\left( {x}_{n}\right) }_{n \in \mathbb{N}} \) converges and that this is no longer true if we merely assume that \( \mathop{\sum }\limits_{{n \in \mathbb{N}}}{d}^{2}\left( {{x}_{n},{x}_{n + 1}}\right) < + \infty \) .
No
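For the second claim, a standard counterexample in \( (\mathbb{R}, |\cdot|) \) is \( x_n = H_n \), the \( n \)-th harmonic number: the step distances are \( d(x_n, x_{n+1}) = 1/(n+1) \), whose squares are summable, yet \( x_n \to \infty \). The sketch below (illustrative, and something of a spoiler for the exercise) checks this numerically:

```python
# x_n = H_n (harmonic numbers): sum of squared steps stays bounded by pi^2/6,
# while the sequence itself grows like log(n) and so cannot converge.
partial, sq_sum = 0.0, 0.0
for n in range(1, 10 ** 6 + 1):
    partial += 1.0 / n          # H_n
    sq_sum += (1.0 / n) ** 2    # sum of d(x_{n-1}, x_n)^2

assert sq_sum < 1.6449341       # bounded by pi^2/6 ~ 1.6449341
assert partial > 14             # H_{10^6} ~ ln(10^6) + gamma ~ 14.39
```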
Example 6.4.1
6.4.1 Suppose that \( {\pi \sigma } \in {\mathbb{L}}_{\mathrm{loc}}^{2,4},{V}_{0} \in {\mathbb{D}}_{\mathrm{loc}}^{1,2},\left( {{\mu }_{t} - {\rho }_{t}}\right) {\pi }_{t} - \frac{{\left( {\pi }_{t}{\sigma }_{t}\right) }^{2}}{2} \in {\mathbb{L}}_{\mathrm{loc}}^{1,2} \) and the value process \( {V}_{t} \) has continuous trajectories. Then, apply Itô’s formula (3.36) in order to deduce (6.32).
No
Example 2.34
Example 2.34. Any random matrix and the identity are asymptotically free.
No
Exercise 1.5
For the following nonlinear ODEs, find a particular solution: (1) \( {x}^{2}{y}^{\prime \prime } - {\left( {y}^{\prime }\right) }^{2} + {2y} = 0 \) , (2) \( x{y}^{\prime \prime \prime } + 3{y}^{\prime \prime } = x{e}^{-{y}^{\prime }} \) , (3) \( {x}^{2}{y}^{\prime \prime } - 2{\left( {y}^{\prime }\right) }^{3} + {6y} = 0 \) , (4) \( {y}^{\prime \prime } + \frac{2}{x}{y}^{\prime } = {y}^{m}, m \neq 3 \) , (5) \( {y}^{\prime \prime \prime } - \frac{15}{{x}^{2}}{y}^{\prime } = 3{y}^{2} \) .
No
Example 1.19
Example 1.19 Let \( X = {\mathbb{R}}^{2} \) . One can write any function \( f : S \rightarrow {\mathbb{R}}^{2} \) in terms of component functions \( {fs} = \left( {{xs},{ys}}\right) \), where the components \( {xs} \) and \( {ys} \) are simply given by the composition ![0191b040-d6f7-7f03-83f6-23d666ea31d2_32_314365.jpg](images/0191b040-d6f7-7f03-83f6-23d666ea31d2_32_314365.jpg) The function \( f \) is continuous if and only if \( x \) and \( y \) are continuous, and it’s good to realize that this way of specifying which functions into \( {\mathbb{R}}^{2} \) are continuous completely determines the topology on \( {\mathbb{R}}^{2} \) . But be careful: functions from \( {\mathbb{R}}^{2} \) and more generally \( {\mathbb{R}}^{n} \) can be confusing, in part because our familiarity with \( {\mathbb{R}}^{n} \) can give unjustified topological importance to the maps \( \mathbb{R} \rightarrow {\mathbb{R}}^{2} \) given by fixing one of the coordinates. So don't make the mistake of thinking that a function \( f : {\mathbb{R}}^{2} \rightarrow S \) is continuous if the maps \( x \mapsto f\left( {x,{y}_{0}}\right) \) and \( y \mapsto f\left( {{x}_{0}, y}\right) \) are continuous for every \( {x}_{0} \) and \( {y}_{0} \), as in the diagram below: ![0191b040-d6f7-7f03-83f6-23d666ea31d2_33_950207.jpg](images/0191b040-d6f7-7f03-83f6-23d666ea31d2_33_950207.jpg) Here’s a counterexample to keep in mind: the function \( f : {\mathbb{R}}^{2} \rightarrow \mathbb{R} \) defined by \[ f\left( {x, y}\right) = \left\{ \begin{array}{ll} \frac{xy}{{x}^{2} + {y}^{2}} & \text{ if }\left( {x, y}\right) \neq \left( {0,0}\right) \\ 0 & \text{ if }\left( {x, y}\right) = \left( {0,0}\right) \end{array}\right. \] is not continuous even though for any choice of \( {x}_{0} \) or \( {y}_{0} \), the maps \( f\left( {x,{y}_{0}}\right) \) and \( f\left( {{x}_{0}, y}\right) \) are continuous functions \( \mathbb{R} \rightarrow \mathbb{R} \) .
No
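The counterexample can be checked directly. The sketch below (illustrative) shows that along the diagonal the function is identically \( 1/2 \) away from the origin, while both axis slices through the origin vanish, so \( f \) cannot be continuous at \( (0,0) \):

```python
def f(x, y):
    # the counterexample from the text: xy/(x^2 + y^2), patched with f(0,0) = 0
    if (x, y) == (0.0, 0.0):
        return 0.0
    return x * y / (x * x + y * y)

# the diagonal limit is 1/2, not f(0, 0) = 0:
for t in [1.0, 1e-3, 1e-6, 1e-9]:
    assert abs(f(t, t) - 0.5) < 1e-12

assert f(0.0, 0.0) == 0.0
# yet every single-variable slice through the origin is identically zero:
assert all(f(t, 0.0) == 0.0 and f(0.0, t) == 0.0 for t in [1.0, 0.5, 1e-8])
```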
Example 4.1.1
Example 4.1.1 Let \( {\mathcal{F}}_{t} \) be the information available until time \( t \) regarding the evolution of a stock. Assume the price of the stock at time \( t = 0 \) is \( \$ {50} \) per share. The following decisions are stopping times: (a) Sell the stock when it reaches for the first time the price of \( \$ {100} \) per share; (b) Buy the stock when it reaches for the first time the price of \( \$ {10} \) per share; (c) Sell the stock at the end of the year; (d) Sell the stock either when it reaches for the first time \( \$ {80} \) or at the end of the year; (e) Keep the stock either until the initial investment doubles or until the end of the year. The following decision is not a stopping time: \( \left( f\right) \) Sell the stock when it reaches the maximum level it will ever attain. Part \( \left( f\right) \) is not a stopping time because it requires information about the future that is not contained in \( {\mathcal{F}}_{t} \) . In part (e) there are two conditions; the latter occurs with probability 1.
No
Problem 6.3
Problem 6.3. An airport bus deposits 25 passengers at 7 stops. Each passenger is as likely to get off at any stop as at any other, and the passengers act independently of one another. The bus makes a stop only if someone wants to get off. Use Markov chain analysis to calculate the probability mass function of the number of bus stops. (answer: \( \left( {0,{0.0000},{0.0000},{0.0000},{0.0046},{0.1392},{0.8562}}\right) \) )
Yes
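The stated answer can be reproduced without the Markov chain computation, by counting surjections with inclusion-exclusion (an alternative method, not the analysis the problem asks for). The following sketch computes the pmf of the number of distinct stops:

```python
from math import comb

def pmf_stops(passengers=25, stops=7):
    # P(exactly k stops used) = C(stops, k) * surj(passengers, k) / stops^passengers,
    # where surj counts surjections from the passengers onto k chosen stops.
    total = stops ** passengers
    pmf = {}
    for k in range(1, stops + 1):
        surj = sum((-1) ** j * comb(k, j) * (k - j) ** passengers
                   for j in range(k + 1))
        pmf[k] = comb(stops, k) * surj / total
    return pmf

p = pmf_stops()
assert abs(p[7] - 0.8562) < 1e-3   # matches the quoted answer
assert abs(p[6] - 0.1392) < 1e-3
assert abs(sum(p.values()) - 1.0) < 1e-9
```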
Example 5.11
Example 5.11. Consider the curve defined by \( {y}^{2} = \left( {x - {r}_{1}}\right) \left( {x - {r}_{2}}\right) \left( {x - {r}_{3}}\right) \) . Let’s assume that all three \( {r}_{i} \) are distinct, say \( {y}^{2} = \left( {x + 1}\right) x\left( {x - 1}\right) - \) that is, \( {y}^{2} = {x}^{3} - x \) . Figure 5.3 depicts its real portion. It is straightforward to check that this degree- 3 curve in \( {\mathbb{P}}^{2}\left( \mathbb{C}\right) \) is nonsingular. Therefore it is a closed oriented 2-manifold having genus \( g = \left( {3 - 1}\right) \left( {3 - 2}\right) /2 = 1 \) , making the curve a topological torus. The real portion appearing in Figure 5.3 reveals only a tiny part of the entire complex curve. To see it all, we'd need to look in real 4-space. Fortunately, the part in one particular real
No
Problem 4.7
Problem 4.7. Let \( f : K \rightarrow \mathbb{R} \) be a discrete Morse function with \( \overrightarrow{f} \) a perfect discrete Morse vector. Show that (i) \( \overrightarrow{f} \) is unique; (ii) \( \overrightarrow{f} \) is optimal.
No
Example 7.3.5
(1) Let \( G = \mathbf{R} \) . Then, as we have already seen (see Example 3.4.10), the dual group is isomorphic as a group to \( \mathbf{R} \) using the map \[ \left\{ \begin{matrix} \mathbf{R} & \rightarrow & \widehat{G} \\ t & \mapsto & {e}_{t}, \end{matrix}\right. \] where \( {e}_{t}\left( x\right) = {e}^{itx} \) . It is an elementary exercise that this is also a homeomorphism, so that \( \widehat{G} \) is isomorphic to \( \mathbf{R} \) as a topological group. The corresponding abstract Fourier transform is defined for \( f \in {L}^{1}\left( \mathbf{R}\right) \) by \[ \widehat{f}\left( t\right) = {\int }_{\mathbf{R}}f\left( x\right) {e}^{-{itx}}{dx} \] and coincides therefore with the "usual" Fourier transform. The Fourier inversion formula, with this normalization, is \[ f\left( x\right) = \frac{1}{2\pi }{\int }_{\mathbf{R}}\widehat{f}\left( t\right) {e}^{itx}{dt} \] so that the dual Haar measure of the Lebesgue measure is \( \frac{1}{2\pi }{dt} \) . (This illustrates that one must be somewhat careful with the normalizations of Haar measure.)
No
Example 1.3.1
Example 1.3.1. Verify that the function \( u\left( x\right) = {e}^{-x} \) is a solution of the ODE \( {u}^{\prime } = - u \) . Solution: Observe that \( {u}^{\prime }\left( x\right) = - {e}^{-x} \) and that \( - u\left( x\right) = - {e}^{-x} \), for any real number \( x \) . Substituting these into the given ODE results in a true statement, for any real number \( x \) . So, \( u\left( x\right) \) is a solution to the given ODE.
No
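The verification can also be carried out numerically (an illustrative check, not part of the original solution): a central-difference approximation of \( u' \) should agree with \( -u \) at every sample point.

```python
import math

# Compare a central-difference approximation of u'(x) with -u(x)
# for u(x) = e^{-x} at a few sample points.
h = 1e-6
for x in [-2.0, 0.0, 1.5, 3.0]:
    u = math.exp(-x)
    du = (math.exp(-(x + h)) - math.exp(-(x - h))) / (2 * h)
    assert abs(du - (-u)) < 1e-8
```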
Example 2.32
[Example 2.32. In the case of \( {S}^{1} \), the map \( f\left( z\right) = {z}^{k} \), where we view \( {S}^{1} \) as the unit circle in \( \mathbb{C} \), has degree \( k \) . This is evident in the case \( k = 0 \) since \( f \) is then constant. The case \( k < 0 \) reduces to the case \( k > 0 \) by composing with \( z \mapsto {z}^{-1} \), which is a reflection, of degree -1. To compute the degree when \( k > 0 \), observe first that for any \( y \in {S}^{1},{f}^{-1}\left( y\right) \) consists of \( k \) points \( {x}_{1},\cdots ,{x}_{k} \) near each of which \( f \) is a local homeomorphism. Near each \( {x}_{i} \) the map \( f \) can be homotoped, stretching by a factor of \( k \) without changing local degree, to become the restriction of a rotation of \( {S}^{1} \) . A rotation has degree +1 since it is homotopic to the identity, and since a rotation is a homeomorphism, its degree equals its local degree at any point. Hence \( \deg f \mid {x}_{i} = 1 \) and \( \deg f = k \) .]
Yes
Example 2.68
[Example 2.68 (Arens algebra \( {L}^{\omega }\left( {0,1}\right) \) ) Let \( \parallel \cdot {\parallel }_{p} \) denote the norm of \( {L}^{p}\left( {0,1}\right) \) with respect to the Lebesgue measure on \( \left\lbrack {0,1}\right\rbrack \) . Let \( p \in \left( {1, + \infty }\right) \) . For \( f, g \in {L}^{p}\left( {0,1}\right) \), we have by the Hölder inequality, \[ \parallel {fg}{\parallel }_{p} \leq \parallel f{\parallel }_{2p}\parallel g{\parallel }_{2p} \tag{2.59} \] The vector space \( {L}^{\omega }\left( {0,1}\right) \mathrel{\text{:=}} { \cap }_{1 < p < \infty }{L}^{p}\left( {0,1}\right) \), equipped with the locally convex topology defined by the norms \( \parallel \cdot {\parallel }_{p}, p \in \left( {1,\infty }\right) \), is a Fréchet space. From (2.59) it follows that for \( f, g \in {L}^{\omega }\left( {0,1}\right) \) the pointwise product \( {fg} \) is also in \( {L}^{\omega }\left( {0,1}\right) \) and the multiplication is jointly continuous in the topology of \( {L}^{\omega }\left( {0,1}\right) \) . Further, \( {L}^{\omega }\left( {0,1}\right) \) is a complex unital \( * \) -algebra with involution \( {f}^{ + }\left( t\right) \mathrel{\text{:=}} \overline{f\left( t\right) } \) and we have \( {\begin{Vmatrix}{f}^{ + }\end{Vmatrix}}_{p} = \parallel f{\parallel }_{p} \) for \( f \in {L}^{\omega }\left( {0,1}\right) \) . Thus, \[ {L}^{\omega }\left( {0,1}\right) = { \cap }_{1 < p < \infty }{L}^{p}\left( {0,1}\right) \] is a commutative Fréchet \( * \) -algebra, called the Arens algebra. We prove that the \( * \) -algebra \( {L}^{\omega }\left( {0,1}\right) \) has no character. Assume, to the contrary, that it has a character \( \chi \) .
Its restriction to \( C\left( {\left\lbrack {0,1}\right\rbrack ;\mathbb{R}}\right) \) is a character, so by Lemma 2.65 there exists an \( {x}_{0} \in \left\lbrack {0,1}\right\rbrack \) such that \( \chi \left( f\right) = f\left( {x}_{0}\right) \) for \( f \in C\left( {\left\lbrack {0,1}\right\rbrack ;\mathbb{R}}\right) \) . Define \[ g\left( x\right) = \log \left| {x - {x}_{0}}\right|, h\left( x\right) = {\left( \log \left| x - {x}_{0}\right| \right) }^{-1}\text{ for }x \in \left\lbrack {0,1}\right\rbrack, x \neq {x}_{0}, h\left( {x}_{0}\right) = 0. \] Then \( h \in C\left( {\left\lbrack {0,1}\right\rbrack ;\mathbb{R}}\right), g \in {L}^{\omega }\left( {0,1}\right) \), and \( {gh} = 1 \) in \( {L}^{\omega }\left( {0,1}\right) \), so we obtain \[ 1 = \chi \left( 1\right) = \chi \left( {gh}\right) = \chi \left( g\right) \chi \left( h\right) = \chi \left( g\right) h\left( {x}_{0}\right) = 0, \] a contradiction.]
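Inequality (2.59), the analytic input used above, is easy to spot-check numerically with midpoint quadrature; the sample functions, exponents, and grid size below are ad hoc choices, not from the text:

```python
import math

# Midpoint-rule approximation of the L^p(0,1) norm.
def lp_norm(h, p, n=20000):
    s = sum(abs(h((j + 0.5) / n)) ** p for j in range(n)) / n
    return s ** (1.0 / p)

f = lambda t: math.log(t)              # unbounded, but in L^p(0,1) for all p
g = lambda t: math.cos(10.0 * t) + 2.0  # bounded sample function

# Check ||fg||_p <= ||f||_{2p} * ||g||_{2p} for a few exponents.
for p in (1.5, 2.0, 3.0):
    lhs = lp_norm(lambda t: f(t) * g(t), p)
    rhs = lp_norm(f, 2 * p) * lp_norm(g, 2 * p)
    assert lhs <= rhs
```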
No
Example 4.26
Example 4.26 Find the inflection points and concave up and down ranges of \[ f\left( x\right) = {x}^{3} - {4x} \] ## Solution: We need the second derivative: \[ {f}^{\prime }\left( x\right) = 3{x}^{2} - 4 \] \[ {f}^{\prime \prime }\left( x\right) = {6x} \] Solve \( {6x} = 0 \), and we get that there is an inflection value at \( x = 0 \) and an inflection point at \( \left( {0,0}\right) \) . Let’s make a sign chart plugging in \( {f}^{\prime \prime }\left( {-1}\right) = - 6 \) and \( {f}^{\prime \prime }\left( 1\right) = 6 \) to get: \[ \left( {-\infty }\right) - - - \left( 0\right) + + + \left( \infty \right) \] Another way to understand where a curve is concave up or down is that the shape of the curve holds water where it is concave up and does not where it is concave down. Verify the information about concavity and inflection points on the following graph. ![0191b320-7a43-788e-82f6-f52b7ecc459b_167_699696.jpg](images/0191b320-7a43-788e-82f6-f52b7ecc459b_167_699696.jpg) \( \diamond \) Concavity can also change at vertical asymptotes. So they are included on the second derivative sign chart for concavity. Let's do an example that shows this phenomenon.
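The concavity conclusions can be double-checked numerically with a central second difference (a sketch; the step size and sample points are arbitrary):

```python
def f(x):
    return x ** 3 - 4 * x

def f2(x, h=1e-4):
    # Central second difference; exact for cubics up to roundoff.
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

assert f2(-1) < 0   # concave down on (-infinity, 0)
assert f2(1) > 0    # concave up on (0, infinity)
assert f(0) == 0    # the inflection point is (0, 0)
```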
No
Example 15
Example 15. Let \( \operatorname{Prog} = \left\{ {P\left( {\widehat{{x}_{1}},\widehat{g\left( {x}_{2}\right) }}\right) \leftarrow {P}^{\prime }\left( {\widehat{{x}_{1}},\widehat{{x}_{2}}}\right) .\;P\left( {\widehat{f\left( {x}_{1}\right) },\widehat{{x}_{2}}}\right) \leftarrow {P}^{\prime \prime }\left( {\widehat{{x}_{1}},\widehat{{x}_{2}}}\right) .}\right\} \), and consider \( G = P\left( {f\left( x\right), y}\right) \) . Thus, \( P\left( {f\left( x\right), y}\right) { ⤳ }_{{\sigma }_{1}}{P}^{\prime }\left( {f\left( x\right) ,{x}_{2}}\right) \) with \( {\sigma }_{1} = \left\lbrack {{x}_{1}/f\left( x\right), y/g\left( {x}_{2}\right) }\right\rbrack \) and \( P\left( {f\left( x\right), y}\right) { ⤳ }_{{\sigma }_{2}}{P}^{\prime \prime }\left( {x, y}\right) \) with \( {\sigma }_{2} = \left\lbrack {{x}_{1}/x,{x}_{2}/y}\right\rbrack \) .
No
Exercise 11.7.2
Use the last results to find that the eigenvalues of matrix \( A \), defined by (11.7.22), are expressed by \[ {\alpha }_{ik} = {\beta }_{i} + 2\cos \left( {{k\pi }/{n}_{y}}\right) = - 2\left( {1 + {\sigma }^{2}}\right) + 2{\sigma }^{2}\cos \left( {{i\pi }/{n}_{x}}\right) + 2\cos \left( {{k\pi }/{n}_{y}}\right) ,\;i = 0,\ldots ,{n}_{x},\;k = 0,\ldots ,{n}_{y}. \tag{11.7.27} \] Deduce that \( A \) is singular.
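The deduction rests on the observation that (11.7.27) vanishes at \( i = k = 0 \), so 0 is an eigenvalue of \( A \). A quick numerical check (the parameter values below are arbitrary):

```python
import math

# alpha_00 = -2(1 + sigma^2) + 2 sigma^2 cos(0) + 2 cos(0) = 0 for any
# sigma, so A has eigenvalue 0 and is therefore singular.
def alpha(i, k, sigma, nx, ny):
    return (-2.0 * (1.0 + sigma ** 2)
            + 2.0 * sigma ** 2 * math.cos(i * math.pi / nx)
            + 2.0 * math.cos(k * math.pi / ny))

for sigma in (0.5, 1.0, 2.0):
    assert abs(alpha(0, 0, sigma, nx=8, ny=5)) < 1e-12
```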
No