Columns: ID (string, lengths 9-15) · Exercise (string, lengths 27-6k) · judge (string, 2 classes)
Example 7
Let us consider a fractional analogue of the wave equation with space-fractional derivatives:
\[ u_{tt} = - {}_{x}^{C}D_{1}^{\alpha}\left( {}_{0}D_{x}^{\alpha}u \right), \quad x \in (0,1), \quad \alpha \in (0,1). \tag{46} \]
This equation is the Euler-Lagrange equation (36) with the Lagrangian
\[ \mathcal{L} = \frac{1}{2}\left[ \left( {}_{0}D_{x}^{\alpha}u \right)^{2} - u_{t}^{2} \right] \tag{47} \]
and, in particular, has the symmetries
\[ X_{1} = \frac{\partial}{\partial t}, \quad X_{2} = u\frac{\partial}{\partial u}, \quad X_{3} = x^{\alpha-1}\frac{\partial}{\partial u}, \quad X_{4} = x^{\alpha}\frac{\partial}{\partial u}. \]
The conservation law (34) takes the form
\[ D_{t}C^{t} + D_{x}C^{x} = 0. \tag{48} \]
It can be easily verified that with this Lagrangian the operators \( X_{1} \) and \( X_{3} \) satisfy condition (42), and the operator \( X_{4} \) satisfies the divergence condition (43) with \( H^{x} = \Gamma(\alpha+1)\, {}_{0}I_{x}^{1-\alpha}u \). The operator \( X_{2} \) does not satisfy these conditions and therefore cannot be used for finding conservation laws. For a Lagrangian of the form \( \mathcal{L} = \mathcal{L}\left( u_{t}, {}_{0}D_{x}^{\alpha}u \right) \), the components of the conserved vector defined by (45), (40) take the forms
\[ C^{t} = \mathcal{N}^{t}\mathcal{L} = \xi^{t}\mathcal{L} + W\frac{\partial\mathcal{L}}{\partial u_{t}}, \]
\[ C^{x} = \mathcal{N}^{x}\mathcal{L} - H^{x} = \xi^{x}\mathcal{L} + {}_{0}I_{x}^{1-\alpha}(W)\frac{\partial\mathcal{L}}{\partial\left( {}_{0}D_{x}^{\alpha}u \right)} - J_{x+}^{1-\alpha}\left\{ W, D_{x}\frac{\partial\mathcal{L}}{\partial\left( {}_{0}D_{x}^{\alpha}u \right)} \right\} - H^{x} \]
with \( W = \eta - \xi^{t}u_{t} - \xi^{x}u_{x} \). The operator \( X_{1} \) gives \( W = -u_{t} \) and provides a conserved vector with the components
\[ C^{t} = \frac{u_{t}^{2}}{2} + \frac{\left( {}_{0}D_{x}^{\alpha}u \right)^{2}}{2}, \quad C^{x} = -\left( {}_{0}D_{x}^{\alpha}u \right)\left( {}_{0}I_{x}^{1-\alpha}u_{t} \right) + J_{x+}^{1-\alpha}\left\{ u_{t}, {}_{0}D_{x}^{\alpha+1}u \right\}. \]
The corresponding conservation law is the energy conservation law. For the operator \( X_{3} \), one has \( W = x^{\alpha-1} \), and the components of a conserved vector can be written as
\[ C^{t} = -x^{\alpha-1}u_{t}, \quad C^{x} = \Gamma(\alpha)\, {}_{0}D_{x}^{\alpha}u - J_{x+}^{1-\alpha}\left\{ x^{\alpha-1}, {}_{0}D_{x}^{\alpha+1}u \right\}. \]
The corresponding conservation law is a fractional differential analogue of the momentum conservation law. The operator \( X_{4} \) gives \( W = x^{\alpha} \) and, in view of Remark 2, provides a conserved vector with the components
\[ C^{t} = -x^{\alpha}u_{t}, \quad C^{x} = \Gamma(1+\alpha)\left[ x\, {}_{0}D_{x}^{\alpha}u - {}_{0}I_{x}^{1-\alpha}u \right] - J_{x+}^{1-\alpha}\left\{ x^{\alpha}, {}_{0}D_{x}^{\alpha+1}u \right\}. \]
The corresponding conservation law is a consequence of the momentum conservation law. Direct differentiation allows one to easily verify that all the conserved vectors found above satisfy conservation law (48) by virtue of (46).
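The fractional operators make a direct symbolic check cumbersome, but the classical limit \( \alpha = 1 \), where \( {}_{0}D_{x}^{\alpha}u \to u_x \) and (46) reduces to the wave equation \( u_{tt} = u_{xx} \), can be checked with sympy. A minimal sketch (a sanity check of the limiting case only, not of the fractional computation itself), using the \( X_1 \) conserved vector, which reduces to the familiar energy density and flux:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.sin(x - t)  # a traveling-wave solution of the alpha = 1 limit u_tt = u_xx

Ct = (sp.diff(u, t)**2 + sp.diff(u, x)**2) / 2   # C^t = (u_t^2 + u_x^2)/2
Cx = -sp.diff(u, x) * sp.diff(u, t)              # C^x = -u_x u_t

# D_t C^t + D_x C^x should vanish on solutions, cf. (48)
print(sp.simplify(sp.diff(Ct, t) + sp.diff(Cx, x)))  # -> 0
```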
No
Exercise 10.11
Exercise 10.11. Let \( \delta > 0 \) be given. Consider an interest rate swap paying a fixed interest rate \( K \) and receiving backset LIBOR \( L\left( {{T}_{j - 1},{T}_{j - 1}}\right) \) on a principal of \( 1 \) at each of the payment dates \( {T}_{j} = {\delta j}, j = 1,2,\ldots, n + 1 \) . Show that the value of the swap is \[ {\delta K}\mathop{\sum }\limits_{{j = 1}}^{{n + 1}}B\left( {0,{T}_{j}}\right) - \delta \mathop{\sum }\limits_{{j = 1}}^{{n + 1}}B\left( {0,{T}_{j}}\right) L\left( {0,{T}_{j - 1}}\right) . \tag{10.7.22} \] Remark 10.7.1. The swap rate is defined to be the value of \( K \) that makes the initial value of the swap equal to zero. Thus, the swap rate is \[ K = \frac{\mathop{\sum }\limits_{{j = 1}}^{{n + 1}}B\left( {0,{T}_{j}}\right) L\left( {0,{T}_{j - 1}}\right) }{\mathop{\sum }\limits_{{j = 1}}^{{n + 1}}B\left( {0,{T}_{j}}\right) }. \tag{10.7.23} \]
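A small numerical sketch of the swap rate (10.7.23). The discount factors below are hypothetical sample numbers, and the forward LIBOR rates \( L(0, T_{j-1}) \) are backed out of them via the usual relation \( \delta L(0, T_{j-1}) = B(0, T_{j-1})/B(0, T_j) - 1 \) (cf. the definition of forward LIBOR in Exercise 10.2):

```python
delta = 0.25
# hypothetical zero-coupon bond prices B(0, T_j) for T_j = delta*j, j = 0, ..., n+1
B = [1.0, 0.99, 0.979, 0.967, 0.954]   # here n = 3

# backset LIBOR set at T_{j-1}: delta * L(0, T_{j-1}) = B(0, T_{j-1})/B(0, T_j) - 1
L = [(B[j - 1] / B[j] - 1) / delta for j in range(1, len(B))]

# swap rate (10.7.23): the K that makes the swap value (10.7.22) zero
K = sum(B[j] * L[j - 1] for j in range(1, len(B))) / sum(B[j] for j in range(1, len(B)))
print(K)
# equivalently, by telescoping the numerator: K = (B[0] - B[-1]) / (delta * sum(B[1:]))
print((B[0] - B[-1]) / (delta * sum(B[1:])))
```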
No
Example 2.9.12
Example 2.9.12 (Determinants and permanents): The permanent of the matrix (we expand similarly to the Laplace expansion of determinants) \[ \left( \begin{matrix} a & b & c \\ d & e & f \\ g & h & i \end{matrix}\right) \] is \( a\left( {{ei} + {fh}}\right) + b\left( {{di} + {gf}}\right) + c\left( {{dh} + {ge}}\right) = {aei} + {afh} + {bgf} + {cdh} + \) \( {bdi} + {cge}. \) Its determinant is \( a\left( {{ei} - {fh}}\right) - b\left( {{di} - {gf}}\right) + c\left( {{dh} - {ge}}\right) = \) \( {aei} - {afh} + {bgf} - {bdi} + {cdh} - {cge} \) . In a formal manner, the permanent of the square matrix \[ \left( \begin{matrix} {a}_{11} & {a}_{12} & \ldots & {a}_{1n} \\ {a}_{21} & {a}_{22} & \ldots & {a}_{2n} \\ \vdots & \vdots & \vdots & \vdots \\ {a}_{n1} & {a}_{n2} & \ldots & {a}_{nn} \end{matrix}\right) \] is defined as the number \[ \mathop{\sum }\limits_{p}{a}_{1{p}_{1}}{a}_{2{p}_{2}}\cdots {a}_{n{p}_{n}} \] where the sum is taken over all possible permutations \( p = \left( {{p}_{1},{p}_{2},\ldots ,{p}_{n}}\right) \) on the set \( \{ 1,2,\ldots, n\} \) . Hence the permanent of an \( n \times n \) square matrix possesses exactly \( n \) ! terms (since there are \( n \) ! permutations possible on an \( n \) -set) and is the sum of the products of all possible permutations of the set \( \left\lbrack n\right\rbrack \) . Note that the action of the permutation \( p \) on the element \( i \) is denoted conveniently by \( {p}_{i} \) instead of \( p\left( i\right) \) .
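A brute-force sketch of the definition, summing over all \( n! \) permutations (fine for small \( n \)); the symbolic check reproduces the \( 3 \times 3 \) expansion above:

```python
from itertools import permutations
from math import prod
import sympy as sp

def permanent(A):
    """Sum over all n! permutations p of the products A[0][p_0] * ... * A[n-1][p_{n-1}]."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

a, b, c, d, e, f, g, h, i = sp.symbols('a b c d e f g h i')
M = [[a, b, c], [d, e, f], [g, h, i]]
print(sp.expand(permanent(M)))        # a*e*i + a*f*h + b*d*i + b*f*g + c*d*h + c*e*g
print(sp.expand(sp.Matrix(M).det()))  # the signed (determinant) version, for comparison
```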
Yes
Exercise 2.22
Exercise 2.22 Let \[ A = \left\lbrack \begin{matrix} 1 & 1 & - 1 & - 1 \\ 0 & \varepsilon & 0 & 0 \\ 0 & 0 & \varepsilon & 0 \\ 1 & 0 & 0 & 1 \end{matrix}\right\rbrack ,\;b = \left\lbrack \begin{array}{l} 0 \\ 1 \\ 1 \\ 2 \end{array}\right\rbrack . \] The solution of the linear system \( {Ax} = b \) is \( x = {\left\lbrack 1,{\varepsilon }^{-1},{\varepsilon }^{-1},1\right\rbrack }^{T} \) . (a) Show that this system is well-conditioned but badly scaled, by computing the condition number \( \kappa_C(A) = {\begin{Vmatrix} \left| A^{-1} \right| \left| A \right| \end{Vmatrix}}_{\infty} \) and the scaling quantity \( \sigma \left( {A, x}\right) \) (see Exercise 2.21). What do you expect from Gaussian elimination when \( \varepsilon \) is substituted by the relative machine precision eps? (b) Solve the system by a Gaussian elimination program with column pivoting for \( \varepsilon = \) eps. How big is the computed backward error \( \widehat{\eta } \) ? (c) Check yourself that one single refinement step delivers a stable result.
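A numerical sketch for (a) and (b) in NumPy, assuming IEEE double precision; np.linalg.solve (LU with partial pivoting) stands in for the Gaussian elimination program, and the backward error is estimated by the usual normwise residual formula (Exercise 2.21's \( \sigma(A,x) \) is not reproduced here since its definition is not given in this excerpt):

```python
import numpy as np

eps = np.finfo(float).eps
A = np.array([[1., 1., -1., -1.],
              [0., eps,  0.,  0.],
              [0., 0.,  eps,  0.],
              [1., 0.,   0.,  1.]])
b = np.array([0., 1., 1., 2.])
x_exact = np.array([1., 1/eps, 1/eps, 1.])

# Skeel condition number kappa_C(A) = || |A^{-1}| |A| ||_inf (moderate => well-conditioned)
print(np.linalg.norm(np.abs(np.linalg.inv(A)) @ np.abs(A), np.inf))

x = np.linalg.solve(A, b)          # LU factorization with partial pivoting
r = b - A @ x
# normwise backward error estimate: ||r||_inf / (||A||_inf * ||x||_inf)
print(np.linalg.norm(r, np.inf) / (np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf)))
print(np.linalg.norm(x - x_exact, np.inf) / np.linalg.norm(x_exact, np.inf))
```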
No
Example 10.14
Design a combinational circuit for calculating the square of elements of \( \mathrm{GF}\left( {2}^{4}\right)\left\{ {x}^{4} + {f}_{3}{x}^{3} + {f}_{2}{x}^{2} + {f}_{1}x + {f}_{0}\right\} \), where \( P\left( x\right) = {x}^{4} + {f}_{3}{x}^{3} + {f}_{2}{x}^{2} + {f}_{1}x + {f}_{0} \) is any primitive polynomial of degree 4.
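A software sketch of the squaring map for one concrete choice, \( P(x) = x^4 + x + 1 \) (a primitive polynomial of degree 4; the exercise allows any). Since squaring over GF(2) is linear, it collapses to a small XOR network, which is exactly what the combinational circuit would implement:

```python
P = 0b10011                      # P(x) = x^4 + x + 1, one primitive choice

def gf16_mul(a, b):
    """Carry-less multiplication in GF(2^4), reduced modulo P(x)."""
    r = 0
    for i in range(4):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(7, 3, -1):    # reduce bits of degree 7..4
        if (r >> i) & 1:
            r ^= P << (i - 4)
    return r

def square_bits(a):
    """Squaring as an XOR network for this P: (a3,a2,a1,a0) -> (a3, a1^a3, a2, a0^a2)."""
    a0, a1, a2, a3 = [(a >> k) & 1 for k in range(4)]
    return (a3 << 3) | ((a1 ^ a3) << 2) | (a2 << 1) | (a0 ^ a2)

assert all(gf16_mul(a, a) == square_bits(a) for a in range(16))
```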
No
Exercise 10.2
Exercise 10.2. Consider a market with short term interest rate \( {\left( {r}_{t}\right) }_{t \in {\mathbb{R}}_{ + }} \) and two zero-coupon bonds \( P\left( {t,{T}_{1}}\right), P\left( {t,{T}_{2}}\right) \) with maturities \( {T}_{1} = \delta \) and \( {T}_{2} = {2\delta } \), where \( P\left( {t,{T}_{i}}\right) \) is modeled according to \[ \frac{{dP}\left( {t,{T}_{i}}\right) }{P\left( {t,{T}_{i}}\right) } = {r}_{t}{dt} + {\zeta }_{i}\left( t\right) d{B}_{t},\;i = 1,2. \] Consider also the forward LIBOR \( L\left( {t,{T}_{1},{T}_{2}}\right) \) defined by \[ L\left( {t,{T}_{1},{T}_{2}}\right) = \frac{1}{\delta }\left( {\frac{P\left( {t,{T}_{1}}\right) }{P\left( {t,{T}_{2}}\right) } - 1}\right) ,\;0 \leq t \leq {T}_{1}, \] and assume that \( L\left( {t,{T}_{1},{T}_{2}}\right) \) is modeled in the BGM model as \[ \frac{{dL}\left( {t,{T}_{1},{T}_{2}}\right) }{L\left( {t,{T}_{1},{T}_{2}}\right) } = {\gamma d}{B}_{t}^{\left( 2\right) },\;0 \leq t \leq {T}_{1}, \tag{10.25} \] where \( \gamma \) is a deterministic constant, and \[ {B}_{t}^{\left( 2\right) } = {B}_{t} - {\int }_{0}^{t}{\zeta }_{2}\left( s\right) {ds} \] is a standard Brownian motion under the forward measure \( {\mathbb{P}}_{2} \) defined by \[ \frac{d{\mathbb{P}}_{2}}{d\mathbb{P}} = \exp \left( {{\int }_{0}^{{T}_{2}}{\zeta }_{2}\left( s\right) d{B}_{s} - \frac{1}{2}{\int }_{0}^{{T}_{2}}{\left| {\zeta }_{2}\left( s\right) \right| }^{2}{ds}}\right) . \] (1) Compute \( L\left( {t,{T}_{1},{T}_{2}}\right) \) by solving Equation (10.25). (2) Compute the price at time \( t \) : \[ P\left( {t,{T}_{2}}\right) {\mathbb{E}}_{2}\left\lbrack {{\left( L\left( {T}_{1},{T}_{1},{T}_{2}\right) - \kappa \right) }^{ + } \mid {\mathcal{F}}_{t}}\right\rbrack ,\;0 \leq t \leq {T}_{1}, \] of the caplet with strike \( \kappa \), where \( {\mathbb{E}}_{2} \) denotes the expectation under the forward measure \( {\mathbb{P}}_{2} \) .
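For part (1), solving (10.25) gives the geometric Brownian motion \( L(T_1,T_1,T_2) = L(t,T_1,T_2)\exp\big(\gamma(B^{(2)}_{T_1} - B^{(2)}_t) - \gamma^2(T_1-t)/2\big) \); since this is lognormal under \( \mathbb{P}_2 \), part (2) reduces to a Black-type formula. A sketch of that closed form (my own restatement, to be checked against the chapter's solution):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def caplet(P_tT2, L_t, kappa, gamma, tau):
    """P(t,T2) * E2[(L(T1,T1,T2) - kappa)^+ | F_t], with tau = T1 - t,
    using the lognormal law of L under the forward measure P2."""
    N = NormalDist().cdf
    v = gamma * sqrt(tau)
    d1 = (log(L_t / kappa) + v**2 / 2) / v
    d2 = d1 - v
    return P_tT2 * (L_t * N(d1) - kappa * N(d2))

print(caplet(P_tT2=0.97, L_t=0.03, kappa=0.03, gamma=0.2, tau=0.25))  # sample numbers
```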
Yes
Example 15
\[ \text{coin} = \text{true} \;?\; \text{false} \]
\[ \text{iff} = (\text{invTrue} * \text{snd}) \;?\; (\text{invFalse} * \text{snd} * \text{not}) \]
\[ \text{iff} = \{ \langle \langle \mathtt{True}, \mathtt{True} \rangle, \mathtt{True} \rangle, \langle \langle \mathtt{True}, \mathtt{False} \rangle, \mathtt{False} \rangle, \langle \langle \mathtt{False}, \mathtt{False} \rangle, \mathtt{True} \rangle, \langle \langle \mathtt{False}, \mathtt{True} \rangle, \mathtt{False} \rangle \} \]
\[ \text{coin} = \{ \langle \langle \rangle, \mathtt{True} \rangle, \langle \langle \rangle, \mathtt{False} \rangle \} \]
\[ \llbracket \text{shared} \rrbracket = \text{coin} \circ \left\lbrack \mathbf{I}, \mathbf{I} \right\rbrack \circ \text{iff} \]
\[ \llbracket \text{indep} \rrbracket = \left\lbrack \mathbf{I}, \mathbf{I} \right\rbrack \circ \left( \text{coin} \parallel \text{coin} \right) \circ \text{iff} = \left\lbrack \text{coin}, \text{coin} \right\rbrack \circ \text{iff} \]
\[ \llbracket \text{shared} \rrbracket = \{ \langle \langle \rangle, \mathtt{True} \rangle \} \qquad \llbracket \text{indep} \rrbracket = \{ \langle \langle \rangle, \mathtt{True} \rangle, \langle \langle \rangle, \mathtt{False} \rangle \} \]
No
Example 7.3.2
Example 7.3.2. We claim that the ring of multipliers of \( \mathbb{Z} \) is \( \mathbb{Z} \) . Clearly, we have \( \mathbb{Z} \subset \mathbb{Z} : \mathbb{Z} \) . Conversely, let \( \alpha \in \mathbb{Q} \) with \( \alpha \mathbb{Z} \subset \mathbb{Z} \) . Then we have in particular \( \alpha = \alpha \cdot 1 \in \mathbb{Z} \) . Hence \( \mathbb{Z} : \mathbb{Z} \subset \mathbb{Z} \) . So we have shown that \( \mathbb{Z} : \mathbb{Z} = \mathbb{Z} \) .
No
Example 6.2
Example 6.2 (Correlation sum of NMR laser data). Let us consider the NMR laser data that we have met on several occasions before. (See Appendix B.2 for a summary.) After inspecting the data (in the stroboscopic sampling) visually for obvious non-stationarities, looking at phase portraits with different delays and at different portions of the data available, we venture to compute the correlation sum, Eq. (6.3), for this data set. ![0191aa50-5ead-7c95-9e57-54e6936f6e5b_99_149043.jpg](images/0191aa50-5ead-7c95-9e57-54e6936f6e5b_99_149043.jpg) Figure 6.3 Correlation integral for NMR laser data. \( C\left( {m,\epsilon }\right) \) is plotted versus \( \epsilon \) . A double logarithmic plot was chosen since we are looking for power law behaviour. The different curves were obtained with embedding dimensions \( m = 2 \) (uppermost curve) to \( m = 7 \) (lowest curve). In the range \( {100} < \epsilon < {1000} \) we can indeed find reasonably straight lines as an indicator of self-similar geometry. We also plot the corresponding curves (squares) for \( m = 6,7 \) after the data has been cleaned by the local projective noise reduction scheme, Section 10.3.2. See Example 10.3 for details of the cleaning. Now the scaling range is extended at least down to \( \epsilon = 1 \) . We will only interpret the result subject to further consistency checks described later in this chapter. For now we have to choose a delay time for the embedding, a range of interesting embedding dimensions and a correlation time \( {t}_{\min } \) in order to discard temporal neighbours which could affect the result adversely. For this map-like data set, there is no reason to choose a delay time different from 1. Let us compute \( C\left( {m,\epsilon }\right) \) in 2-7 dimensional embeddings. If seven turns out to be insufficient we can repeat the computation for higher values. Most likely we are on the safe side when we discard all pairs closer than 500 steps in time (see Example 6.6 below). The correlation times of the data are much shorter, but we can easily afford the resulting loss of \( {2.5}\% \) of the pairs for statistical purposes. Fig. 6.3 shows the correlation sums \( C\left( {m,\epsilon }\right) \) obtained with these choices. As is typical for low dimensional deterministic experimental data, we find something like a power law for \( C\left( {m,\epsilon }\right) \) only within a small range of length scales \( \epsilon \), here in the region \( {100} < \epsilon < {1000} \) . The power law behaviour of \( C\left( {m,\epsilon }\right) \) as the signature of self-similarity can best be found by plotting the slope \( D\left( {m,\epsilon }\right) \) of a double logarithmic plot of \( C\left( {m,\epsilon }\right) \) versus \( \epsilon \), the latter still on a logarithmic scale. This has been done in Fig. 6.4. This representation shows much more clearly the plateau of \( D\left( {m,\epsilon }\right) = \partial \ln C\left( {m,\epsilon }\right) /\partial \ln \epsilon \) which corresponds to the desired power law for \( C \) . We can easily find that the plateau value for \( D \) does not change much with the embedding dimension \( m \) as soon as \( m > 2 \), but there are still some fluctuations present. However, the estimated curve \( D\left( {m,\epsilon }\right) \) is characteristic enough to suggest that the data are a sample taken from a strange attractor of dimension \( D < 2 \) . More convincing evidence will be presented once we have applied nonlinear noise reduction to the data, see Example 10.3.
![0191aa50-5ead-7c95-9e57-54e6936f6e5b_100_989729.jpg](images/0191aa50-5ead-7c95-9e57-54e6936f6e5b_100_989729.jpg) Figure 6.4 Local slopes of the correlation integral (NMR laser data) shown in Fig. 6.3. In this representation the scaling behaviour and also deviations from it are more clearly visible. Again, the different curves represent \( m = 2,\ldots ,7 \), now counted from below. Using the curves after nonlinear noise reduction (squares) we would read off an estimate of \( D = {1.5} \pm {0.1} \) .
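A minimal NumPy sketch of the correlation sum described above, with the choices made in the example (delay 1 and a Theiler window of 500 steps); the max norm is chosen here for simplicity, and the NMR laser series itself is not reproduced:

```python
import numpy as np

def correlation_sum(x, m, eps, delay=1, tmin=500):
    """C(m, eps): fraction of pairs of delay vectors closer than eps,
    excluding pairs less than tmin steps apart in time."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(m)])
    close, total = 0, 0
    for i in range(n - tmin):
        d = np.max(np.abs(emb[i + tmin:] - emb[i]), axis=1)  # max norm distances
        close += np.count_nonzero(d < eps)
        total += d.size
    return close / total

# slopes D(m, eps) can then be estimated from log C(m, eps) against log eps
```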
Yes
Exercise 2.7
Show that any power of an expanding map is still an expanding map.
No
Example 5
Example 5. Suppose that \( p = x - {13}{y}^{2} - {12}{z}^{3} \) and \( \pi = {x}^{2} - {xy} + {92z} \), determine \( S\left( {p,\pi }\right) \) with respect to the term order \( x{ \succ }_{\text{lex }}y{ \succ }_{\text{lex }}z \) .
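A sympy sketch of the S-polynomial computation, using sympy's LT/LM helpers with the lex order \( x \succ y \succ z \); here \( \mathrm{LT}(p) = x \), \( \mathrm{LT}(\pi) = x^2 \), so the monomial lcm is \( x^2 \):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
p  = x - 13*y**2 - 12*z**3
pi = x**2 - x*y + 92*z

# S(p, pi) = (m/LT(p)) * p - (m/LT(pi)) * pi, with m = lcm(LM(p), LM(pi))
lt_p  = sp.LT(p,  x, y, z, order='lex')
lt_pi = sp.LT(pi, x, y, z, order='lex')
m = sp.lcm(sp.LM(p, x, y, z, order='lex'), sp.LM(pi, x, y, z, order='lex'))
S = sp.expand(sp.cancel(m / lt_p) * p - sp.cancel(m / lt_pi) * pi)
print(S)   # -13*x*y**2 - 12*x*z**3 + x*y - 92*z
```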
No
Example 3.7.3
Example 3.7.3. We first give a rather simple NFS factoring example. Let \( n = 14885 = 5 \cdot 13 \cdot 229 = 122^2 + 1 \). So we put \( f(x) = x^2 + 1 \) and \( m = 122 \), such that
\[ f(x) \equiv f(m) \equiv 0 \pmod{n}. \]
If we choose \( |a|, |b| \leq 50 \), then we can easily find (by sieving) that
\[ \begin{array}{ccc} (a, b) & \operatorname{Norm}(a + bi) & a + bm \\ (-49, 49) & 4802 = 2 \cdot 7^4 & 5929 = 7^2 \cdot 11^2 \\ (-41, 1) & 1682 = 2 \cdot 29^2 & 81 = 3^4 \end{array} \]
(Readers should be able to find many such pairs \( (a_i, b_i) \) in the interval that are smooth up to, e.g., 29.) So we have
\[ (-49 + 49i)(-41 + i) = (49 - 21i)^2, \]
and applying the homomorphism \( a + bi \mapsto a + bm \) to the square root gives
\[ \alpha = 49 - 21m = 49 - 21 \cdot 122 = -2513, \]
while on the rational side
\[ 5929 \cdot 81 = \left( 3^2 \cdot 7 \cdot 11 \right)^2 = 693^2, \quad \beta = 693. \]
Thus,
\[ \gcd(\alpha \pm \beta, n) = \gcd(-2513 \pm 693, 14885) = 65 \text{ and } 229, \]
respectively, and indeed \( 14885 = 65 \cdot 229 \).
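A short search reproducing the relation-collection step; trial factorization via sympy's factorint stands in for an actual sieve:

```python
from sympy import factorint

n, m, B = 14885, 122, 29   # B: the smoothness bound suggested in the text

def smooth(v):
    v = abs(v)
    return v > 1 and max(factorint(v)) <= B

pairs = [(a, b) for a in range(-50, 51) for b in range(1, 51)
         if smooth(a*a + b*b) and smooth(a + b*m)]
print(len(pairs), pairs[:5])   # (-49, 49) and (-41, 1) appear among them
```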
Yes
Example 59
[Let \( R \) be a commutative ring, let \( X \) be a set and let \( {R}^{X} \) be the set of functions \( X \rightarrow R \) with ring structure given by the point-wise operations: \[ \left( {f + g}\right) \left( x\right) \mathrel{\text{:=}} f\left( x\right) + g\left( x\right) ,\quad \left( {fg}\right) \left( x\right) \mathrel{\text{:=}} f\left( x\right) g\left( x\right) ,\quad \text{ for all }f, g \in {R}^{X}, x \in X. \] An \( R \) -scalar multiplication can again be given point-wise: \[ \left( {f \cdot \alpha }\right) \left( x\right) \mathrel{\text{:=}} f\left( x\right) \alpha ,\quad f \in {R}^{X}, x \in X,\alpha \in R. \] The reader should have no difficulty in verifying that the above definition endows \( {R}^{X} \) with the structure of an \( R \) -algebra.]
No
Problem 3.179
Problem 3.179 For every \( r \in \left( {0,{r}_{0}}\right), T\left( {B\left( {0;r}\right) }\right) \) is Lebesgue measurable.
No
Exercise 10
Exercise 10. Let \( f : {\mathbb{R}}^{2} \rightarrow \mathbb{R} \) be a function of class \( {C}^{2} \) and \( x = {e}^{r}\cos t \), \( y = {e}^{r}\sin t \). (i) Compute \( \frac{{\partial }^{2}f}{\partial {r}^{2}},\frac{{\partial }^{2}f}{\partial r\partial t} \) and \( \frac{{\partial }^{2}f}{\partial {t}^{2}} \) ; (ii) Prove that \( \frac{{\partial }^{2}f}{\partial {r}^{2}} + \frac{{\partial }^{2}f}{\partial {t}^{2}} = {e}^{2r}\left( {\frac{{\partial }^{2}f}{\partial {x}^{2}} + \frac{{\partial }^{2}f}{\partial {y}^{2}}}\right) \) .
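The identity in (ii) can be spot-checked with sympy by substituting a concrete \( C^2 \) test function (any smooth choice works; the one below is arbitrary):

```python
import sympy as sp

r, t, u, v = sp.symbols('r t u v', real=True)
f = u**3 * v + sp.sin(u*v)        # an arbitrary C^2 test function f(u, v)
x = sp.exp(r) * sp.cos(t)
y = sp.exp(r) * sp.sin(t)

F = f.subs({u: x, v: y})          # f composed with the change of variables
lhs = sp.diff(F, r, 2) + sp.diff(F, t, 2)
rhs = sp.exp(2*r) * (sp.diff(f, u, 2) + sp.diff(f, v, 2)).subs({u: x, v: y})
print(sp.simplify(lhs - rhs))     # -> 0
```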
No
Example 4.7
The standard flow in \( {G}_{1}^{\text{discrete }} \), rescaled by \( {i}^{-1/2} \), gives us a coarse probability space, identical to that of Example 3.6. It is a dyadic coarse factorization. Its refinement is the Brownian continuous factorization. Equipped with the natural time shift, it is a noise.
No
Exercise 8.5
Exercise 8.5. First note that the lower bound is elementary, because \( \tau > 1 \) with positive probability. For the upper bound we proceed in three steps. In the first step, we prove an inequality based on Harris' inequality, see Theorem 5.7. Let \( f_1, f_2 \) be densities on \( [0,\infty) \). Suppose that the likelihood ratio \( \psi(r) = \frac{f_2(r)}{f_1(r)} \) is increasing, and \( h : [0,\infty) \rightarrow [0,\infty) \) is decreasing on \( [a,\infty) \). Then
\[ \frac{\int_0^\infty h(r) f_2(r)\,dr}{\int_0^\infty h(r) f_1(r)\,dr} \leq \psi(a) + \frac{\int_a^\infty f_2(r)\,dr}{\int_a^\infty f_1(r)\,dr}. \tag{13.3} \]
To see this, observe first that \( \int_0^a h(r) f_2(r)\,dr \leq \psi(a) \int_0^a h(r) f_1(r)\,dr \). Write \( T_a = \int_a^\infty f_1(r)\,dr \). Using Harris' inequality, we get
\[ \int_a^\infty h(r) f_2(r)\,dr = T_a \int_a^\infty h(r) \psi(r) \frac{f_1(r)}{T_a}\,dr \leq T_a \int_a^\infty h(r) \frac{f_1(r)}{T_a}\,dr \int_a^\infty \psi(r) \frac{f_1(r)}{T_a}\,dr = \frac{1}{T_a} \int_a^\infty h(r) f_1(r)\,dr \int_a^\infty f_2(r)\,dr. \]
Combining the two inequalities proves (13.3). As a second step, we show that, for \( t_1 \leq t_2 \),
\[ \mathbb{P}_0\left\{ B[t_2, t_2+s] \cap A \neq \varnothing \right\} \leq C_a\, \mathbb{P}_0\left\{ B[t_1, t_1+s] \cap A \neq \varnothing \right\}, \]
where
\[ C_a = \frac{f_2(a)}{f_1(a)} + \frac{1}{\mathbb{P}_0\left\{ |B(t_1)| > a \right\}} \leq e^{\frac{|a|^2}{2t_1}} + \frac{1}{\mathbb{P}_0\left\{ |B(t_1)| > a \right\}} \]
and \( f_j \) is the density of \( |B(t_j)| \). This follows by applying (13.3) with
\[ h(r) = \int \mathbb{P}_y\{ B[0,s] \cap A \neq \varnothing \}\, d\varpi_{0,r}(y). \]
Finally, to complete the proof, we show that
\[ \mathbb{P}_0\{ B[0,\tau] \cap A \neq \varnothing \} \leq \frac{C_a}{1 - e^{-1/2}}\, \mathbb{P}\{ B[0,1] \cap A \neq \varnothing \}, \]
where \( C_a \leq e^{|a|^2} + \mathbb{P}_0\left\{ \left| B\left( \tfrac{1}{2} \right) \right| > a \right\}^{-1} \). To this end, let \( H(I) = \mathbb{P}_0\{ B(I) \cap A \neq \varnothing \} \), where \( I \) is an interval. Then \( H \) satisfies \( H\left[ t, t + \tfrac{1}{2} \right] \leq C_a H\left[ \tfrac{1}{2}, 1 \right] \) for \( t \geq \tfrac{1}{2} \).
Hence, we can conclude that \[ \mathbb{E}H\left\lbrack {0,\tau }\right\rbrack \leq H\left\lbrack {0,1}\right\rbrack + \mathop{\sum }\limits_{{j = 2}}^{\infty }{e}^{-j/2}H\left\lbrack {\frac{j}{2},\frac{j + 1}{2}}\right\rbrack \leq {C}_{a}\mathop{\sum }\limits_{{j = 0}}^{\infty }{e}^{-j/2}H\left\lbrack {0,1}\right\rbrack , \] which is the required statement.
No
Example 11.9
Example 11.9 Table 11.4 lists the knapsack multipliers for \( m = {13} \) . When \( \omega \) is relatively prime to \( m \), the transformation \[ {T}_{\omega, m} : z \rightarrow {\omega z}\text{ (modulo }m\text{ ) } \] is a one-to-one mapping from \( {\mathcal{Z}}_{m} \) to \( {\mathcal{Z}}_{m} \) with inverse \[ {T}_{{\omega }^{-1}, m} : z \rightarrow {\omega }^{-1}z\text{ (modulo }m\text{ ). } \] \( {T}_{{\omega }^{-1}, m} \) maps a super-increasing knapsack vector \( \underline{s} \) into the knapsack vector \( \underline{a} \) according to the formula \[ \underline{a} = {T}_{{\omega }^{-1}, m}\left( \underline{s}\right) = \left( {{T}_{{\omega }^{-1}, m}\left( {s}_{0}\right) ,{T}_{{\omega }^{-1}, m}\left( {s}_{1}\right) ,\ldots ,{T}_{{\omega }^{-1}, m}\left( {s}_{n - 1}\right) }\right) . \]
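A sketch of the transformation pair for \( m = 13 \); the multiplier \( \omega \) and the super-increasing vector below are hypothetical stand-ins, since Table 11.4 is not reproduced here:

```python
m, w = 13, 5                 # w must be relatively prime to m
w_inv = pow(w, -1, m)        # modular inverse (Python 3.8+); here 5 * 8 = 40 = 1 mod 13

s = [1, 2, 4]                # a super-increasing vector with sum < m
a = [(w_inv * si) % m for si in s]                        # a = T_{w^{-1}, m}(s)
assert all((w * ai) % m == si for ai, si in zip(a, s))    # T_{w, m} inverts it
print(a)
```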
No
Example 3.4
Example 3.4. Let \( \mathbf{k}_1 = \mathbf{k}_2 = \mathbf{k} \), \( O_0 = O_1 = O \), and let \( \mathcal{R}_1 \) have rotational velocity \( \dot{\theta}(t)\mathbf{k} \) w.r.t. \( \mathcal{R}_0 \), which is fixed. Let a point \( M \) be fixed in \( \mathcal{R}_1 \), i.e., \( u = OM = \alpha\mathbf{i}_1 + \beta\mathbf{j}_1 \). Obviously \( \left( \frac{\mathrm{d}u}{\mathrm{d}t} \right)_{\mathcal{R}_1} = 0 \). Then
\[ \left( \frac{\mathrm{d}u}{\mathrm{d}t} \right)_{\mathcal{R}_0} = \dot{\theta}(t)\,\mathbf{k} \times \left( \alpha\mathbf{i}_1 + \beta\mathbf{j}_1 \right) = \dot{\theta}(t)\left[ \alpha\mathbf{j}_1 - \beta\mathbf{i}_1 \right]. \]
No
Exercise 7.2.7
Find \( \int {\left( 5{t}^{2} + {10}t + 3\right) }^{3}\left( {{5t} + 5}\right) {dt} \) .
No
Example 1.7.1
Example 1.7.1. Let \( X = Y = \{ 1,2,\ldots \} \) with \( \mathcal{A} = \mathcal{B} = \) all subsets and \( {\mu }_{1} = {\mu }_{2} = \) counting measure. For \( m \geq 1 \), let \( f\left( {m, m}\right) = 1 \) and \( f\left( {m + 1, m}\right) = - 1 \), and let \( f\left( {m, n}\right) = 0 \) otherwise. We claim that \[ \mathop{\sum }\limits_{m}\mathop{\sum }\limits_{n}f\left( {m, n}\right) = 1\;\text{ but }\;\mathop{\sum }\limits_{n}\mathop{\sum }\limits_{m}f\left( {m, n}\right) = 0. \] A picture is worth several dozen words: \[ \begin{matrix} & \vdots & \vdots & \vdots & \vdots & \\ & 0 & 0 & 0 & 1 & \ldots \\ \uparrow & 0 & 0 & 1 & - 1 & \ldots \\ n & 0 & 1 & - 1 & 0 & \ldots \\ & 1 & - 1 & 0 & 0 & \ldots \\ & & & m \rightarrow & & \end{matrix} \] In words, if we sum the columns first, the first one gives us a 1 and the others 0, while if we sum the rows each one gives us a 0.
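The two iterated sums can be reproduced exactly in a few lines: every column has nonzero entries only at \( n \in \{m-1, m\} \) and every row only at \( m \in \{n, n+1\} \), so the inner sums are finite and the outer sums stabilize immediately:

```python
def f(m, n):
    if m < 1 or n < 1:
        return 0
    if m == n:
        return 1
    if m == n + 1:
        return -1
    return 0

N = 50
# sum over n first (columns), then over m: only m = 1 contributes
cols_then_rows = sum(sum(f(m, n) for n in (m - 1, m)) for m in range(1, N + 1))
# sum over m first (rows), then over n: every row cancels
rows_then_cols = sum(sum(f(m, n) for m in (n, n + 1)) for n in range(1, N + 1))
print(cols_then_rows, rows_then_cols)   # 1 0
```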
Yes
Exercise 19.1
Exercise 19.1. Use Figure 19.2 to give another proof of (19.1). (Hint: express \( \left| {AC}\right| \) in terms of \( z \) and note that the two shaded triangles are similar.)
No
Exercise 1.1.3
Exercise 1.1.3 You have a system of \( k \) equations in two variables, \( k \geq 2 \) . Explain the geometric significance of (a) No solution. (b) A unique solution. (c) An infinite number of solutions.
No
Exercise 8.3.3
Exercise 8.3.3. Check the orthonormality of the characters of the irreducible representations of \( {S}_{3} \) and \( {S}_{4} \) . The characters are collected in Table 8.1.
No
Exercise 1
Exercise 1. Prove that \( \parallel \cdot {\parallel }_{\infty } \) is indeed a norm on \( {c}_{0}^{\mathbb{K}}\left( I\right) \) .
No
Example 3.4
Example 3.4. Consider the sequence \( {\ell }_{1} = 2,{\ell }_{2} = 4,{\ell }_{3} = 8,{\ell }_{4} = 1,\ldots \) of the first digits in base 10 of the powers of 2 . We want to find the frequency with which each \( k \in \{ 1,\ldots ,9\} \) occurs in the sequence \( {\ell }_{n} \) . More precisely, we want to show that the limit \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{\operatorname{card}\left\{ {j \in \{ 1,\ldots, n\} : {\ell }_{j} = k}\right\} }{n} \tag{3.18} \] exists for each \( k \), and to compute its value explicitly. Consider the interval translation \( T : \mathbb{T} \rightarrow \mathbb{T} \) given by \[ T\left( x\right) = x + {\log }_{10}2{\;\operatorname{mod}\;1}. \] Since \( {\log }_{10}2 \) is irrational, it follows from (3.17) that \( T \) is uniquely ergodic. On the other hand, \[ {2}^{n} = {10}^{n{\log }_{10}2-\lfloor n{\log }_{10}2\rfloor }{10}^{\lfloor n{\log }_{10}2\rfloor } \] \[ = {10}^{{T}^{n}0}{10}^{\left\lfloor n{\log }_{10}2\right\rfloor }. \] Therefore, \( {\ell }_{n} = k \) if and only if \[ {T}^{n}\left( 0\right) \in \left\lbrack {{\log }_{10}k,{\log }_{10}\left( {k + 1}\right) }\right) . \] Now, let \[ \varphi = {\chi }_{\left\lbrack {\log }_{10}k,{\log }_{10}\left( k + 1\right) \right) }. \] We consider continuous functions \( {a}_{p},{b}_{p} : \mathbb{T} \rightarrow \left\lbrack {0,1}\right\rbrack \) for \( p \in \mathbb{N} \) with \[ {a}_{p} \leq \varphi \leq {b}_{p},\;p \in \mathbb{N}, \] such that \[ {\int }_{0}^{1}{a}_{p}{dm} \rightarrow {\int }_{0}^{1}{\varphi dm}\;\text{ and }\;{\int }_{0}^{1}{b}_{p}{dm} \rightarrow {\int }_{0}^{1}{\varphi dm} \tag{3.19} \] when \( p \rightarrow \infty \), where \( m \) is the Lebesgue measure in \( \left\lbrack {0,1}\right\rbrack \) . We observe that \[ \frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{n - 1}}{a}_{p}\left( {{T}^{j}\left( 0\right) }\right) \leq \frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{n - 1}}\varphi \left( {{T}^{j}\left( 0\right) }\right) \leq \frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{n - 1}}{b}_{p}\left( {{T}^{j}\left( 0\right) }\right) . \tag{3.20} \] Since \( T \) is uniquely ergodic, it follows from Theorem 3.3 that \[ \frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{n - 1}}{a}_{p}\left( {{T}^{j}\left( 0\right) }\right) \rightarrow {\int }_{0}^{1}{a}_{p}{dm} \tag{3.21} \] and \[ \frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{n - 1}}{b}_{p}\left( {{T}^{j}\left( 0\right) }\right) \rightarrow {\int }_{0}^{1}{b}_{p}{dm} \] when \( n \rightarrow \infty \) . By (3.20) and (3.21), we obtain \[ {\int }_{0}^{1}{a}_{p}{dm} \leq \mathop{\liminf }\limits_{{n \rightarrow \infty }}\frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{n - 1}}\varphi \left( {{T}^{j}\left( 0\right) }\right) \] \[ \leq \mathop{\limsup }\limits_{{n \rightarrow \infty }}\frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{n - 1}}\varphi \left( {{T}^{j}\left( 0\right) }\right) \leq {\int }_{0}^{1}{b}_{p}{dm}. \] Letting \( p \rightarrow \infty \), it follows from (3.19) that \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{n - 1}}\varphi \left( {{T}^{j}\left( 0\right) }\right) = {\int }_{0}^{1}{\varphi dm}. \] Therefore, the frequency with which the integer \( k \) occurs in the sequence \( {\ell }_{n} \) is given by \[ {\int }_{0}^{1}{\varphi dm} = {\int }_{0}^{1}{\chi }_{\left\lbrack {\log }_{10}k,{\log }_{10}\left( k + 1\right) \right) }{dm} \] \[ = {\log }_{10}\left( {1 + \frac{1}{k}}\right) \text{.} \]
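The limiting frequency can also be checked empirically; a short Python sketch comparing the observed first-digit frequencies of \( 2^n \) with \( \log_{10}(1 + 1/k) \):

```python
import math
from collections import Counter

N = 5000
counts = Counter(int(str(2**n)[0]) for n in range(1, N + 1))  # exact first digits
for k in range(1, 10):
    print(k, counts[k] / N, math.log10(1 + 1/k))  # empirical vs. predicted frequency
```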
Yes
Problem 2.139
Problem 2.139 \( \cup \mathcal{B} \) is a maximal orthonormal set in \( H \) . (Solution: Suppose not; then there exists an orthonormal set \( C \) in \( H \) such that \( \left( {\cup \mathcal{B}}\right) \subset C \) and \( \left( {\cup \mathcal{B}}\right) \neq C \) . We have to arrive at a contradiction. It follows that \( \mathcal{B} \cup \{ C\} \) is a linearly ordered set such that \( \mathcal{B} \subset \left( {\mathcal{B}\cup \{ C\} }\right) \) . Now, since \( \mathcal{B} \) is a maximal linearly ordered set, \( \mathcal{B} = \left( {\mathcal{B}\cup \{ C\} }\right) \), and hence \( C \in \mathcal{B} \) . Since \( C \in \mathcal{B} \), we have \( C \subset \left( {\cup \mathcal{B}}\right) \left( { \subset C}\right) \), and hence \( \left( {\cup \mathcal{B}}\right) = C \) . This is a contradiction. ∎) Thus, \( \cup \mathcal{B} \) is a maximal orthonormal set in \( H \) containing \( \left\{ {{u}_{k} : k \in I}\right\} \) .
No
Exercise 7.17
Consider a model which consists of a charged complex scalar field interacting with an Abelian gauge field. The classical Lagrangian is \[ L\left\lbrack {\varphi ,{A}_{\mu }}\right\rbrack = - \frac{1}{2}{\left( {D}_{\mu }\varphi \right) }^{ * }{D}_{\mu }\varphi - \frac{\lambda }{4}{\left( {\left| \varphi \right| }^{2} - {\mu }^{2}\right) }^{2} - \frac{1}{4}{F}_{\mu \nu }{F}^{\mu \nu }, \tag{7.147} \] where \( {F}_{\mu \nu } = {\partial }_{\mu }{A}_{\nu } - {\partial }_{\nu }{A}_{\mu } \) and \( {D}_{\mu } = {\partial }_{\mu } - {ie}{A}_{\mu } \) . The theory is invariant with respect to local \( U\left( 1\right) \) gauge transformations. The classical potential has a continuous family of minima at \( \left| \varphi \right| = \mu \) . Model (7.147) can be used to illustrate the Higgs mechanism; the gauge group is spontaneously broken in the vacuum state because the gauge field acquires a mass \( {m}_{v}^{2} = {e}^{2}{\mu }^{2} \) when \( \left| \varphi \right| = \mu \) . Calculate the Coleman-Weinberg potential for model (7.147) in the regime when \( {e}^{2} \gg \lambda \) . Show that in the ground state quantum corrections result in the appearance of a new minimum where the symmetry is restored.
No
Exercise 12.2
Exercise 12.2. (a) Let \( c \in \mathbf{R} \) be a constant. Use Lagrange multipliers to generate a list of candidate points to be extrema of \[ h\left( {x, y, z}\right) = \sqrt{\frac{{x}^{2} + {y}^{2} + {z}^{2}}{3}} \] on the plane \( x + y + z = {3c} \) . (Hint: explain why squaring a non-negative function doesn’t affect where it achieves its maximal and minimal values.) (b) The facts that \( h\left( {x, y, z}\right) \) in (a) is non-negative on all inputs (so it is "bounded below") and grows large when \( \parallel \left( {x, y, z}\right) \parallel \) grows large can be used to show that \( h\left( {x, y, z}\right) \) must have a global minimum on the given plane. (You may accept this variant of the Extreme Value Theorem from single-variable calculus; if you are interested, such arguments are taught in Math 115 and Math 171.) Use this and your result from part (a) to find the minimum value of \( h\left( {x, y, z}\right) \) on the plane \( x + y + z = {3c}. \) (c) Explain why your result from part (b) implies the inequality \[ \sqrt{\frac{{x}^{2} + {y}^{2} + {z}^{2}}{3}} \geq \frac{x + y + z}{3} \] for all \( x, y, z \in \mathbf{R} \) . (Hint: for any given \( \mathbf{v} = \left( {x, y, z}\right) \), define \( c = \left( {1/3}\right) \left( {x + y + z}\right) \) so \( \mathbf{v} \) lies in the constraint plane in the preceding discussion, and compare \( h\left( \mathbf{v}\right) \) to the minimal value of \( h \) on the entire plane using your answer in (b).) The left side is known as the "root mean square" or "quadratic mean," while the right side is the usual or "arithmetic" mean. Both come up often in statistics.
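Part (a) can be checked with sympy: squaring the non-negative function \( h \) does not move its extrema, so we apply Lagrange multipliers to \( x^2 + y^2 + z^2 \) on the constraint plane:

```python
import sympy as sp

x, y, z, lam, c = sp.symbols('x y z lambda c', real=True)

# minimize x^2 + y^2 + z^2 (proportional to h^2) subject to x + y + z = 3c
L = x**2 + y**2 + z**2 - lam * (x + y + z - 3*c)
sols = sp.solve([sp.diff(L, v) for v in (x, y, z)] + [x + y + z - 3*c],
                [x, y, z, lam], dict=True)
print(sols)   # x = y = z = c, lambda = 2c; so h(c, c, c) = |c| >= c, as in (b)-(c)
```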
Yes
Exercise 23
Exercise 23 (Recession functions)
No
Example 5.6
Example 5.6. [Associativity and commutativity] Suppose \[ \mathcal{F} = \left\{ {{Y}_{A} \sim {N}_{2}\left( {0,\Gamma }\right) }\right\} ,\mathcal{G} = \left\{ {{Y}_{B} \sim {N}_{2}\left( {0,\Omega }\right) }\right\} ,\mathcal{H} = \left\{ {{Y}_{C} \sim {N}_{2}\left( {0,\Phi }\right) }\right\} \] represent three complete Gaussian graphical models as in Figure 5.4. Here all the combinations satisfy \( \left( {\mathcal{F} * \mathcal{G}}\right) * \mathcal{H} = \mathcal{F} * \left( {\mathcal{G} * \mathcal{H}}\right) = \mathcal{F} * \left( {\mathcal{H} * \mathcal{G}}\right) \), and are the Gaussian families corresponding to the union of the three graphs (a line). This follows from the associativity property of the meta-Markov combination and the fact that the families are pairwise meta-consistent and form a cut over the common variables, so that all the combinations are identical (see Proposition 5.1). If we change the order of combination, \( \left( {\mathcal{F} \star \mathcal{H}}\right) \star \mathcal{G} \) represents the independence model \( \left( {1,2}\right) ⫫ \left( {3,4}\right) \) and \( \left( {\mathcal{F}\bar{ \star }\mathcal{H}}\right) \bar{ \star }\mathcal{G} \) is again the lower graph in Figure 5.4. Suppose now that we add another complete graph with variables 4 and 1. The lower Markov combination of all four graphs is the graphical model with \( 1 ⫫ 4 \mid \left( {2,3}\right) \) . The upper Markov combination is the chordless four cycle because it combines all marginal distributions over \( \{ 1,4\} \) with all conditional distributions over \( \{ 1,2,3,4\} \) and all marginal distributions over \( \{ 1,2,3,4\} \) with all conditional distributions over \( \{ 1,4\} \) .
No
Example 3.17
Example 3.17. We use the substitution rule to compute \[ \mathop{\lim }\limits_{{x \rightarrow 0}}{\left( x\sin \left( 1/x\right) \right) }^{3} = 0. \] This follows from Lemma 3.11 by using \( f\left( x\right) \mathrel{\text{:=}} x\sin \left( {1/x}\right) \) and \( g\left( y\right) = {y}^{3} \) .
Yes
Exercise 2.23
Exercise 2.23. Show that if \( A \in {\mathbb{C}}^{n \times n} \) is an invertible triangular matrix with entries \( {a}_{ij} \in \mathbb{C} \) for \( i, j = 1,\ldots, n \), then \( {a}_{ii} \neq 0 \) for \( i = 1,\ldots, n \) . [HINT: Use Theorem 2.4 to show that if the claim is true for \( n = k \), then it is also true for \( n = k + 1 \) .]
No
Example 6.14.7
The reduced indefinite forms of discriminant \( \Delta = 105 \) can be grouped into two cycles, namely
\[ \left( (1,9,-6), (6,3,-4), (4,5,-5), (5,5,-4), (4,3,-6), (6,9,-1) \right) \]
and
\[ \left( (2,7,-7), (7,7,-2), (2,9,-3), (3,9,-2) \right). \]
The first cycle has even length and contains the ambiguous forms \( (1,9,-6) \) and \( (5,5,-4) \). From the second ambiguous form we obtain the divisor 5 of 105. The second cycle also has even length and contains the two ambiguous forms \( (7,7,-2) \) and \( (3,9,-2) \), from which we obtain the divisors 7 and 3 of 105. The knowledge of the ambiguous forms has given us the complete prime factorization of 105, namely \( 105 = 3 \cdot 5 \cdot 7 \).
Yes
Example 22.2.5
Example 22.2.5. For more practice with the \( {QR} \) -decomposition, let’s solve \( A\mathbf{x} = \mathbf{b} \) defined by \[ \left\lbrack \begin{matrix} 2 & 1 & 1 \\ - 1 & - 2 & 1 \\ 1 & - 1 & 1 \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \end{array}\right\rbrack = \left\lbrack \begin{matrix} 7 \\ - 8 \\ 1 \end{matrix}\right\rbrack \] when given the \( {QR} \) -decomposition for \( A \) : \[ \left\lbrack \begin{matrix} 2 & 1 & 1 \\ - 1 & - 2 & 1 \\ 1 & - 1 & 1 \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} 2/\sqrt{6} & 0 & 1/\sqrt{3} \\ - 1/\sqrt{6} & - 1/\sqrt{2} & 1/\sqrt{3} \\ 1/\sqrt{6} & - 1/\sqrt{2} & - 1/\sqrt{3} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} \sqrt{6} & 3/\sqrt{6} & 2/\sqrt{6} \\ 0 & 3/\sqrt{2} & - \sqrt{2} \\ 0 & 0 & 1/\sqrt{3} \end{matrix}\right\rbrack . \] As before, even though \( A \) has simple entries, the entries of \( Q \) and \( R \) are complicated. Multiplying through by \( {Q}^{-1} = {Q}^{\top } \) gives \( R\mathbf{x} = {Q}^{\top }\mathbf{b} \), which in our case says \[ \left\lbrack \begin{matrix} \sqrt{6} & 3/\sqrt{6} & 2/\sqrt{6} \\ 0 & 3/\sqrt{2} & - \sqrt{2} \\ 0 & 0 & 1/\sqrt{3} \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \end{array}\right\rbrack = \left\lbrack \begin{matrix} 2/\sqrt{6} & - 1/\sqrt{6} & 1/\sqrt{6} \\ 0 & - 1/\sqrt{2} & - 1/\sqrt{2} \\ 1/\sqrt{3} & 1/\sqrt{3} & - 1/\sqrt{3} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} 7 \\ - 8 \\ 1 \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {23}/\sqrt{6} \\ 7/\sqrt{2} \\ - 2/\sqrt{3} \end{matrix}\right\rbrack . \] Back-substitution (please check for yourself!) now yields \( {x}_{3} = - 2 \), then \( {x}_{2} = 1 \), and finally \( {x}_{1} = 4 \) . You should also verify directly that \( \left( {{x}_{1},{x}_{2},{x}_{3}}\right) = \left( {4,1, - 2}\right) \) satisfies the initial linear system.
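The same computation can be reproduced numerically; note that np.linalg.qr may return \( Q \) and \( R \) with column signs flipped relative to the hand computation, which does not affect the solution:

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[2., 1., 1.], [-1., -2., 1.], [1., -1., 1.]])
b = np.array([7., -8., 1.])

Q, R = np.linalg.qr(A)             # A = QR, Q orthogonal, R upper triangular
x = solve_triangular(R, Q.T @ b)   # solve R x = Q^T b by back-substitution
print(x)                           # [ 4.  1. -2.]
```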
Yes
Example 6
Let \( \left\{ {X}_{n}\right\} \) be a sequence of RVs defined by \[ P\left\{ {{X}_{n} = 0}\right\} = 1 - \frac{1}{n},\;P\left\{ {{X}_{n} = 1}\right\} = \frac{1}{n},\;n = 1,2,\ldots \] Then \[ E{\left| {X}_{n}\right| }^{2} = \frac{1}{n} \rightarrow 0\;\text{ as }\;n \rightarrow \infty , \] and we see that \( {X}_{n}\overset{2}{ \rightarrow }X \), where \( \mathrm{{RV}}X \) is degenerate at 0 .
Yes
Example 3.28
Example 3.28 Here we will give the classifying spaces of some groups. We will build these as topological spaces, so in truth we are really giving examples of \( \left| {BG}\right| \) . (i) Consider the group \( \mathbb{Z} \) ; this is the fundamental group of the circle, \( {S}^{1} \) . If the universal cover of \( {S}^{1} \) is contractible, then \( {S}^{1} \) is the classifying space of \( \mathbb{Z} \) . Since the universal cover of \( {S}^{1} \) is the real line \( \mathbb{R} \), and this is contractible, \( B\mathbb{Z} \) is indeed homotopy equivalent to \( {S}^{1} \) . (ii) Consider the cyclic group \( {C}_{2} \) and the \( n \) -sphere \( {S}^{n} \) . The map sending \( x \) to \( - x \) (i.e., the antipodal map) is an action of \( {C}_{2} \) on \( {S}^{n} \), and the orbit space is real projective space \( \mathbb{R}{P}^{n} \) . Since \( {S}^{n} \) is simply connected for \( n \geq 2 \), this means that the fundamental group of \( \mathbb{R}{P}^{n} \) is \( {C}_{2} \) . However, \( {S}^{n} \) is not contractible, so these are not classifying spaces of \( {C}_{2} \) . However, \( {S}^{\infty } \), the union of the spheres \( {S}^{n} \), still has a \( {C}_{2} \) -action and is contractible, and hence \( B{C}_{2} \) is homotopy equivalent to \( {S}^{\infty }/{C}_{2} = \mathbb{R}{P}^{\infty } \), the union of the \( \mathbb{R}{P}^{n} \) . (iii) We have that \( B\left( {G \times H}\right) \) and \( {BG} \times {BH} \) are homotopy equivalent, and hence if \( {E}_{{2}^{n}} \) denotes the elementary abelian group of order \( {2}^{n} \), then \( B{E}_{{2}^{n}} \) is the Kelley product of \( n \) copies of \( \mathbb{R}{P}^{\infty } \) .
No
Problem 2.34
Problem 2.34 Let \( H = \left\{ {a + {bi} : a, b \in \mathbb{R},{a}^{2} + {b}^{2} = 1}\right\} \) . Prove or disprove that \( H \) is a subgroup of \( {\mathbb{C}}^{ * } \) under multiplication. Describe the elements of \( H \) geometrically.
No
Example 21
Example 21. Let \( \mathcal{A} \) be an abelian category such that: - the isomorphism classes of objects of \( \mathcal{A} \) form a set, - every object of \( \mathcal{A} \) has finite length (has a finite filtration with simple objects as consecutive factors). Then \[ {K}_{i}\left( \mathcal{A}\right) \cong \mathop{\coprod }\limits_{{j \in J}}{K}_{i}\left( {D}_{j}\right) \] where \( \left\{ {{X}_{j} : j \in J}\right\} \) is a set of representatives for the isomorphism classes of simple objects in \( \mathcal{A} \), and \( {D}_{j} \) is the skew field \( \operatorname{End}{\left( {X}_{j}\right) }^{op} \).
No
Exercise 13.4
Verify that laplace correctly computes the Laplace Transforms of the functions heaviside \( \left( {t - 2}\right) \) and \( \operatorname{dirac}\left( {t - 3}\right) \) .
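The exercise refers to MATLAB's laplace; the same check can be run in sympy, where the expected transforms are the standard ones, \( e^{-2s}/s \) and \( e^{-3s} \):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
print(sp.laplace_transform(sp.Heaviside(t - 2), t, s, noconds=True))   # exp(-2*s)/s
print(sp.laplace_transform(sp.DiracDelta(t - 3), t, s, noconds=True))  # exp(-3*s)
```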
No
Example 2
[The languages \( {\left( ab + ba\right) }^{ * } \) and \( {\left( a{\left( ab\right) }^{ * }b\right) }^{ * } \) are star-free, but the languages \( {\left( aa\right) }^{ * } \) and \( {\left( a + bab\right) }^{ * } \) are not. This is easy to prove by computing the syntactic monoid of these languages.]
No
Example 18.4
Example 18.4 Evaluate \( \left( \left( \begin{array}{l} 2 \\ 2 \end{array}\right) \right) \) . Solution: We need to count the number of two-element multisets whose elements are selected from the set \( \{ 1,2\} \) . We simply list all the possibilities. They are \[ \langle 1,1\rangle ,\;\langle 1,2\rangle ,\;\text{ and,}\;\langle 2,2\rangle . \] Therefore \( \left( \left( \begin{array}{l} 2 \\ 2 \end{array}\right) \right) = 3 \) . In general, consider \( \left( \left( \begin{array}{l} 2 \\ k \end{array}\right) \right) \) . We need to form a \( k \) -element multiset using only the elements 1 and 2. We can decide how many \( 1\mathrm{\;s} \) are in the multiset (anywhere from 0 to \( k \), giving \( k + 1 \) possibilities), and then the remaining elements of the multiset must be \( 2\mathrm{\;s} \) . Therefore \( \left( \left( \begin{array}{l} 2 \\ k \end{array}\right) \right) = k + 1 \) .
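A brute-force check of both the specific count and the general formula \( \left(\binom{2}{k}\right) = k + 1 \):

```python
from itertools import combinations_with_replacement

def multichoose(n, k):
    """Number of k-element multisets with elements drawn from {1, ..., n}."""
    return sum(1 for _ in combinations_with_replacement(range(1, n + 1), k))

print(multichoose(2, 2))                        # 3, matching the example
print([multichoose(2, k) for k in range(6)])    # [1, 2, 3, 4, 5, 6], i.e. k + 1
```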
Yes
Exercise 19.10
Exercise 19.10. Consider a two-dimensional system where \( \operatorname{tr}\left( A\right) = 0 \) and \( \det \left( A\right) > 0 \) . a. Given those conditions, explain why \( {\lambda }_{1} + {\lambda }_{2} = 0 \) and \( {\lambda }_{1} \cdot {\lambda }_{2} > 0 \) . b. What does \( {\lambda }_{1} + {\lambda }_{2} = 0 \) tell you about the relationship between \( {\lambda }_{1} \) and \( {\lambda }_{2} \) ? c. What does \( {\lambda }_{1} \cdot {\lambda }_{2} > 0 \) tell you about the relationship between \( {\lambda }_{1} \) and \( {\lambda }_{2} \) ? d. Look back to your previous two responses. First explain why \( {\lambda }_{1} \) and \( {\lambda }_{2} \) must be imaginary eigenvalues (in other words, not real values). Then explain why \( {\lambda }_{1,2} = \pm {bi} \) . e. Given these constraints, what would the phase plane for this system be? f. Create a linear two-dimensional system where \( \operatorname{tr}\left( A\right) = 0 \) and \( \det \left( A\right) > 0 \) . Show your system and the phase plane.
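For part f, a quick numerical check of one such system (the matrix below is a hypothetical choice satisfying the two conditions):

```python
import numpy as np

A = np.array([[0., 2.], [-3., 0.]])   # tr(A) = 0 and det(A) = 6 > 0
print(np.trace(A), np.linalg.det(A))
print(np.linalg.eigvals(A))           # a purely imaginary pair +/- b*i, here b = sqrt(6)
```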
No
Example 8
Example 8. An experiment was conducted on a neural network which had no bias parameters. An input sample \( \{ x_i \in \mathbb{R}^2 ; i = 1,2,\ldots,n \} \) \( (n = 200) \) was taken from the uniform distribution on \( [-2,2]^2 \). The true conditional distribution was made by \( p(y \mid x, w_0) \), where \( p(y \mid x, w_0) \) was a neural network with three hidden units, \( H = 3 \). A prior was set by the normal distribution \( \mathcal{N}(0, 10^2) \) on each \( u_{jk} \) and \( w_{k\ell} \). The posterior distribution was approximated by a Metropolis method (see Chapter 7.1). We prepared five candidate neural networks which have \( H = 1,2,3,4,5 \) hidden units. Figure 2.7 shows the results of 20 trials of: (1) Upper, left: \( G - S \) for \( H = 1,2,3,4,5 \) (2) Upper, right: \( \mathrm{AIC}_b - S_n \) for \( H = 1,2,3,4,5 \) (3) Lower, left: \( \mathrm{ISCV} - S_n \) for \( H = 1,2,3,4,5 \) (4) Lower, right: \( \mathrm{WAIC} - S_n \) for \( H = 1,2,3,4,5 \), where \( G \), ISCV, \( \mathrm{AIC}_b \), and WAIC are calculated by using the Markov chain Monte Carlo method. In this problem the values of DIC were quite different from the others, showing that DIC is not appropriate for evaluation of hierarchical statistical models. Note that in a neural network the posterior average of the parameter has no meaning. In Bayesian estimation, the generalization errors of a neural network did not increase much even if the statistical model was larger than the true model. This is a general property of Bayesian inference in hierarchical models, whose mathematical reason will be clarified in Chapter 5. Even in the case \( H = 3 \), in which the statistical model is just equal to the true distribution, the generalization error was sometimes not minimized. Such a phenomenon was caused by the local minima of the Metropolis method in neural networks. Both ISCV and WAIC correctly estimated the generalization errors, whereas \( \mathrm{AIC}_b \) overestimated them. Note that, in Bayesian estimation, the increase of the generalization error is very small even if a statistical model is redundant for the true model, so the increases of both ISCV and WAIC are also small. In selecting hierarchical models, a statistician should understand this point.
No
Example 6.6
Example 6.6. \( {SG} \) means \( S \land M = M \), so a Moore spectrum of type \( G \) may be written \( {SG} \) .
No
Example 2.1.1
Example 2.1.1. On any set \( X \), the function \[ d\left( {x, y}\right) = \left\{ \begin{array}{ll} 0, & \text{ if }x = y \\ 1, & \text{ if }x \neq y \end{array}\right. \] is a metric, called the discrete metric.
No
Exercise 1.23
Exercise 1.23 (Boolean Group) Let \( M \) be a set. a. If \( X, Y, Z \subseteq M \), then \[ X \smallsetminus \left( {\left( {Y \smallsetminus Z}\right) \cup \left( {Z \smallsetminus Y}\right) }\right) = \left( {X \smallsetminus \left( {Y \cup Z}\right) }\right) \cup \left( {X \cap Y \cap Z}\right) \] and \[ \left( {\left( {X \smallsetminus Y}\right) \cup \left( {Y \smallsetminus X}\right) }\right) \smallsetminus Z = \left( {X \smallsetminus \left( {Y \cup Z}\right) }\right) \cup \left( {Y \smallsetminus \left( {X \cup Z}\right) }\right) . \] b. We define on the power set \( G = \mathcal{P}\left( M\right) = \{ A \mid A \subseteq M\} \) of \( M \) a binary operation by \[ A + B \mathrel{\text{:=}} \left( {A \smallsetminus B}\right) \cup \left( {B \smallsetminus A}\right) = \left( {A \cup B}\right) \smallsetminus \left( {A \cap B}\right) \] for \( A, B \in G \) . Show that \( \left( {G, + }\right) \) is an abelian group.
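Part b. can be checked by brute force for a small \( M \); the operation \( A + B \) is the symmetric difference, which is Python's ^ operator on sets:

```python
from itertools import chain, combinations

M = {1, 2, 3}
G = [frozenset(s) for s in
     chain.from_iterable(combinations(sorted(M), r) for r in range(len(M) + 1))]
empty = frozenset()

assert all(A ^ B == B ^ A for A in G for B in G)                          # commutative
assert all((A ^ B) ^ C == A ^ (B ^ C) for A in G for B in G for C in G)  # associative
assert all(A ^ empty == A for A in G)                                     # neutral element
assert all(A ^ A == empty for A in G)                                     # self-inverse
print(len(G))   # 8 = |P(M)|
```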
No
Exercise 8.5.3
Exercise 8.5.3. Modify the birth and death rates and study the behavior of the population over time (you will need to re-initialize the population each time you specify new birth and death rates).
No
Exercise 4.15
Recall that \( U\left( 1\right) \) is the group of \( 1 \times 1 \) unitary matrices. Show that this is just the set of complex numbers \( z \) with \( \left| z\right| = 1 \), and that \( U\left( 1\right) \) is isomorphic to \( {SO}\left( 2\right) \).
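A numerical sanity check of the standard isomorphism \( e^{i\theta} \mapsto R(\theta) \) (rotation by \( \theta \)); both sides multiply by adding angles:

```python
import numpy as np

def rot(theta):                       # the proposed map U(1) -> SO(2)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a, b = 0.7, 1.9
z, w = np.exp(1j * a), np.exp(1j * b)
print(np.isclose(abs(z), 1.0))                       # |z| = 1, so z is in U(1)
print(np.allclose(rot(a) @ rot(b), rot(a + b)))      # rotations compose additively
print(np.isclose(z * w, np.exp(1j * (a + b))))       # so does multiplication in U(1)
```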
No
Example 3.3.4
Example 3.3.4. \( f\left( x\right) = \sin \left( x\right) \) is periodic (Fig. 3.3.2) with a period \( {2\pi } \), i.e. \( \sin \left( {x + {2\pi }}\right) = \sin \left( x\right) \) for all \( x \) . Clearly, \( {4\pi } \) is another period of \( \sin \left( x\right) \) and so is any integer multiple of \( {2\pi } \) . However, the number \( {2\pi } \) is the smallest of all the periods of \( \sin \left( x\right) \) .
No
Exercise 6.13
Exercise 6.13 (c) Find the expected number of customers seen in the system by the first arrival after time \( {n\delta } \) . Note: The purpose of this exercise is to make you cautious about the meaning of 'the state seen by a random arrival'.
Yes
Exercise 2.2
Use Strategy 2.1 to express the following cycles in \( {S}_{7} \) as composites of transpositions. (a) \( \left( \begin{array}{lllll} 1 & 5 & 2 & 7 & 3 \end{array}\right) \) (b) \( \left( \begin{array}{llllll} 2 & 3 & 7 & 5 & 4 & 6 \end{array}\right) \) (c) \( \left( \begin{array}{lllllll} 1 & 2 & 3 & 4 & 5 & 6 & 7 \end{array}\right) \)
No
Example 7
Let us consider the schema mapping \( M \) described in Example 3. Condition (1) can be formalized by the s-t-tgd
\[ \delta_1 \mathrel{\text{:=}} \forall x_1 \forall x_2 \forall y \left( KLA(x_1, x_2, y) \rightarrow \exists z\, New(x_1, x_2, z) \right). \]
Condition (2) can be formalized by the formula
\[ \forall x_1 \forall x_2 \forall y \Big( AF(x_1, x_2, y) \rightarrow \big( \exists z\, New(x_1, x_2, z) \vee \exists z \exists z' \exists z'' \left( New(x_1, z, z') \wedge New(z, x_2, z'') \right) \big) \Big). \]
Note, however, that this is not an s-t-tgd since, according to Definition 5, disjunctions are not allowed in s-t-tgds. In order to formalize a variant of condition (2) we therefore use the s-t-tgd
\[ \delta_2 \mathrel{\text{:=}} \forall x_1 \forall x_2 \forall y \left( AF(x_1, x_2, y) \rightarrow \exists z \exists z' \exists z'' \left( New(x_1, z, z') \wedge New(z, x_2, z'') \right) \right) \]
and the t-tgd
\[ \delta_3 \mathrel{\text{:=}} \forall x_1 \forall x_2 \forall y \left( New(x_1, x_2, y) \rightarrow \exists z\, New(x_2, x_2, z) \right). \]
With \( M_{\text{airline}} \) we shall henceforth denote the schema mapping \( \left( \sigma, \tau, \Sigma_{\mathrm{st}}, \Sigma_{\mathrm{t}} \right) \) with \( \sigma = \{ KLA, AF \} \), \( \tau = \{ New \} \), \( \Sigma_{\mathrm{st}} = \{ \delta_1, \delta_2 \} \), and \( \Sigma_{\mathrm{t}} = \{ \delta_3 \} \).
No
Exercise 4.6.10
[Exercise 4.6.10 (Kuratowski). Prove that the axioms for topology can be rephrased in terms of the closure. In other words, a topology on \( X \) may be defined as an operation \( A \mapsto \bar{A} \) on subsets of \( X \) satisfying - \( \bar{\varnothing } = \varnothing \) . - \( \overline{\{ x\} } = \{ x\} \) . - \( \overline{\bar{A}} = \bar{A} \) . - \( \overline{A \cup B} = \bar{A} \cup \bar{B} \) .]
No
Example 19
Example 19. [32, Exemple 3.2] Let \( X_1, X_2, X_3, X, Y \) and \( Z \) be indeterminates over a field \( K \). Let \( A = K{\left\lbrack X_1 \right\rbrack}_{\left( X_1 \right)} + X_2 K\left( X_1 \right){\left\lbrack X_2 \right\rbrack}_{\left( X_2 \right)} \), \( B = K\left( X_1, X_2, X_3 \right) + Y K\left( X_1, X_2, X_3 \right){\left\lbrack Y \right\rbrack}_{\left( Y \right)} \) and \( R = A + X B\left\lbrack X \right\rbrack \). Then: (1) \( R \) is an S-domain but not a strong S-domain. (2) Both \( A\left\lbrack Z \right\rbrack \) and \( B\left\lbrack X, Z \right\rbrack \) are strong S-domains and catenary, but \( R\left\lbrack Z \right\rbrack \) is neither catenary nor an S-domain.
No
Example 3.7.3
Consider the data set, data3 = {(35., 0.001), (35.5, 0.001), (36., 0.010), (36.5, 0.044), (37., 0.111), (37.5, 0.214), (38., 0.258), (38.5, 0.205), (39., 0.111), (39.5, 0.043), (40., 0.010)}. We first plot the points to get an initial guess of the function, which looks like a bell curve or normal curve (the probability density function of a normal distribution). Hence, we choose a function of the form \( f\left( x\right) = a{e}^{-{\left( x - b\right) }^{2}} \), where the parameters \( a \) and \( b \) are to be estimated.
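A fitting sketch with scipy.optimize.curve_fit, with the initial guess read off the plot (peak height about 0.26 near \( x \approx 38 \)):

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.array([35., 35.5, 36., 36.5, 37., 37.5, 38., 38.5, 39., 39.5, 40.])
y = np.array([0.001, 0.001, 0.010, 0.044, 0.111, 0.214, 0.258, 0.205, 0.111, 0.043, 0.010])

def f(x, a, b):
    return a * np.exp(-(x - b)**2)   # the chosen bell-shaped model

(a, b), _ = curve_fit(f, x, y, p0=[0.26, 38.0])
print(a, b)                          # least-squares estimates of the parameters
```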
Yes
Problem 2.7.7
Problem 2.7.7 Suppose that each day, \( 3\% \) of material A decays into material \( B \) and \( 9\% \) of material \( B \) decays into lead. Suppose that initially, there are 50 grams of \( A \) and 7 grams of \( B \) . (i) Formulate a discrete dynamical system to model this situation. How much of each material will be left after 5 days? (ii) Make a graph of \( A\left( n\right) \) and \( B\left( n\right) \) for \( n \) going from 0 to 50, and observe how they behave. (iii) Suppose that after 30 days, there are 20 grams of material \( B \) left, but there were only 10 grams of \( B \) to start with. How many grams of material \( A \) was there, to begin with, to the nearest gram?
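A sketch of part (i); as stated, each day A loses 3% to B, and B loses 9% to lead while gaining what A lost, giving \( A(n+1) = 0.97A(n) \), \( B(n+1) = 0.91B(n) + 0.03A(n) \):

```python
A, B, lead = 50.0, 7.0, 0.0
for day in range(1, 6):
    dA, dB = 0.03 * A, 0.09 * B    # daily decay of A into B, and of B into lead
    A, B, lead = A - dA, B + dA - dB, lead + dB
print(A, B)                        # grams of each material left after 5 days
```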
Yes
Problem 3
Problem 3. Tracking Set System Problem (TSSP): Given a set system \( \mathcal{P} = \{ X,\mathcal{S}\} \), find a minimum cardinality Tracking Set \( \mathbf{T} \subseteq X \) for \( \mathcal{P} \), such that for any two distinct \( {S}_{i},{S}_{j} \in \mathcal{S} \), \( {S}_{i} \cap T \neq {S}_{j} \cap T \). We call each vertex present in the Tracking Set a tracker. We show a correspondence between the Tracking Set System Problem and the Test Cover Problem [6]. Using this result we show that the size of a Tracking Set for a set system with \( n \) elements and \( m \) sets is at least \( \lceil \lg m \rceil \), and using this we show that determining whether a given set system has a Tracking Set of size at most \( k \) has an \( {O}^{ * }\left( {2}^{k{2}^{k}}\right) \) fixed-parameter algorithm. We then consider other natural parameterizations of the solution and we show that - Determining whether a set system with \( n \) elements and \( m \) sets has a Tracking Set of size at most \( \left( {\lg m + k}\right) \) is hard for the parameterized complexity class \( \mathrm{W}\left\lbrack 2\right\rbrack \) . - Determining whether a set system with \( n \) elements has a Tracking Set of size at most \( \left( {n - k}\right) \) is complete for the parameterized complexity class \( \mathrm{W}\left\lbrack 1\right\rbrack \), and - Determining whether a set system with \( n \) elements and \( m \) sets has a Tracking Set of size at most \( \left( {m - k}\right) \) is fixed-parameter tractable.
No
Exercise 9.20
Compute the variance of the decision alternatives for the decision in Example 9.5. Plot risk profiles and cumulative risk profiles for the decision alternatives. Discuss whether you find the variance or the risk profiles more helpful in determining the risk inherent in each alternative.
No
Exercise 8.6
Exercise 8.6. Verify the claims made in subsection 8.3.2 about the ranks of the matrices \( {A}_{n} \) for \( n \leq 5 \) .
Yes
Example 9.2.8
Example 9.2.8 We find all irreducible representations of the quaternion group \( {Q}_{8} \) over an algebraically closed field \( F \) of characteristic different from 2. There are as many irreducible representations of \( {Q}_{8} \) as there are conjugacy classes of \( {Q}_{8} \). There are 5 conjugacy classes of \( {Q}_{8} \): \( \{ 1\} ,\{ - 1\} ,\{ i, - i\} ,\{ j, - j\} \) and \( \{ k, - k\} \). Thus, there are 5 irreducible representations of degrees \( 1,{n}_{2},{n}_{3},{n}_{4},{n}_{5} \) such that \( 1 + {n}_{2}^{2} + {n}_{3}^{2} + {n}_{4}^{2} + {n}_{5}^{2} = 8 \). The only possible solution is \( {n}_{2} = {n}_{3} = {n}_{4} = 1 \) and \( {n}_{5} = 2 \). In other words, there are 4 one-dimensional irreducible representations, including the trivial representation, and there is a unique two-dimensional irreducible representation. We list them. All one-dimensional representations are just homomorphisms from \( {Q}_{8} \) to \( {F}^{ \star } \). Note that the kernel of any homomorphism from \( {Q}_{8} \) to \( {F}^{ \star } \) contains the commutator subgroup \( \{ 1, - 1\} \) of \( {Q}_{8} \) (for \( {F}^{ \star } \) is abelian). Since \( {Q}_{8}/\{ 1, - 1\} \) is isomorphic to the Klein four-group, we get the four homomorphisms from \( {Q}_{8} \) to \( {F}^{ \star } \) as in the above example. Thus, we have 4 one-dimensional representations, viz., \( {\rho }_{1} \) the trivial representation, \( {\rho }_{2} \) the homomorphism which takes \( 1, -1, i, -i \) to 1 and the rest to \( -1 \), and similarly two other homomorphisms from \( {Q}_{8} \) to \( {F}^{ \star } \). Finally, we determine the two-dimensional irreducible representation. Since \( F \) is an algebraically closed field of characteristic different from 2, \( {X}^{4} - 1 = 0 \) has 4 distinct roots, which form a cyclic group of order 4. Let \( \xi \) denote a primitive 4th root of unity. Then the map \[ i \rightsquigarrow \left\lbrack \begin{matrix} \xi & 0 \\ 0 & - \xi \end{matrix}\right\rbrack, j \rightsquigarrow \left\lbrack \begin{matrix} 0 & 1 \\ - 1 & 0 \end{matrix}\right\rbrack, k \rightsquigarrow \left\lbrack \begin{matrix} 0 & \xi \\ \xi & 0 \end{matrix}\right\rbrack \] defines a representation which is irreducible.
No
Problem 9.31
Problem 9.31. If \( E/\mathbb{Q} \) is an elliptic curve, then prove or disprove (9.5).
No
Example 11.8
Example 11.8. Let \( A \) be a positive number and set \[ p\left( x\right) = \left\{ \begin{array}{ll} 0 & \text{ for }x < 0 \\ A{\mathrm{e}}^{-{Ax}} & \text{ for }0 \leq x \end{array}\right. \] Let us check that \( p \) satisfies \( {\int }_{-\infty }^{\infty }p\left( x\right) \mathrm{d}x = 1 \) . Using the fundamental theorem of calculus, we have \[ {\int }_{-\infty }^{\infty }p\left( x\right) \mathrm{d}x = {\int }_{0}^{\infty }A{\mathrm{e}}^{-{Ax}}\mathrm{\;d}x = - {\left. {\mathrm{e}}^{-{Ax}}\right| }_{0}^{\infty } = 1. \] We now compute \( \bar{x} \) . Using integration by parts and then the fundamental theorem, we have \[ \bar{x} = {\int }_{-\infty }^{\infty }{xp}\left( x\right) \mathrm{d}x = {\int }_{0}^{\infty }{xA}{\mathrm{e}}^{-{Ax}}\mathrm{\;d}x = {\int }_{0}^{\infty }{\mathrm{e}}^{-{Ax}}\mathrm{\;d}x = {\left\lbrack \frac{-{\mathrm{e}}^{-{Ax}}}{A}\right\rbrack }_{0}^{\infty } = \frac{1}{A}. \]
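Both computations are easy to confirm numerically; a quick SciPy check for one arbitrarily chosen value of \( A \):

```python
import numpy as np
from scipy.integrate import quad

A = 2.5  # any positive number
total, _ = quad(lambda x: A * np.exp(-A * x), 0, np.inf)
mean, _ = quad(lambda x: x * A * np.exp(-A * x), 0, np.inf)
print(total)         # ~ 1.0
print(mean, 1 / A)   # both ~ 0.4
```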
Yes
Example 6.7.5
Example 6.7.5. Let \( X \) and \( Y \) have density \[ f\left( {x, y}\right) = {\pi }^{-1},{x}^{2} + {y}^{2} \leq 1. \] Find (i) \( \mathrm{E}\left\{ {\left( {X}^{2} + {Y}^{2}\right) }^{1/2}\right\} \) ,(ii) \( \mathrm{E}\left| {X \land Y}\right| \) ,(iii) \( \mathrm{E}\left( {{X}^{2} + {Y}^{2}}\right) \) ,(iv) \( \mathrm{E}\left\{ {{X}^{2}/\left( {{X}^{2} + {Y}^{2}}\right) }\right\} \) . Solution. It is clear that polar coordinates are going to be useful here. In each case by application of (2) we have For (i): \[ \mathrm{E}\left\{ {\left( {X}^{2} + {Y}^{2}\right) }^{1/2}\right\} = {\int }_{0}^{2\pi }{\int }_{0}^{1}{\pi }^{-1}{r}^{2}{drd\theta } \] \[ = \frac{2}{3}\text{. } \] For (ii): By symmetry the answer is the same in each octant, so \[ \mathrm{E}\left| {X \land Y}\right| = 8{\iint }_{0 < y < x}{yf}\left( {x, y}\right) {dxdy} = \frac{8}{\pi }{\int }_{0}^{\pi /4}{\int }_{0}^{1}{r}^{2}\sin {\theta drd\theta } \] \[ = \frac{8}{3\pi }\left( {1 - \frac{1}{\sqrt{2}}}\right) \] For (iii): \[ \mathrm{E}\left( {{X}^{2} + {Y}^{2}}\right) = {\int }_{0}^{2\pi }{\int }_{0}^{1}{\pi }^{-1}{r}^{3}{drd\theta } = \frac{1}{2}. \] For (iv): By symmetry \( \mathrm{E}\left\{ {{X}^{2}/\left( {{X}^{2} + {Y}^{2}}\right) }\right\} = \mathrm{E}\left\{ {{Y}^{2}/\left( {{X}^{2} + {Y}^{2}}\right) }\right\} \), and their sum is 1 . Hence \[ \mathrm{E}\left\{ {{X}^{2}/\left( {{X}^{2} + {Y}^{2}}\right) }\right\} = \frac{1}{2}. \tag{O} \]
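A Monte Carlo check of all four answers in Python (for (ii) we read \( \left| {X \land Y}\right| \) as \( \min \left( {\left| X\right| ,\left| Y\right| }\right) \), the reading under which the eightfold symmetry used in the solution holds):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(2_000_000, 2))
pts = pts[(pts ** 2).sum(axis=1) <= 1]   # rejection: uniform on the unit disk
x, y = pts[:, 0], pts[:, 1]
r2 = x ** 2 + y ** 2

print(np.sqrt(r2).mean(), 2 / 3)                      # (i)
print(np.minimum(abs(x), abs(y)).mean(),              # (ii)
      8 / (3 * np.pi) * (1 - 1 / np.sqrt(2)))
print(r2.mean(), 1 / 2)                               # (iii)
print((x ** 2 / r2).mean(), 1 / 2)                    # (iv)
```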
Yes
Example 8.8
Suppose \( \left( {\mathbb{G}, P}\right) \) satisfies the Markov condition where \( \mathbb{G} \) is the DAG in Figure 8.12. Then, due to Theorem 8.4, \( \{ T, Y, Z\} \) is a Markov blanket of \( X \) . So, we have \[ {I}_{P}\left( {X,\{ S, W\} \mid \{ T, Y, Z\} }\right) . \]
No
Example 4.35
Example 4.35 (Equations for a subalgebra) In this example we will see how we can use extraneous structure on a dual space and lattice to identify a smaller set of equations for a subalgebra of an infinite Boolean algebra; this technique is further exploited in Section 8.3. Consider the set \( \mathbb{Z} \) of integers, and denote by \( {\mathbb{Z}}^{ + } \) the subset of positive integers and by \( {\mathbb{Z}}^{ - } \) the subset of negative integers; so \( \mathbb{Z} = {\mathbb{Z}}^{ - } \cup \{ 0\} \cup {\mathbb{Z}}^{ + } \) . Let \( M \) be the Boolean subalgebra of \( \mathcal{P}\left( \mathbb{Z}\right) \) consisting of all those subsets \( S \) of \( \mathbb{Z} \) such that both \( S \cap {\mathbb{Z}}^{ + } \) is either finite or co-finite, and \( S \cap {\mathbb{Z}}^{ - } \) is either finite or co-finite. One may then show (see Exercise 4.2.12) that the dual space of \( M \) is the "two-point compactification of \( \mathbb{Z} \) " \[ {\mathbb{Z}}_{-\infty }^{+\infty } \mathrel{\text{:=}} \mathbb{Z} \cup \{ - \infty , + \infty \} , \] which topologically is the disjoint union of the one-point compactification \( {\mathbb{Z}}^{ + } \cup \{ + \infty \} \) of \( {\mathbb{Z}}^{ + } \) with the discrete topology, the one-point compactification \( {\mathbb{Z}}^{ - } \cup \{ - \infty \} \) of \( {\mathbb{Z}}^{ - } \) with the discrete topology, and the one point space \( \{ 0\} \) . (For the one-point compactification, see Example 3.46 and Exercise 3.2.2.) Note that, since \( \mathbb{Z} \) is the disjoint union of \( {\mathbb{Z}}^{ + },{\mathbb{Z}}^{ - } \), and \( \{ 0\} \), every free ultrafilter of \( \mathcal{P}\left( \mathbb{Z}\right) \) contains exactly one of \( {\mathbb{Z}}^{ + } \) or \( {\mathbb{Z}}^{ - } \), because a free ultrafilter clearly cannot contain \( \{ 0\} \) . The dual of the inclusion \( M \hookrightarrow \mathcal{P}\left( \mathbb{Z}\right) \) is the surjective function \[ \beta \left( \mathbb{Z}\right) \twoheadrightarrow {\mathbb{Z}}_{-\infty }^{+\infty } \] \[ \mu \mapsto \left\{ \begin{array}{lll} k & \text{ if } & \{ k\} \in \mu \text{ where }k \in \mathbb{Z}, \\ + \infty & \text{ if } & \mu \text{ free and }{\mathbb{Z}}^{ + } \in \mu , \\ - \infty & \text{ if } & \mu \text{ free and }{\mathbb{Z}}^{ - } \in \mu . \end{array}\right. \] Thus, the compatible preorder on \( \beta \mathbb{Z} \) corresponding to the subalgebra \( M \) of \( \mathcal{P}\left( \mathbb{Z}\right) \) is the equivalence relation in which each \( k \in \mathbb{Z} \) is only related to itself, and two free ultrafilters \( \mu \) and \( v \) are related provided they either both contain \( {\mathbb{Z}}^{ + } \), or both contain \( {\mathbb{Z}}^{ - } \) . That is, the remainder is split into two uncountable equivalence classes and each free ultrafilter is related to uncountably many other free ultrafilters. By contrast, we will now show that, by using the successor structure on \( \mathbb{Z} \), the subalgebra \( M \) can be "axiomatized" by a much "thinner" set of equations. The successor function \( \mathbb{Z} \rightarrow \mathbb{Z} \), which sends \( k \in \mathbb{Z} \) to \( k + 1 \), has as its discrete dual the complete homomorphism \( \mathcal{P}\left( \mathbb{Z}\right) \rightarrow \mathcal{P}\left( \mathbb{Z}\right) \) which sends a subset \( S \in \mathcal{P}\left( \mathbb{Z}\right) \) to the set \[ S - 1 \mathrel{\text{:=}} \{ k \in \mathbb{Z} \mid k + 1 \in S\} = \{ s - 1 \mid s \in S\} . 
\] For \( \mu \in \beta \mathbb{Z} \), write \[ \mu + 1 \mathrel{\text{:=}} \{ S \in \mathcal{P}\left( \mathbb{Z}\right) \mid S - 1 \in \mu \} = \{ S + 1 \mid S \in \mu \} . \] This is a well-defined function \( \beta \mathbb{Z} \rightarrow \beta \mathbb{Z} \), as it is the dual function of the homomorphism \( S \mapsto S - 1 \) on \( \mathcal{P}\left( \mathbb{Z}\right) \) ; it is also the unique continuous extension of the successor function \( \mathbb{Z} \rightarrow \mathbb{Z} \), when we view the codomain as a subset of \( \beta \mathbb{Z} \) . To describe the equational basis for the sublattice \( M \) of \( \mathcal{P}\left( \mathbb{Z}\right) \), consider the set of equations \( \mu + 1 \approx \mu \), as \( \mu \) ranges over \( {}^{ * }\mathbb{Z} \), where we recall that \( {}^{ * }\mathbb{Z} \mathrel{\text{:=}} \beta \mathbb{Z} - \mathbb{Z} \), the remainder of \( \beta \mathbb{Z} \), see Example 4.21. We will show that the sublattice \( M \) of \( \mathcal{P}\left( \mathbb{Z}\right) \) contains exactly those \( S \in \mathcal{P}\left( \mathbb{Z}\right) \) that satisfy all of these equations, that is, we will prove that \[ M = \left\llbracket {\mu + 1 \approx \mu \mid \mu \in {}^{ * }\mathbb{Z}}\right\rrbracket . \tag{4.3} \] To this end, note first that, for \( S \in \mathcal{P}\left( \mathbb{Z}\right), S \) satisfies \( \mu + 1 \approx \mu \) if, and only if, both \( \mu \) and \( \mu + 1 \) contain \( S \), or neither \( \mu \) nor \( \mu + 1 \) contains \( S \) . Now, for the left-to-right inclusion of (4.3), let \( S \in M \) and let \( \mu \) be a free ultrafilter of \( \mathcal{P}\left( \mathbb{Z}\right) \) . We show that \( S \vDash \mu + 1 \approx \mu \) . Since \( \mu \) is prime and \( {\mathbb{Z}}^{ + } \cup \left( {\mathbb{Z} - {\mathbb{Z}}^{ + }}\right) = \mathbb{Z} \in \mu \), it follows that either \( {\mathbb{Z}}^{ + } \in \mu \) or \( \mathbb{Z} - {\mathbb{Z}}^{ + } \in \mu \) . We treat the case \( {\mathbb{Z}}^{ + } \in \mu \) and leave the other as an exercise. Since \( S \in M \), we have that \( S \cap {\mathbb{Z}}^{ + } \) is either finite or co-finite. If \( S \cap {\mathbb{Z}}^{ + } \) is finite, then, as \( \mu \) is free, \( S \notin \mu \) . Also \( S \cap {\mathbb{Z}}^{ + } \) finite implies that \( \left( {S - 1}\right) \cap {\mathbb{Z}}^{ + } \) is finite and thus \( S - 1 \notin \mu \) . So \( S \vDash \mu + 1 \approx \mu \) . If, on the other hand, \( S \cap {\mathbb{Z}}^{ + } \) is co-finite, then, as \( \left( {{\mathbb{Z}}^{ + } - S}\right) \cup \l
No
Example 3.1
(a) Consider \( M = \left( \begin{array}{ll} 3 & 0 \\ 0 & 2 \end{array}\right) \in {\operatorname{Mat}}_{2 \times 2}\left( \mathbb{R}\right) \) . We have: - \( \left( \begin{array}{ll} 3 & 0 \\ 0 & 2 \end{array}\right) \left( \begin{array}{l} 1 \\ 0 \end{array}\right) = 3 \cdot \left( \begin{array}{l} 1 \\ 0 \end{array}\right) \) and - \( \left( \begin{array}{ll} 3 & 0 \\ 0 & 2 \end{array}\right) \left( \begin{array}{l} 0 \\ 1 \end{array}\right) = 2 \cdot \left( \begin{array}{l} 0 \\ 1 \end{array}\right) \) .
Yes
Example 1.19
Example 1.19. Every metabolic form \( \varphi \) over \( K \) has good reduction with respect to \( \lambda \) . Of course, \( {\lambda }_{ * }\left( \varphi \right) \sim 0 \) . Proof. It suffices to prove this in the case \( \dim \varphi = 2 \), so \( \varphi = \left( \begin{array}{ll} a & 1 \\ 1 & 0 \end{array}\right) \) with \( a \in K \) . Let \( \widehat{\varphi }\widehat{ = }\left( {E, B}\right) \) and let \( e, f \) be a basis of \( E \) with value matrix \( \left( \begin{array}{ll} a & 1 \\ 1 & 0 \end{array}\right) \) . Choose an element \( c \in {K}^{ * } \) with \( a{c}^{2} \in \mathfrak{o} \) . Then \( {ce},{c}^{-1}f \) is a basis of \( E \) with value matrix \( \left( \begin{matrix} a{c}^{2} & 1 \\ 1 & 0 \end{matrix}\right) \) .
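The base-change computation at the heart of the proof can be checked symbolically; a SymPy sketch, where \( P \) expresses the new basis \( {ce},{c}^{-1}f \) in terms of \( e, f \):

```python
import sympy as sp

a, c = sp.symbols('a c', nonzero=True)
M = sp.Matrix([[a, 1], [1, 0]])     # value matrix of the basis e, f
P = sp.Matrix([[c, 0], [0, 1/c]])   # columns express ce and (1/c) f in e, f
print(sp.simplify(P.T * M * P))     # Matrix([[a*c**2, 1], [1, 0]])
```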
No
Exercise 4
Suppose Properties P1., P2. and P3. hold. State and prove the dual of Property \( {P3} \) .
No
Example 2.6
Example 2.6. There are the same number of natural numbers as even natural numbers (that is, \( \left| \mathbb{N}\right| = \left| {2\mathbb{N}}\right| \) ). What’s the bijection that shows this? Let \[ f : \mathbb{N} \rightarrow 2\mathbb{N} \] \[ n \mapsto {2n}\text{.} \] (Note: This is notation for the function \( f\left( x\right) = {2x} \) from the naturals to the even naturals.) In other (non-)words, this is the pairing \[ 1 \leftrightarrow 2,\;2 \leftrightarrow 4,\;3 \leftrightarrow 6,\;4 \leftrightarrow 8,\ldots \] The Moral. Two sets can have the same size even though one is a proper subset of the other and the larger one even has infinitely many more elements than the smaller one. Likewise:
No
Exercise 7.1.19
Exercise 7.1.19. Suppose a topology is regular. Is a finer topology also regular? What about a coarser topology?
No
Exercise 1.12
In Example 1.6.6, we began with a standard normal random variable \( X \) on a probability space \( \left( {\Omega ,\mathcal{F},\mathbb{P}}\right) \) and defined the random variable \( Y = X + \theta \), where \( \theta \) is a constant. We also defined \( Z = {e}^{-{\theta X} - \frac{1}{2}{\theta }^{2}} \) and used \( Z \) as the Radon-Nikodým derivative to construct the probability measure \( \widetilde{\mathbb{P}} \) by the formula (1.6.9): \[ \widetilde{\mathbb{P}}\left( A\right) = {\int }_{A}Z\left( \omega \right) d\mathbb{P}\left( \omega \right) \text{ for all }A \in \mathcal{F}. \] Under \( \widetilde{\mathbb{P}} \), the random variable \( Y \) was shown to be standard normal. We now have a standard normal random variable \( Y \) on the probability space \( \left( {\Omega ,\mathcal{F},\widetilde{\mathbb{P}}}\right) \), and \( X \) is related to \( Y \) by \( X = Y - \theta \) . By what we have just stated, with \( X \) replaced by \( Y \) and \( \theta \) replaced by \( - \theta \), we could define \( \widehat{Z} = {e}^{{\theta Y} - \frac{1}{2}{\theta }^{2}} \) and then use \( \widehat{Z} \) as a Radon-Nikodým derivative to construct a probability measure \( \widehat{\mathbb{P}} \) by the formula \[ \widehat{\mathbb{P}}\left( A\right) = {\int }_{A}\widehat{Z}\left( \omega \right) d\widetilde{\mathbb{P}}\left( \omega \right) \text{ for all }A \in \mathcal{F}, \] so that, under \( \widehat{\mathbb{P}} \), the random variable \( X \) is standard normal. Show that \( \widehat{Z} = \frac{1}{Z} \) and \( \widehat{\mathbb{P}} = \mathbb{P} \) .
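The identity \( \widehat{Z} = 1/Z \) is a one-line cancellation once \( Y = X + \theta \) is substituted; a SymPy check:

```python
import sympy as sp

X, theta = sp.symbols('X theta', real=True)
Z = sp.exp(-theta * X - theta**2 / 2)
Y = X + theta
Z_hat = sp.exp(theta * Y - theta**2 / 2)
print(sp.simplify(Z_hat * Z))  # 1, i.e. Z_hat = 1/Z
```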
No
Example 6.3
Example 6.3 (Half-lines). Let \( X = ( - \infty ,\xi \rbrack \) and \( Y = ( - \infty ,\eta \rbrack \) be random subsets of \( {\mathbb{R}}^{1} \) . Assume that \( \{ \inf K : K \in \mathcal{M}\} = {\mathbb{R}}^{1} \), i.e. the class \( \mathcal{M} \) is sufficiently rich. Then \( \mathfrak{u}\left( {X, Y;\mathcal{M}}\right) \) coincides with the uniform distance between distribution functions of \( \xi \) and \( \eta \) and \( \mathfrak{L}\left( {X, Y;\mathcal{M}}\right) \) equals the Lévy distance between them. The concentration functions of \( X \) and \( Y \) are equal to the concentration functions of the random variables \( \xi \) and \( \eta \), see Hengartner and Theodorescu [233].
No
Example 11.2
Example 11.2 The set of integers \( \mathbb{Z} \) is countable (and denumerable), since \( m = f\left( n\right) = \) \( \left\{ \begin{matrix} {2n} & \text{ if }n > 0 \\ - {2n} + 1 & \text{ if }n \leq 0 \end{matrix}\right. \) is a bijection onto \( \mathbb{N} : \) \[ f = \left\{ \begin{matrix} \cdots & - 2 & - 1 & 0 & 1 & 2 & 3 & \cdots & n & \cdots & \leftarrow {\mathcal{D}}_{f} = \mathbb{Z} \\ & \updownarrow & \updownarrow & \updownarrow & \updownarrow & \updownarrow & \updownarrow & & \updownarrow & & \\ \cdots & 5 & 3 & 1 & 2 & 4 & 6 & \cdots & m & \cdots & \leftarrow {\mathcal{R}}_{f} = \mathbb{N}. \end{matrix}\right. \] Furthermore, \( \mathbb{Z} \) is infinite, since the same bijection carries \( \mathbb{Z} \) onto a proper subset of itself. Notice that the bijection also provides us with subscripts for the set of integers, and hence a listing of all integers is \( \mathbb{Z} = \left\{ {{0}_{1},{1}_{2}, - {1}_{3},{2}_{4}, - {2}_{5},\ldots }\right\} \) . The sequence is fairly obvious; we simply interlace the negative integers with the positive ones.
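The piecewise formula is easy to test on a finite range; a small Python check that \( f \) maps \( \{ - N,\ldots, N\} \) bijectively onto \( \{ 1,\ldots ,{2N} + 1\} \):

```python
def f(n):
    # the bijection from the example
    return 2 * n if n > 0 else -2 * n + 1

N = 1000
values = [f(n) for n in range(-N, N + 1)]
assert len(set(values)) == len(values)          # injective on -N..N
assert set(values) == set(range(1, 2 * N + 2))  # exactly the block 1..2N+1 of N
```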
No
Exercise 6.31
Exercise 6.31 Let \( {\Lambda }^{3} = \mathbb{N} \times \mathbb{N} \times 3\mathbb{Z} \) and \( f : {\Lambda }^{3} \rightarrow \mathbb{R} \) be defined as \[ f\left( t\right) = {t}_{1}{t}_{2}{t}_{3},\;t = \left( {{t}_{1},{t}_{2},{t}_{3}}\right) \in {\Lambda }^{3}. \] Find 1. \( {f}^{\sigma }\left( t\right) \) , 2. \( {f}_{1}^{{\sigma }_{1}}\left( t\right) \) , 3. \( {f}_{2}^{{\sigma }_{2}}\left( t\right) \) , 4. \( {f}_{3}^{{\sigma }_{3}}\left( t\right) \) , 5. \( {f}_{12}^{{\sigma }_{1}{\sigma }_{2}}\left( t\right) \) , 6. \( {f}_{13}^{{\sigma }_{1}{\sigma }_{3}}\left( t\right) \) , 7. \( {f}_{23}^{{\sigma }_{2}{\sigma }_{3}}\left( t\right) \) , 8. \( g\left( t\right) = {f}^{\sigma }\left( t\right) + {f}_{1}^{{\sigma }_{1}}\left( t\right) \) . Solution \( 1.{t}_{1}{t}_{2}{t}_{3} + 3{t}_{1}{t}_{2} + {t}_{1}{t}_{3} + {t}_{2}{t}_{3} + 3{t}_{1} + 3{t}_{2} + {t}_{3} + 3, \) 2. \( {t}_{1}{t}_{2}{t}_{3} + {t}_{2}{t}_{3} \) , 3. \( {t}_{1}{t}_{2}{t}_{3} + {t}_{1}{t}_{3} \) , 4. \( {t}_{1}{t}_{2}{t}_{3} + 3{t}_{1}{t}_{2} \) , 5. \( {t}_{1}{t}_{2}{t}_{3} + {t}_{1}{t}_{3} + {t}_{2}{t}_{3} + {t}_{3} \) , 6. \( {t}_{1}{t}_{2}{t}_{3} + 3{t}_{1}{t}_{2} + {t}_{2}{t}_{3} + 3{t}_{2} \) , 7. \( {t}_{1}{t}_{2}{t}_{3} + 3{t}_{1}{t}_{2} + {t}_{1}{t}_{3} + 3{t}_{1} \) , 8. \( 2{t}_{1}{t}_{2}{t}_{3} + 3{t}_{1}{t}_{2} + {t}_{1}{t}_{3} + 2{t}_{2}{t}_{3} + 3{t}_{1} + 3{t}_{2} + {t}_{3} + 3 \) .
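All eight answers are obtained by expanding products of shifted factors. Assuming the usual forward jump operators, \( \sigma \left( t\right) = t + 1 \) on \( \mathbb{N} \) and \( \sigma \left( t\right) = t + 3 \) on \( 3\mathbb{Z} \), a SymPy sketch reproduces them:

```python
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
f = lambda u1, u2, u3: u1 * u2 * u3
s1, s2, s3 = t1 + 1, t2 + 1, t3 + 3  # forward jumps on N, N and 3Z

print(sp.expand(f(s1, s2, s3)))                  # 1. f^sigma
print(sp.expand(f(s1, t2, t3)))                  # 2.
print(sp.expand(f(t1, s2, t3)))                  # 3.
print(sp.expand(f(t1, t2, s3)))                  # 4.
print(sp.expand(f(s1, s2, t3)))                  # 5.
print(sp.expand(f(s1, t2, s3)))                  # 6.
print(sp.expand(f(t1, s2, s3)))                  # 7.
print(sp.expand(f(s1, s2, s3) + f(s1, t2, t3)))  # 8.
```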
Yes
Example 5.7
Example 5.7 The following implementation of the Gibbs sampler for a three-dimensional joint distribution is a classic example based on Casella and George (1992). Random variables \( X, P \), and \( N \) have joint density \[ \pi \left( {x, p, n}\right) \propto \left( \begin{array}{l} n \\ x \end{array}\right) {p}^{x}{\left( 1 - p\right) }^{n - x}\frac{{4}^{n}}{n!}, \] for \( x = 0,1,\ldots, n \), \( 0 < p < 1 \), \( n = 0,1,\ldots \) The \( p \) variable is continuous; \( x \) and \( n \) are discrete. The Gibbs sampler requires being able to simulate from the conditional distributions of each component given the remaining variables. The trick to identifying these conditional distributions is to treat the two conditioning variables in the joint density function as fixed constants. The conditional distribution of \( X \) given \( N = n \) and \( P = p \) is proportional to \( \left( \begin{array}{l} n \\ x \end{array}\right) {p}^{x}{\left( 1 - p\right) }^{n - x} \), for \( x = 0,1,\ldots, n \), which is binomial with parameters \( n \) and \( p \) . The conditional distribution of \( P \) given \( X = x \) and \( N = n \) is proportional to \( {p}^{x}{\left( 1 - p\right) }^{n - x} \), for \( 0 < p < 1 \), which gives a beta distribution with parameters \( x + 1 \) and \( n - x + 1 \) . The conditional distribution of \( N \) given \( X = x \) and \( P = p \) is proportional to \( {\left( 1 - p\right) }^{n - x}{4}^{n}/\left( {n - x}\right) ! \), for \( n = x, x + 1,\ldots \) This is a shifted Poisson distribution with parameter \( 4\left( {1 - p}\right) \) . That is, the conditional distribution is equal to the distribution of \( Z + x \), where \( Z \) has a Poisson distribution with parameter \( 4\left( {1 - p}\right) \) . The Gibbs sampler, with arbitrary initial value, is implemented as follows: 1. Initialize: \( \left( {{x}_{0},{p}_{0},{n}_{0}}\right) \leftarrow \left( {1,{0.5},2}\right) \) \[ m \leftarrow 1 \] 2. Generate \( {x}_{m} \) from a binomial distribution with parameters \( {n}_{m - 1} \) and \( {p}_{m - 1} \) . 3. Generate \( {p}_{m} \) from a beta distribution with parameters \( {x}_{m} + 1 \) and \( {n}_{m - 1} - {x}_{m} + 1 \) . 4. Let \( {n}_{m} = z + {x}_{m} \), where \( z \) is simulated from a Poisson distribution with parameter \( 4\left( {1 - {p}_{m}}\right) \) . 5. \( m \leftarrow m + 1 \) 6. Return to Step 2. The output of the Gibbs sampler is a sequence of samples \[ \left( {{X}_{0},{P}_{0},{N}_{0}}\right) ,\left( {{X}_{1},{P}_{1},{N}_{1}}\right) ,\left( {{X}_{2},{P}_{2},{N}_{2}}\right) ,\ldots \] In R, the output is stored in a matrix with three columns. Each column gives a sample from the marginal distribution. Each pair of columns gives a sample from a bivariate joint distribution. See Figure 5.7 for graphs of joint and marginal distributions.
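The example describes an R implementation; a minimal Python transcription of Steps 1-6 (variable names are mine):

```python
import numpy as np

def gibbs(n_iter, seed=None):
    rng = np.random.default_rng(seed)
    x, p, n = 1, 0.5, 2                    # Step 1: arbitrary initial values
    out = np.empty((n_iter, 3))
    for m in range(n_iter):
        x = rng.binomial(n, p)             # Step 2: X | n, p ~ Binomial(n, p)
        p = rng.beta(x + 1, n - x + 1)     # Step 3: P | x, n ~ Beta(x+1, n-x+1)
        n = x + rng.poisson(4 * (1 - p))   # Step 4: N | x, p ~ x + Poisson(4(1-p))
        out[m] = (x, p, n)
    return out

samples = gibbs(10_000, seed=1)            # columns: X, P, N
print(samples.mean(axis=0))
```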
No
Problem 5
Problem 5 (Fekete) Is it true that for every convex set \( C \) there is an \[ n\text{-gon}{P}_{n} \subset C\text{with}{\mu }^{\text{ratio }}\left( {C,{P}_{n}}\right) \geq \frac{n - 1}{n + 1}\pi \text{?} \]
No
Exercise 23.10
Exercise 23.10. For any \( n \geq 1 \) we have defined the scalar-valued dot product \( \mathbf{v} \cdot \mathbf{w} \) for any \( n \) -vectors \( \mathbf{v} \) and \( \mathbf{w} \) . In the case \( n = 3 \) there is another type of "product" that is vector-valued: for \( \mathbf{v} = \left\lbrack \begin{array}{l} {v}_{1} \\ {v}_{2} \\ {v}_{3} \end{array}\right\rbrack \) and \( \mathbf{w} = \left\lbrack \begin{array}{l} {w}_{1} \\ {w}_{2} \\ {w}_{3} \end{array}\right\rbrack \) the cross product \( \mathbf{v} \times \mathbf{w} \in {\mathbf{R}}^{3} \) is defined to be \[ \mathbf{v} \times \mathbf{w} = \left\lbrack \begin{array}{l} {v}_{2}{w}_{3} - {v}_{3}{w}_{2} \\ {v}_{3}{w}_{1} - {v}_{1}{w}_{3} \\ {v}_{1}{w}_{2} - {v}_{2}{w}_{1} \end{array}\right\rbrack = \det \left\lbrack \begin{array}{ll} {v}_{2} & {v}_{3} \\ {w}_{2} & {w}_{3} \end{array}\right\rbrack {\mathbf{e}}_{1} - \det \left\lbrack \begin{array}{ll} {v}_{1} & {v}_{3} \\ {w}_{1} & {w}_{3} \end{array}\right\rbrack {\mathbf{e}}_{2} + \det \left\lbrack \begin{array}{ll} {v}_{1} & {v}_{2} \\ {w}_{1} & {w}_{2} \end{array}\right\rbrack {\mathbf{e}}_{3} \] (note the minus sign in front of the second determinant on the right). This concept is very specific to the case \( n = 3 \), and arises in a variety of important physics and engineering applications. General details on the cross product are given in Appendix F. (a) Verify algebraically that \( \mathbf{w} \times \mathbf{v} = - \left( {\mathbf{v} \times \mathbf{w}}\right) \) ("anti-commutative"), and \( \mathbf{v} \times \mathbf{v} = \mathbf{0} \) for every \( \mathbf{v} \) (!). (b) For \( \mathbf{v} = \left\lbrack \begin{matrix} 2 \\ - 1 \\ 3 \end{matrix}\right\rbrack ,\mathbf{w} = \left\lbrack \begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right\rbrack ,\mathbf{u} = \left\lbrack \begin{matrix} 4 \\ 3 \\ - 2 \end{matrix}\right\rbrack \), use the description via \( 2 \times 2 \) determinants to verify: \( \mathbf{v} \times \mathbf{w} = \left\lbrack \begin{matrix} - 9 \\ - 3 \\ 5 \end{matrix}\right\rbrack ,\mathbf{w} \times \mathbf{u} = \left\lbrack \begin{matrix} - {13} \\ {14} \\ - 5 \end{matrix}\right\rbrack ,\left( {\mathbf{v} \times \mathbf{w}}\right) \times \mathbf{u} = \left\lbrack \begin{matrix} - 9 \\ 2 \\ - {15} \end{matrix}\right\rbrack \), and \( \mathbf{v} \times \left( {\mathbf{w} \times \mathbf{u}}\right) = \left\lbrack \begin{matrix} - {37} \\ - {29} \\ {15} \end{matrix}\right\rbrack \) . (The latter two are not equal, illustrating that the cross product is not associative: parentheses matter!) (c) For a general scalar \( c \) verify algebraically that \( \left( {c\mathbf{v}}\right) \times \mathbf{w} = c\left( {\mathbf{v} \times \mathbf{w}}\right) \), and for a general third vector \( {\mathbf{v}}^{\prime } \) verify algebraically that \( \left( {\mathbf{v} + {\mathbf{v}}^{\prime }}\right) \times \mathbf{w} = \mathbf{v} \times \mathbf{w} + {\mathbf{v}}^{\prime } \times \mathbf{w} \) (distributivity over vector addition, which is the reason this operation deserves to be called a "product"). (d) For linearly independent \( \mathbf{v} \) and \( \mathbf{w} \) making an angle \( \theta \in \left( {0,\pi }\right) \), the vector \( \mathbf{v} \times \mathbf{w} \) is perpendicular to \( \mathbf{v} \) and \( \mathbf{w} \) with magnitude \( \parallel \mathbf{v}\parallel \parallel \mathbf{w}\parallel \) sin \( \left( \theta \right) \) . Verify these orthogonality and magnitude properties for the specific 3-vectors \( \mathbf{v} \) and \( \mathbf{w} \) in (b). 
(Hint on the magnitude aspect: \( \sin \left( \theta \right) = \sqrt{1 - {\cos }^{2}\left( \theta \right) } \) since \( \sin \left( \theta \right) > 0 \) for \( 0 < \theta < \pi \), and \( \cos \left( \theta \right) \) can be computed via a dot product.)
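The numerical claims in (b) and (d) can be checked with NumPy's built-in cross product; a short script:

```python
import numpy as np

v = np.array([2, -1, 3]); w = np.array([1, 2, 3]); u = np.array([4, 3, -2])

print(np.cross(v, w))                # [-9 -3  5]
print(np.cross(w, u))                # [-13  14  -5]
print(np.cross(np.cross(v, w), u))   # [ -9   2 -15]
print(np.cross(v, np.cross(w, u)))   # [-37 -29  15]  (not associative)

# part (d): v x w is orthogonal to v and w, with length |v||w|sin(theta)
c = np.cross(v, w)
print(c @ v, c @ w)                  # 0 0
cos_t = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
print(np.linalg.norm(c),
      np.linalg.norm(v) * np.linalg.norm(w) * np.sqrt(1 - cos_t**2))  # both sqrt(115)
```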
No
Example 5.13
Example 5.13. Consider again the knowledge base \( \mathcal{R} = \left\{ {{r}_{1},{r}_{2}}\right\} \) but without the grounding constraint of \( {r}_{1} \), i.e., \[ {r}_{1} = {}_{def}\left( {a\left( \mathrm{X}\right) }\right) \left\lbrack 0\right\rbrack \;{r}_{2} = {}_{def}\left( {a\left( {\mathrm{c}}_{1}\right) }\right) \left\lbrack d\right\rbrack . \] Obviously, \( \mathcal{R} \) is \( {\mathcal{G}}_{U} \) -inconsistent with respect to every finite \( D \) with \( {c}_{1} \in D \) . However, \( \mathcal{R} \) is \( {\mathcal{G}}_{sp} \) -consistent with respect to every finite \( D \) with \( {\mathrm{c}}_{1} \in D \), as \( \left( {a\left( {\mathrm{c}}_{1}\right) }\right) \left\lbrack 0\right\rbrack \notin {\mathcal{G}}_{sp}\left( \mathcal{R}\right) \), since \( {r}_{2} \) is more specific than \( {r}_{1} \) with respect to \( {\mathrm{c}}_{1} \) .
No
Problem 3.2.9
Problem 3.2.9. (Cormen et al.) Let \( {x}_{i} = f\left( {x}_{i - 1}\right), i = 1,2,3,\ldots \) Let \( t, u > 0 \) be the smallest numbers such that \( {x}_{t + i} = {x}_{t + u + i} \) for all \( i = 0,1,2,\ldots \), where \( t \) and \( u \) are called the lengths of the \( \rho \) tail and cycle, respectively. Give an efficient algorithm to determine \( t \) and \( u \) exactly, and analyze the running time of your algorithm.
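One standard solution is Floyd's tortoise-and-hare method, which finds both lengths in \( O\left( {t + u}\right) \) time and \( O\left( 1\right) \) extra space; a Python sketch (the demo function and starting point are mine):

```python
def rho_lengths(f, x0):
    # Floyd's tortoise-and-hare: O(t + u) time, O(1) extra space.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:                 # phase 1: meet inside the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    t, tortoise = 0, x0
    while tortoise != hare:                 # phase 2: tail length t
        tortoise, hare = f(tortoise), f(hare)
        t += 1
    u, hare = 1, f(tortoise)
    while tortoise != hare:                 # phase 3: cycle length u
        hare = f(hare)
        u += 1
    return t, u

print(rho_lengths(lambda x: (x * x + 1) % 255, 3))  # (2, 6) for this f and x0
```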
No
Exercise 6.10
Exercise 6.10. Prove or disprove the following statements: 1. In the Smale horseshoe, the periodic points of odd period are dense. 2. In the Smale horseshoe, the periodic points of prime period are dense. 3. In the Smale horseshoe, the periodic points of period at least 100 are dense.
No
Example 3.1.3
Example 3.1.3. We adopt the notation from Example 3.1.2. Let \( \Delta = {2}^{\left\lbrack d\right\rbrack } \) be the full simplex on ground set \( \left\lbrack d\right\rbrack \) . If we grade the simplex \( A \in \Delta \) by \( \operatorname{gr}\left( A\right) = \) \( \mathop{\sum }\limits_{{i \in A}}{e}_{i} \), then it is easily seen that the resolution \( \mathcal{C} \) is supported by \( \left( {\Delta ,\mathrm{{gr}}}\right) \) . Now for any degree \( \alpha \in {\mathbb{Z}}^{d} \) we have that \( {\Delta }_{ \preccurlyeq \alpha } \) is either empty or \( {2}^{A} \) for \( A = \operatorname{supp}\alpha = \left\{ {i \in \left\lbrack d\right\rbrack \mid {\alpha }_{i} \neq 0}\right\} \) . Thus indeed \( \left( {\Delta ,\mathrm{{gr}}}\right) \) fulfills the criterion from Proposition 3.1.2 (2).
No
Exercise 8.5.10
Exercise 8.5.10. Suppose \( A, B \), and \( {AB} \) are symmetric. Show that \( A \) and \( B \) are simultaneously diagonalizable. Is \( {BA} \) symmetric?
No
Problem 8.4.11
Problem 8.4.11. Prove unconditionally that for any asymptotically exact family \( \mathcal{K} \) of number fields the Brauer-Siegel ratio \( \operatorname{BS}\left( \mathcal{K}\right) \) is well defined.
No
Example 1.15
Example 1.15 (Maxitive capacity). Define a maxitive capacity \( T \) by \[ T\left( K\right) = \sup \{ f\left( x\right) : x \in K\} , \tag{1.22} \] where \( f : \mathbb{E} \mapsto \left\lbrack {0,1}\right\rbrack \) is an upper semicontinuous function. Then \( T = {f}^{ \vee } \) is the sup-integral of \( f \) as defined in Appendix E. This capacity functional \( T \) describes the distribution of the random closed set \( X = \{ x \in \mathbb{E} : f\left( x\right) \geq \alpha \} \), where \( \alpha \) is a random variable uniformly distributed on \( \left\lbrack {0,1}\right\rbrack \) .
No
Example 2.1.26
Consider the semigroup algebra \( A = \left( {{\ell }^{1}\left( {\mathbb{Z}}_{ \vee }\right) , \star }\right) \), and set \[ M = \left\{ {f = \mathop{\sum }\limits_{{n \in \mathbb{Z}}}{\alpha }_{n}{\delta }_{n} \in A : \mathop{\sum }\limits_{{n \in \mathbb{Z}}}{\alpha }_{n} = 0}\right\} , \] so that \( M \) is a maximal modular ideal in \( A \) and \( A = \mathbb{C}p \oplus M \), where \( p \mathrel{\text{:=}} {\delta }_{0} \in \mathfrak{I}\left( A\right) \) . Thus \( A \) and \( M \) are weakly sequentially complete. It is clear that the algebra \( A \) is not unital, that \( \left( {{\delta }_{-n} : n \in {\mathbb{Z}}^{ + }}\right) \) is a contractive approximate identity for \( A \), and also that \( \left( {{\delta }_{-n} - {\delta }_{n + 1} : n \in {\mathbb{Z}}^{ + }}\right) \) is a bounded approximate identity for \( M \) .
No
Exercise 3.8.2
Show that if condition 4 is satisfied, then conditions (3.8.4) and (3.8.5) hold.
No
Example 7.15
Example 7.15 Consider Example 3.6 again, but now suppose that the total number of animals is 22 and that the counts are \( y = \left( {{14},3,5}\right) \) . Adopting a uniform prior for \( \theta \) leads to \( \pi \left( \theta \right) \propto {\left( 2 + \theta \right) }^{14}{\left( 1 - \theta \right) }^{3}{\theta }^{5} \) . Posterior simulation is now performed through the above slice sampling algorithm. Based on a sample of size \( M = {5000} \) from \( \pi \left( \theta \right) \), approximations for the posterior mean, standard deviation and \( {95}\% \) credibility interval of \( \theta \) are \( {0.698},{0.123} \) and \( \left( {{0.417},{0.908}}\right) \), respectively. Figure 7.8 illustrates the sampler.
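A possible implementation of such a slice sampler in Python, using the shrinkage procedure on the support \( \left( {0,1}\right) \) (the book's exact scheme may differ):

```python
import numpy as np

def log_pi(theta):
    # unnormalized log posterior: (2+theta)^14 (1-theta)^3 theta^5 on (0,1)
    return 14 * np.log(2 + theta) + 3 * np.log(1 - theta) + 5 * np.log(theta)

def slice_sampler(n_draws, theta=0.5, seed=None):
    rng = np.random.default_rng(seed)
    draws = np.empty(n_draws)
    for m in range(n_draws):
        log_u = log_pi(theta) + np.log(rng.uniform())  # vertical slice level
        lo, hi = 0.0, 1.0                              # initial interval = support
        while True:                                    # shrink until acceptance
            prop = rng.uniform(lo, hi)
            if log_pi(prop) > log_u:
                theta = prop
                break
            if prop < theta:
                lo = prop
            else:
                hi = prop
        draws[m] = theta
    return draws

d = slice_sampler(5000, seed=1)
print(d.mean(), d.std())  # should be near 0.698 and 0.123
```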
Yes
Problem 2.43
Problem 2.43 Let \( \Theta ,{X}_{1},{X}_{2},\ldots \) be RVs. Suppose that, conditional on \( \Theta = \) \( \theta ,{X}_{1},{X}_{2},\ldots \) are independent and \( {X}_{k} \) is normally distributed with mean \( \theta \) and variance \( {\sigma }_{k}^{2} \) . Suppose that the marginal PDF of \( \Theta \) is \[ \pi \left( \theta \right) = \frac{1}{\sqrt{2\pi }}{\mathrm{e}}^{-\frac{{\theta }^{2}}{2}},\theta \in \mathbb{R}. \] Calculate the mean and variance of \( \Theta \) conditional on \( {X}_{1} = {x}_{1},\ldots ,{X}_{n} = {x}_{n} \) .
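By the standard conjugate-normal computation, the posterior precision is \( 1 + \mathop{\sum }\limits_{k}{\sigma }_{k}^{-2} \) and the posterior mean is \( \mathop{\sum }\limits_{k}{x}_{k}{\sigma }_{k}^{-2} \) divided by that precision. A NumPy sketch with hypothetical numbers, checked against a grid approximation of the posterior:

```python
import numpy as np

sigma = np.array([1.0, 2.0, 0.5])   # hypothetical sigma_k
xs = np.array([0.3, -1.0, 0.8])     # hypothetical observations x_k

prec = 1.0 + np.sum(1.0 / sigma**2)        # prior precision 1 plus data precisions
post_mean = np.sum(xs / sigma**2) / prec
post_var = 1.0 / prec

# brute-force check on a grid
theta = np.linspace(-5, 5, 200_001)
log_post = -theta**2 / 2 - np.sum((xs[:, None] - theta)**2
                                  / (2 * sigma[:, None]**2), axis=0)
w = np.exp(log_post - log_post.max()); w /= w.sum()
grid_mean = np.sum(w * theta)
print(post_mean, grid_mean)                          # agree
print(post_var, np.sum(w * (theta - grid_mean)**2))  # agree
```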
Yes
Example 5.3
In the example, we use Lemma 8.12 to choose a basis for \( {\left( {\operatorname{Ext}}_{A}^{1}\left( {M}_{i},{M}_{j}\right) \right) }_{1 \leq i, j \leq 2} \) . That is, we choose the basis \[ \left\{ \begin{matrix} {\partial }_{{t}_{11}\left( 1\right) },{\partial }_{{t}_{11}\left( 2\right) },{\partial }_{{t}_{12}}, \\ {\partial }_{{t}_{22}} \end{matrix}\right\} . \] By the given action of \( A \) on \( \left\{ {{M}_{1},{M}_{2}}\right\} \), we have that \[ {\partial }_{{t}_{12}}\left( {{t}_{11}\left( 1\right) {t}_{12} - {t}_{12}{t}_{22}}\right) = {t}_{11}\left( 1\right) {\partial }_{{t}_{12}}\left( {t}_{12}\right) + {\partial }_{{t}_{12}}\left( {{t}_{11}\left( 1\right) }\right) {t}_{12} \] \[ - {t}_{12}{\partial }_{{t}_{12}}\left( {t}_{22}\right) - {\partial }_{{t}_{12}}\left( {t}_{12}\right) {t}_{22} \tag{5.6} \] \[ = {t}_{11}\left( 1\right) - {t}_{22} = 0 \] because \( {t}_{11}\left( 1\right) = {t}_{22} = 0 \) in this example. Also \[ {\partial }_{{t}_{12}}\left( {{t}_{11}^{3}\left( 2\right) {t}_{12} - {t}_{12}{t}_{22}^{2}}\right) = {t}_{11}^{3}\left( 2\right) - {t}_{22}^{2} = 0, \tag{5.7} \] which says that the \( {\mathrm{{Ext}}}^{1} \) -dimension drops outside the parametrization: If \( {t}_{11}^{3}\left( 2\right) {t}_{12} \neq {t}_{12}{t}_{22}^{2} \), then \( {\partial }_{{t}_{12}} \) will be a super-position of the other basis-derivations.
No
Example 4.96
An interesting special case of a closed separable complete subring of \( {C}^{0}\left( \widehat{\mathbf{S}}\right) \) is \[ \mathcal{R} = {\left. C\left( \mathcal{Y}\left( \Omega ;\widehat{S}\right) \right) \right| }_{\widehat{\mathbf{S}}} = \left\{ {v \in {C}^{0}\left( \widehat{\mathbf{S}}\right) ;\exists \bar{v} \in C\left( {\mathcal{Y}\left( {\Omega ;\widehat{S}}\right) }\right) : v = \bar{v} \circ \delta }\right\} \tag{4.235} \]
No
Example 5.9
Example 5.9 Let \( E = {\mathbb{R}}^{2} \) . Consider the following non-empty convex subset \( X \) of \( E \) : \[ X = \{ \left( {u, v}\right) \in {\mathbb{R}}^{2} \mid 0 < u < 1\text{ and }0 < v \leq 1 - u\} \cup \{ \left( {u, v}\right) \in {\mathbb{R}}^{2} \mid u = 0\text{ and }0 \leq v \leq 1\} . \] Fix \( {x}_{0} = \left( {\frac{1}{2},\frac{1}{2}}\right) \in X \) . For each \( x \in X \) with \( x \neq \left( {0,0}\right) \) and \( x \neq {x}_{0} \), let \( {A}_{x} \) denote the following set: \( {A}_{x} = \) the closed region in \( X \) bounded by the line \( v = 1 - u \) and the line passing through the point \( x \) and parallel to the line \( v = 1 - u \) . Now, we define \( F : X \rightarrow {2}^{X} \) by \[ F\left( x\right) = \left\{ \begin{matrix} {A}_{x} \cup \{ \left( {0,0}\right) \} \cup \left\{ {\left( {\frac{1}{n + 2},\frac{1}{n + 2}}\right) : n = 1,2,3,\ldots }\right\} , \\ \text{ if }x \neq \left( {0,0}\right) \text{ and }x \neq {x}_{0}; \\ X,\text{ if }x = \left( {0,0}\right) ; \\ \{ \left( {0,0}\right) \} \cup \left\{ {\left( {\frac{1}{n + 2},\frac{1}{n + 2}}\right) : n = 1,2,\ldots }\right\} , \\ \text{ if }x = {x}_{0}. \end{matrix}\right. \] Then for each \( A \in \mathcal{F}\left( X\right) \) with \( {x}_{0} \in A \) and for each \( x \in \operatorname{co}\left( A\right), F\left( x\right) \cap \operatorname{co}\left( A\right) \) is closed in \( \operatorname{co}\left( A\right) \) . However, consider \( L = {\mathbb{R}}^{2} \) and \( x = \left( {0,0}\right) \) ; then \( F\left( x\right) \cap L = F\left( \left( {0,0}\right) \right) \cap {\mathbb{R}}^{2} = X \) is not closed in \( L \) . Note that \( F \) is a KKM-map such that \( \operatorname{cl}_{X}F\left( {x}_{0}\right) = F\left( {x}_{0}\right) \) is compact and the condition (c) of Lemma 5.30 is also satisfied. Thus Lemma 5.30 is applicable but Lemma 1 of [H. Brézis and Stampacchia (1972)] is not.
No
Example 4
Example 4 Find \( {A}^{-1} \) by Gauss-Jordan elimination starting from \( A = \left\lbrack \begin{array}{ll} 2 & 3 \\ 4 & 7 \end{array}\right\rbrack \) . There are two row operations and then a division to put 1 's in the pivots: \[ \left\lbrack \begin{array}{ll} A & I \end{array}\right\rbrack = \left\lbrack \begin{array}{llll} 2 & 3 & 1 & 0 \\ 4 & 7 & 0 & 1 \end{array}\right\rbrack \rightarrow \left\lbrack \begin{array}{rrrr} 2 & 3 & 1 & 0 \\ 0 & 1 & - 2 & 1 \end{array}\right\rbrack \;\text{ (this is }\left\lbrack \begin{array}{ll} U & {L}^{-1} \end{array}\right\rbrack \text{) } \] \[ \rightarrow \left\lbrack \begin{array}{rrrr} 2 & 0 & 7 & - 3 \\ 0 & 1 & - 2 & 1 \end{array}\right\rbrack \rightarrow \left\lbrack \begin{array}{rrrr} 1 & 0 & \frac{7}{2} & - \frac{3}{2} \\ 0 & 1 & - 2 & 1 \end{array}\right\rbrack \;\text{ (this is }\left\lbrack \begin{array}{ll} I & {A}^{-1} \end{array}\right\rbrack \text{). } \] This \( {A}^{-1} \) involves division by the determinant \( {ad} - {bc} = 2 \cdot 7 - 3 \cdot 4 = 2 \) . The code for \( X = \) inverse \( \left( A\right) \) can use rref, the "row reduced echelon form" from Chapter 3: \[ I = \text{ eye }\left( n\right) \] \( \% \) Define the \( n \) by \( n \) identity matrix \[ R = \operatorname{rref}\left( \left\lbrack {A\;I}\right\rbrack \right) \] \( \% \) Eliminate on the augmented matrix \( \left\lbrack \begin{array}{ll} A & I \end{array}\right\rbrack \) \[ X = R\left( { :, n + 1 : n + n}\right) \] \( \% \) Pick \( {A}^{-1} \) from the last \( n \) columns of \( R \) \( A \) must be invertible, or elimination cannot reduce it to \( I \) (in the left half of \( R \) ). Gauss-Jordan shows why \( {A}^{-1} \) is expensive. We must solve \( n \) equations for its \( n \) columns. To solve \( {Ax} = b \) without \( {A}^{-1} \), we deal with \( {one} \) column \( b \) to find one column \( x \) . In defense of \( {A}^{-1} \), we want to say that its cost is not \( n \) times the cost of one system \( {Ax} = b \) . Surprisingly, the cost for \( n \) columns is only multiplied by 3. This saving is because the \( n \) equations \( A{x}_{i} = {e}_{i} \) all involve the same matrix \( A \) . Working with the right sides is relatively cheap, because elimination only has to be done once on \( A \) . The complete \( {A}^{-1} \) needs \( {n}^{3} \) elimination steps, where a single \( x \) needs \( {n}^{3}/3 \) . The next section calculates these costs.
No
Example 3.11
For \( q = 2, p > 1 \), and \( \ell = 0 \), we have \[ {\phi }_{2,0} \cdot {\Omega }^{p} = C{\varphi }_{KM} \land {\varphi }_{KM} \land {\Omega }^{p - 2} + {C}^{\prime }\omega \left( {R}_{1}\right) \omega \left( {R}_{2}\right) {\varphi }_{0} \cdot {\Omega }^{p} \] for some nonzero constants \( C \) and \( {C}^{\prime } \) . Here \( \Omega \) denotes the Kähler form on the Hermitian domain \( D \) . But we will not need this. In view of Lemma 3.8 and Proposition 3.6, we see for \( q + \ell \) even that \[ \omega \left( k\right) \xi = \det {\left( \mathbf{k}\right) }^{m/2 + \ell }\xi \tag{3.14} \] for \( k \in K \) . We let \( \Xi \left( {g, s}\right) \) be the section in the induced representation corresponding to the Schwartz function \( \xi \) via (2.4).
No
Exercise 1.1.4
Exercise 1.1.4 Show that \( \left\{ {c}_{\alpha }\right\} \) is summable if and only if \( \left\{ \left| {c}_{\alpha }\right| \right\} \) is summable; show also that \( \left\{ {c}_{\alpha }\right\} \) is summable if and only if \[ \left\{ {\left| {\mathop{\sum }\limits_{{\alpha \in A}}{c}_{\alpha }}\right| : A \in F\left( I\right) }\right\} \] is bounded.
No
Example 5
Example 5 Find a general solution of (20) \[ {\mathbf{x}}^{\prime }\left( t\right) = \mathbf{A}\mathbf{x}\left( t\right) ,\text{ where }\mathbf{A} = \left\lbrack \begin{array}{rrr} 1 & - 2 & 2 \\ - 2 & 1 & 2 \\ 2 & 2 & 1 \end{array}\right\rbrack . \] Solution. A is symmetric, so we are assured that A has three linearly independent eigenvectors. \( {}^{ \dagger } \) To find them, we first compute the characteristic equation for A: \[ \left| {\mathbf{A} - r\mathbf{I}}\right| = \left| \begin{matrix} 1 - r & - 2 & 2 \\ - 2 & 1 - r & 2 \\ 2 & 2 & 1 - r \end{matrix}\right| = - {\left( r - 3\right) }^{2}\left( {r + 3}\right) = 0. \] Thus the eigenvalues of \( \mathbf{A} \) are \( {r}_{1} = {r}_{2} = 3 \) and \( {r}_{3} = - 3 \) . Notice that the eigenvalue \( r = 3 \) has multiplicity 2 when considered as a root of the characteristic equation. Therefore, we must find two linearly independent eigenvectors associated with \( r = 3 \) . Substituting \( r = 3 \) in \( \left( {\mathbf{A} - r\mathbf{I}}\right) \mathbf{u} = \mathbf{0} \) gives \[ \left\lbrack \begin{array}{rrr} - 2 & - 2 & 2 \\ - 2 & - 2 & 2 \\ 2 & 2 & - 2 \end{array}\right\rbrack \left\lbrack \begin{array}{l} {u}_{1} \\ {u}_{2} \\ {u}_{3} \end{array}\right\rbrack = \left\lbrack \begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right\rbrack \] This system is equivalent to the single equation \( - {u}_{1} - {u}_{2} + {u}_{3} = 0 \), so we can obtain its solutions by assigning an arbitrary value to \( {u}_{2} \), say \( {u}_{2} = v \), and an arbitrary value to \( {u}_{3} \), say \( {u}_{3} = s \) . Solving for \( {u}_{1} \), we find \( {u}_{1} = {u}_{3} - {u}_{2} = s - v \) . Therefore, the eigenvectors associated with \( {r}_{1} = {r}_{2} = 3 \) can be expressed as \[ \mathbf{u} = \left\lbrack \begin{matrix} s - v \\ v \\ s \end{matrix}\right\rbrack = s\left\lbrack \begin{array}{l} 1 \\ 0 \\ 1 \end{array}\right\rbrack + v\left\lbrack \begin{array}{r} - 1 \\ 1 \\ 0 \end{array}\right\rbrack . \] By first taking \( s = 1, v = 0 \) and then taking \( s = 0, v = 1 \), we get the two linearly independent eigenvectors (21) \[ {\mathbf{u}}_{1} = \left\lbrack \begin{array}{l} 1 \\ 0 \\ 1 \end{array}\right\rbrack ,\;{\mathbf{u}}_{2} = \left\lbrack \begin{array}{r} - 1 \\ 1 \\ 0 \end{array}\right\rbrack . \] For \( {r}_{3} = - 3 \), we solve \[ \left( {\mathbf{A} + 3\mathbf{I}}\right) \mathbf{u} = \left\lbrack \begin{array}{rrr} 4 & - 2 & 2 \\ - 2 & 4 & 2 \\ 2 & 2 & 4 \end{array}\right\rbrack \left\lbrack \begin{array}{l} {u}_{1} \\ {u}_{2} \\ {u}_{3} \end{array}\right\rbrack = \left\lbrack \begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right\rbrack \] to obtain the eigenvectors \( \operatorname{col}\left( {-s, - s, s}\right) \) . Taking \( s = 1 \) gives \[ {\mathbf{u}}_{3} = \left\lbrack \begin{array}{r} - 1 \\ - 1 \\ 1 \end{array}\right\rbrack \] Since the eigenvectors \( {\mathbf{u}}_{1},{\mathbf{u}}_{2} \), and \( {\mathbf{u}}_{3} \) are linearly independent, a general solution to (20) is \[ \mathbf{x}\left( t\right) = {c}_{1}{e}^{3t}\left\lbrack \begin{array}{l} 1 \\ 0 \\ 1 \end{array}\right\rbrack + {c}_{2}{e}^{3t}\left\lbrack \begin{array}{r} - 1 \\ 1 \\ 0 \end{array}\right\rbrack + {c}_{3}{e}^{-{3t}}\left\lbrack \begin{array}{r} - 1 \\ - 1 \\ 1 \end{array}\right\rbrack . \] --- \( {}^{ \dagger } \) See Linear Algebra and Its Applications, 3rd updated ed., by David C. Lay (Addison-Wesley, Reading, Mass., 2006). ---
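The eigenvalues and the hand-computed eigenvectors are quick to confirm with NumPy:

```python
import numpy as np

A = np.array([[1., -2., 2.], [-2., 1., 2.], [2., 2., 1.]])
print(np.linalg.eigvalsh(A))  # [-3.  3.  3.]

for r, u in [(3, [1, 0, 1]), (3, [-1, 1, 0]), (-3, [-1, -1, 1])]:
    u = np.array(u, dtype=float)
    print(np.allclose(A @ u, r * u))  # True for all three
```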
No
Exercise 9.10
Exercise 9.10. A dog’s weight \( W \) (pounds) changes over \( D \) days according to the following function: \[ W = f\left( {D,{p}_{1},{p}_{2}}\right) = \frac{{p}_{1}}{1 + {e}^{{2.462} - {p}_{2}D}}, \tag{9.9} \] where \( {p}_{1} \) and \( {p}_{2} \) are parameters. a. This function can be used to describe the data wilson. Make a scatterplot with the wilson data. What is the long term weight of the dog? b. Generate a contour plot for the likelihood function for these data. What are the values of \( {p}_{1} \) and \( {p}_{2} \) that optimize the likelihood? You may assume that \( {p}_{1} \) and \( {p}_{2} \) are both positive. c. With your values of \( {p}_{1} \) and \( {p}_{2} \) add the function \( W \) to your scatterplot and compare the fitted curve to the data.
Yes
Example 3.13
Under the same setting as in the previous example, let \( a = 2 \) . Then, (i) \( X \) is MJ-canonical if and only if \( N \leq {2d} - 1 \) , (ii) \( X \) is MJ-log canonical if and only if \( N \leq {2d} \) . Note that these conditions on \( N \) and \( d \) are only necessary conditions for a general \( X \) to be MJ-canonical or MJ-log canonical, as seen in Proposition 3.3.
No
Exercise 3.31
Exercise 3.31. (Continuation of Exercise 3.27) Consider matrices of the form \[ \left( \begin{matrix} p & 1 - p & a \\ q & 1 - q & b \\ 0 & 0 & c \end{matrix}\right) , \] where \( 0 < p, q < 1, a \) and \( b \) are real, and \( c = \pm 1 \) .
No
Example 12.1
Example 12.1 (Linear prediction of NMR laser data). In Chapter 1 we compared a simple nonlinear prediction scheme to what we vaguely called the "best" linear predictor of the NMR laser data (Fig. 1.2). These predictions were made with an AR model of order \( {M}_{\mathrm{{AR}}} = 6 \), where it turned out that the improvements resulting from increasing \( {M}_{\mathrm{{AR}}} \) were negligible. This is in agreement with the fact that the autocorrelation function of these data (Appendix B.2) decays very fast. The one-step prediction error there was found to be \( e = {962} \) units.
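To reproduce the flavor of this comparison, one can fit an AR model by least squares; a Python sketch (the NMR laser series is not reproduced here, so the demo runs on a synthetic AR(2) series):

```python
import numpy as np

def ar_one_step_rms(x, order=6):
    # least-squares AR(order) fit: predict x[t] from x[t-1], ..., x[t-order]
    T = len(x)
    X = np.array([x[t - order:t][::-1] for t in range(order, T)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sqrt(np.mean((y - X @ a) ** 2))

# synthetic stationary AR(2) series standing in for the (unavailable) laser data
rng = np.random.default_rng(0)
x = np.zeros(5000)
e = rng.normal(size=5000)
for t in range(2, 5000):
    x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2] + e[t]
print(ar_one_step_rms(x, order=6))  # close to the noise standard deviation 1
```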
Yes