I have been trying to write a program that will allow me to calculate the constant e to an extremely precise value. This is the equation I used: 1 + 1/1! + 1/2! and so on. My goal was to calculate e to 9001 digits of precision (aka, OVER 9000!). However, with the equation I used, I had to do some rounding. For example, 1/(1*2*3) would end up getting me 0.16666666666 (the 6 repeats until the 9001st digit, where it is rounded to a 7). The rounding rules I used were: if the number cannot be divided evenly within 9001 digits, I would look at the 9002nd digit; if it is 5 or above, round up, else round down. Now my question is: in my circumstance, is it possible to figure out at most how many digits at the end would be made inaccurate because of the rounding? Thanks.
If you keep $n$ digits beyond what you want, each term can contribute a rounding error of at most $\pm 5 \cdot 10^{-n}$. Then if you add up $k$ terms, the maximum error is $\pm 5k \cdot 10^{-n}$. Of course, it is unlikely to be that bad. Alternately, if you keep no guard digits, the error could be $\frac k2$ in the last place.
You will want to try to calculate $$\sum_{n=0}^{2962} \frac{1}{n!}$$ since $2963! \gt 10^{9002}$. All these terms will require rounding except when $n=0,1,2$ so you have to round $2960$ terms. So the maximum absolute error will be $2960 \times \frac{1}{2} \times 10^{-9001}$ and you cannot absolutely trust the $8998$th decimal place and the $8997$th may be out by $1$, though probably is not. More realistically the errors may largely cancel each other out, so assuming the errors are independently uniformly distributed, the standard deviation of each error is $\sqrt{\frac{1}{12}} \times 10^{-9001} $ so the standard deviation of the sum of the rounding errors is $\sqrt{\frac{2960}{12}} \times 10^{-9001} $ and taking three times this suggests that you should not trust the $9000$th decimal place and the $8999$th may be out by $1$ but is unlikely to be wrong by more than this.
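One way to sidestep the rounding question entirely (a sketch in Python, not the asker's program; the names and the guard-digit count are my choices) is to sum the series in scaled integer arithmetic, so every term is rounded down and a few guard digits absorb the accumulated floor errors:

```python
# Sum 1 + 1/1! + 1/2! + ... at a scale of 10**(digits + guard); each
# floor division rounds a term down by less than 1 unit at that scale,
# so the total error is under (number of terms) * 10**-(digits + guard).
def e_digits(digits, guard=10):
    scale = 10 ** (digits + guard)
    total, term, n = 0, scale, 0   # term starts as 1/0! at full scale
    while term > 0:                # terms vanish once n! exceeds the scale
        total += term
        n += 1
        term //= n                 # next term 1/n!, rounded down
    return total // 10 ** guard    # drop the guard digits

approx = e_digits(9001)            # integer: floor(e * 10**9001)
```

With roughly $2970$ terms and $10$ guard digits, the total accumulated error is below $3 \times 10^{-9008}$, so all $9001$ reported digits are trustworthy unless the digits of e just past the cutoff happen to form an extremely long run of zeros.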
As I understand it, the Lebesgue integral is the limit of the sum of the areas of horizontal (lying) rectangles that approximate the area under the curve. And this is the main difference from the Riemann integral, which is the limit of vertical (standing) rectangles. If the function $f: (X,\Sigma)\rightarrow (Y,B(Y))$ is simple then it can be expressed as follows: $$f(x)=\sum_{i=1}^{n}1_{F_i}(x)\cdot c_i$$ where the sets $F_i$ may intersect with each other and $\forall i\quad F_i \in \Sigma$ and $\cup_iF_i=X$. A nonnegative function $f$ can be approximated by simple functions, and a general function $f$ can be represented as $f=f^++f^-$, where $f^+=\max(0,f)$ and $f^-=\min(0,f)$. Hence $$\int f(x)\mu(dx)=\int f^+(x)\mu(dx)+\int f^-(x)\mu(dx)$$ Since $|f|=f^+-f^-$, $f$ is integrable if and only if $|f|$ is integrable. Now I'm confused. Does it mean that we can't use the Lebesgue integral to calculate $$\int_0^{+\infty}\frac{\sin(x)}{x}dx$$ because the function $g(x)=\frac{|\sin(x)|}{|x|}$ is not integrable on $(\epsilon,+\infty)$, where $\epsilon>0$? I'm confused because I think that it is possible to calculate the Riemann integral of the function $\frac{\sin(x)}{x}$. However, every Riemann integrable function is also Lebesgue integrable.
The Riemann integral is only defined for finite intervals. $$\int\limits_{a}^{b}f(x)dx = \lim_{n \to \infty} \frac{(b-a)}{n}\sum_{k = 1}^{n}f\left(a + \frac{k(b-a)}{n}\right)$$ This expression only makes sense if $a, b \in \mathbb R$. Your integral is actually an improper integral. It is the limit of Riemann integrals, but it is not a Riemann integral itself: $$\int_0^{+\infty}\frac{\sin(x)}{x}dx = \lim_{b\to +\infty} \int_0^{b}\frac{\sin(x)}{x}dx$$
You can show that $\int_0^{+\infty}\frac{\sin(x)}{x}dx\tag 1$ converges as an improper Riemann Integral (integrate by parts) but that $\int_0^{+\infty}\left | \frac{\sin(x)}{x} \right |dx\tag 2$ does not converge i.e. it is not Lebesgue integrable. (use convexity of $f(x)=1/x$ and compare to a harmonic series). It is true however, that if $\int_{a}^{b}f(x)dx\tag 3$ converges as a Riemann integral, then it is Lebesgue integrable (use the fact that the upper and lower sums are simple functions which can be used in an application of the MCT).
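A numerical illustration of the two statements (a sketch, not a proof; the cutoffs and grid sizes are arbitrary): the signed integral settles near $\pi/2$, while the integral of the absolute value keeps growing like $\frac{2}{\pi}\log b$.

```python
import numpy as np

# np.trapz was renamed np.trapezoid in NumPy 2.0; support both.
trapezoid = getattr(np, "trapezoid", getattr(np, "trapz", None))

def integral(f, b, n=400_000):
    x = np.linspace(1e-9, b, n)      # start just above 0 (sin(x)/x -> 1)
    return trapezoid(f(x), x)

cutoffs = (100, 1_000, 10_000)
signed = [integral(lambda x: np.sin(x) / x, b) for b in cutoffs]
absval = [integral(lambda x: np.abs(np.sin(x)) / x, b) for b in cutoffs]
```

`signed` approaches $\pi/2 \approx 1.5708$, while `absval` increases without bound as the cutoff grows.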
I'm trying to implement a mathematical formula in a program I'm making, but while the programming is no problem, I'm having trouble with some of the math. I need to calculate $\sin(\alpha(x,y))$ with $\alpha(x,y)$ the local tilt angle at $(x,y)$. I have a $2$-dimensional square grid, with at each point the height, representing a $3$-dimensional terrain. To find the tilt at a point I can use the heights of its direct neighbors. So $h(x+1,y)$ can be used, however $h(x+2,y)$ cannot. I also know the distance between two neighboring points ($dx$). By tilt I mean the angle between the normal at a point on the terrain and a vector pointing straight up. This seems like a not too hard problem, but I can't seem to figure out how to do it. Anyone got a good way to do this? Thanks!
A helpful construct here would be the normal vector to our terrain. Our terrain is modeled by the equation $$ z = h(x,y) $$ Or equivalently, $$ z - h(x,y) = 0 $$ We can define $g(x,y,z) = z - h(x,y)$. It turns out that the vector normal to this level set is given by $$ \operatorname{grad}(g) = \newcommand{\pwrt}[2]{\frac{\partial #1}{\partial #2}} \left\langle \pwrt{g}{x},\pwrt gy, \pwrt gz \right \rangle = \left\langle -\pwrt{h}{x},-\pwrt hy, 1 \right \rangle := v(x,y) $$ We can calculate the angle between this normal and the vertical $\hat u = \langle 0,0,1 \rangle$ using the formula $$ \cos \theta = \frac{u \cdot v}{\|u\| \|v\|} $$ in particular, we find that $$ \cos \theta = \frac{\hat u \cdot v}{\|\hat u\| \|v\|} = \frac{1}{\sqrt{1 + \left( \pwrt hx \right)^2 + \left( \pwrt hy \right)^2}} $$ We may approximate $$ \pwrt hx(x,y) \approx \frac{h(x+dx,y) - h(x-dx,y)}{2(dx)}\\ \pwrt hy(x,y) \approx \frac{h(x,y+dy) - h(x,y-dy)}{2(dy)} $$ Note: since you have to calculate $\sin \theta$, you find $$ \sin \theta = \sqrt{1 - \cos^2 \theta} = \frac{\sqrt{\left( \pwrt hx \right)^2 + \left( \pwrt hy \right)^2}}{\sqrt{1 + \left( \pwrt hx \right)^2 + \left( \pwrt hy \right)^2}} $$
Option 1: estimate the partial derivatives using the finite difference scheme $\frac{h(x+1,y)-h(x-1,y)}{2\Delta x}$, $\frac{h(x,y+1)-h(x,y-1)}{2\Delta x}$ and use the normal vector $(h_x,h_y,-1)\to\cos\theta=1/\sqrt{h_x^2+h_y^2+1}$. Option 2: fit a least squares plane $z=ax+by+c$ in the $3\times 3$ neighborhood and use $(a,b,-1)\to\cos\theta=1/\sqrt{a^2+b^2+1}$.
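Option 1 is easy to sketch in code (names are mine; this assumes a NumPy-style array of heights and uses wrap-around differences, so only interior points are meaningful):

```python
import numpy as np

def sin_tilt(h, dx):
    """sin of the tilt angle from heights h (2-D array) on a square grid of spacing dx."""
    # central differences; np.roll wraps, so edge rows/columns are not meaningful
    hx = (np.roll(h, -1, axis=0) - np.roll(h, 1, axis=0)) / (2 * dx)
    hy = (np.roll(h, -1, axis=1) - np.roll(h, 1, axis=1)) / (2 * dx)
    g2 = hx**2 + hy**2                  # |grad h|^2 = tan^2(theta)
    return np.sqrt(g2 / (1.0 + g2))     # sin(theta)

# check against an exact plane z = a*x + b*y, where tan(theta) = hypot(a, b)
a, b, dx = 0.3, 0.4, 1.0
ii, jj = np.meshgrid(np.arange(20), np.arange(20), indexing="ij")
h = a * ii * dx + b * jj * dx
expected = np.hypot(a, b) / np.sqrt(1 + a**2 + b**2)
```

On the plane, central differences recover the slopes exactly, so interior entries of `sin_tilt(h, dx)` match `expected` to machine precision.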
I am trying to calculate the partial derivatives of a function of several variables, and it is pushing my understanding of matrix calculus. The function is the following $$f(x) = M D(x) R x$$ where $D(x)$ is the diagonal matrix with the vector $x = (x_1,\dots,x_n)$ on the main diagonal, and $M$ and $R$ are $n \times n $ real matrices. What I am looking for is the matrix of partial derivatives $$\frac{\partial f(x)}{\partial x_i}$$ I can derive this by expanding the above into non-matrix notation, but it is quite messy and I can't figure out how to simplify it. Ideally I'd like to have $\partial f(x) / \partial x_i$ in terms of $M$ and $R$. I'm hoping this is a fairly straightforward application of matrix calculus rules, but I can't seem to find any useful way of dealing with this combination of matrix functions. Thanks!
I think expanding the matrix multiplication, as you have tried, is a good idea; the result can be assembled back into matrix form at the end. What you are looking for is the Jacobian matrix ($J_f \in \mathbb{R^{n \times n}}$): $$ J_f = \frac{\partial f}{\partial x} = \frac{\partial}{\partial x}\left( M D(x) R x\right)=M \frac{\partial}{\partial x}\left(D(x) R x\right)$$ First, expand the term $R x$, which is a column vector, in index notation: $$(R x)_j = \sum_k^n R_{j,k}x_k$$ As an intermediate step, compute $D(x)R x$ as: $$(D(x) R x)_j = \sum_k^n x_j R_{j,k}x_k$$ Now take derivatives w.r.t. $x_i$ (row $j$, column $i$ of the Jacobian): $$\left(\frac{\partial D(x) R x}{\partial x}\right)_{j,i} = \frac{\partial (D(x) R x)_j}{\partial x_i} = \sum_k^n R_{j,k} \frac{\partial}{\partial x_i} (x_j x_k)$$ There are two cases to distinguish: Case 1, $i \neq j$ (off-diagonal terms): here $\frac{\partial}{\partial x_i} (x_j x_k) = x_j \delta_{i,k}$, so $$\left(\frac{\partial D(x) R x}{\partial x}\right)_{j,i} =x_j R_{j,i} \equiv (D(x)R)_{j,i}$$ Case 2, $i = j$ (diagonal terms): here $\frac{\partial}{\partial x_j} (x_j x_k) = x_k - x_j \delta_{j,k}+2x_j\delta_{j,k}=x_k +x_j\delta_{j,k}$, so $$\left(\frac{\partial D(x) R x}{\partial x}\right)_{j,j} = \left(\sum_k^n R_{j,k} x_k\right) + R_{j,j}x_j \equiv (\text{diag}(Rx))_{j,j}+(D(x)R)_{j,j}$$ Recapping the two cases and putting them in matrix form: $$\left(\frac{\partial D(x) R x}{\partial x}\right)_{j,i} = (D(x)R)_{j,i} + \delta_{i,j}(Rx)_j, \qquad \frac{\partial D(x) R x}{\partial x} = D(x)R + \text{diag}(R x)$$ Leading to: $$ J_f = M[D(x)R+\text{diag}(Rx)]$$ Edit: To correct calculation error. Special thanks to @daw.
A quick numerical (Julia) test of Carlos's and Rodrigo's solutions:

    n = 3; x = 8*randn(n); M = 38*randn(n,n); R = 88*randn(n,n); dx = randn(n)*1e-6;
    D = diagm(x); dD = diagm(dx); Dr = diagm(diag(R));
    f  = M*D*R*x;
    df = M*(D+dD)*R*(x+dx) - f;
    J1 = M*(D*R + diagm((R+Dr)*x));   # Carlos's jacobian
    J2 = M*(D*R + diagm(R*x));        # Rodrigo's jacobian

    df
    3-element Array{Float64,1}:
      0.105202
     -0.0797321
      0.300291

    J1*dx
    3-element Array{Float64,1}:
      0.152478
     -0.0989752
      0.351571

    J2*dx
    3-element Array{Float64,1}:
      0.105202
     -0.0797322
      0.300291

The answers differ by one term, i.e. Carlos uses $${\rm diag}((R+D_R)x)$$ in place of Rodrigo's $${\rm diag}(Rx)$$
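The same finite-difference test in Python, checking only the corrected Jacobian $J = M[D(x)R + \operatorname{diag}(Rx)]$ (my translation, with arbitrary seed and sizes):

```python
import numpy as np

# Finite-difference test of J = M (D(x) R + diag(R x)).
rng = np.random.default_rng(0)
n = 4
x = rng.normal(size=n)
M = rng.normal(size=(n, n))
R = rng.normal(size=(n, n))
dx = 1e-7 * rng.normal(size=n)        # small perturbation

f = lambda z: M @ np.diag(z) @ R @ z  # f(x) = M D(x) R x
df = f(x + dx) - f(x)                 # first-order change in f
J = M @ (np.diag(x) @ R + np.diag(R @ x))
```

`df` and `J @ dx` agree up to the second-order remainder, which is of size $O(\|dx\|^2)$.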
$x$ and $y$ are reals such that $1 \le y \le 2$ and $2y \le xy + 2$ . Calculate the minimum value of $$\large \frac{x^2 + 4}{y^2 + 1}$$ This problem is adapted from a recent competition... It really is. There must be better answers.
We first need to prove that $x \ge 0$ . It is true that $2y \le xy + 2 \iff 0 \le \dfrac{2(y - 1)}{y} \le x$ (because $1 \le y$ ). Moreover, $$y \le 2 \implies \left\{ \begin{align} y^2 &\le 2y\\ xy + 2 &\le 2x + 2 \end{align} \right. (x, y \ge 0)$$ We have that $$\frac{x^2 + 4}{y^2 + 1} \ge \frac{2x + 3}{2y + 1} \ge \frac{2x + 3}{xy + 3} \ge \frac{2x + 3}{2x + 3} = 1$$ The equality sign occurs when $x = 1$ and $y = 2$ .
$$ x y \ge 2y-2\Rightarrow x\ge 2-\frac {2}{\max y}\ge 1 $$ because $y > 0$. Now $$ \frac{\min x^2+4}{\max y^2+1}\le \frac{x^2+4}{y^2+1} $$ hence $$ \frac 55 = 1 \le \frac{x^2+4}{y^2+1} $$
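Either way, the claimed minimum of $1$ at $(x,y)=(1,2)$ can be sanity-checked by brute force over a fine grid of feasible points:

```python
import numpy as np

# Evaluate (x^2+4)/(y^2+1) over a grid, keeping only points satisfying
# the constraints 1 <= y <= 2 and 2y <= xy + 2 (which forces x >= 0).
x = np.linspace(0.0, 5.0, 501)
y = np.linspace(1.0, 2.0, 201)
X, Y = np.meshgrid(x, y)
feasible = 2 * Y <= X * Y + 2
values = (X**2 + 4) / (Y**2 + 1)
grid_min = values[feasible].min()   # attained at (x, y) = (1, 2)
```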
In my calculations I stumbled upon the following integral which is a little bit daunting. I couldn't come up with the proper variable substitution solution. Can anybody please explain using which method I can calculate the following integral ?: $$ \int_{-\infty}^{\infty}\mathrm{e}^{-\epsilon} \exp\left(\, -e^{-\epsilon} \left[\, 1 + \exp\left(\, u_{2}\ -\ u_{1}\,\right)\,\right]\,\right) \,\mathrm{d}\epsilon $$ Any ideas would be appreciated.
As many already suggested, use the substitution $$y = e^{-\epsilon} ~~~~~~~ \text{d}y = -e^{-\epsilon}\ \text{d}\epsilon$$ The limits will now run from $+\infty$ to $0$: $$\large \int_{+\infty}^0 \underbrace{-e^{-\epsilon}\ \text{d}\epsilon}_{\text{d}y}\ e^{-\overbrace{e^{-\epsilon}}^{y}\ A}$$ Hence, exchanging the limits and the sign: $$\int_0^{+\infty} e^{-y A}\ \text{d}y$$ where $A = 1 + e^{u_2 - u_1}$. The result is trivial and it's $$\boxed{\frac{1}{A} = \frac{1}{1 + e^{u_2 - u_1}}}$$
Sub $x=e^{-\epsilon}$. This is then $$\int_0^{\infty} dx \, e^{-a x} = \frac1{a} $$ where $a=1+e^{u_2-u_1}$.
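A quick numerical check of the closed form $1/A$ for sample values of $u_1, u_2$ (my own sketch):

```python
import numpy as np

u1, u2 = 0.7, -0.3                 # arbitrary sample values
A = 1 + np.exp(u2 - u1)

# integrate e^(-eps) * exp(-e^(-eps) * A) over a wide range of eps
eps = np.linspace(-30.0, 40.0, 700_001)
integrand = np.exp(-eps) * np.exp(-np.exp(-eps) * A)

# np.trapz was renamed np.trapezoid in NumPy 2.0; support both.
trapezoid = getattr(np, "trapezoid", getattr(np, "trapz", None))
val = trapezoid(integrand, eps)    # should be very close to 1/A
```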
(1) Let $X_1,X_2$ be two independent gamma-distributed random variables: $X_1 \sim \Gamma(r,1), X_2 \sim \Gamma(s,1)$ . Are $Z_1:=\frac{X_1}{X_1+X_2}$ and $Z_2:= X_1+X_2$ independent? If yes, I have to find their density. I have already found that $X_1=Y_1Y_2$ and $X_2=Y_1(1-Y_2)$ (writing $Y_1 = Z_2$ and $Y_2 = Z_1$). But I am not done. What is the domain of $Y_1$ and $Y_2$ ? Since $X_1,X_2>0$ I have that $Y_1>0$ and $0<Y_2<1$ . (2) If $X_1 \sim B (a,b), X_2 \sim B(a+b,c),$ prove $X_1 X_2 \sim B(a,b+c)$ . (3) If $X \sim N(0,\sigma^2),$ calculate $E(X^n).$ Writing $t = \sigma^2$, what I know is that $$E(X^n) = \int_{-\infty}^{\infty}x^n\frac{1}{\sqrt{2\pi t}}e^{-x^2/2t}\;dx$$ I've tried solving it numerous times by parts and then taking limits, but I keep getting $0$ and not $3t^2$ (for $n=4$)! Can somebody give me a better direction?
The integral $\displaystyle \int_{-\infty}^\infty x^n\frac{1}{\sqrt{2\pi t}}e^{-x^2/2t}\;dx$ is in fact $0$ when $n$ is odd, since it's the integral of an odd function over an interval that is symmetric about $0.$ (If the integrals of the positive and negative parts were both infinite, then complications would arise, but we don't have that problem in this case.) Here is what I suspect you did: Let \begin{align} & u = \dfrac {x^2} {2t} \\[6pt] & t\, du = x \, dx \\[6pt] & x^n = \big( 2tu \big)^{n/2} \end{align} Then as $x$ goes from $-\infty$ to $+\infty,$ $u$ goes from $+\infty$ down to $0$ and back up to $+\infty,$ so you get $$ \int_{+\infty}^{+\infty} $$ and you conclude that that is $0.$ But you shouldn't use a non-one-to-one substitution. Instead, write $$ \int_{-\infty}^{+\infty} x^n \frac 1 {\sqrt{2\pi t}} e^{-x^2/(2t)} \, dx = 2\int_0^{+\infty} x^n \frac 1 {\sqrt{2\pi t}} e^{-x^2/(2t)} \, dx $$ i.e. $$ \int_{-\infty}^{+\infty} = 2\int_0^{+\infty}. $$ This is correct when $n$ is even. Then go on from there, using the substitution above. Postscript: With $n=4,$ we have $$ x^4 \,dx = x^3\big(x\,dx\big) = (2tu)^{3/2} \big(t\,du\big) $$ and so \begin{align} & 2\int_0^{+\infty} x^3 \frac 1 {\sqrt{2\pi t}} e^{-x^2/(2t)} \big(x \, dx\big) \\[8pt] = {} & \frac 2 {\sqrt{2\pi t}} \int_0^{+\infty} (2tu)^{3/2} e^{-u} \big(t\,du\big) \\[8pt] = {} & \frac 2 {\sqrt{2\pi t}} \cdot (2t)^{3/2} \cdot t \int_0^{+\infty} u^{3/2} e^{-u} \, du \\[8pt] = {} & \frac 4 {\sqrt{\pi t}} \cdot t^{5/2} \Gamma\left( \frac 5 2 \right) \tag 1 \\[8pt] = {} & \frac{4t^2}{\sqrt\pi} \cdot\frac 1 2 \cdot \frac 3 2 \Gamma\left( \frac 1 2 \right) \\[8pt] = {} & 3t^2. \end{align} Starting on line $(1)$ you need to know some properties of the Gamma function.
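The value $E(X^4) = 3t^2$ can be checked directly by numerical integration of the density (a sanity check; $t$ is chosen arbitrarily):

```python
import numpy as np

t = 2.0
x = np.linspace(-50.0, 50.0, 400_001)
density = np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

# np.trapz was renamed np.trapezoid in NumPy 2.0; support both.
trapezoid = getattr(np, "trapezoid", getattr(np, "trapz", None))
m4 = trapezoid(x**4 * density, x)   # should be 3 * t**2 = 12
m3 = trapezoid(x**3 * density, x)   # odd moment: should vanish
```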
If $X_1 \sim B (a,b), X_2 \sim B(a+b,c),$ prove $X_1 X_2 \sim B(a,b+c)$ Suppose $Y_1,Y_2,Y_3$ are independent and have gamma distributions $$ \text{constant} \times x^{k-1} e^{-x} \, dx \quad \text{for } x \ge0 $$ for $k=a,b,c$ respectively. Then the distribution of $X_1$ is the same as that of $Y_1/(Y_1+Y_2),$ and the distribution of $X_2$ is the same as that of $(Y_1+Y_2)/(Y_1+Y_2+Y_3).$ If the joint distribution of $(X_1,X_2),$ rather than only the two marginal distributions, is the same as that of $\big( Y_1, Y_1+Y_2\big)/\big(Y_1+Y_2+Y_3\big),$ then you can see how the result would follow. But that is a big "if" and that information is not given in your question.
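The claimed identity can at least be spot-checked through moments: since $X_1$ and $X_2$ are independent, $E[(X_1X_2)^k] = E[X_1^k]\,E[X_2^k]$, and the Beta moment formula $E[Z^k] = \frac{\Gamma(\alpha+k)\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\alpha+\beta+k)}$ for $Z \sim B(\alpha,\beta)$ telescopes to the $k$-th moment of $B(a,b+c)$. A deterministic sketch (parameters are arbitrary):

```python
from math import gamma

def beta_moment(alpha, beta, k):
    # E[Z^k] for Z ~ Beta(alpha, beta)
    return gamma(alpha + k) * gamma(alpha + beta) / (gamma(alpha) * gamma(alpha + beta + k))

a, b, c = 1.5, 2.0, 0.7
# left: E[X1^k] E[X2^k]; right: k-th moment of Beta(a, b+c)
pairs = [(beta_moment(a, b, k) * beta_moment(a + b, c, k),
          beta_moment(a, b + c, k)) for k in range(1, 8)]
```

Since a distribution on $[0,1]$ is determined by its moments, matching all $k$ would prove the claim; the code checks the first few.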
Calculate the limit using integrals: $$ \lim_{n \rightarrow \infty} \sum_{k=1}^{n} \frac{1+n}{3k^2+n^2} $$ My attempt: $$\sum_{k=1}^{n} \frac{1+n}{3k^2+n^2} = \frac{1}{n} \sum_{k=1}^{n} \frac{\frac{1}{n}+1}{3(k/n)^2+1} = \\ \frac{1}{n}\cdot (1/n + 1) \sum_{k=1}^{n} \frac{1}{3(k/n)^2+1} $$ OK, I know that when I am taking the limit I should replace (by the approximation theorem) $$ \sum_{k=1}^{n} \frac{1}{3(k/n)^2+1}$$ with $$ \int_{0}^{1} \frac{dx}{1+3x^2}$$ but I still don't know what to do (and why) with the $$ \frac{1}{n}\cdot (1/n + 1) $$ part. In many solutions we just ignore the $\frac{1}{n}$, but I don't know why, and where I have a slightly more 'difficult' expression like $ \frac{1}{n}\cdot (1/n + 1) $ I completely don't know what I should do...
$$\frac{1}{n}\left(1+\frac{1}{n}\right)\sum...=\underbrace{\frac{1}{n}\sum...}_{\to \int...}+\underbrace{\frac{1}{n}\underbrace{\left(\frac{1}{n}\sum_{}...\right)}_{\to \int...}}_{\to 0}\to \int...$$
Just ignore the $1$ in the numerator: its contribution is $\sum_{k=1}^{n} \frac{1}{3k^2+n^2} \le \frac{n}{n^2} \to 0$. Then put $\frac{1}{n}=dx$ and $\frac{k}{n}=x$. So your integral becomes $$\int_0^1\frac{dx}{3x^2+1}=\frac{\arctan\sqrt{3}}{\sqrt{3}}$$ As for your question about what to do with $\frac{1}{n}(\frac{1}{n}+1)$: since $\frac{1}{n}+1\to1$ as $n\to\infty$, and the limit of a product is the product of the limits (when each factor's limit exists), this factor just contributes its limit $1$. Hope this will be helpful!
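A quick numerical check that the partial sums do approach $\arctan(\sqrt3)/\sqrt3 = \pi/(3\sqrt3)$:

```python
import math

def partial_sum(n):
    # the sum whose limit is being computed
    return sum((1 + n) / (3 * k * k + n * n) for k in range(1, n + 1))

limit = math.atan(math.sqrt(3)) / math.sqrt(3)   # = pi / (3*sqrt(3)) ~ 0.6046
```

The error of `partial_sum(n)` decays like $O(1/n)$, consistent with a Riemann-sum approximation.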
I have seen ways to evaluate this integral using the upper and lower incomplete gamma functions. I want to know if there are ways to calculate this integral using change of variables or tricks similar to the evaluation of $$ \int_{0}^{\infty} e^{-x^2}dx $$ using double integrals. Thanks in advance
There isn't any better form for this integral than $$\int_{0}^{\infty} e^{-x^3}\ dx = \Gamma\left(\frac{4}{3}\right)$$ You can find this rather simply by substitution, giving exactly the form of the gamma function. As far as it is known, I believe, there is no simpler way to write the values of the gamma function at third-integer arguments like there is for half-integers, giving the nice forms involving the square root of $\pi$ that you are thinking of. More generally, we have the identity $$\int_{0}^{\infty} e^{-x^\alpha}\ dx = \Gamma\left(\frac{\alpha+1}{\alpha}\right)$$ in the same way, for all real $\alpha > 0$ .
As shown in the other answers, the integral can be expressed in terms of $\Gamma\left(\frac13\right)$ . As no simpler closed form for this constant is known, this is a sure sign that you won't find an alternative method of evaluation.
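Both the specific value $\Gamma\left(\frac43\right)$ and the general identity $\int_0^\infty e^{-x^\alpha}\,dx = \Gamma\left(\frac{\alpha+1}{\alpha}\right)$ are easy to confirm numerically (my own sketch):

```python
import math
import numpy as np

# np.trapz was renamed np.trapezoid in NumPy 2.0; support both.
trapezoid = getattr(np, "trapezoid", getattr(np, "trapz", None))

def lhs(alpha, upper=30.0, n=600_001):
    # numerically integrate exp(-x**alpha) on [0, upper]; the tail is negligible
    x = np.linspace(0.0, upper, n)
    return trapezoid(np.exp(-x**alpha), x)

check3 = (lhs(3.0), math.gamma(4.0 / 3.0))   # the integral in question
check2 = (lhs(2.0), math.gamma(3.0 / 2.0))   # the Gaussian case, sqrt(pi)/2
```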
I need to show that the function $z^n$, with $n \in \mathbb{Z}$, is entire, using the Cauchy-Riemann conditions. My problem is calculating the partial derivatives of this function. Can someone help me? Thank you.
Entire function: if $f(z) = u + iv$ then $\frac {\partial u}{\partial x} = \frac {\partial v}{\partial y}$ and $\frac {\partial u}{\partial y} = - \frac {\partial v}{\partial x}$ The tricky part is getting $z^n$ into $u+iv$ form $z = r(\cos\theta +i\sin \theta)\\ z^n = r^n\cos n\theta +ir^n\sin n\theta\\ u= r^n\cos n\theta\\ v=r^n\sin n\theta$ $\frac {\partial u}{\partial x} = nr^{n-1}\cos n\theta \frac {\partial r}{\partial x} - nr^n\sin n\theta \frac {\partial\theta}{\partial x}$ $r = (x^2+y^2)^\frac 12\\ \frac {\partial r}{\partial x} = \frac xr$ $\theta = \arctan \frac yx\\ \frac {\partial \theta}{\partial x} = -\frac y{r^2}$ $\frac {\partial u}{\partial x} = nr^{n-2} (x \cos n\theta + y\sin n\theta)$ Now use a similar process to find: $\frac {\partial u}{\partial y},\frac {\partial v}{\partial x},\frac {\partial v}{\partial y}$ and show that $\frac {\partial u}{\partial x} = \frac {\partial v}{\partial y}$ and $\frac {\partial u}{\partial y} = - \frac {\partial v}{\partial x}$
Suppose that $f=u_f + i v_f$ and $g = u_g + i v_g$ are entire functions, then the product $h = fg = u_h + i v_h $ is also entire. We have: $$ \begin{split} u_h &= u_f u_g - v_f v_g\\ v_h &= u_f v_g + v_f u_g \end{split}$$ We can then show that: $$ \begin{split} \frac{\partial u_h}{\partial x} =&u_g \frac{\partial u_f}{\partial x} +u_f \frac{\partial u_g}{\partial x} -v_g \frac{\partial v_f}{\partial x} -v_f \frac{\partial v_g}{\partial x}\\ = &u_g \frac{\partial v_f}{\partial y} +u_f \frac{\partial v_g}{\partial y} +v_g \frac{\partial u_f}{\partial y} +v_f \frac{\partial u_g}{\partial y}\\ =&\frac{\partial v_h}{\partial y} \end{split} $$ and similarly it's easy to demonstrate that the other Cauchy-Riemann relations are satisfied. This then allows you to do induction. The case $n=1$ is trivial and the product rule for the Cauchy-Riemann relations then allows you to prove the case $n+1$ from assuming that it's valid for $n$ and $n=1$.
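Either argument can be spot-checked numerically: here is a finite-difference test of the Cauchy-Riemann equations for $z^n$ at a point away from the origin (a sanity check, not a proof; the sample point and exponents are arbitrary):

```python
def u(x, y, n):
    return ((x + 1j * y) ** n).real   # real part of z^n

def v(x, y, n):
    return ((x + 1j * y) ** n).imag   # imaginary part of z^n

def cr_residuals(x0, y0, n, h=1e-6):
    # central differences for the four partials
    ux = (u(x0 + h, y0, n) - u(x0 - h, y0, n)) / (2 * h)
    uy = (u(x0, y0 + h, n) - u(x0, y0 - h, n)) / (2 * h)
    vx = (v(x0 + h, y0, n) - v(x0 - h, y0, n)) / (2 * h)
    vy = (v(x0, y0 + h, n) - v(x0, y0 - h, n)) / (2 * h)
    return ux - vy, uy + vx           # both should be ~0

r_pos = cr_residuals(0.8, -0.5, 7)
r_neg = cr_residuals(0.8, -0.5, -3)   # n < 0: holomorphic away from 0
```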
I am trying to follow a derivation in a very old paper. My knowledge of group theory is limited; I have the basics but not much experience with advanced concepts. We are working in 4 dimensions, so the paper quotes the 4D representations of the generators of $D_5$, $r$ (a rotation by $2\pi/5$) and $p$ (the reflection $x \rightarrow x, y \rightarrow -y$) as: $$ \mathcal{R}(r) = \left (\begin{array}{cccc} 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \\ \end{array} \right ), $$ $$ \mathcal{R}(p) = \left (\begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{array} \right ). $$ Question 1: I thought all dihedral groups had irreducible representations of at most 2 dimensions, hence all representations of dimension $>2$ should be constructed from direct sums of these (so block diagonals)? Where do they get these expressions from? Then we calculate the centraliser of $D_5$ in $GL_4(\mathbb{Z})$, i.e. the largest subgroup of $GL_4(\mathbb{Z})$ which commutes with $D_5$. Unfortunately they skip directly to the answer because apparently "computing these groups is straightforward". The generators of the centraliser $C$ are given by: $$ \mathcal{R}(\delta) = \left (\begin{array}{cccc} -1 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \\ -1 & 1 & 0 & 0 \\ -1 & 0 & 1 & -1 \\ \end{array} \right ), $$ $$ \mathcal{R}(\tau) = \left (\begin{array}{cccc} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{array} \right ). $$ Question 2: where do these come from? Is there something very trivial that I'm not getting? Where would I even start? My attempts: 1) I started by building a 4D representation of $D_5$ just by direct summing its 2D irreps.
I am using $r_{2D}$ as the 2D rotation matrix and $p_{2D} = \left (\begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right )$, to then make $ \mathcal{R}(r_{4D}) = \left (\begin{array}{cc} r_{2D} & 0 \\ 0 & r_{2D} \\ \end{array} \right ), $ $ \mathcal{R}(p_{4D}) = \left (\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{array} \right ), $ etc. 2) I used the generic element of $GL_4(\mathbb{Z})$ as $ \beta = \left (\begin{array}{cccc} a & b & c & d \\ e & f & g & h \\ i & l & m & n \\ q & s & t & u \\ \end{array} \right ). $ Then I brute-force computed the relationships between the entries by requiring $\beta$ to commute with every element $d$ of $D_5$: $\beta^{-1}d\beta = d$. I found that the matrix above is reduced to $\beta = \left (\begin{array}{cccc} a & 0 & c & 0 \\ 0 & a & 0 & c \\ s & 0 & u & 0 \\ 0 & s & 0 & u \\ \end{array} \right ). $ However this is where I stop, for I don't know how to compute the generators of the group of these matrices.
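The brute-force attempt in 2) can be mechanized: $X$ commutes with a generator $A$ exactly when $\operatorname{vec}(XA - AX) = 0$, which is linear in the $16$ entries of $X$. A short sketch (my own, using the paper's matrices rather than the direct-sum attempt) reads off the dimension of the commutant from an SVD:

```python
import numpy as np

R = np.array([[0, 0, 0, -1],
              [1, 0, 0, -1],
              [0, 1, 0, -1],
              [0, 0, 1, -1]], dtype=float)
P = np.array([[0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
I4 = np.eye(4)

def commutator_matrix(A):
    # vec(X A - A X) = (A^T kron I - I kron A) vec(X)  (column-major vec)
    return np.kron(A.T, I4) - np.kron(I4, A)

system = np.vstack([commutator_matrix(R), commutator_matrix(P)])
sing = np.linalg.svd(system, compute_uv=False)
null_dim = int((sing < 1e-10).sum())      # dimension of the commutant

Mdelta = R + np.linalg.inv(R)             # R + R^{-1}, which commutes with both generators
```

`null_dim` comes out as $2$: the commutant is spanned by the identity and $R + R^{-1}$.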
$\def\glz{\operatorname{GL}_4(\mathbb Z)}$ $\def\inv{^{-1}}$ The question is about filling in some details about assertions made in this paper. The answer got quite long as I added more details. Guide: the part of the answer through the first edit shows that the commutant of the copy of $D_5$ in $\operatorname{GL}_4(\mathbb C)$ is two dimensional, abelian, and generated as a unital algebra by a certain matrix $M = R + R\inv$. However, the copy of $D_5$ and the matrix $M$ both lie in $\glz$, and the subsequent part of the argument in the 4th postlude shows that $M$ and $- 1$ (minus the identity matrix) generate the centralizer of $D_5$ in $\glz$ as a group. In between, I added more detail about the first part, and also explained the construction of the 4 dimensional representation of $D_5$ given in the paper. I'm going to write $R$ and $J$ for your two matrix generators. $R$ is diagonalizable with eigenvalues the 4 primitive 5th roots of 1, call them $\omega, \omega^{-1}, \omega^2, \omega^{-2}$. In the basis of eigenvectors, matrices commuting with your $D_5$ in particular commute with $R$, so are diagonal. Also in the basis of eigenvectors, $J$, up to some scaling which is probably irrelevant, permutes the eigenvectors in pairs (belonging to inverse pairs of eigenvalues). Thus your diagonal commuting matrices have only two distinct diagonal entries. Thus the commutant is commutative, generated by two commuting idempotents of rank 2. Now, still in the basis of eigenvectors, $M = R + R^{-1}$ is a diagonal matrix with just two eigenvalues $\omega + \omega^{-1}$ and $\omega^2 + \omega^{-2}$ corresponding to the ranges of the two desired idempotents. So in fact, the whole commutant is generated by polynomials in $M$. Note that this is where $\mathcal R(\delta)$ came from: it is $R + R^{-1}$, in the original basis, not the basis of eigenvectors.
I've considered the problem from the point of view of the commuting algebra , but the centralizer subgroup just consists of the invertible elements in the commuting algebra. I've also worked over the complex numbers. Anyway, I think this is close to what you need. Edit: Looking at this from more of a group representations point of view, we can get to the same place more conceptually, perhaps. Clearly $R + R^{-1}$ is in the commutant for any representation. So $R + R^{-1}$ acts as a scalar in each irreducible representation. In a 2 dimensional irreducible representation, the value of that scalar is the character of $R$, which is $2 \cos(2 \pi/k)$, where $k$ can be taken as a label for the representation. In either of the two one dimensional representations, $R + R^{-1}$ acts as the scalar $2$. Now in our particular representation, by examining the spectrum of $R + R^{-1}$ or of $R$, you can see that both 2 dimensional irreducible representations occur with multiplicity 1, so it follows that $R + R^{-1}$ actually generates the commutant (as an algebra). 2nd edit: You asked how to use polynomials in $R + R^{-1}$ to generate the whole commutant. In fact linear polynomials will do, because of the special structure here, as I will next explain. Mini-mini-course in representation theory: Suppose I have a group representation $\rho : G \to \operatorname{GL}(V)$, and it decomposes as a direct sum of mutually inequivalent irreducible subrepresentations, $V = W_1 \oplus W_2 \oplus \cdots \oplus W_n$. Let $P_j$ be the projection operator which is $1$ on $W_j$ and zero on $W_i$ for $i \ne j$. Thus $P_j^2 = P_j$, $P_i P_j = 0 $ for $i \ne j$ and $\sum_j P_j = 1$, where I write $1$ for the identity operator. Then the entire commutant $\mathcal C$ of $\rho(G)$ consists of linear combinations of $P_1,\dots, P_n$. 
Mini-mini-course in linear algebra Now suppose I have a collection of projection operators as above $P_1,\dots, P_n$ with $P_j^2 = P_j$, $P_i P_j = 0 $ for $i \ne j$ and $\sum_j P_j = 1$. No matter if they came from a group representation or from somewhere else. Let $W_j = P_j V$, so $V$ is the direct sum of the subspaces $W_j$. Let $\mathcal C$ be the algebra generated by the projection operators $P_j$; in fact, it consists of linear combinations of the operators $P_j$. Suppose someone hands me a particular linear combination $T= \sum_i \alpha_i P_i$ with all the $\alpha_i$ distinct. Then I claim that I can recover the $P_j$ and hence all of $\mathcal C$ from polynomials in $T$, and if $n = 2$ I can do it with linear polynomials. This goes by the name interpolation formulas. Fix $i$ and consider $$\prod_{j \ne i} \frac{ T - \alpha_j 1}{\alpha_i - \alpha_j},$$ which is a polynomial of degree $n-1$ in $T$. You can check that applied to a vector in $W_k$ for $k \ne i$, it gives zero, but applied to a vector in $W_i$ it gives back the vector. Thus the polynomial in $T$ is actually equal to $P_i$. 3rd postlude I see from your discussion/chat that you are still puzzled about the derivation of the 4d representation matrices. As "explained" in the physics paper, the representation was derived from the 5d permutation representation exactly as Josh B. suggested. First of all, you have to know that given a representation $V$ and a subrepresentation $W$, you get a quotient representation on $V/W$. Start with the permutation representation of $D_5$ on $F^5$, $F$ your favorite field, given by $$ R: e_0 \to e_1 \to e_2 \to e_3 \to e_4 \to e_0 $$ where $e_i$ are the standard basis elements of $F^5$ (with shifted labels) and $$ J: e_0 \to e_0,\quad e_1 \to e_4 \to e_1, \quad e_2 \to e_3 \to e_2. $$ Now $w = \sum_{j = 0}^4 e_j$ is fixed by both $R$ and $J$. In fact, it's fixed by all permutation matrices. Let $W = F w$ and $V = F^5/W$. 
Give $V$ the basis $\overline e_1 = e_1 + W, \dots, \overline e_4 = e_4 + W$. Note $e_0 + W = -(\overline e_1 + \overline e_2 + \overline e_3 + \overline e_4)$, since $e_0 = -\sum_{j = 1}^4 e_j + w$. It follows that the matrices of $R$ and $J$ in $V = F^5/W$ with respect to the basis $\{\overline e_i\}_{1 \le i \le 4}$ are exactly those that you quoted from the paper. 4th postlude: I finished it off. Demonstration that the centralizer of $D_5$ in $\glz$ is generated (as a group) by $M = R + R\inv$ and $- 1$. Recall that $\glz$ is the set of integer matrices with determinant $\pm 1$. We already know that any matrix that commutes with $D_5$ pointwise is a linear combination of $M$ and the identity matrix. We need to find out when $a M + b 1$ has integer entries and determinant $\pm 1$. By inspection, $$ a M + b 1 =\left( \begin{array}{cccc} b-a & a & 0 & -a \\ 0 & b & a & -a \\ -a & a & b & 0 \\ -a & 0 & a & b-a \\ \end{array} \right) $$ is integer valued if and only if $a$ and $b$ are integers. By computation, $$ \det(a M + b 1) = \left(a^2+a b-b^2\right)^2, $$ and in particular $\det M = 1$. Moreover, $\det(a M + b 1) \ge 0$, so $$a M + b 1 \in \glz \implies \det(a M + b 1) = 1.$$ The minimal polynomial for $M$ is $x^2 + x -1$, as follows for example from the computation of the eigenvalues of $M$. Thus we have $$ M\inv = (M + 1) \quad \text{and} \quad M^2 = (1 - M). $$ Observation 1. Suppose $a M + b 1 \in \glz$ then $a$ and $b$ are relatively prime. Proof. If $a$ and $b$ have a common prime factor $p$, then $\det(a M + b 1)$ is divisible by $p^4$. qed Suppose that $T = a M + b 1 \in \glz$. We want to show that $T$ is in the subgroup of $\glz$ generated by $M$ and $ - 1$. If one of the coefficients $a, b$ is zero, then $T = \pm M$ or $T = \pm 1$, and we are done, If both of the coefficients $a, b$ are $\pm 1$, then $T = \pm M\inv$ or $T = \pm M^2$, by the discussion of the minimal polynomial, so again we are done. Now we can proceed by induction on $|a| + |b|$. 
Let us consider the case that $a b > 0$. By extracting a factor of $ -1$ (i.e. the matrix $-1$), we can assume $a, b > 0$, and relatively prime. Moreover, we are only interested in the case that $ a b > 1$. In particular neither of $a, b$ is divisible by the other. Observation 2. $ b > a > 0$. Proof. If $a >b$ then $$a ^2 + ab - b^2 > a^2 - b^2 = (a -b)(a + b) > a + b > a > 1.$$ Hence, $\det(T) >1$, a contradiction. qed Now we take our element $T = a M + b 1 \in \glz$ and factor out $M\inv$: $$ T = M\inv (a M^2 + b M) = M\inv ( a(1 - M) + b M) = M\inv ( (b - a) M + a 1). $$ The factor $ (b - a) M + a 1$ is necessarily in $\glz$, and has positive coefficients whose sum $b$ is less than the sum of the coefficients of $T$. So our conclusion follows from the induction hypothesis. Next consider the case that $a b < 0$. By extracting a factor of $-1$ if necessary, we can assume $a > 0 > b$, and $|a b| > 1$. Observation 3. $ a > -b$. Proof. if $-b > a$, then $$ b^2 - ab - a^2 > b^2 - a^2 = (b-a)(b+a) > 1. $$ Hence $\det(T) > 1$, a contradiction. qed We take our element $T = a M + b 1 \in \glz$ and factor out $M$: $$ T = M( a 1 + b M\inv) = M ( a 1 + b (M + 1)) = M ( b M + (a + b) 1). $$ The factor $b M + (a + b) 1$ is in $\glz$ and has coefficients $a + b > 0$ and $b <0$. The sum of absolute values of these coefficients is $(a + b) - b = a$, which is less than the corresponding sum for the coefficients of $T$, namely $a - b$. Again our conclusion follows from the induction hypothesis.
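The two computational facts the induction rests on, $\det(a M + b\,1) = (a^2+ab-b^2)^2$ and $M^{-1} = M + 1$, are easy to spot-check with the explicit matrix (the sample coefficients are arbitrary):

```python
import numpy as np

# M = R(delta), the explicit matrix from the paper
M = np.array([[-1, 1, 0, -1],
              [ 0, 0, 1, -1],
              [-1, 1, 0,  0],
              [-1, 0, 1, -1]], dtype=float)
I4 = np.eye(4)

# determinant identity det(a M + b I) = (a^2 + a b - b^2)^2
det_checks = [(a, b, np.linalg.det(a * M + b * I4), (a * a + a * b - b * b) ** 2)
              for (a, b) in [(1, 0), (0, 1), (2, 3), (-1, 4), (5, -2)]]

# minimal polynomial x^2 + x - 1 = 0, i.e. M (M + I) = I
inverse_ok = np.allclose(M @ (M + I4), I4)
```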
$\def\glz{\operatorname{GL}_4(\mathbb Z)}$ $\def\inv{^{-1}}$ I'm posting some further observations, as if my previous gigantic post wasn't enough. We have specific matrices $R = \mathcal R(r)$ and $J = \mathcal R(p)$ in $\glz$ providing a representation of the dihedral group $D_5$. It is required to find the centralizer of $D_5$ in $\glz$ We also have the matrix $M = R + R\inv$, denoted $\mathcal R(\delta)$ in the original post, which is an element of $\glz$ commuting with $D_5$. The first step is to find the commutant of $D_5$, that is the set of all matrices commuting with all elements of $D_5$. Your original brute force method for computing the commutant of $R$ works fine. I asked Mathematica to do it, and was told the solutions are of the form $$\left( \begin{array}{cccc} a & b & 0 & -b \\ 0 & a+b & b & -b \\ -b & b & a+b & 0 \\ -b & 0 & b & a \\ \end{array} \right)$$ which by examination is $b M + (a + b) 1$. So this gives back the solution for the commutant, that it is the set of linear combinations of $M$ and the identity, without any of the "fancy" arguments from my previous answer. The determinant of $ a M + b 1$ is $D(a, b) = (a^2 + ab - b^2)^2$. So we have a Diophantine problem to determine the integral solutions of $D(a, b) = 1$. I'm not going to provide a simpler solution to this problem than already appeared in my other answer (4th postlude), but I want to point out that Fibonacci numbers pervade this problem. In fact, we can first observe that the sequence of powers of $M$ is a matrix Fibonacci sequence in the following sense. Using the minimal polynomial of $M$, namely $x^2 + x - 1$ repeatedly, we find that, with $(f_k)_{k \ge 0}$ denoting the Fibonacci sequence, $$ (-M)^k = -f_k M + f_{k-1} \quad \text{and} \quad M^{-k} = f_k M + f_{k+1} 1, $$ for $k \ge 1$. So what we have to show is that an integer pair $(a, b)$ satisfies $D(a, b) = 1$ if and only if $$(a, b) = \pm (-f_k, f_{k-1}) \quad \text{or} \quad (a, b) = \pm(f_k, f_{k+1}), $$ for some $k$. 
(I am extending the Fibonacci sequence by taking $f_0 = 0$.) This requires more or less the same sort of work as was contained in my previous 4th postlude, and I am going to skip the details. (This is known: Jones, James P. Diophantine representation of the Fibonacci numbers. Fibonacci Quart. 13 (1975), 84–88. ) In any case the OP is interested in generalizing these results to $D_n$, $n >5$, and all of these Fibonacci phenomena are very much tied to the case of $D_5$. Edit: I am adding this in response to some questions posed to me by the OP. $\def\glz{\operatorname{GL}_4(\mathbb Z)} \def\inv{^{-1}} \def\boldG{\mathbf G} \def\boldn{\mathbf n} \def\Z{\mathbb Z} \def\R{\mathbb R} \def\boldq{\mathbf q} $ Let me start over with a few things. The starting point in the paper is the set of points $\boldG_j = (\cos(2 \pi j/ 5), \sin(2 \pi j/5))$, $0 \le j \le 4$ in the real vector space $\mathbb R^2$. (It is also convenient to take the indices as representing residue classes mod 5.) These $\boldG_j$ satisfy $\sum_{j = 0}^4 \boldG_j = 0$, but $\{\boldG_j : 1 \le j \le 4\}$ is linearly independent over $\mathbb Z$, while of course linearly dependent over $\mathbb R$. So the set $P$ of integer linear combinations of $\{\boldG_j : 1 \le j \le 4\}$ is a complicated configuration of points in the plane, but is also a free $\mathbb Z$--module of rank $4$. The dihedral group $D_5$, generated by the geometric rotation $\tau$ by angle $2 \pi/5$ in the plane and the reflection through the $x$-axis $\sigma$ leaves invariant the set of $\{\boldG_j : 0 \le j \le 4\}$. The group $D_5$ acts on $\mathbb R^2$ by real linear automorphisms, and hence acts on $P$ by $\mathbb Z$--linear automorphisms. The matrices of the generators $\tau$, $\sigma$ with respect to the $\Z$--basis $\{\boldG_j : 1 \le j \le 4\}$ of $P$ are the matrices $M(\tau)$ and $M(\sigma)$ given in the paper. We have a $\Z$--module isomorphism $\boldq : \Z^4 \to P$, $\boldq(\boldn) = \sum_{j = 1}^4 n_j \boldG_j$. 
This gives us a representation of $D_5$ on $\Z^4$ with the same representation matrices. We can regard this representation as a representation on $\mathbb R^4$ or on $\mathbb C^4$, or whatever, by using the same representation matrices. There is also an element $\delta = \tau + \tau\inv$. (It lives in the group ring consisting of formal linear combinations of the group elements of $D_5$.) It acts on $P$ or on $\Z^4$ by the matrix $M(\delta) = M(\tau) + M(\tau)\inv$. All of these transformations $\tau, \sigma, \delta$ act via the same matrices, whether on $P$, with respect to the basis $\{\boldG_j : 1 \le j \le 4\}$, or on $\Z^4$, with respect to the standard basis.
I have been struggling to solve this limit. What is the limit as $x$ approaches $45^\circ$ of $$\frac{\sqrt{2}\cos x-1}{\cot x-1}?$$ I know how to use L'Hospital's rule to calculate this limit and got the answer $0.5$. But how do I calculate the limit by manipulating the function? Please provide only some hints on how to proceed.
Multiply the numerator and denominator by $\sqrt{2}\cos x+1$. You get $$\frac{2\cos^2 x-1}{(\cot x -1)( \sqrt{2}\cos x+1 )}.$$ Now $2\cos^2 x-1=\cos 2x$ and $\cos 2x=\frac{1-\tan^2x}{1+\tan^2 x}$; also $\cot x-1=\frac{1-\tan x}{\tan x}$. Using all of these, you get $$\frac{\sqrt{2}\cos x-1}{\cot x-1}=\frac{\tan x(1+\tan x)}{(1+\tan^2 x)( \sqrt{2}\cos x+1 )}.$$
$$\lim_{x\to\frac\pi4}\frac{\sqrt{2}\cos x-1}{\cot x-1} =\sqrt2\lim_{x\to\frac\pi4}\frac{\cos x-\cos\frac\pi4}{\cot x-\cot\frac\pi4}$$ $$=\sqrt2\frac{\frac{d(\cos x)}{dx}_{(\text{ at }x=\frac\pi4)}}{\frac{d(\cot x)}{dx}_{(\text{ at }x=\frac\pi4)}}=\cdots$$ Alternatively, $$F=\lim_{x\to\frac\pi4}\frac{\sqrt{2}\cos x-1}{\cot x-1} =\sqrt2\lim_{x\to\frac\pi4}\frac{\cos x-\cos\frac\pi4}{\cot x-\cot\frac\pi4}$$ $$=\sqrt2\lim_{x\to\frac\pi4}\frac{-2\sin\frac{x+\frac\pi4}2\sin\frac{x-\frac\pi4}2}{-\sin(x-\frac\pi4)}\cdot\sin x\sin\frac\pi4$$ $$=\sqrt2\lim_{x\to\frac\pi4}\frac{-2\sin\frac{x+\frac\pi4}2\sin\frac{x-\frac\pi4}2}{-2\sin\frac{x-\frac\pi4}2\cos\frac{x-\frac\pi4}2}\cdot\sin x\sin\frac\pi4$$ As $x\to\frac\pi4, x\ne\frac\pi4\implies \sin\frac{x-\frac\pi4}2\ne0 $ $$\implies F=\sqrt2\lim_{x\to\frac\pi4}\frac{\sin\frac{x+\frac\pi4}2}{\cos\frac{x-\frac\pi4}2}\cdot\sin x\sin\frac\pi4$$ $$=\sqrt2\frac{\sin\frac\pi4}{\cos0}\cdot\sin\frac\pi4\sin\frac\pi4=\cdots$$
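Both the limit value and the algebraic rewriting can be sanity-checked with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
expr = (sp.sqrt(2) * sp.cos(x) - 1) / (sp.cot(x) - 1)

# symbolic limit at x = pi/4 (i.e. 45 degrees)
assert sp.limit(expr, x, sp.pi / 4) == sp.Rational(1, 2)

# the tan-based rewriting agrees with the original away from x = pi/4
t = sp.tan(x)
rewritten = t * (1 + t) / ((1 + t**2) * (sp.sqrt(2) * sp.cos(x) + 1))
for x0 in (sp.Rational(3, 10), sp.Rational(11, 10)):
    assert abs(float((expr - rewritten).subs(x, x0))) < 1e-12
```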
I am trying to find the expected value for the sum of two dice, given that if both dice show the same number, then the sum is doubled. Basically, if we get $1$ on one die and $1$ on the other, this counts as $4$; similarly $2+2=8$, $3+3=12$, $4+4=16$, $5+5=20$, $6+6=24$. If any other combination appears, then it is counted as for normal dice. I know how to find the expected value for two normal dice, since I just find it for one and then double it: $$E(x)={\frac16} (1+2+3+4+5+6)=\frac16\frac{6\cdot 7}{2}=\frac72$$ And doubling it gives the expected value for two dice, namely $7$. I would expect that in the question's case the expected value is a little higher, but I don't know how to start calculating it. I don't need a full solution, only some good hints that will help me to calculate it later. Edit. I think I have gotten an idea to draw a matrix. $$\begin{array}[ht]{|p{2cm}|||p{0.5cm}|p{0.5cm}|p{0.5cm}|p{0.5cm}|p{0.5cm}|p{0.5cm}|p{0.5cm}|p{0.5cm}|p{0.5cm}|} \hline \text{ x } & 1 &2 &3 &4 &5 &6 \\ \hline \hline \hline 1 &4 &3 &4 &5 &6 &7 \\ \hline 2& 3 & 8 &5 &6 &7&8 \\ \hline 3& 4 &5 &12 &7 &8&9\\ \hline 4 &5 &6 &7&16&9&10 \\ \hline 5 &6 &7&8&9&20&11 \\ \hline 6&7&8&9&10&11&24 \\ \hline \end{array}$$ And now the expected value in our case is: $$E=\frac1{36}\left(l_1+l_2+l_3+l_4+l_5+l_6\right)$$ where $l_k$ is the sum of the numbers in row $k$. This gives: $$E=\frac1{36}\left(29+37+45+53+61+69\right)=\frac{294}{36}=8.1\overline{6}$$ Is this fine?
The probability of throwing two $1$ s, two $2$ s, two $3$ s, two $4$ s, two $5$ s or two $6$ s all equal $\frac{1}{36}$ each. In these special cases, we must add $2$ , $4$ , $6$ , $8$ , $10$ and $12$ to the sum respectively. We thus find: $$E(X) = 7 + \frac{2 + 4 + 6 + 8 + 10 + 12}{36} \approx 8.17$$
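The same number drops out of a direct enumeration of the 36 equally likely outcomes; here is a small Python check using exact rational arithmetic:

```python
from fractions import Fraction

# enumerate all 36 outcomes; double the sum on doubles
expectation = sum(
    Fraction((a + b) * (2 if a == b else 1), 36)
    for a in range(1, 7) for b in range(1, 7)
)
assert expectation == Fraction(49, 6)   # = 294/36, about 8.1667
```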
Given that there are two dice with $6$ sides and we are considering all possible pairs of outcomes, there will be a total of $36$ possible sum outcomes, ranging from $2$ to $12$. You can find the average of these outcomes by $$(2 + 3 + 3 + 4 + 4 + 4 + \dots + 12)/36$$ To get the proper average with the new weighted outcomes, you could simply multiply by $36$ (to get back the sum of the outcomes), add in the sum of the like pairs (doubling their original weight), then divide by $36$ again (since the number of actual outcomes is still $36$). However, we can do the same thing in fewer steps by using the linearity of expectation.
I'm using the following formula to calculate the new vector positions for each point selected. I loop through each point selected and get the $(X_i,Y_i,Z_i)$ values, and I also get the center values of the selection ($X,Y,Z$), call them $(X_c,Y_c,Z_c)$. The distance of each point from the center is $d_i=\sqrt{(X_i-X_c)^2+(Y_i-Y_c)^2+(Z_i-Z_c)^2}$. The coordinates for the new vector position are: $g_i=\left(\frac b{d_i}(X_i-X_c)+X_c,\frac b{d_i}(Y_i-Y_c)+Y_c,\frac b{d_i}(Z_i-Z_c)+Z_c\right)$ My problem is I don't think it's averaging properly; it should be a smooth path or an average path the whole way along every axis. Here's a screen shot before I run the script: And here is what happens after: It's perfect on the front axis, but as you can see from the top and the side it's not so smooth. I'm using Python in MAYA to calculate this; here's the code I'm using:

```python
import maya.cmds as cmds
import math

sel = cmds.ls(sl=1, fl=1)
averageDistance = 0
cmds.setToolTo('Move')
oldDistanceArray = []
cs = cmds.manipMoveContext("Move", q=1, p=1)

for i in range(0, len(sel), 1):
    vts = cmds.xform(sel[i], q=1, ws=1, t=1)
    x = cs[0] - vts[0]
    y = cs[1] - vts[1]
    z = cs[2] - vts[2]
    distanceFromCenter = math.sqrt(pow(x, 2) + pow(y, 2) + pow(z, 2))
    oldDistanceArray += [distanceFromCenter]
    averageDistance += distanceFromCenter
    if i == len(sel) - 1:
        averageDistance /= len(sel)
        for j in range(0, len(sel), 1):
            vts = cmds.xform(sel[j], q=1, ws=1, t=1)
            gx = ((averageDistance / oldDistanceArray[j]) * (vts[0] - cs[0])) + cs[0]
            gy = ((averageDistance / oldDistanceArray[j]) * (vts[1] - cs[1])) + cs[1]
            gz = ((averageDistance / oldDistanceArray[j]) * (vts[2] - cs[2])) + cs[2]
            cmds.move(gx, gy, gz, sel[j])
cmds.refresh()
```

Additionally, I have found another 'error' here: (before) After: It should draw a perfect circle, but it seems my algorithm is wrong.
You are putting those points on a sphere, not a circle; this is why from the front it looks alright (more or less; in fact I suspect that even from the front it is not a circle), while from the side it is not a line. To make it more as you wanted, you need to align all the points in some common plane. As for the "error" you mentioned, it is not a bug; your algorithm just works this way. To view your transform graphically, draw rays from each point to the center. You should see that those rays are not evenly spaced, and thus the effect is that some parts of the circle don't get enough points. If I were to code such a thing, I would add two more inputs: the circle I would like to obtain (a plane shape) and one special point which would tell where the circle "begins" (so that the circle won't appear "twisted"). Then, for each point of the transform, calculate its destination evenly spaced along the circle, and make a transform that moves each point to its destination. Some final comments: you didn't describe any context, so I might be wrong, but evenly spaced points on the circle might distort the overall shape of the object (non-evenly spaced points would disturb it less), so I don't think that is a good idea. It might be better to handle the "density of the points" by hand (e.g. by tessellating the object), but keep the points at the angles/azimuths they were (of course, you want them in one plane if you need a circle). And the algorithm would be easier (for example, you would not need the "circle starting point" then) ;-) I hope it explains something ;-)
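To make the suggestion concrete, here is a minimal NumPy sketch of the "align all the points in a common plane" step (the function name and the SVD plane fit are my own choices, not Maya API; it does not re-space the points evenly by angle, which would be the separate second step described above):

```python
import numpy as np

def flatten_to_circle(points, radius=None):
    """Fit a plane through the centroid (via SVD), project the points
    onto it, then push every projected point to a common radius."""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)                       # centroid of the selection
    # the plane normal is the direction of least variance
    _, _, vt = np.linalg.svd(P - c)
    n = vt[-1]
    Q = P - np.outer((P - c) @ n, n)         # project onto the plane
    r = np.linalg.norm(Q - c, axis=1)
    if radius is None:
        radius = r.mean()                    # average distance, as in the script
    return c + (Q - c) * (radius / r)[:, None]
```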
In your second example, at least, if you want a circle through the highlighted yellow points, simply changing each point's distance to the centre won't be enough, as there are no points along the green axis that could be moved onto the perfect circle. Returning to your original problem, the issue seems to be that half of the selected points are skewed to the back of the yellow axis, effectively moving your centre to a position that isn't actually the centre you want. By the looks of it, the centre you want is the geometric centre: given the points you have, you should find $X_c, Y_c$, and $Z_c$ independently, by finding the two points furthest from each other along each axis and taking the midpoint between them along that axis.
I want to calculate the product $\alpha= \prod \limits_{i=2}^{\infty} (1+\frac{1}{(p_i -2)p_i})$ over all primes $p_i >2$. I first calculated this product numerically, and for the primes under ten million I get $\alpha=1.5147801192603$. Analytically, I tried to expand the product as a telescoping sum: $\alpha=1+\frac{1}{2}(1-\frac{1}{3}+\frac{1}{3}-\frac{1}{5}+\frac{1}{5}-\frac{1}{7}+\frac{2}{45}+\frac{1}{9}-\frac{1}{11}+\frac{2}{105}+\cdots)$. But this doesn't help me. I don't even know whether $\alpha$ converges.
Recall that if $\{ a_n \}$ is positive and $a_n \to 0$, then $\prod (1 + a_n)$ and $\sum a_n$ converge or diverge together. Given that $p_n \sim n \log n$, convergence is clear. So the product does converge. As for a closed form: this is actually the reciprocal of the twin primes constant, which appears in many, many conjectures about the prime numbers.
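As a numerical sanity check, the partial product over the odd primes below $10^6$ already agrees with the reciprocal of the twin prime constant $C_2 \approx 0.6601618158$ to several digits (the tail terms are of order $1/p^2$, so truncation costs only about $10^{-7}$); a quick SymPy-based sketch:

```python
from sympy import primerange

partial = 1.0
for p in primerange(3, 10**6):          # odd primes below one million
    partial *= 1 + 1 / ((p - 2) * p)

# compare with 1 / C_2, where C_2 = 0.66016181... is the twin prime constant
assert abs(partial - 1 / 0.6601618158468696) < 1e-5
```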
Ideas: since $1+\frac{1}{(p-2)p}=\frac{(p-1)^2}{(p-2)p}$, the product can be separated as $$\left(\displaystyle \prod \frac{p-1}{p}\right)\left(\displaystyle \prod \frac{p-1}{p-2}\right).$$ But we can't take the two factors separately, as $\left(\displaystyle \prod \frac{p-1}{p}\right)=\displaystyle \prod (1-p^{-1})=\frac{1}{\zeta(1)}$ is not defined. The other factor is harder and is connected to the First Hardy-Littlewood conjecture, which is, I think, still not proven.
Calculate the area of the sphere $x^2+y^2+z^2 = 1$ where $a \le z \le b$, with $-1 \le a \lt b \le 1$. So I know I should use the usual parameterization: $r(\phi, \theta)=(\cos(\phi) \sin(\theta), \sin(\phi)\sin(\theta),\cos(\theta))$ and it is easy to see that $\sqrt{\det(D_r D_r^T)} = \sin(\theta)$. All that remains is putting it all together into the integral formula. I know that $0 \le \phi \le 2\pi$, but I need to find the limits of $\theta$, and in order to do that I need to find the exact limits of $z$ - using $a,b$. How can I find the right limits, knowing that $-1 \le a \lt b \le 1$? EDIT Using cylindrical coordinates as Emilio suggested - $x=\rho \cos(\phi), y=\rho \sin(\phi), z = z$ And we get that - $\sqrt{\det(D_rD_r^T)} =1 $, so: $\int_A ds = \int_0^{2\pi}d\phi\int_a^b 1\,dz = \int_0^{2\pi}(b-a)\,d\phi = 2\pi(b-a)$ Is that correct?
In your notation (see here for a comparison of the different notations used) we have: $0\le\phi<2\pi$ and the limits for $\theta$ are given from $z=\cos \theta$ so, in your case: $\arccos(a)\le \theta \le \arccos(b)$ ( for $\theta$ counterclockwise from south to north pole) But, why don't use cylindrical coordinates ?
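A quick SymPy check of the resulting area, done in the spherical parameterization from the question ($z=\cos\theta$, so the band corresponds to $\arccos(b)\le\theta\le\arccos(a)$):

```python
import sympy as sp

a, b = sp.symbols('a b')
theta, phi = sp.symbols('theta phi')

# surface element on the unit sphere: sin(theta) dtheta dphi, with z = cos(theta);
# z in [a, b] corresponds to theta in [acos(b), acos(a)]
area = sp.integrate(sp.sin(theta),
                    (theta, sp.acos(b), sp.acos(a)),
                    (phi, 0, 2 * sp.pi))
assert sp.simplify(area - 2 * sp.pi * (b - a)) == 0
```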
Archimedes showed that there is an equi-areal map from the surface of a sphere to the surface of a cylinder. The area of the band around the sphere is proportional to the width of the band and the radius of the sphere: $2\pi(b-a)$.
"Prove that every right triangular region is measurable because it can be obtained as the intersection of two rectangles. Prove that every triangular region is measurable and its area is one half the product of its base and altitude." We have to make use of the axioms for area as a set function. Well, I was able to show that a right-angled triangle is measurable, as a right-angled triangle can be shown to be the intersection of two rectangles: one which shares two of its sides with the two non-hypotenuse sides of the right-angled triangle, and another which has the hypotenuse of the triangle as one of its sides. Now my query is: do we use these two rectangles to calculate the area of the triangle? I couldn't do it that way. Or do we have to show that a rectangle is composed of two congruent right-angled triangles? (This one seems easier. But in Apostol's Calculus, the notion of 'congruence' is given in terms of sets of points, i.e. two sets of points are congruent if their points can be put in one-to-one correspondence in such a way that distances are preserved. Proving that the two triangles are congruent in this way seems to me to be too hard. I think Apostol doesn't want us to use the congruence proofs of Euclidean geometry, which is justified, I guess.) In the end, I couldn't use either of the above two methods, hence I used the method of exhaustion. (Apostol used this method to calculate the area under a parabola. I applied the same method by calculating the area under a straight line, i.e. the hypotenuse.) I wanted to know how to find the area using congruence. Do we have to use the notion of 'congruence' as Apostol has given it?
Since you clearly state "We have to make use of the axioms for area as a set function," let's list them: DEFINITION/AXIOMS We assume that there exists a class $M$ of measurable sets in the plane and a set function $A$ whose domain is $M$, with the following properties: 1) $A(S) \geq 0$ for each set $S\in M$. 2) If $S$ and $T$ are two sets in $M$, then their intersection and union are also in $M$, and we have $A(S \cup T) = A(S) + A(T) - A(S \cap T)$. 3) If $S$ and $T$ are in $M$ with $S\subseteq T$, then $T - S\in M$ and $A(T - S) = A(T) - A(S)$. 4) If a set $S$ is in $M$ and $S$ is congruent to $T$, then $T$ is also in $M$ and $A(S) = A(T)$. 5) Every rectangle $R$ is in $M$. If the rectangle has length $h$ and breadth $k$, then $A(R) = hk$. 6) Let $Q$ be a set enclosed between two step regions $S$ and $T$, i.e. $S\subseteq Q \subseteq T$. (A step region is formed from a finite union of adjacent rectangles resting on a common base.) If there is a unique number $c$ such that $A(S) \le c \le A(T)$ for all such step regions $S$ and $T$, then $A(Q) = c$. So we know by 2) that the intersection of two rectangles $R_1 \in M$, $R_2 \in M$ is in $M$. Use the axioms/properties listed above to determine what this means for the triangle formed by the intersection of two such rectangles. Note also that Apostol defines a rectangle as follows: $R = \{ (x,y)\mid 0\le x\leq h \text{ and } 0\le y\le k \}$. Note that the union of two congruent right triangles can be positioned to form a rectangle, and alternatively, a rectangle's diagonal splits it into two congruent right triangles. Refer to how Apostol defines congruence: to show two triangles (or two rectangles, or two sets of points, in general) are congruent, show that there exists a bijection between the two sets of points that preserves distances. Note that this problem can be generalized to all triangles, not just right triangles. To see this, you need only note that every triangle is the union of two right triangles.
The notion of 'congruence' as Apostol has given it corresponds to the well-known "Side-Side-Side" criterion, as mentioned here. If we split a rectangle along one of its diagonals, we can prove that the two right triangles produced are congruent and hence have the same area. So the area of a right triangle is $\frac12$ the area of the rectangle.
Please help me solve: $ x_{n+1}=\frac{1}{2-x_n}, x_1=1/2,$ $x_{n+1}= \frac{2}{3-x_n}, x_1=1/2$ I know the answers, but can't figure out the solution. The first one is obvious if you calculate the first 3-5 terms by hand. But how can I get the result not by guessing, but mathematically? The answers are: $x_n = \frac{n}{n+1}$ $x_n = \frac{3\cdot2^{n-1}-2}{3\cdot2^{n-1}-1}$
If we apply the recurrence repeatedly we get $$x_{n+1} = \frac{1}{2-x_n} = \frac{2-x_{n-1}}{3-2x_{n-1}} = \frac{3-2x_{n-2}}{4-3x_{n-2}}$$ and in general $$x_{n+1} = \frac{k-(k-1)x_{n-k+1}}{k+1-kx_{n-k+1}},$$ which is easily checked by induction on $k$. Setting $k=n$, so that $x_{n-k+1}=x_1=\frac12$, we get $$x_{n+1} = \frac{n-(n-1)\frac{1}{2}}{n+1-n\frac{1}{2}} = \frac{n+1}{n+2},$$ that is, $x_n = \frac{n}{n+1}$. I haven't checked the second one, but I believe the same method should produce the result.
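Both closed forms are easy to verify numerically with exact rational arithmetic; a short Python check:

```python
from fractions import Fraction

# first recurrence: x_{n+1} = 1/(2 - x_n), x_1 = 1/2, claim x_n = n/(n+1)
x = Fraction(1, 2)
for n in range(1, 30):
    assert x == Fraction(n, n + 1)
    x = 1 / (2 - x)

# second recurrence: x_{n+1} = 2/(3 - x_n), x_1 = 1/2,
# claim x_n = (3*2^(n-1) - 2)/(3*2^(n-1) - 1)
y = Fraction(1, 2)
for n in range(1, 30):
    assert y == Fraction(3 * 2**(n - 1) - 2, 3 * 2**(n - 1) - 1)
    y = 2 / (3 - y)
```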
A recurrence of the form: $\begin{equation*} w_{n + 1} = \dfrac{a w_n + b}{c w_n + d} \end{equation*}$ with $a d \ne b c$ and $c \ne 0$ is called a Riccati recurrence. One way to solve them is to recognize that the right hand side is a Möbius transform, and those compose like matrix products: $\begin{align*} A(z) &= \frac{a_{1 1} z + a_{1 2}}{a_{2 1} z + a_{2 2}} \\ B(z) &= \frac{b_{1 1} z + b_{1 2}}{b_{2 1} z + b_{2 2}} \\ C(z) &= A(B(z)) \\ &= \frac{c_{1 1} z + c_{1 2}}{c_{2 1} z + c_{2 2}} \end{align*}$ where the matrix of coefficients of $C$ is the matrix product $A \cdot B$. Thus the solution is given by $w_n = A^n(w_0)$, computed via the $n$-th power of the coefficient matrix. Another way is due to Mitchell. Define a new variable $x_n = (1 + \eta w_n)^{-1}$ and write the recurrence in terms of $x_n$: $\begin{equation*} x_{n + 1} = \dfrac{(d \eta - c) x_n + c} {(b \eta^2 - (a - d) \eta - c) x_n + a \eta + c} \end{equation*}$ Selecting $\eta$ such that $b \eta^2 - (a - d) \eta - c = 0$ (both roots work fine) reduces the recurrence to a linear one of the first order.
I want to find the period of the function $f(x) = \sin 2x + \cos 3x$. I tried to rewrite it using the double angle formula and addition formula for cosine. However, I did not obtain an easy function. Another idea I had was to calculate the zeros and find the difference between the zeros. But that is only applicable if the function oscillates around $y = 0$, right? My third approach was to calculate the extrema using calculus and from that derive the period. Does anybody have another approach? Thanks
Hint The period of $\sin(2x)$ is $\pi$, and the period of $\cos(3x)$ is $2\pi/3$. Can you find a point where both will be at the start of a new period?
$\sin(2x)=\sin(2(x+k\pi))$ and $\cos(3x)=\cos(3(x+2l\pi/3))$ for all integers $k,l$. A shift $T$ is a common period of both terms when $T=k\pi=\frac{2l\pi}{3}$ for some integers $k,l$, i.e. when $3k=2l$; the smallest positive solution is $k=2$, $l=3$, so the function repeats itself and the period is at most $2\pi$. Anyway, it remains to prove that there is no shorter period.
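A quick numerical check that $2\pi$ is a period while $\pi$ is not:

```python
import math

def f(x):
    return math.sin(2 * x) + math.cos(3 * x)

xs = [0.1 * i for i in range(100)]

# 2*pi is a period
assert all(abs(f(x + 2 * math.pi) - f(x)) < 1e-9 for x in xs)

# pi is not: shifting by pi flips the sign of cos(3x)
assert any(abs(f(x + math.pi) - f(x)) > 0.5 for x in xs)
```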
Ok, so I have the following matrix \begin{equation} A = \left( \begin{array}{ccc} 2 & 1 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{array} \right) \end{equation} I have found the eigenvalue $\lambda=2$, which has algebraic multiplicity $3$. But now I want to find the generalised vectors $v_2$ and $v_3$ such that \begin{equation} (A - \lambda I)v_2 = v_1, \quad (A - \lambda I)v_3 = v_2 \end{equation} When I compute this, I get the following \begin{equation} v_2 = \left( \begin{array}{ccc} a\\ 1 \\ 0 \end{array} \right), \quad v_3 = \left( \begin{array}{ccc} b \\ -1 \\ 1 \end{array} \right) \end{equation} for any $a$ and $b$. Obviously, we cannot find one solution for $a,b$. Would I be correct in saying that we can take any value for $a$ and $b$, so for simplicity we can just take $a = b = 0$, because this obviously works for any value of $a$ and $b$? In addition to this, how do we calculate the Jordan normal form of $A$? I have got an idea of how to do it but I don't know why we would do it. This is what I have: \begin{equation} J = \left( \begin{array}{ccc} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{array} \right) \end{equation} Which was given to me, but how do we actually calculate this? Thanks!
Up to a minor bug, your ideas are correct. Your eigenvector is $v_1=\begin{pmatrix}1\\0\\0\end{pmatrix}$. You want $v_2$ such that $(A-2I)v_2=v_1$, or, as you said, $v_2=\begin{pmatrix}a\\1\\0\end{pmatrix}$ for arbitrary $a$ (because the first generalised eigenvector is defined up to an eigenvector). For $v_3$ you want $(A-2I)v_3=v_2$: $$\begin{pmatrix}0&1&1\\0&0&1\\0&0&0\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}a\\1\\0\end{pmatrix},$$ which gives you $v_3=\begin{pmatrix}b\\a-1\\1\end{pmatrix}$. Now you can make the obvious choice and put $a=b=0$.
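With the choice $a=b=0$ this can be checked directly, e.g. with SymPy: the chain $v_1, v_2, v_3$ behaves as required, and conjugating $A$ by the matrix with these columns produces exactly the Jordan form quoted in the question.

```python
import sympy as sp

A = sp.Matrix([[2, 1, 1], [0, 2, 1], [0, 0, 2]])
v1 = sp.Matrix([1, 0, 0])
v2 = sp.Matrix([0, 1, 0])    # a = 0
v3 = sp.Matrix([0, -1, 1])   # b = 0, a - 1 = -1

N = A - 2 * sp.eye(3)
assert N * v1 == sp.zeros(3, 1)   # v1 is an eigenvector
assert N * v2 == v1               # (A - 2I) v2 = v1
assert N * v3 == v2               # (A - 2I) v3 = v2

P = sp.Matrix.hstack(v1, v2, v3)
J = P.inv() * A * P
assert J == sp.Matrix([[2, 1, 0], [0, 2, 1], [0, 0, 2]])
```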
A nice trick for finding eigenvectors for $2$ and $3$ dimensional matrices is to exploit the Cayley-Hamilton theorem (it works for higher dimensions too, but the cost of multiplying the matrices makes it less attractive). The Cayley Hamilton theorem says that if $p(x)$ is the characteristic polynomial of a square matrix $A$, then $p(A) = 0$ (scalars in this equation are considered to be the scalar multiplied by the identity matrix I). Now the characteristic polynomial is given by $p(x) = (x - \lambda_1)(x - \lambda_2)\ldots(x - \lambda_n)$, for an $n \times n$ matrix, where $\lambda_1, \lambda_2, ..., \lambda_n$ are the eigenvalues. So $$p(A) = (A - \lambda_1I)(A - \lambda_2I)\ldots(A - \lambda_nI)$$ (this factorization is possible because $A$ commutes with itself and $I$). So eigenvectors of $\lambda_1$ are found among the columns of the product $(A - \lambda_2I)\ldots(A - \lambda_nI)$ In this case $\lambda_1 = \lambda_2 = \lambda_3 = 2$, so the eigenvectors of $2$ are in the columns of $$(A - 2I)^2 = \begin{bmatrix}0&1&1\\0&0&1\\0&0&0\end{bmatrix}^2 = \begin{bmatrix}0&0&1\\0&0&0\\0&0&0\end{bmatrix}$$ Thus $(1, 0, 0)$ is an eigenvector. Now $(A - 2I)^2(A - 2I) = 0$, so generalized eigenvectors can be found among the columns of $(A - 2I)$ itself, where we find $(1,0,0)$ again, and $(1,1,0)$. Alas, it only gives us two generalized eigenvectors, so we have to fall back on the method used by Ross and TZarkrevskiy to find the third. (This is the first matrix I have ever seen fail to provide all three, but I've mostly used it on symmetric matrices, which are guaranteed to have three linearly independent ordinary eigenvectors.)
I have a recurring problem when doing the following type of problem. $$\frac{x+1}{\sqrt{x^2+2x+3}}$$ and: $$\frac{4x-2}{\sqrt{x^2+10x+16}}$$ For some reason, I always end up dividing by half. For example, the answer to the first one is $\sqrt{x^2+2x+3}$ and I calculate $\frac{\sqrt{x^2+2x+3}}{2}$. Here is how I do it: $$\int\frac{x+1}{\sqrt{x^2+2x+3}}dx$$ Completing the square gives $(x+1)^2+2$, then $u=x+1 \to x= u-1$ and $a=\sqrt{2}$ $$\int\frac{u-1+1}{\sqrt{u^2+a^2}}du\to \int \dfrac{u}{\sqrt{u^2+a^2}}du$$ Substitution $w=u^2+a^2$, $\frac{dw}{du}=2u \to du = \frac{1}{2u}dw$ $$\int \dfrac{u}{\sqrt{w}}\dfrac{1}{2u}dw \to \dfrac{1}{2}\int\dfrac{1}{\sqrt{w}}dw \to \dfrac12\int w^{-\frac12} dw = \dfrac12 w^{\frac12}$$ $$\text{Final result} = \dfrac{\sqrt{x^2+2x+3}}{2} \neq \sqrt{x^2+2x+3}$$ I feel like I am missing a point or something. Could someone point out where I keep missing it? Thank you.
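For what it's worth, note that $\int w^{-1/2}\,dw = 2w^{1/2}$, not $w^{1/2}$; the factor $2$ cancels the $\frac12$. A quick SymPy check confirms that $\sqrt{x^2+2x+3}$ (with no extra $\frac12$) differentiates back to the integrand:

```python
import sympy as sp

x = sp.symbols('x')
integrand = (x + 1) / sp.sqrt(x**2 + 2*x + 3)
F = sp.sqrt(x**2 + 2*x + 3)   # candidate antiderivative
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```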
You ask if this is correct: $$\{\emptyset\}\subseteq \{1, \{\emptyset\}\}$$ It is not. Indeed, $\emptyset \in \{\emptyset\}$ while $\emptyset \notin\{1, \{\emptyset\}\}$ .
There are only 2 cases: (1) either all the elements of the set on the left are also elements of the set on the right, in which case the statement is true (the set on the left is included in the set on the right), OR (2) at least one element of the set on the left is not an element of the set on the right. Here we are in case 2. Indeed, I can exhibit the element "empty set", which belongs to the set on the left but not to the set on the right. Note: the empty set is a set, but it can also be an element of a set; that is what happens here, and in fact it is the only element of the set on the left. The set on the left can be compared to a box with an empty box inside. The empty box is empty. But the box with an empty box inside is not empty; it has 1 element (namely, the empty box).
Can someone help me calculate $\lim_{n\to \infty} \frac{1+2\sqrt{2}+...+n\sqrt{n}}{n^2\sqrt{n}}$ using the Stolz theorem? So far I've only managed to write it as $\lim_{n\to \infty} \frac{(n+1)\sqrt{n+1}}{(n+1)^2\sqrt{n+1}-n^2\sqrt{n}}$; I was wondering if I did it correctly and what the next steps would be. Thanks in advance!
Is Stolz compulsory? Otherwise you can rewrite the sum as a Riemann sum and calculate the integral: $$\sum_{i=1}^n\left(\frac in \right)^{\frac 32}\cdot \frac 1n \stackrel{n\to \infty}{\longrightarrow} \int_0^1x^{\frac 32}dx = \frac 25$$
I think you want to use the Stolz–Cesàro theorem, so I will avoid the clean easy solution using integrals that @trancelocation gives. The denominator of $\frac{1\sqrt1+\cdots n\sqrt n}{n^2\sqrt n}$, namely $n^2\sqrt n$, is clearly monotone increasing and diverges, so we can use the $\cdot/\infty$ case of the Stolz–Cesàro theorem. The following is the whole solution. $$\lim_{n\to\infty}\frac{1\sqrt1+\cdots+n\sqrt n}{n^2\sqrt n}=\lim_{n\to\infty}\frac{n\sqrt n}{n^2\sqrt n-(n-1)^2\sqrt {n-1}}\\=\lim_{n\to\infty}\frac{n^{\frac32}(n^{\frac52}+(n-1)^\frac52)}{n^5-(n-1)^5}\\=\lim_{n\to\infty}\frac{1+(1-\frac1n)^\frac52}{5-\frac{10}n+\frac{10}{n^2}-\frac5{n^3}+\frac1{n^4}}\\=\frac25$$
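A direct numerical check of the limit value $\frac25$:

```python
# partial ratio (1*sqrt(1) + ... + n*sqrt(n)) / (n^2 * sqrt(n)) for large n
n = 10**6
s = sum(i * i**0.5 for i in range(1, n + 1))
ratio = s / (n**2 * n**0.5)
assert abs(ratio - 0.4) < 1e-3   # converges to 2/5 like 0.4 + O(1/n)
```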
Suppose we have $n$ balls $b_1, b_2,\cdots , b_n$ and $n$ boxes. Each ball is placed into a box chosen independently and uniformly at random. A colliding pair is defined as $(b_i,b_j)$, where $i<j$ and $b_i$ and $b_j$ are placed in the same box. We are asked to evaluate the expected number of colliding pairs. What I did - Clearly, for any $k$-set of balls in some box, there are $\binom{k}{2}$ colliding pairs in that box. Next, if $C$ is the total number of colliding pairs after randomly placing all the balls in boxes, $E[C] = E[C_1+C_2+\cdots+C_n]=\displaystyle\sum_{i=1}^{n}E[C_i]=nE[C_k]$ where $C_k$ is the number of colliding pairs in box $k$ and $E[C_k] = E[C_1]=E[C_2]=\cdots$. Now as each box containing $i$ balls has $\binom{i}{2} = i(i-1)/2$ colliding pairs, we can calculate the expected value as follows - $\begin{align} nE[C_k] &= n\left(\displaystyle\sum_{i=2}^{n}\binom{i}{2}\text{Pr}\left[\text{box }k\text{ contains }i\text{ balls}\right]\right) \\&= n\left(\displaystyle\sum_{i=2}^{n}\binom{n}{i}\dfrac{i(i-1)}{2}\left(\dfrac{1}{n}\right)^i\left(\dfrac{n-1}{n}\right)^{n-i}\right)\end{align}$ This can be calculated using various tricks, like differentiating the $(1+x)^n$ binomial expansion and so on, and the answer is $E[C] = \dfrac{n-1}{2}$. Given that the answer is so simple, is there a much simpler/slicker/quicker way to see it?
HINT Yes. Whenever you are asked to find an expected value, always check to see if you can use the linearity of expectations. Remember that linearity of expectations apply even when variables are dependent! For any pair $i < j$ , let $X_{ij}$ be the indicator variable for whether $(i,j)$ is a colliding pair. What is $E[X_{ij}]$ ? How many pairs are there? Use linearity of expectation... Lemme know if you need further help.
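For a sanity check, here is a small Monte Carlo sketch (the function name is mine) comparing a simulation against the value $\binom{n}{2}\cdot\frac1n=\frac{n-1}2$ that the linearity argument yields:

```python
import random
from itertools import combinations
from math import comb

def mc_expected_pairs(n, trials=20000, seed=1):
    """Estimate the expected number of colliding pairs by simulation."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        box = [rng.randrange(n) for _ in range(n)]
        total += sum(box[i] == box[j]
                     for i, j in combinations(range(n), 2))
    return total / trials

n = 6
# linearity of expectation: each of the C(n,2) pairs collides with prob 1/n
exact = comb(n, 2) / n
assert exact == (n - 1) / 2
assert abs(mc_expected_pairs(n) - exact) < 0.1
```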
Continuing the OP's solution, let $B_{1}$ denote the number of balls in box $1$ , so $C_{1}=\dbinom{B_{1}}{2}=\dfrac{B_{1}\left( B_{1}-1\right) }{2}$ . Next, one can write $B_{1}=I_{1}+I_{2}+\ldots+I_{n}$ , where $I_{k}=\left\{ \begin{array} [c]{l}% 1\text{, if the ball }k\text{ is in box }1\\ 0\text{, otherwise}% \end{array} \right. \quad (k=1,2,\ldots,n)$ . Then $E\left[ B_{1}\right] =\sum_{k=1}^{n}E\left[ I_{k}\right] =\sum _{k=1}^{n}\underbrace{P\left( \text{the ball }k\text{ is in box }1\right) }_{p_{k}}=n\cdot\dfrac{1}{n}=1$ . The mutual independence of $\left\{ I_{k}\right\} $ assures that $V\left[ B_{1}\right] =\sum_{k=1}^{n}V\left[ I_{k}\right] =\sum_{k=1}^{n}\left( p_{k}-p_{k}^{2}\right) =n\left( \dfrac{1}{n}-\dfrac{1}{n^{2}}\right) =\dfrac{n-1}{n}$ . Concluding, \begin{align*} E\left[ C\right] & =nE\left[ C_{1}\right] =\dfrac{n}{2}\left( E\left[ \left( B_{1}-1\right) ^{2}+B_{1}-1\right] \right) \\ & =\dfrac{n}{2}\left( E\left[ \left( B_{1}-E\left[ B_{1}\right] \right) ^{2}\right] +E\left[ B_{1}\right] -1\right) \\ & =\dfrac{n}{2}\left(V\left[ B_{1}\right] +1-1\right)=\dfrac{n-1}{2}. \end{align*} .
Hello, my question is quite simple I would think, but I just can't seem to find an answer. I have the set $\{1,2,3,4,5,6,7,8,9,10\}$ and I would like to calculate how many unique sets of $6$ I can get from this set. In other words, for the number $1$ I would end up with $[1,2,3,4,5,6] [1,3,4,5,6,7] [1,4,5,6,7,8] [1,5,6,7,8,9] [1,6,7,8,9,10]$ I would move down the line with the number $2$ to compare to unique sets of $6$. Note: when moving to $2$ I would no longer count $[2,1,3,4,5,6]$, because it repeats my first case above. Is there a formula to figure this sort of thing out? Thanks in advance. When I work this out on paper I end up with 15 sets; here is how: for 1 [1,2,3,4,5,6] [1,3,4,5,6,7] [1,4,5,6,7,8] [1,5,6,7,8,9] [1,6,7,8,9,10] for 2 [2,3,4,5,6,7] [2,4,5,6,7,8] [2,5,6,7,8,9] [2,6,7,8,9,10] for 3 [3,4,5,6,7,8] [3,5,6,7,8,9] [3,6,7,8,9,10] for 4 [4,5,6,7,8,9] [4,6,7,8,9,10] for 5 [5,6,7,8,9,10] After that I can't make any more groups of $6$, thus I end up with $15$ sets.
Yes, there is; it is called the binomial coefficient, written $\binom{n}{k}$, read "$n$ choose $k$". The value is $$\binom{n}{k}=\frac{n!}{k!(n-k)!}.$$ So, in your case, you have $$\binom{10}{6}=\frac{10!}{6!4!}=210.$$ I hope you find this helpful!
Binomial coefficients count the number of distinct subsets of $k$ elements from a set containing $n$ elements. The notation for this is $\binom{n}{k}$, which is equal to $\frac{n!}{k!(n-k)!}$.
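Both answers are easy to confirm by brute-force enumeration in Python:

```python
from itertools import combinations
from math import comb

# all 6-element subsets of {1, ..., 10}
subsets = list(combinations(range(1, 11), 6))
assert len(subsets) == comb(10, 6) == 210
```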
Consider the function $f \in C_{st}$ given by $$ f(x) = x^2- \frac{\pi^2}{3} $$ for $x \in ]-\pi,\pi[$. Then I have to calculate the Fourier coefficient $c_n$, which I am struggling a little bit with. I know that $$ 2\pi c_n = \int_{-\pi}^\pi x^2e^{-inx} \, dx - \frac{\pi^2}{3} \int_{-\pi}^\pi e^{-inx} \, dx $$ For the case where $n \neq 0$ we have $$ \frac{\pi^2}{3} \int_{-\pi}^\pi e^{-inx} \, dx = 0 $$ Thus \begin{align*} 2 \pi c_n & = \int_{-\pi}^\pi x^2e^{-inx} dx = \frac{1}{-in} \left[x^2e^{-inx} \right]_{-\pi}^\pi - \frac{1}{-in} \int_{-\pi}^\pi 2xe^{-inx} dx \\ & = \frac{2}{in} \int_{-\pi}^\pi xe^{-inx} dx \\ & = \frac{2}{in} \left( \frac{1}{-in} \left[xe^{-inx} \right]_{-\pi}^\pi - \frac{1}{-in} \int_{-\pi}^\pi e^{-inx} dx \right) \\ & = \frac{2}{in} \left(-2\pi + \frac{1}{n^2} (e^{-in\pi} - e^{in\pi}) \right) \\ & = - \frac{4\pi}{in} \end{align*} which does not give the right answer. I can't see where I am making a mistake. Do you mind helping me? Thanks.
In the third line from the last, $$\frac{2}{in}\left[\frac{1}{-i n}(xe^{-inx})\right]^\pi_{-\pi} = \frac{2}{n^2}(\pi e^{-i\pi n} + \pi e^{i\pi n}) = \color{blue}{\frac{4\pi}{n^2}\cos(n\pi)} $$ and the next one $\left[\int_{-\pi}^\pi e^{-i n x}dx\right]$ obviously becomes zero.
The part with the constant is not hard. For the part with $x^2 e^{-inx}$, do some integration by parts: \begin{align} \int x^2 e^{-inx}\mathrm{d}x &= [x^2 \frac{e^{-inx}}{-in}] - \int 2x \frac{e^{-inx}}{-in}\mathrm{d}x \\ \end{align} Do it again: \begin{align} \int xe^{-inx}\mathrm{d}x &= [x \frac{e^{-inx}}{-in}] - \int \frac{e^{-inx}}{-in} \mathrm{d}x \end{align} I'll let you fill in the blanks with suitable values.
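For reference, the correct result here is $c_n = \frac{2(-1)^n}{n^2}$ for $n \neq 0$ (the standard Fourier series of $x^2$). A quick numeric check, as my own Python sketch (not from the thread), using a midpoint rule, which is very accurate for periodic integrands:

```python
import cmath
import math

def c(n, N=100_000):
    # midpoint rule for (1/2pi) * integral over [-pi, pi] of
    # (x^2 - pi^2/3) * exp(-i n x)
    dx = 2 * math.pi / N
    total = 0j
    for j in range(N):
        x = -math.pi + (j + 0.5) * dx
        total += (x * x - math.pi ** 2 / 3) * cmath.exp(-1j * n * x)
    return total * dx / (2 * math.pi)

vals = {n: c(n) for n in (1, 2, 3)}
for n, v in vals.items():
    print(n, v.real, 2 * (-1) ** n / n ** 2)  # numeric vs closed form
```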
My idea is to use disjoint events, calculating the probability of getting at least two heads for each number rolled. For example, if I roll a 3, I would calculate the probability with the expression $(\frac{1}{6}) (\frac{1}{2})^3 \binom{3}{2} + (\frac{1}{6}) (\frac{1}{2})^3\binom{3}{3}= \frac{1}{12}$ and then add up the probabilities of getting at least two heads for each roll; since the events are disjoint, these sum to $\frac{67}{128}$. Is this a valid solution? Is there a better approach to solving this problem?
It is valid. You find that P(at least 2 heads|die=1) = 0, P(at least 2 heads|die=2) = 1/4, P(at least 2 heads|die=3) = 1/2, P(at least 2 heads|die=4) = 11/16, P(at least 2 heads|die=5) = 13/16, and P(at least 2 heads|die=6) = 57/64. Then 1/6*(0+1/4+1/2+11/16+13/16+57/64) = 67/128. There is also a way to numerically approximate the answer: simulation. You can write code to run 10000 rolls of the die and estimate the probability of getting at least 2 heads. Then do this 100 times; each iteration gives an estimate of the probability. The mean of these 100 estimates came out to 0.523544. We can check that $\frac{67}{128}\approx 0.5234375$, which is very close.
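The simulation idea can be sketched in a few lines of Python (the seed and sample size here are arbitrary choices of mine, not from the answer):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def at_least_two_heads():
    n = random.randint(1, 6)  # the die roll
    heads = sum(random.random() < 0.5 for _ in range(n))  # n coin flips
    return heads >= 2

N = 200_000
estimate = sum(at_least_two_heads() for _ in range(N)) / N
print(estimate)  # should land near 67/128 = 0.5234375
```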
There is a 1/6 chance you will roll a 1. Then you CAN'T flip two heads. The probability is 0. There is a 1/6 chance you will roll a 2. The probability of flipping two heads in two flips is (1/2)(1/2) = 1/4. The probability is (1/6)(1/4) = 1/24. There is a 1/6 chance you will roll a 3. The probability of flipping two heads and then a tail, HHT, is (1/2)(1/2)(1/2) = 1/8. But the probability of HTH or THH is the same. The probability of exactly two heads in three flips is 3/8. The probability is (1/6)(3/8) = 1/16. There is a 1/6 chance you will roll a 4. The probability of flipping two heads and then two tails, HHTT, is (1/2)(1/2)(1/2)(1/2) = 1/16. There are $\frac{4!}{2!2!}= 6$ permutations, HHTT, HTHT, HTTH, THHT, THTH, and TTHH, all having the same probability, 1/16. The probability of exactly two heads in four flips is 6/16 = 3/8. The probability is (1/6)(3/8) = 1/16. There is a 1/6 chance you will roll a 5. The probability of flipping two heads and then three tails is $(1/2)^5= 1/32$. There are $\frac{5!}{2!3!}= 10$ permutations (I won't write them all), so the probability of two heads and three tails in any order is $\frac{10}{32}= \frac{5}{16}$. The probability is (1/6)(5/16) = 5/96. There is a 1/6 chance you will roll a 6. The probability of flipping two heads then four tails is $(1/2)^6= 1/64$. There are $\frac{6!}{2!4!}= 15$ permutations, so the probability of two heads and four tails in any order is $\frac{15}{64}$. The probability is (1/6)(15/64) = 15/384 = 5/128. If you roll a die and then flip a coin that number of times, the probability you get exactly two heads is 1/24 + 1/16 + 1/16 + 5/96 + 5/128 = 16/384 + 24/384 + 24/384 + 20/384 + 15/384 = 99/384 = 33/128. (Note that this is the probability of exactly two heads; the question asks for at least two, which gives 67/128.)
Calculate, without techniques involving contour integration, $$a) \ \int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^2-(\operatorname{Li}_2(e^{i x}))^2}{e^{-i x}-e^{i x}}\textrm{d}x;$$ $$b) \ \int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^3-(\operatorname{Li}_2(e^{i x}))^3}{e^{-i x}-e^{i x}}\textrm{d}x.$$ I'm working now on such a method. What would your real method inspiration be here? Supplementary question: Calculate $$ \int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^4-(\operatorname{Li}_2(e^{i x}))^4}{e^{-i x}-e^{i x}}\textrm{d}x.$$ Moreover, may we hope for a generalization of the type below? $$ I(n)=\int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^n-(\operatorname{Li}_2(e^{i x}))^n}{e^{-i x}-e^{i x}}\textrm{d}x.$$ Preparing another two generalizations: $$ i) \ J(n,m)=\int_0^{2\pi} \frac{(\operatorname{Li}_m(e^{-i x}))^n-(\operatorname{Li}_m(e^{i x}))^n}{e^{-i x}-e^{i x}}\textrm{d}x;$$ $$ ii) \ K(n)=\int_0^{2\pi} \frac{\operatorname{Li}_2(e^{-i x})\operatorname{Li}_3(e^{-i x})\cdots \operatorname{Li}_n(e^{-i x})-\operatorname{Li}_2(e^{i x})\operatorname{Li}_3(e^{i x})\cdots \operatorname{Li}_n(e^{i x})}{e^{-i x}-e^{i x}}\textrm{d}x.$$
Getting an idea by going through an example Consider the integral $$ I=\int_0^{2\pi}dx\frac{\text{Li}^2_2(e^{i x})-\text{Li}^2_2(e^{-i x})}{2 i \sin(x)} $$ Using the series representation of the dilogarithm this can be rewritten as $$ I=\int_0^{2\pi}dx\sum_{n,m>0}\frac{1}{n^2 m^2}\frac{\sin(m+n)x}{\sin(x)} $$ Exchanging summation and integration and using the simple fact that $\int_0^{2\pi}dx \frac{\sin(lx)}{\sin(x)}=2 \pi$ for $l \in 2\mathbb{N}+1$ we get $$ I=2 \pi s^{(2)}_2 =2 \pi\sum_{\substack{n,m>0 \\n+m=odd}}\frac{1}{n^2 m^2} $$ The closed form solution to $s^{(2)}_2$ is pretty simple to obtain. Observe that to fulfill the condition $n+m=odd$ either $n$ has to be odd and $m$ even, or vice versa. This means we have $2$ possible combinations of even and odd which yield a contribution to our sum. $$ s^{(2)}_2 =2\sum_{\substack{n>0,m\geq 0}}\frac{1}{(2n)^2 (2m+1)^2}=2\frac{\zeta(2)}{4}\frac{3\zeta(2)}{4}=\frac{3}{8}\zeta^2(2) $$ The strategy for providing a closed form solution in general will follow the same arguments, except that we additionally need a combinatoric lemma proven in the appendix. The General Case We now want to investigate $$ I^{(r)}_n=\int_0^{2\pi}dx\frac{\text{Li}^n_r(e^{i x})-\text{Li}^n_r(e^{-i x})}{2 i \sin(x)} $$ Going through the same procedure as in the motivating example we may show that $$ I^{(r)}_n=2 \pi s^{(r)}_n $$ This means we are interested in a family of Euler-like sums, since $$ s^{(r)}_n=\sum_{\substack{ k_i \geq 1, \\ \sum_{n\geq i \geq 1} k_i=odd }}\frac{1}{\prod_{ n \geq i\geq1}{k^r_i}} $$ We now have to take care that we account for all possible partitions of the integers such that the constraint $\sum_{n\geq i \geq 1} k_i=odd$ is fulfilled. As shown in the appendix, we have to choose $2l-1$ numbers to be odd and $n-2l+1$ to be even. Each of these partitions contains $N_{l,n}=\binom{n}{2l-1}$ equivalent combinations. 
This means that $$ s^{(r)}_n=\sum_{l_{max}(n)\geq l\geq1}N_{l,n}\sum_{k_i\geq 1, K_i \geq 0} \prod_{2l-1 \geq i\geq1}\frac{1}{{(2K_i+1)^r}} \prod_{n- 2l+1 \geq i\geq1}\frac{1}{{(2k_i)^r}} $$ Using now the well known identity $\sum_{m\geq 0}(2m+1)^{-r}=(1-2^{-r})\zeta(r)$ we can carry out the infinite summations: $$ s^{(r)}_n=\sum_{l_{max}(n)\geq l\geq1}\frac{N_{l,n}}{2^{r(n-2l+1)}}(1-\frac{1}{2^{r}})^{2l-1}\zeta^n(r)=\sum_{l_{max}(n)\geq l\geq1}c_{l,n}\zeta(r)^n $$ Furthermore the sum over coefficients can be done in closed form by virtue of the binomial identity: summing $\binom{n}{2l-1}a^{2l-1}b^{n-2l+1}$ over the odd lower indices gives $\frac{1}{2}\left((a+b)^n-(b-a)^n\right)$, here with $a=1-2^{-r}$ and $b=2^{-r}$, so that $$ s^{(r)}_n=C_{n,r}\,\zeta(r)^n\,\,,\,\,C_{n,r}=\frac{1}{2}\left(1-\left(2^{1-r}-1\right)^n\right) $$ for both parities of $n$. Note that we get the sums with the constraint $\sum_{n\geq i\geq1} k_i=even$ for free: $$ \bar{s}_n^{(r)}=\left(1-C_{n,r}\right)\zeta(r)^n $$ It is also interesting to note that $\lim_{n\rightarrow\infty}\frac{s^{(r)}_n}{\zeta(r)^n}=\frac{1}{2}$, which can be traced back to the fact that for very large $n$ we have to choose roughly $n/2$ odd factors in $\sum_{n\geq i\geq1} k_i$, due to the concentration of $N_{l,n}$ around $n/2$. Last but not least, a few examples: \begin{align*} s^{(2)}_2=\frac{3}{8}\zeta^2(2)\,\, ,\,\,\bar{s}^{(2)}_2=\frac{5}{8}\zeta^2(2) \\ s^{(3)}_3=\frac{91}{128}\zeta^3(3)\,\, ,\,\,s^{(3)}_5=\frac{1267}{2048}\zeta^5(3), \end{align*} Appendix: A small detour to combinatorics Consider the sum of integers $$ c_m=n_1+n_2+...+n_m $$ How can we partition $c_m$ into odd and even elements, such that $c_m$ is odd? Since the odd and even numbers furnish a representation of the group $\mathbb{Z}_2$ it follows trivially that we always need an odd number $2l-1$ of the $n_m$'s to be odd. 
For any fixed $l$ we then have $$ N_{l,m}=\binom{m}{2l-1}\,\, ,\,\, l \in \begin{cases} \{1,m/2\} \,\, \text{if} \,\,m \,\, \text{even}\\ \{1,\lceil m/2 \rceil\} \,\, \text{if} \,\,m \,\, \text{odd}\\ \end{cases} $$ equivalent admissible partitions of $c_m$.
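As a quick numerical sanity check of the first example, $s^{(2)}_2=\frac{3}{8}\zeta^2(2)$, here is a Python sketch (the truncation point is an arbitrary choice); it uses the factorization of the constrained double sum into an even-index and an odd-index zeta sum:

```python
import math

zeta2 = math.pi ** 2 / 6

N = 200_000  # truncation point; tails are O(1/N)
even_part = sum(1.0 / (2 * n) ** 2 for n in range(1, N))      # -> zeta(2)/4
odd_part = sum(1.0 / (2 * m + 1) ** 2 for m in range(N))      # -> 3*zeta(2)/4

s22 = 2 * even_part * odd_part  # the sum restricted to n + m odd
print(s22, 3 / 8 * zeta2 ** 2)  # the two values should agree closely
```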
$ \large \text{ Hooray!!!}$ The closed-form of the integral $a)$ is impressive. According to my calculations, $$ \int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^2-(\operatorname{Li}_2(e^{i x}))^2}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{\pi^5}{48}.$$ Including also the trivial case, $n=1$, $$ \int_0^{2\pi} \frac{\operatorname{Li}_2(e^{-i x})-\operatorname{Li}_2(e^{i x})}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{\pi^3}{4}.$$ $ \large \text{ Second Hooray!!!}$ $$ \int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^3-(\operatorname{Li}_2(e^{i x}))^3}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{\pi^7}{192}.$$ $ \large \text{Third Hooray!!!}$ I think I have found a first generalization! $$ I(n)=\int_0^{2\pi} \frac{(\operatorname{Li}_2(e^{-i x}))^n-(\operatorname{Li}_2(e^{i x}))^n}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{\pi^{2n+1}}{6^n}\left(1-\left(-\frac{1}{2}\right)^n\right).$$ $ \large \text{Fourth Hooray!!!}$ Guess what?! I'm also done with the generalization $J(n,m)$ $$\ J(n,m)=\int_0^{2\pi} \frac{(\operatorname{Li}_m(e^{-i x}))^n-(\operatorname{Li}_m(e^{i x}))^n}{e^{-i x}-e^{i x}}\textrm{d}x=\pi(\zeta(m)^n-((2^{1-m}-1)\zeta(m))^n).$$ $ \large \text{Fifth Hooray!!!}$ I computed $2$ cases of the generalization in $K(n)$ and I approach the solution of the generalization. So, $$ \int_0^{2\pi} \frac{\operatorname{Li}_2(e^{-i x})\operatorname{Li}_3(e^{-i x})-\operatorname{Li}_2(e^{i x})\operatorname{Li}_3(e^{i x})}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{5}{48}\pi^3\zeta(3);$$ $$ \int_0^{2\pi} \frac{\operatorname{Li}_2(e^{-i x})\operatorname{Li}_3(e^{-i x})\operatorname{Li}_4(e^{-i x})-\operatorname{Li}_2(e^{i x})\operatorname{Li}_3(e^{i x})\operatorname{Li}_4(e^{i x})}{e^{-i x}-e^{i x}}\textrm{d}x=\frac{17}{6912}\pi^7 \zeta(3).$$ $ \large \text{Sixth Hooray!!!}$ Looks like I have been lucky today! 
Let me put the last generalization I just proved in a nice form $$K(n)=\int_0^{2\pi} \frac{\operatorname{Li}_2(e^{-i x})\operatorname{Li}_3(e^{-i x})\cdots \operatorname{Li}_n(e^{-i x})-\operatorname{Li}_2(e^{i x})\operatorname{Li}_3(e^{i x})\cdots \operatorname{Li}_n(e^{i x})}{e^{-i x}-e^{i x}}\textrm{d}x$$ $$=\pi \left(\zeta(2)\zeta(3)\cdots \zeta(n)+(-1)^{n} \eta(2)\eta(3)\cdots\eta(n)\right).$$ (The sign $(-1)^n$ matches the two computed cases: $n=3$ gives $\pi(\zeta(2)\zeta(3)-\eta(2)\eta(3))=\frac{5}{48}\pi^3\zeta(3)$ and $n=4$ gives the plus sign.) Extra information: https://en.wikipedia.org/wiki/Riemann_zeta_function https://en.wikipedia.org/wiki/Dirichlet_eta_function https://en.wikipedia.org/wiki/Polylogarithm
Problem: An experiment consists of removing the top card from a well shuffled pack of 52 playing cards. Calculate the probability that (a) the first card is an ace, (b) the next card is an ace. Attempt: Part (a) is quite simple: there are only 4 aces, so the probability that I get an ace on the first attempt is $4/52=1/13$. For part (b), we can see that if the first one is not an ace, then there are 48 choices for the 1st take, and 4 ace choices for the 2nd take. So the probability is $$ \frac{48 \times 4}{52 \times 51} $$ But the answer key for (b) says the probability is $1/13$. If we look at it from a different point of view, say a stack of 52 cards after a shuffle, then the probability that the card in the 2nd place from the top is an ace is $1/13$. Is my analysis accurate or not? Thanks.
Your reasoning is okay, but the first card may be an ace, or it may not. The probability that the first is an ace and so is the second is $\tfrac {4}{52}\tfrac{3}{51}$. The probability that the first is something else and the second is an ace is $\tfrac {48}{52}\tfrac{4}{51}$. You need to add them together (re: Law of Total Probability) to get the probability that the second card is an ace whatever the first may be. And since $3+48$ conveniently equals $51$... the result is: $$\dfrac 1{13}$$ As anticipated. Hey, now, what is the probability that the fifteenth card down the deck is an ace?
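The total-probability sum can be verified with exact rational arithmetic (a small Python sketch):

```python
from fractions import Fraction as F

# ace-then-ace plus non-ace-then-ace
p = F(4, 52) * F(3, 51) + F(48, 52) * F(4, 51)
print(p)  # 1/13
```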
Hint: If cards are randomly placed in a row and have received numbers $1,2,\dots,52$ then what is the probability that e.g. the card with number $37$ is an ace? Are there any reasons to think that it will differ from the probability that the card with number e.g. $13$ (or $1$, or $2$ if you like) will be an ace?
I know the homology group of Real Projective plane $\mathbb{RP}^2$ $H_i(\mathbb{RP}^2) = 0$ for $i>2$, $\mathbb{Z}$ for $i=0$ , $\mathbb{Z}/2\mathbb{Z}$ for $i=1$ (non-reduced case). Proving when $i \neq 2$ is easy but $i=2$ case is slightly hard for me. $\mathbb{RP}^2$ has CW complex structure with one of each $0,1,2$ cells so this takes care of $i>2$ case and $\mathbb{RP}^2$ is connected so it takes care of $i=0$ case and finally I know the fundamental group of real projective plane and I know the relation between first homology group and fundamental group so that part is done too. I also understand that we can use simplicial homology tool to calculate it as well as using the degree formula to find out the boundary map for CW complex. But is there any other way (for instance using Mayer-Vietoris sequence or directly working out the boundary map $\delta_2$ explicitly in CW complex case) to show $H_2(\mathbb{RP}^2)=0$?
You'll want to use the fact that $\mathbb{R}P^n$ can be written as $\mathbb{R}P^{n-1}\cup_f D^n$ where $D^n$ is the $n$-dimensional ball, and $f\colon S^{n-1}\to\mathbb{R}P^{n-1}$ is a 2-fold covering map, so we are gluing the $n$-ball along its boundary to $\mathbb{R}P^{n-1}$ via this map. You can then use Mayer-Vietoris and induced maps to explicitly work out the connecting map. In your case, you have $\mathbb{R}P^2=M\cup_f D^2$, where $M$ is the Mobius strip and $f\colon S^1\to M$ is the doubling map up to homotopy, or just the inclusion of the boundary into the Mobius strip. To be more explicit, via Mayer-Vietoris, we get a long exact sequence $$\cdots\to H_2(M)\oplus H_2(D^2)\to H_2(\mathbb{R}P^2)\to H_1(S^1)\to H_1(M)\oplus H_1(D^2)\to\cdots$$ which, using the fact that $H_2(M)=H_2(D^2)=H_1(D^2)=0$ and $H_1(S^1) \cong H_1(M) \cong\mathbb{Z}$, reduces to the exact sequence $$\cdots\to 0\to H_2(\mathbb{R}P^2) \stackrel{g}{\to} \mathbb{Z} \stackrel{\times 2}{\to} \mathbb{Z} \to\cdots$$ where we get that $\times 2$ map in the above sequence from the fact that the inclusion of the intersection of the two spaces (homotopy equivalent to a circle) into the Mobius strip is (up to homotopy) the degree-$2$ covering map, which induces multiplication by $2$ in first homology. By exactness, the image of $g$ must be $0\subset\mathbb{Z}$ as the doubling map is injective, but $g$ must itself be injective by exactness because the map $0\to H_2(\mathbb{R}P^2)$ has trivial image. The only way both of these conditions on $g$ can be satisfied is if $H_2(\mathbb{R}P^2)$ is trivial.
There's a way to do this only involving diagram chasing, using the pushout square $$ \newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex} \newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex} \newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}} \begin{array}{c} S^1 & \ra{\sigma_2(z)=z^2} & S^1 \\ \da{inc} & & \da{J}\\ D^2 & \ras{\hspace 0.4cm F \hspace 0.4cm} & \mathbb{R}P^2\\ \end{array} $$ and that $H_k(D^2,\,S^1) \cong H_k(\mathbb{R}P^2,\,J(S^1))$, because $(D^2, S^1)$ is a good pair: Take the LESes for both pairs $(\mathbb{R}P^2,\,J(S^1)),\, (D^2,\,S^1)$ and 'connect' them using the characteristic map $F$. Diagram chasing and some basic properties of singular homology should then do the trick.
I used the equations found on Paul Bourke's "Circles and spheres" page to calculate the intersection points of two circles: $P_3$ is what I'm trying to get, except now I want to do the same with two ellipses. Calculating $h$ is the tricky bit. With regular circles, it can be done with the Pythagorean Theorem $a^2 + b^2 = c^2$, since we already know $r_0$ (the radius): $$h = \sqrt{r_0^2 - a^2}.$$ With ellipses it seems much trickier. I don't know how to calculate $h$. There is not a single radius anymore: there are $\operatorname{radiusX}$ and $\operatorname{radiusY}$. Given $\operatorname{radiusX}$, $\operatorname{radiusY}$, and the center points $(x,y)$ of each ellipse, how do I find the two intersecting points? (Note: the ellipses are guaranteed to have two intersecting points in my specific application.)
Two ellipses can intersect in up to $4$ points. The system of two equations in two variables is equivalent to solving one polynomial of degree $4$ in one variable. In your case it is known that two of the solutions of this polynomial are real, but this does not simplify anything algebraically except for a few cases where there is some special geometric relationship between the ellipses. An exact solution in terms of the equations defining the ellipses will have the same complicated zoo of nested radicals as the formula for solving a general quartic equation. Unless for some unusual reason you really need the infinite-precision algebraic answer, it is simpler to solve numerically for the two intersection points.
Let your ellipses have their foci on the X-axis (note this only covers the special case where both ellipses are axis-aligned and share the same center). Then calculate the points of intersection of both ellipses by solving the system: x^2/a1 + y^2/b1 = 1 and x^2/a2 + y^2/b2 = 1 $h$ will be the $Y$ and $-Y$ of these two solution points.
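For ellipses with different centers, a numeric approach is often easier than the exact quartic. Here is one possible sketch (my own, not from the answers): parametrize the first ellipse and look for sign changes of the second ellipse's implicit equation along it. The sample ellipses are made up, given as `(cx, cy, rx, ry)`:

```python
import math

# two hypothetical axis-aligned ellipses: center (cx, cy), semi-axes (rx, ry)
E1 = (0.0, 0.0, 2.0, 1.0)
E2 = (1.0, 0.0, 2.0, 1.0)

def on_e1(t):
    cx, cy, rx, ry = E1
    return cx + rx * math.cos(t), cy + ry * math.sin(t)

def f(t):
    # negative inside E2, positive outside; roots are intersections
    x, y = on_e1(t)
    cx, cy, rx, ry = E2
    return ((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 - 1.0

def bisect(a, b, tol=1e-12):
    # refine a root of f bracketed by [a, b]
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# scan one full period for sign changes, then refine each bracket
N = 1000
ts = [2 * math.pi * i / N for i in range(N + 1)]
roots = [bisect(ts[i], ts[i + 1]) for i in range(N)
         if f(ts[i]) * f(ts[i + 1]) < 0]
points = [on_e1(t) for t in roots]
print(points)
```

For these two ellipses the intersections are at $x=\tfrac12$, $y=\pm\sqrt{15}/4$, which the scan recovers.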
Can anyone tell me, at a high level (I do not know much measure theory), about Lebesgue integration and why a measure is needed for Lebesgue integration? How is the measure used to calculate the horizontal strips mapped for a particular range?
Imagine a cashier who is in-charge of counting coins at a bank and thereby report the total money collected everyday to the bank authorities. Also, let us assume that the coins can only be of denomination $1$, $2$, $5$ and $10$. Now say he receives the coins in the following order: $$5,2,1,2,2,1,5,10,1,10,10,5,2,1,2,5,10,2,1,1,1$$ Now he has two different ways to count. $1$. The first way is to count the coins as and when they come, i.e., he does $$5+2+1+2+2+1+5+10+1+10+10+5+2+1+2+5+10+2+1+1+1$$ which gives $79$. $2$. The second way is as follows. He has $4$ boxes, one box for each denomination, i.e., the first box is for coins with denomination $1$, the second box is for coins with denomination $2$, the third box is for coins with denomination $5$ and the last box is for coins with denomination $10$. He drops the coins in the corresponding box as and when it comes. At the end of the day, he counts the coins in each box, i.e., he counts that there are $7$ coins with denomination $1$, $6$ coins with denomination $2$, $4$ coins with denomination $5$ and $4$ coins with denomination $10$. He hence finally reports the total money as $$7 \times 1 + 6 \times 2 + 4 \times 5 + 4 \times 10 = 79$$ $\color{red}{\text{The first method is the Riemann way of summing}}$ the total money, while $\color{blue}{\text{the second method is the Lebesgue way of summing}}$ the same money. In the second way, note that there are $4$ sets, i.e., the boxes for denominations $1$, $2$, $5$ and $10$. The measure of each of these sets/boxes is nothing but the denomination of each of these boxes, i.e., the measure of each of these sets is $1$, $2$, $5$ and $10$ respectively and the functional value on each of these sets is nothing but the number of coins in that particular denomination.
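The cashier's two ways of counting can be mimicked directly in a few lines of Python (a sketch of the analogy, where the denomination plays the role of the measure):

```python
from collections import Counter

coins = [5, 2, 1, 2, 2, 1, 5, 10, 1, 10, 10, 5, 2, 1, 2, 5, 10, 2, 1, 1, 1]

# "Riemann" way: add the values in the order they arrive
riemann_total = sum(coins)

# "Lebesgue" way: first group coins by denomination (the boxes),
# then weight each box by its denomination
boxes = Counter(coins)
lebesgue_total = sum(value * count for value, count in boxes.items())

print(riemann_total, lebesgue_total)  # 79 79
```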
This is more about Lebesgue integration in general, and not the the horizontal strip business. I imagine it something like this: the Riemann integral is only able to approximate functions by rectangles. Rectangles basically only use a single set that we "know" the length of: the interval. It's easy to compute the length of an interval, and so if $\chi_{[a,b]}$ is the function that is 1 on the interval $[a,b]$ and 0 off of it, we can easily compute the integral $$\int c \chi_{[a,b]} = c(b-a).$$ To compute the integrals of other functions, we approximate them by functions like these. This gives the Riemann integral. But the Riemann integral has a few issues. It doesn't behave well with limits and there are lots of functions that "ought" to be integrable but aren't. So what we do is replace boring intervals $[a,b]$ as our "basic integration set" with a much larger class: measurable sets. Think of these sets as being a very, very large collection of sets that we can find the length of. A measure is an assignment of a number to each of these sets in a way that is compatible with our notion of area. Let's call $\mu(A)$ the measure of a set, given by the measure $\mu$. This is basically the area of $A$, or perhaps the length of $A$. Now, we think about our function $\chi_A$ again, that takes the value of $1$ on $A$ and $0$ off $A$. If our integral does anything like what is should, then we had better have $$\int \chi_A d\mu = \mu(A).$$ How do we compute the integrals over other functions? We essentially approximate them by finite sums of functions like the ones above and that tells us what the area should be. There are, of course, lots of details not mentioned here, but in short: the Lebesgue integral allows more flexibility in approximation by letting us approximate by a much richer collection of sets.
Given a number of items n, what is the most efficient way to calculate the number of pages required to display n if we are going to display 20 items on the first page, and 10 items on each subsequent page?
$n$ items distributed over $p$ pages would look like this: $$ 20 + (p-1)10 = n $$ So you have 20 items on the first page and 10 items on each of the remaining $(p-1)$ pages, summing up to a total of $n$ items. Therefore $$ p=\frac{n-20}{10} + 1$$ Edit: As you most likely want your result in "whole" pages, you need to round the resulting $p$ up to the next integer. If you want to count "full" pages only (not counting the one that has fewer than 10 items), you'll need to round down. If $n<20$, then you only need one page, so you need to be careful with this formula as it might yield negative values for low $n$.
Divide by 10, round up, and then subtract 1 if the answer is not 0 or 1.
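Both answers, including the small-$n$ edge cases, can be wrapped up in one function (a Python sketch; `pages` is just a hypothetical helper name):

```python
import math

def pages(n):
    # 20 items fit on the first page, 10 on every later page
    if n <= 0:
        return 0
    if n <= 20:
        return 1
    return 1 + math.ceil((n - 20) / 10)

for n in (0, 5, 20, 21, 30, 31, 95):
    print(n, pages(n))
```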
I would like to expand the inverse function of $$g(x) := x^4+x $$ in a Taylor series at the point $x = 0$. I calculated the first and second derivatives at $x = 0$ with the rule for the derivative of an inverse function. Theoretically, this process could be continued for higher derivatives. But I would like an easier method to calculate higher derivatives of an inverse function in order to compute the Taylor series. Any ideas?
In general you may use series reversion to get the Taylor expansion of the inverse of a function given by a Taylor series. See for example at Mathworld or in A&S page 16. But for functions like $\;f(x):=x+x^n\;$ with $n>1$ integer we may obtain the expansion of the inverse in explicit form as shown by Tom Copeland in his paper "Discriminating Deltas, Depressed Equations, and Generalized Catalan Numbers" (just before chapter $5$). For $n=4$ and using the generating function of OEIS A002293 we 'guess' directly the solution: $$f^{-1}(x)= \sum_{n=0}^\infty \frac{(-1)^n\,x^{3n+1}}{3n+1}\binom{4n}{n}$$ (the first terms are $x-x^4+4x^7-22x^{10}+\cdots$), and using Wolfram Alpha or by studying the asymptotics of $\binom{4n}{n}\,x^{3n}$ with Stirling's formula we obtain the equivalence: $$\sqrt{2\,\pi\, n\, 3/4}\;\binom{4n}{n}x^{3n}\sim \frac{(4n/e)^{4n}\,x^{3n}}{(n/e)^{n}(3n/e)^{3n}}\sim \frac{4^{4n}\,x^{3n}}{3^{3n}}\sim \left(\frac{4^4\,x^{3}}{3^3}\right)^n$$ showing that $\,|x|^3$ shouldn't be larger than $\dfrac{3^3}{4^4}=\dfrac{27}{256}$, as quickly found by Mark McClure.
This is an answer to Peter's question in the comments, namely, how did I deduce the radius of convergence to be $\sqrt[3]{27/256}$? I have no proof as the technique is experimental, relying completely on Mathematica. But, this is how I did it. First, we invert the series using Mathematica's InverseSeries command. invSeries = InverseSeries[Series[x + x^4, {x, 0, 30}]] Next, we use FindSequenceFunction to generate a closed form expression for the non-zero coefficients. a[n_] = FullSimplify[FindSequenceFunction[ DeleteCases[CoefficientList[Normal[invSeries], x], 0], n]] Clearly, this step has some issues. It doesn't always work and, even when it appears to work, there's no guarantee that it works for all $n$. Of course, we can check that the formula works for fairly large $n$, but there's still no proof here. Furthermore, this expression is clearly not as nice as the one found by Raymond! Now that we have a candidate for the $a_n$s, though, we can easily use a ratio test to find the radius of convergence. Limit[a[n]/a[n + 1], n -> Infinity] (* Out: -27/256 *) Taking the absolute value and accounting for the fact that only every 3rd term is non-zero, we get $\sqrt[3]{27/256}$.
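The reverted series (with the $(-1)^n$ sign convention) can be sanity-checked numerically inside the radius of convergence: summing the series and feeding the result back into $g$ should return the input. A short Python sketch:

```python
from math import comb

def g(x):
    return x ** 4 + x

def g_inv(x, terms=40):
    # claimed reverted series: sum of (-1)^n C(4n, n) x^(3n+1) / (3n+1)
    return sum((-1) ** n * comb(4 * n, n) * x ** (3 * n + 1) / (3 * n + 1)
               for n in range(terms))

for x in (0.05, 0.1, 0.2):
    print(x, g(g_inv(x)))  # each output should be very close to x
```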
When discussing with my son a few of the many methods to calculate the digits of $\pi$ (15 yo school level), I realized that the methods I know more or less (geometric approximation, Monte Carlo and basic series) are all convergent but none of them explicitly states that the $n$ -th digit calculated at some point is indeed a true digit (that it will not change in further calculations). To take an example, the Gregory–Leibniz series gives us, for each step: $$ \begin{align} \frac{4}{1} & = 4\\ \frac{4}{1}-\frac{4}{3} & = 2.666666667...\\ \frac{4}{1}-\frac{4}{3}+\frac{4}{5} & = 3.466666667...\\ \frac{4}{1}-\frac{4}{3}+\frac{4}{5}-\frac{4}{7} & = 2.895238095... \end{align} $$ The integer part has changed four times in four steps. Why would we know that $3$ is the correct first digit? Similarly in Monte Carlo: the larger the sample, the better the result but do we mathematically know that "now that we tried [that many times] , we are mathematically sure that $\pi$ starts with $3$ ". In other words: does each of the techniques to calculate $\pi$ (or at least the major ones) have a proof that a given digit is now correct? if not, what are examples of the ones which do and do not have this proof? Note: The great answers so far (thank you!) mention a proof on a specific technique, and/or a proof that a specific digit is indeed the correct one. I was more interested to understand if this applies to all of the (major) techniques (= whether they all certify that this digit is guaranteed correct) . Or that we have some which do (the ones in the two first answers for instance) and others do not (the further we go, the more precise the number but we do not know if something will not jump in at some step and change a previously stable digit. When typing this in and thinking on the fly, I wonder if this would not be a very bad technique in itself, due to that lack of stability)
I think the general answer you're looking for is: Yes, proving that a method for calculating $\pi$ works requires also describing (and proving) a rule for when you can be sure of a digit you have produced. If the method is based on "sum such-and-such series", this means that one needs to provide an error bound for the series. Before you have that, what you're looking at is not yet a "method for calculating $\pi$ ". So the answer to your first question is "Yes; because otherwise they wouldn't count as techniques for calculating $\pi$ at all". Sometimes the error bound can be left implicit because the reader is supposed to know some general theorems that leads to an obvious error bound. For example, the Leibniz series you're using is an absolutely decreasing alternating series , and therefore we can avail ourselves of a general theorem saying that the limit of such a series is always strictly between the last two partial sums. Thus, if you get two approximations in succession that start with the same $n$ digits, you can trust those digits. (The Leibniz series is of course a pretty horrible way to calculate $\pi$ -- for example you'll need at least two million terms before you have any hope of the first six digits after the point stabilizing, and the number of terms needed increases exponentially when you want more digits). In other cases where an error bound is not as easy to see, one may need to resort to ad-hoc cleverness to find and prove such a bound -- and then this cleverness is part of the method .
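For the Leibniz series specifically, the alternating-series bound can be turned into a digit-certification routine: the limit lies strictly between consecutive partial sums, so any decimal digits the two partial sums share are certified. A sketch using exact rationals (the number of terms is an arbitrary choice):

```python
from fractions import Fraction

def leibniz_bracket(n):
    """Partial sums S_n and S_(n+1) of 4 - 4/3 + 4/5 - ...;
    pi lies strictly between them."""
    s = Fraction(0)
    for k in range(n):
        s += Fraction(4 * (-1) ** k, 2 * k + 1)
    return s, s + Fraction(4 * (-1) ** n, 2 * n + 1)

a, b = leibniz_bracket(1000)
lo, hi = min(a, b), max(a, b)

# digits shared by both bounds are certified digits of pi
print(float(lo), float(hi))
print(int(lo * 100), int(hi * 100))  # both 314, so "3.14" is certified
```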
The question was:         Why would we know that 3 is the correct first digit? Following Archimedes, the regular hexagon inscribed into the unit circle has circumference $\ =\ 6\cdot 1\ = 6,\ $ hence $$ 3\ <\ \pi $$ Next, the hexagon circumscribed around the unit circle has circumference $\ =\ 6\cdot\frac 2{\sqrt 3},\ $ hence $$ \pi\ <\ \frac 6{\sqrt 3} $$ i.e. $$ \pi^2\ <\ 12\ < 4^2 $$ Thus, $$ 3\ <\ \pi\ <\ 4 $$ Great!
I am having problems with calculating $$x \mod m$$ with $$x = 2^{\displaystyle2^{100,000,000}},\qquad m = 1,500,000,000$$ I already found posts like this one: https://stackoverflow.com/questions/2177781/how-to-calculate-modulus-of-large-numbers But can someone explain to me how to use this in my case, please?
Using the Chinese Remainder Theorem in addition to other bits. Observe that your modulus factors like $m=2^8\cdot3\cdot 5^9$. Your number is very obviously divisible by $2^8$, so we can forget about that factor until the end. Modulo $3$? The number $2^{2^{\text{ZILLION}}}$ is clearly a power of $4$, so its remainder modulo $3$ is equal to $1$. Modulo $5^9$? Because $2$ is coprime with $5^9$ we can use the Euler totient function $\phi$. We have $\phi(5^9)=(5-1)5^8=4\cdot5^8.$ Call this number $K$. We know from elementary number theory that $2^K\equiv1\pmod{5^9}$. Consequently also $2^N\equiv 2^n\pmod{5^9}$ whenever $N\equiv n\pmod{K}$. Therefore we want to calculate the remainder of $M=2^{100000000}$ modulo $K$. Let's repeat the steps. $M$ is clearly divisible by $4$, so we concentrate on the factor $5^8$. Euler's totient gives $\phi(5^8)=4\cdot5^7$. Clearly $100000000=10^8=2^8\cdot5^8$ is divisible by $4\cdot5^7$. This implies that $M\equiv 2^0=1\pmod{5^8}$. Now we begin to use the Chinese Remainder Theorem. We know that $M\equiv 0\pmod 4$ and $M\equiv 1\pmod {5^8}$. The CRT says that these congruences uniquely determine $M$ modulo $K=4\cdot5^8$. As $5^8\equiv1\pmod4$, we see that $3\cdot5^8+1$ is divisible by four. As it is clearly also congruent to $1\pmod{5^8}$ we can conclude that $M\equiv 3\cdot5^8+1=1171876\pmod K$. This, in turn, means that $$ 2^M\equiv 2^{1171876}\pmod{5^9}. $$ This exponent, finally, is small enough for square-and-multiply. I cheat and use Mathematica instead. The answer is that $$ 2^{1171876}\equiv1392761\pmod{5^9}. $$ Now we know everything we need about the remainders: $$ \begin{aligned} 2^M&\equiv0\pmod{2^8},\\ 2^M&\equiv1\pmod3,\\ 2^M&\equiv 1392761\pmod{5^9}. \end{aligned} $$ All that remains is to put these bits together by yet another application of CRT. Have you implemented those routines? Edit: I did this run of CRT with Mathematica. 
Barring an earlier error (in the above calculations) the answer is that $$ X=2^{2^{100000000}}\equiv 741627136 \pmod{1500000000}. $$ The observations leading to this are: The integer $256$ has remainder $0\pmod {256}$ and $256\equiv1\pmod3$. Therefore CRT says that $X\equiv256\pmod{3\cdot256}$. Here $3\cdot256=768$. Extended Euclidean algorithm tells us that $(-928243)\cdot768+365\cdot5^9=1$. Consequently the integer $A=365\cdot5^9$ has remainder $0$ modulo $5^9$ and remainder $1$ modulo $768$. Similarly the integer $B=(-928243)\cdot768$ is divisible by $768$ and has remainder $1$ modulo $5^9$. Therefore $$X\equiv 256\,A+1392761\,B\pmod{1500000000}.$$ Calculating the remainder of that `small' number gives the answer.
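The whole chain can be reproduced with Python's built-in modular arithmetic (a sketch; the three-argument `pow` does the square-and-multiply, and `pow(m, -1, n)` for the modular inverse needs Python 3.8+):

```python
def crt(r1, m1, r2, m2):
    # combine x = r1 (mod m1), x = r2 (mod m2) for coprime m1, m2
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return r1 + m1 * t

M5 = 5 ** 9          # 1953125
phi5 = 4 * 5 ** 8    # 1562500 = phi(5^9)

# reduce the tower: first 2^(10^8) mod phi(5^9), then 2^that mod 5^9
e = pow(2, 100_000_000, phi5)   # 1171876
r5 = pow(2, e, M5)              # 1392761

# 2^(2^(10^8)) is divisible by 2^8 = 256, and is a power of 4, so = 1 mod 3
x768 = crt(0, 256, 1, 3)        # 256 (mod 768)
x = crt(x768, 768, r5, M5)      # combine with the 5^9 congruence
print(x)                        # 741627136
```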
Okay, so you want to calculate $a^b \bmod m$ where $b$ and $m$ are very large numbers and $a$ and $m$ are coprime. We can use an interesting result called Euler's theorem: $$ a^{\phi(m)} \equiv 1 \pmod m. $$ So now we reduce the problem to calculating $$ b \bmod \phi(m), $$ because if $r=b \bmod \phi(m)$ then $$ a^b \bmod m = a^{q \phi(m) +r} \bmod m = (a^{\phi(m)})^q\cdot a^r \bmod m = a^r \bmod m. $$ Thus, the problem is reduced. So in your example you have to calculate $\phi(1500000000000)$; see the Wikipedia article on Euler's totient function. And to answer the same problem for $x'=2^{1000}$ and $m'=\phi(1500000000000)$ you have to calculate $\phi(\phi(1500000000000))$. I will let you do the rest of the calculations. This is how to do it manually, but in the general case computing $\phi$ is very hard!
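Here is a minimal sketch (my own example, with small made-up numbers) of the exponent-reduction trick this answer describes, valid because the base and modulus are coprime:

```python
# a^b mod m reduces to a^(b mod phi(m)) mod m when gcd(a, m) = 1 (Euler's theorem).
a, m = 7, 100
phi_m = 40                       # phi(100) = 100 * (1 - 1/2) * (1 - 1/5) = 40
b = 10**18 + 3                   # a "huge" exponent
r = b % phi_m                    # = 3, since 40 divides 10^18
small = pow(a, r, m)             # 7^3 mod 100 = 43
assert small == pow(a, b, m)     # agrees with direct modular exponentiation
print(small)
```

In the actual question the modulus is not coprime to the base $2$, which is why the accepted answer splits the modulus with CRT first.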
This came up in a part of a proof. The claim is that $-\log(1-x)$ is approximately $x$, and then I want to calculate the error of this approximation. The idea is that the Taylor series of $-\log(1-x)$ is $x+\dfrac{x^2}{2}+\dfrac{x^3}{3}+\dots$ for $|x|<1$. I know how to calculate the Taylor expansion, but I can't see the justification for saying it is $x$. Next, for the error, the text states $$x\leq \int_{1-x}^{1} \dfrac{dt}{t} \leq \dfrac{x}{1-x}.$$ However, I can't understand why this is true. This comes up in trying to prove that $$0 \leq \sum_{p\leq N} \left(-\log\left(1- \dfrac{1}{p}\right)-\dfrac{1}{p}\right) \leq \sum_{p \leq N} \dfrac{1}{p(p-1)}.$$
Notice that the interval of integration in $\displaystyle\int_{1-x}^{1}\frac{\mathrm{d}t}{t}$ is $[1-x,1]$ which has a length of $x$. Note also that the integrand is between $\dfrac{1}{1-x}$ and $1$. Thus, the integral is going to be between the length of the interval times the minimum and maximum of the integrand. That is, $$ x\le\int_{1-x}^{1}\frac{\mathrm{d}t}{t}\le\frac{x}{1-x} $$ You could also use the Mean Value Theorem, noting that $$ \frac{\log(1)-\log(1-x)}{x}=\frac{1}{\xi} $$ for some $\xi$ between $1$ and $1-x$, again giving $$ x\le-\log(1-x)\le\frac{x}{1-x} $$
The inequality you give is an elementary property of the natural logarithm, $$1 - \frac{1}{x} \leqslant \log x \leqslant x - 1,$$ with equality at $x=1$. Changing $x$ to $1-x$ gives $$1 + \frac{1}{{x - 1}} \leqslant \log \left( {1 - x} \right) \leqslant - x$$ and multiplying by $-1$ gives $$\frac{x}{{1 - x}} \geqslant - \log \left( {1 - x} \right) \geqslant x,$$ which is the desired result. Here you have an image; the blue plot is the logarithm. You can find a proof of the first inequality in Edmund Landau's book Differential and Integral Calculus.
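As a quick numerical spot-check (my addition, not part of either answer), the sandwich $x \le -\log(1-x) \le x/(1-x)$ can be verified at a few sample points:

```python
import math

# Check x <= -log(1-x) <= x/(1-x) for sample values of x in (0, 1).
def sandwiched(x):
    val = -math.log(1.0 - x)
    return x <= val <= x / (1.0 - x)

samples = [0.01, 0.1, 0.5, 0.9, 0.999]
assert all(sandwiched(x) for x in samples)
print("inequality verified on", samples)
```

This is of course no proof, but it makes the direction of the two bounds easy to see (the upper bound blows up as $x \to 1$, matching the singularity of the logarithm).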
Given $A=(a_{ij})_{n\times n}$ and $D=(d_{ij})_{n\times n}$ and a permutation $\pi:\{1,\ldots,n\}\rightarrow \{1,\ldots,n\}$ , the quadratic assignment cost is $$\sum_{1\le i,j\le n}a_{ij}d_{\pi(i)\pi(j)} $$ I want to know the expectation and variance of this cost over all permutations (each with probability $1/n!$ ). The expectation is relatively easy: $$\frac{1}{n!}\sum_{\pi\in \Pi}\sum_{1\le i,j\le n}a_{ij}d_{\pi(i)\pi(j)}=\frac{1}{n!}\sum_{1\le i,j\le n} a_{ij}\sum_{\pi\in \Pi}d_{\pi(i)\pi(j)}=\frac{1}{n}\sum_{1\le i\le n} a_{ii}\sum_{1\le i\le n} d_{ii}+\frac{1}{n(n-1)}\sum_{i\neq j} a_{ij}\sum_{i\neq j} d_{ij}$$ However, I cannot calculate the variance. I have tried to calculate $\sum_{\pi\in \Pi}(\sum_{1\le i,j\le n}a_{ij}d_{\pi(i)\pi(j)})^2$ , which gives cross terms of the form $a_{ij}d_{\pi(i)\pi(j)}a_{i'j'}d_{\pi(i')\pi(j')}$ that I cannot handle.
This is not a complete answer. However, I wanted to point out that I find a different result for the expectation. You can write the cost as: $$\sum_{1\le i,j\le n}\sum_{1\le k,l\le n} a_{i,j}d_{k,l}X_{i,k}X_{j,l}.$$ Where $X_{i,k} = \begin{cases}1 & \text{if $k=\pi (i)$}\\ 0 & \text{otherwise}\end{cases}$ It is clear that \begin{align} E\left[X_{i,k}X_{j,l}\right] &= P\left[X_{i,k} = 1 \cap X_{j,l} = 1\right]\\ &=P\left[X_{j,l}=1\mid X_{i,k}=1\right]P\left[X_{i,k}=1\right]\\ &=\begin{cases} 0 & \text{if ($i\neq j$ and $k=l$) or ($i=j$ and $k\neq l$)}\\ \frac1n & \text{if $i=j$ and $k=l$}\\ \frac1{n(n-1)} & \text{if $i\neq j$ and $k\neq l$} \end{cases} \end{align} So the expected cost is \begin{align} \frac1n\left(\sum_{i=1}^{n}a_{i,i}\right)\left(\sum_{k=1}^{n}d_{k,k}\right) + \frac1{n(n-1)}\left(\sum_{i\neq j}a_{i,j}\right)\left(\sum_{k\neq l}d_{k,l}\right) \end{align} Now to compute the variance you need to compute: $$E\left[X_{i,k}X_{j,l}X_{i',k'}X_{j',l'}\right] = P\left[X_{i,k}=1\cap X_{j,l}=1\cap X_{i', k'}=1\cap X_{j',l'} = 1\right]$$ Try to do the same idea as I did for the expectation.
According to Youem's method, $\mathbb{E}_{\pi\in\Pi}(\sum_{ij}a_{ij}d_{\pi(i)\pi(j)})^2$ becomes $$\sum_{ij}\sum_{kl}\sum_{i'j'}\sum_{k'l'}a_{ij}a_{i'j'}d_{kl}d_{k'l'}\mathbb{E}[X_{ik}X_{jl}X_{i'k'}X_{j'l'}] \\ =\frac{1}{n}\sum_{i=1}^n\sum_{k=1}^n\sum_{i'=1}^n\sum_{k'=1}^n a_{ii}^2d_{kk}^2\\+\frac{1}{n(n-1)}\sum_{i=1}^n\sum_{k=1}^n\sum_{i\neq j'}\sum_{k\neq l'} a_{ii}a_{ij'}d_{kk}d_{kl'}\\ +\frac{1}{n(n-1)}\sum_{i=1}^n\sum_{k=1}^n\sum_{i'\neq i}\sum_{k'\neq k} a_{ii}a_{i'i}d_{kk}d_{k'k}\\ +\frac{1}{n(n-1)}\sum_{i\neq j}\sum_{k\neq l}\sum_{i'=1}^n\sum_{k'=1}^n a_{i'j}a_{i'i'}d_{k'l}d_{k'k'}\\ +\frac{1}{n(n-1)}\sum_{i\neq j}\sum_{k\neq l}\sum_{i'=1}^n\sum_{k'=1}^n a_{ii'}a_{i'i'}d_{kk'}d_{k'k'}\\ +\frac{1}{n(n-1)}\sum_{i=1}^n\sum_{k=1}^n\sum_{i'=1}^n\sum_{k'=1}^n a_{ii}a_{i'i'}d_{kk}d_{k'k'}\\ +\frac{1}{n(n-1)}\sum_{i\neq j}\sum_{k\neq l} a_{ii}a_{jj}d_{kk}d_{ll}\\ +\frac{1}{n(n-1)}\sum_{i\neq j}\sum_{k\neq l} a_{ij}a_{ji}d_{kl}d_{lk}\\ +\frac{1}{n(n-1)(n-2)}\sum_{i=1}^n\sum_{k=1}^n\sum_{i'\neq j'}\sum_{k'\neq l'} a_{ii}d_{kk}a_{i'j'}d_{k'l'}1[i\neq i']1[i\neq j']1[k\neq k']1[k\neq l']\\ +\frac{1}{n(n-1)(n-2)}\sum_{i\neq j}\sum_{k\neq l}\sum_{i'=1}^n\sum_{k'=1}^n a_{ij}d_{kl}a_{i'i'}d_{k'k'}1[i\neq i']1[j\neq i']1[k\neq k']1[l\neq k']\\ +\frac{1}{n(n-1)(n-2)}\sum_{i\neq j}\sum_{k\neq l}\sum_{i'\neq j'}\sum_{k'\neq l'}a_{ij}a_{i'j'}d_{kl}d_{k'l'}1[i=i']1[j\neq j']1[k=k']1[l\neq l']\\ +\frac{1}{n(n-1)(n-2)}\sum_{i\neq j}\sum_{k\neq l}\sum_{i'\neq j'}\sum_{k'\neq l'}a_{ij}a_{i'j'}d_{kl}d_{k'l'}1[i=j']1[i'\neq j]1[k=l']1[l'\neq k]\\ +\frac{1}{n(n-1)(n-2)}\sum_{i\neq j}\sum_{k\neq l}\sum_{i'\neq j'}\sum_{k'\neq l'}a_{ij}a_{i'j'}d_{kl}d_{k'l'}1[j=i']1[j'\neq i]1[l=k']1[k'\neq l]\\ +\frac{1}{n(n-1)(n-2)}\sum_{i\neq j}\sum_{k\neq l}\sum_{i'\neq j'}\sum_{k'\neq l'}a_{ij}a_{i'j'}d_{kl}d_{k'l'}1[j=j']1[i\neq i']1[l=l']1[k\neq k']\\ +\frac{1}{n(n-1)(n-2)(n-3)}\sum_{i\neq j}\sum_{k\neq l}\sum_{i'\neq j'}\sum_{k'\neq l'}a_{ij}a_{i'j'}d_{kl}d_{k'l'} 1[i\neq j]1[i\neq i']1[i\neq j']1[j\neq i']1[j\neq j']1[i'\neq j']1[k\neq l]1[k\neq k']1[k\neq l']1[l\neq k']1[l\neq l']1[k'\neq l']\\ $$ At least, it could be computed using a program.
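Following that "compute it with a program" suggestion, here is a brute-force check (my own sketch, with small made-up matrices) that the expectation formula from the previous answer is exact, using rational arithmetic so there is no rounding:

```python
from fractions import Fraction
from itertools import permutations

# Exhaustively average the quadratic assignment cost over all permutations
# for n = 3 and compare with the closed-form expectation.
n = 3
A = [[1, 2, 0], [4, 1, 3], [2, 5, 1]]
D = [[2, 1, 3], [0, 2, 1], [1, 4, 2]]

costs = [sum(A[i][j] * D[pi[i]][pi[j]] for i in range(n) for j in range(n))
         for pi in permutations(range(n))]
mean = Fraction(sum(costs), len(costs))

# Formula: (1/n) tr(A) tr(D) + (1/(n(n-1))) * (off-diag sum of A) * (off-diag sum of D)
trA = sum(A[i][i] for i in range(n))
trD = sum(D[i][i] for i in range(n))
offA = sum(A[i][j] for i in range(n) for j in range(n) if i != j)
offD = sum(D[i][j] for i in range(n) for j in range(n) if i != j)
formula = Fraction(trA * trD, n) + Fraction(offA * offD, n * (n - 1))
assert mean == formula
print(mean)
```

The same exhaustive loop (squaring each cost) gives the exact second moment, which is a useful sanity check on the long fourth-moment expansion above for small $n$.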
Say I'm given a probability distribution of two random variables $A$ and $B$. What does it mean to calculate the joint probability distribution of $3^{(A-B)}$? The distribution is in fact discrete.
Hint $\ $ $\rm\:p(x)\:$ is constant $\rm = p(0)\:$ since $\rm\:p(x)-p(0)\:$ has infinitely many roots $\rm\:x = 0,3,6,9\ldots$ hence is the zero polynomial. Remark $\ $ I presume that your polynomial has coefficients over some field such as $\rm\:\Bbb Q, \Bbb R, \Bbb C,\:$ where the subset $\rm\{0,3,6,\ldots\}$ is infinite. It may fail otherwise, e.g. $\rm\:p(x\!+\!3) = p(x)\:$ for all polynomials over $\rm\,\Bbb Z/3 =$ integers mod $3.$ Generally, that a polynomial with more roots than its degree must be the zero polynomial, is equivalent to: $ $ the coefficient ring $\rm\,R\,$ is an integral domain, i.e. for all $\rm\:\forall a,b\in R\!:\:$ $\rm\: ab=0\:\Rightarrow\:a=0\ \ or\ \ b=0.$
Math Gems' answer is what you're looking for. However, I just wanted to add something interesting. I'm assuming the $x$ in your question is a real number, integer, etc. However, if you allow $x$ to be an element of a finite field, then it doesn't follow that "periodic" polynomials must be zero. For example, consider $q(x)=x^{p}-x \in \Bbb F_{p}[x]$; then: $q(x+3)=(x+3)^{p}-(x+3)=x^{p}+3^{p}-x-3=x^{p}+3-x-3=x^{p}-x=q(x)$. The important facts here are that in a field of characteristic $p$ we have $(x+y)^{p}=x^{p}+y^{p}$, and that $a^{p}=a$ for every $a\in\Bbb F_p$. Also, the $3$ wasn't particularly important here and could be replaced with any integer.
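A quick computational illustration (my addition): over $\Bbb F_5$ the polynomial $x^p - x$ really does satisfy the shift identity at every residue. By Fermat's little theorem it in fact vanishes identically on $\Bbb F_5$, so both sides are $0$:

```python
# q(x) = x^p - x over F_p, checked pointwise for p = 5.
p = 5

def q(x):
    return (pow(x, p, p) - x) % p

assert all(q((x + 3) % p) == q(x) for x in range(p))
print([q(x) for x in range(p)])   # all zeros, by Fermat's little theorem
```

Note this only checks the polynomial as a function on $\Bbb F_5$; the identity $q(x+3)=q(x)$ also holds formally, as the chain of equalities above shows.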
I have tried to calculate $f'$: $$f'(x)=\sin(x)+x \cos(x)$$ $f'$ is unbounded, so I can't use Lagrange's theorem (the mean value theorem). So I have used this upper bound (for some $L \in \mathbb{R}$, $L>0$): $$\lvert x \sin(x) \rvert \le \lvert x \rvert \le L \lvert x \rvert $$ Is it correct? Thanks!
Hint: suppose that $f$ is a Lipschitz function, i.e. $$\exists k >0 \text{ such that } \forall (x,y) \in \mathbb{R}^2,\ |f(x)-f(y)|\leq k|x-y|.$$ What about this inequality when $x=x_n=2\pi n+\pi/2$ and $y=y_n=2\pi n-\pi/2$? What happens when $n$ is big enough?
Suppose that there is a positive $L$ such that $|f(x)-f(y)| \le L|x-y|$ for all $x,y$. Then $\left|\frac{f(x)-f(y)}{x-y}\right| \le L$ for all $x \ne y$. Letting $y\to x$, this implies $$|f'(t)| \le L$$ for all $t$. But $f'$ is unbounded, so no such $L$ can exist, and $f$ is not Lipschitz.
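The hint's choice of points can be made concrete numerically (my own sketch, assuming $f(x)=x\sin x$): with $x_n=2\pi n+\pi/2$ and $y_n=2\pi n-\pi/2$ the difference quotient equals $4n$, so it exceeds any candidate Lipschitz constant.

```python
import math

# f(x) = x sin x; the difference quotient over [y_n, x_n] is exactly 4n.
def f(x):
    return x * math.sin(x)

def quotient(n):
    x = 2 * math.pi * n + math.pi / 2
    y = 2 * math.pi * n - math.pi / 2
    return abs(f(x) - f(y)) / abs(x - y)

print(quotient(1), quotient(100))   # grows without bound, so f is not Lipschitz
```

Here $f(x_n)=x_n$ and $f(y_n)=-y_n$, so the numerator is $x_n+y_n=4\pi n$ while the denominator is $\pi$.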
I have to calculate $\int_0^{\infty} \frac{\sin x}{x}\, dx$ using Fubini's theorem. I tried to find a function with the property that $\int_x^{\infty} F(t)\, dt = \frac{1}{x}$, but I can't find anything other than $F(t) = \frac{1}{t^2}$, and unfortunately I can't use this. If I choose $F(t) = \frac{1}{t^2}$, then I have: $\displaystyle \int_0^{\infty} \frac{\sin x}{x}\, dx = \int_0^{\infty} \sin x \int_{x}^{\infty} \frac{1}{t^2}\, dt\, dx$, but I don't know what to do next.
Integrating by parts twice, we get $$ \begin{align} &\int_0^L\sin(x)\,e^{-ax}\,\mathrm{d}x\tag{0}\\ &=-\int_0^L e^{-ax}\,\mathrm{d}\cos(x)\tag{1}\\ &=1-e^{-aL}\cos(L)-a\int_0^L\cos(x)\,e^{-ax}\,\mathrm{d}x\tag{2}\\ &=1-e^{-aL}\cos(L)-a\int_0^L e^{-ax}\,\mathrm{d}\sin(x)\tag{3}\\ &=1-e^{-aL}(\cos(L)+a\sin(L))-a^2\int_0^L\sin(x)\,e^{-ax}\,\mathrm{d}x\tag{4}\\ &=\frac{1-e^{-aL}(\cos(L)+a\sin(L))}{1+a^2}\tag{5} \end{align} $$ Explanation: $(1)$: prepare to integrate by parts; $u=e^{-ax}$ and $v=\cos(x)$ $(2)$: integrate by parts $(3)$: prepare to integrate by parts; $u=e^{-ax}$ and $v=\sin(x)$ $(4)$: integrate by parts $(5)$: add $\frac{a^2}{1+a^2}$ times $(0)$ to $\frac1{1+a^2}$ times $(4)$ Now we can use Fubini $$ \begin{align} \int_0^L\frac{\sin(x)}x\,\mathrm{d}x &=\int_0^L\sin(x)\int_0^\infty e^{-ax}\,\mathrm{d}a\,\mathrm{d}x\tag{6}\\ &=\int_0^\infty\int_0^L\sin(x)\,e^{-ax}\,\mathrm{d}x\,\mathrm{d}a\tag{7}\\ &=\int_0^\infty\frac{1-e^{-aL}(\cos(L)+a\sin(L))}{1+a^2}\,\mathrm{d}a\tag{8}\\[4pt] &=\frac\pi2-\int_0^\infty\frac{L\cos(L)+a\sin(L)}{L^2+a^2}e^{-a}\,\mathrm{d}a\tag{9} \end{align} $$ Explanation: $(6)$: $\int_0^\infty e^{-ax}\,\mathrm{d}a=\frac1x$ $(7)$: Fubini $(8)$: apply $(5)$ $(9)$: arctangent integral and substitute $a\mapsto\frac aL$ Then, by Dominated Convergence, we have that $$ \begin{align} \lim_{L\to\infty}\int_0^\infty\left|\frac{L\cos(L)+a\sin(L)}{L^2+a^2}\right|\,e^{-a}\,\mathrm{d}a &\le\lim_{L\to\infty}\int_0^\infty\frac{L+a}{L^2+a^2}\,e^{-a}\,\mathrm{d}a\\[4pt] &=0\tag{10} \end{align} $$ Therefore, combining $(9)$ and $(10)$, we get $$ \bbox[5px,border:2px solid #C0A000]{\lim_{L\to\infty}\int_0^L\frac{\sin(x)}x\,\mathrm{d}x=\frac\pi2}\tag{11} $$
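Step $(5)$ is the workhorse of this answer, so here is a numerical spot-check (my addition, with hypothetical sample values of $a$ and $L$) comparing the closed form against a midpoint Riemann sum:

```python
import math

# Verify (5): ∫_0^L sin(x) e^{-ax} dx = (1 - e^{-aL}(cos L + a sin L)) / (1 + a^2).
a, L, n = 0.7, 3.0, 200_000
h = L / n
riemann = sum(math.sin((k + 0.5) * h) * math.exp(-a * (k + 0.5) * h)
              for k in range(n)) * h
closed = (1 - math.exp(-a * L) * (math.cos(L) + a * math.sin(L))) / (1 + a * a)
print(riemann, closed)   # the two values agree to high precision
```

The midpoint rule has $O(h^2)$ error on this smooth integrand, so the agreement is many digits; the same check at several $(a, L)$ pairs is an easy way to catch sign slips in the integration by parts.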
$$\int_0^\infty dx \,\frac{\sin x}{x} = \int_0^\infty dx\, \sin x \int_0^\infty dt\, e^{-tx}$$
I'm trying to calculate $\text{tr}(\exp(A))$ for a matrix $A$, and I found several topics, but I'm not sure if I've got it all right. So I wonder if this is a correct way of doing it. This is the matrix $A$: $$ \begin{bmatrix} 0 & -1 & 1 \\ -1 & 0 & 1 \\ 1 & 1 & 0 \\ \end{bmatrix} $$ Following the answer here , I calculated the eigenvalues, two of which are complex numbers, $$\lambda_1 = 1.52 ,\quad \lambda_2= -0.76+0.85 i ,\quad \lambda_3= -0.76-0.85 i$$ $$\operatorname{tr}(e^A) = e^{1.52} + e^{-0.76+0.85 i} + e^{-0.76-0.85 i}$$ As suggested here , I've written the following for the complex parts: $$ e^{-0.76+0.85 i} + e^{-0.76-0.85 i}= e^{-0.76}\cdot e^{0.85i}+e^{-0.76}\cdot e^{-0.85i} =e^{-0.76}(\cos0.85 + i \sin 0.85+\cos(-0.85)+i\sin(-0.85)) = e^{-0.76}\cdot 2\cos(0.85) $$ So finally we have $\text{tr}(\exp(A)) = e^{1.52}+e^{-0.76}\cdot 2\cos(0.85)$. Thanks in advance!
You made a computational error somewhere. Since $A$ is a real symmetric matrix, its eigenvalues must be real. The eigenvalues of $A$ are all $-2$ and $1$ (twice). So, the eigenvalues of $e^A$ are $e^{-2}$ and $e$ (again, twice), and therefore $$\operatorname{tr}(e^A)=e^{-2}+2e.$$
$A$ is a diagonalizable matrix. Hence, $$ A=Q\Lambda Q^{\mathrm T}, $$ where $Q$ is orthogonal and $\Lambda$ is diagonal with the eigenvalues of $A$ on the diagonal. We have that \begin{align*} \operatorname{tr}(\exp(A)) &=\sum_{k=0}^\infty\frac1{k!}\operatorname{tr}(A^k)\\ &=\sum_{k=0}^\infty\frac1{k!}\operatorname{tr}(Q\Lambda^k Q^{\mathrm T})\\ &=\sum_{k=0}^\infty\frac1{k!}\operatorname{tr}(\Lambda^k Q^{\mathrm T}Q)\\ &=\sum_{k=0}^\infty\frac1{k!}\operatorname{tr}(\Lambda^k)\\ &=\sum_{k=0}^\infty\frac{2+(-2)^k}{k!}\\ &=2e+\frac1{e^{2}} \end{align*} using the fact that $\operatorname{tr}(ABC)=\operatorname{tr}(BCA)$ and the fact that the eigenvalues of $A$ are $1$ , $1$ and $-2$ . I hope this is helpful.
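Both answers can be checked without any linear algebra library (my own pure-Python sketch): verify that $1$ and $-2$ are roots of the characteristic polynomial, then sum the series $\sum_k \operatorname{tr}(A^k)/k!$ directly.

```python
import math

A = [[0, -1, 1], [-1, 0, 1], [1, 1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# The claimed eigenvalues 1 (twice) and -2 make det(A - lam*I) vanish.
for lam in (1, -2):
    shifted = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    assert det3(shifted) == 0

# Sum tr(A^k)/k! term by term; 40 terms is far past convergence here.
trace_exp, P, fact = 0.0, [[1, 0, 0], [0, 1, 0], [0, 0, 1]], 1.0
for k in range(40):
    if k > 0:
        P = matmul(P, A)
        fact *= k
    trace_exp += (P[0][0] + P[1][1] + P[2][2]) / fact
print(trace_exp)   # numerically equal to 2e + e^{-2}
```

The series value agrees with $2e + e^{-2} \approx 5.5719$, confirming that the asker's complex eigenvalues were a computational slip.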
In math notation we often use indices. Examples are: for indexing elements of a set, as in $x_i\in X$ , where $X$ is a set; for indexing elements of a tuple, as in $(x_1, x_2,...,x_n)$ ; as an index that is not an integer, as in $(\bigcap_{i} U_i) \in \tau$ , $i\in\mathbb R$ , $U_i\subseteq X$ (when defining a topology $\tau$ ); and as some sort of parameter that describes something, as when defining a ball $B_d(x, r) = \{ y\in X \ |\ d(x,y) < r \}$ with $x \in X$ , $r\in\mathbb R$ , $r>0$ . In the first example, the index just makes sure we can keep the $x$ apart, and there is a countable number of elements. In the second example the index indicates an order of elements. In the third example the index is not even countable anymore (since the real numbers are not countable), and in the last example the index is not even a number anymore, but a function (the metric $d$ ). This raises the first question : Is there any coherent definition of what an index represents, or is it just a question of taste of the one introducing the respective notation? Can the index be used for anything? Now, I have read in multiple contexts that an indexed 'something' can actually just be seen as an evaluated function. A very common example is time series, which basically are finite sequences, one of which can be represented by a tuple $(y_1, y_2, ..., y_n)$ . Here we can say that we are considering the function $y(i)$ , for which we can, just for example, calculate a regression model. I chose this example because here we very often see it as a function of time (which is discrete and thus represented by the index $i$ ). 
But in all the above examples it seems very reasonable to also consider the respective elements as evaluated functions: $x_i\in X$ could be expressed as $x(i) \in X$ , $i\in\mathbb Z$ ; the elements in $(x_1, x_2,...,x_n)$ could be expressed as $x(i) \in X$ , $i\in\mathbb Z$ ; $(\bigcap_{i} U_i) \in \tau$ , $i\in\mathbb R$ , $U_i\subseteq X$ could be expressed as $(\bigcap_{i} U(i)) \in \tau$ , $i\in\mathbb R$ , $U(i)\subseteq X$ ; and $B_d(x, r) = \{ y\in X \ |\ d(x,y) < r \}$ could be expressed as $B(x, r \mid d) = \{ y\in X \ |\ d(x,y) < r \}$ or just $B(x, r, d) = \{ y\in X \ |\ d(x,y) < r \}$ . So my second question : How are these any different? Is there a fundamental difference between expressing something as an index or as a function value? Is it just a question of preference and/or convention, or is there a special meaning behind having an index as opposed to a function argument? Especially in the latter two examples it seems weird that an index has been used.
I think that https://en.wikipedia.org/wiki/Function_(mathematics)#Index_notation explains it so well, that there is little to add: Index notation Index notation is often used instead of functional notation. That is, instead of writing $f(x)$ , one writes $f_{x}$ . This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element $f_{n}$ is called the nth element of sequence. The index notation is also often used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. For example, the map $x\mapsto f(x,t)$ would be denoted $f_{t}$ using index notation, if we define the collection of maps $f_{t}$ by the formula $f_{t}(x)=f(x,t)$ for all $x,t\in X$ .
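As a concrete illustration (my own, not from the quoted article): an indexed family and a function on the index set are literally the same data, just written differently.

```python
# The tuple (y_1, ..., y_4) as indexed data (0-based in Python)...
ys = (3.0, 1.5, 4.0, 1.0)

# ...and the same family written in function notation: y(i) = y_i.
def y(i):
    return ys[i]

assert all(y(i) == ys[i] for i in range(len(ys)))
print([y(i) for i in range(4)])
```

The only difference is notational emphasis: subscripts suggest a fixed catalogue of values, while `y(i)` emphasizes that $i$ is a variable input, which is exactly the parameter-versus-variable distinction the Wikipedia passage describes.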
It’s not so much a question of “gusto” as one of legibility and convention. You can define anything you want. Math notation is kinda cool that way.
So I've got a complex equation to solve, but I can't really understand how to do it. So I went to WolframAlpha, which is always very helpful, and it told me how to solve it with the steps, which is great, but I don't understand how it's done. The equation I need to solve is $$ \frac{z^2}{z+1}=\frac{2+4i}{5} $$ WolframAlpha tells me to do this : http://www4b.wolframalpha.com/Calculate/MSP/MSP1128204255g0bd35d5he0000674612e93i7dc9b2?MSPStoreType=image/png&s=62&w=382&h=1869 Though, I don't understand how it goes from $$ 5z^2+(-2-4i)z-2-4i=0 $$ (which I had done by myself) to $$ (2-i)[z+(-1-i)][(1+i)+(2+i)z]=0 $$ It's not in the lesson and I couldn't find info about this anywhere. Could anyone help me out and explain how I'm supposed to do this, and based on what? Thank you very much
You can compute the roots of the polynomial equation in $z$ with the quadratic formula. If $a,b \in \mathbb{C}$ are the roots of the given polynomial equation, then WolframAlpha has put them into the form $$ (z - a)(z - b) = 0$$ (up to a constant factor). So in order to get the same equation as WolframAlpha, simply compute the roots of the polynomial equation and put them into the form above!
$$\frac{z^2}{z+1} = \frac{2+4i}{5} \Longleftrightarrow$$ $$\frac{z^2}{z+1} = \frac{2}{5}+\frac{4}{5}i \Longleftrightarrow$$ $$5z^2 = (2+4i)(z+1) \Longleftrightarrow$$ $$5z^2 = (2+4i)+(2+4i)z \Longleftrightarrow$$ $$(-2-4i)+(-2-4i)z+5z^2 = 0 \Longleftrightarrow$$ $$(2-i)(z+(-1-i))((1+i)+(2+i)z) = 0 \Longleftrightarrow$$ $$(z+(-1-i))((1+i)+(2+i)z) = 0 \Longleftrightarrow$$ $$z+(-1-i) = 0 \vee (1+i)+(2+i)z = 0 \Longleftrightarrow$$ $$z=1+i \vee (2+i)z = -1-i \Longleftrightarrow$$ $$z=1+i \vee z = \frac{-1-i}{2+i} \Longleftrightarrow$$ $$z=1+i \vee z = -\frac{3}{5}-\frac{1}{5}i $$
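The quadratic formula works over the complex numbers exactly as over the reals, so the two roots can be cross-checked mechanically (my addition, using Python's `cmath`):

```python
import cmath

# Roots of 5z^2 + (-2-4i)z + (-2-4i) = 0 via the quadratic formula.
a, b, c = 5, -2 - 4j, -2 - 4j
disc = cmath.sqrt(b * b - 4 * a * c)        # sqrt(28 + 96i) = 8 + 6i
roots = {(-b + disc) / (2 * a), (-b - disc) / (2 * a)}
print(roots)   # {1+1j, -0.6-0.2j}, matching z = 1+i and z = -3/5 - i/5
```

Multiplying $(z-(1+i))\big(z+\frac{3+i}{5}\big)$ back out by the leading coefficient $5$ recovers the factored form WolframAlpha shows, up to the constant $(2-i)$ it pulled out.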
Considering the function $f$ defined on $[0,1]$ by $$ { f }(x)=\begin{cases} 0\quad ,\quad x \in [0,1) \\ 1\quad ,\quad x=1\end{cases} $$ and let $P=\{x_0,x_1,...,x_n\}$ be a partition of $[0,1]$. I am trying to calculate $U(f,P)$. Here is my logic so far. Let $\epsilon>0$, and let $P_{\epsilon}$ be the partition of the interval $[0,1]$ given by $P_{\epsilon}=\{[0,1-2\epsilon], [1-2\epsilon,1-\epsilon],[1-\epsilon,1]\}$ for some fixed $0<\epsilon<1$. Then looking at the definition of $U(f,P)$, $$U(f,P) = \sum_{[x_i,x_{i+1}] \in P} (x_{i+1} - x_i) \sup_{[x_i,x_{i+1}]} f(x)$$ So, $f$ is zero on $[0,1)$, hence also on $[0,1-2\epsilon)$. So how can I use the partition I selected to calculate this $U(f,P)$?
It's not clear from the question that you actually get to CHOOSE the partition; and, in this problem, it isn't necessary. So let's stick with the generic $\{x_0,\ldots,x_n\}$, where $0=x_0$ and $1=x_n$. Then $$ \sup_{x\in[x_i,x_{i+1}]}f(x)=\begin{cases}0 & i < n-1\\1 & i=n-1\end{cases}. $$ Therefore $$ U(f,P)=\sum_{i=0}^{n-1}(x_{i+1}-x_i)\sup_{x\in[x_i,x_{i+1}]}f(x)=(x_n-x_{n-1}). $$
You can also say $f$ is zero on $[0,1-\epsilon)$, so the sup of $f$ on your first two of your three intervals is $0$ for each. On the third interval $[1-\epsilon,1]$ the sup becomes $1$ since $f=1$ at $x=1$ (and $0$ for the rest of the third interval). This makes the sum become $\epsilon\cdot 1=\epsilon.$
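The conclusion of both answers, that only the last subinterval contributes, is easy to check mechanically (my own sketch):

```python
# U(f,P) for f = 0 on [0,1), f(1) = 1: the sup is 1 only on the subinterval
# whose right endpoint is 1, so U(f,P) = x_n - x_{n-1}.
def upper_sum(partition):
    total = 0.0
    for a, b in zip(partition, partition[1:]):
        sup = 1.0 if b == 1.0 else 0.0   # sup of f on [a, b]
        total += (b - a) * sup
    return total

P = [0.0, 0.25, 0.5, 0.9, 1.0]
print(upper_sum(P))   # equals P[-1] - P[-2], the length of the last subinterval
```

Refining the partition near $1$ makes this upper sum as small as desired, which is exactly why the upper integral (and hence the integral) is $0$.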
I know using this board for this problem is overkill, but I'm struggling with something that I know should be simple. I'm building kitchen cabinets on two different walls. I'm going to buy the doors for these cabinets from a manufacturer in bulk, and all cabinet doors I buy will be the exact same width, which I can specify. The width of the doors I'm ordering is what I'm trying to calculate. One wall of the kitchen is 15' long, and one is 9' long. (The walls don't connect.) I want the maximum number of cabinet doors possible on each wall. The doors must all be the same width because I'm buying in bulk, but they must also conform to a minimum width and maximum width that a cabinet door must have to be functional. And so I need an equation that lets me input: a base width (BW) for cabinet doors; a maximum width (MW) for cabinet doors; the size of wall A (WA); the size of wall B (WB); and outputs the width of cabinet doors I should be ordering, so that I can have the maximum number of doors possible on each wall, while ensuring all doors are the same width but also within the min/max width range for cabinet doors.
For simplicity I will use the following notation: $L_1, L_2$ for the lengths of wall one and two, $w_{min}$ for the minimum width of a cabinet, $w_{max}$ for the maximum width of a cabinet. Let $c$ be the maximum number of cabinets we can put on the walls. Let's also write $c_1$ for the maximum possible number of cabinets on wall one and $c_2$ for the maximum number of cabinets on wall two. We can achieve $c$ cabinets by making all the cabinets as small as possible, which allows us to calculate $c$ as follows. Note that the walls are not connected, so we must treat them separately: $$ c_1 = \left[ \frac{L_1}{w_{min}} \right]$$ $$ c_2 = \left[ \frac{L_2}{w_{min}} \right]$$ $$c = c_1 + c_2 $$ where $[x]$ means the integer part of $x$ (since $x$ is a positive number, $[x]$ is the same as removing the part after the decimal point). The largest size of cabinet we could have that would give us $c_1$ cabinets on wall one is $L_1 / c_1$. We cannot make the cabinets any bigger on this wall, otherwise they would not fit. Similarly, the largest we could make the cabinets to have $c_2$ cabinets on wall two is $L_2 / c_2$. So the largest size we can make the cabinets is $$\min\left(w_{max}, \frac{L_1}{c_1}, \frac{L_2}{c_2}\right).$$
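That recipe is a few lines of code. Here is a sketch with hypothetical numbers (walls of 180" and 108", doors allowed between 12" and 30" wide; substitute your own limits):

```python
# Maximize door count, then widen the doors as much as the walls allow.
L1, L2 = 180, 108          # wall lengths in inches (15' and 9')
w_min, w_max = 12, 30      # assumed functional width range for a door

c1 = L1 // w_min           # most doors that fit on wall one
c2 = L2 // w_min           # most doors that fit on wall two
width = min(w_max, L1 / c1, L2 / c2)
print(c1 + c2, width)      # total door count and the width to order
```

One design note: taking the floor first fixes the door counts, and only then is the shared width stretched, so the count is never sacrificed to make doors wider.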
If you measure the width in inches, a width of $w$ will give you $\frac {180}w$ doors along the $15'$ wall and $\frac {108}w$ doors along the $9'$ wall. If the fractions do not come out even, throw away any remainder. To get the most doors you want to choose $w$ as small as possible. This will make the cabinets quite narrow. As you talk of a minimum width, you can just use that. It will give you the most doors.
I tried to calculate this integral: $$\int_0^{\frac{\pi}{2}}\arccos(\sin x)dx$$ My result was $\dfrac{{\pi}^2}{8}$ , but actually, according to https://www.integral-calculator.com/ , the answer is $-\dfrac{{\pi}^2}{8}$ . It doesn't make sense to me as the result of the integration is $$x\arccos\left(\sin x\right)+\dfrac{x^2}{2}+C$$ and after substituting $x$ with $\dfrac{{\pi}}{2}$ and $0$ , the result is a positive number. Can someone explain it? Thanks in advance!
Yes, your result is correct. For $x\in[-1,1]$ , $$\arccos(x)=\frac{\pi}{2}-\arcsin(x).$$ Hence $$\int_0^{\pi/2}\arccos(\sin(x))dx= \int_0^{\pi/2}\left(\frac{\pi}{2}-x\right)dx=\int_0^{\pi/2}tdt=\left[\frac{t^2}{2}\right]_0^{\pi/2}=\frac{\pi^2}{8}.$$ P.S. WA gives the correct result. Moreover $t\to \arccos(t)$ is positive in $[-1,1)$ so the given integral has to be POSITIVE!
A geometric argument could use the identity $\arccos x=\pi/2-\arcsin x$ . This gives you $\pi/2-x$ as the integrand. If you observe carefully, this is a straight line with $x$- and $y$-intercepts at $\pi/2$ . Since integration gives the area under the curve, the computation of the integral is just transformed into the computation of the area of a triangle: $$I=\int_{0}^{\pi/2}\arccos(\sin x)\mathrm dx=\int_{0}^{\pi/2}(\pi/2-x)\mathrm dx=\dfrac{1}{2}\cdot \underbrace{\dfrac{\pi}{2}\cdot\dfrac{\pi}{2}}_{\text{base}\cdot\text{height}}=\dfrac{\pi^2}{8}$$
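A direct numerical evaluation (my addition) confirms the sign question from the original post: the integral is $+\pi^2/8$, not $-\pi^2/8$.

```python
import math

# Midpoint-rule approximation of ∫_0^{pi/2} arccos(sin x) dx.
n = 200_000
h = (math.pi / 2) / n
approx = sum(math.acos(math.sin((k + 0.5) * h)) for k in range(n)) * h
print(approx, math.pi ** 2 / 8)   # both about 1.2337, and positive
```

Since the integrand reduces to the linear function $\pi/2 - x$ on $[0,\pi/2]$, the midpoint rule here is exact up to floating-point rounding.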
Compute $\operatorname{cov}(X, \max(X,Y))$ and $\operatorname{cov}(X, \min(X,Y))$ where $X,Y \sim N(0,1)$. I think the way to calculate it is to get $$\begin{align} \operatorname{cov}(X, \max(X, Y) + \min(X,Y)) & = \operatorname{cov}(X, X+Y) \\ & = \operatorname{cov}(X, \max(X,Y)) + \operatorname{cov}(X, \min(X,Y)) \\ \end{align}$$ and $$\begin{align} \operatorname{cov}(X, \max(X,Y) - \min(X,Y)) & = \operatorname{cov}(X, \operatorname{abs}(X-Y)) \\ & = \operatorname{cov}(X, \max(X,Y)) - \operatorname{cov}(X, \min(X,Y)) \\ \end{align}$$ although this is pretty much as difficult to solve as $\operatorname{cov}(X, \max(X,Y))$ unless there is some particular trick. Can anyone help with this?
Assume that the random variables $X$ and $Y$ are i.i.d. square integrable with a symmetric distribution (not necessarily gaussian). Let $Z=\max(X,Y)$, then the covariance of $X$ and $Z$ is $\mathbb E(XZ)-\mathbb E(X)\mathbb E(Z)=\mathbb E(XZ)$. Using $Z=X\mathbf 1_{Y\lt X}+Y\mathbf 1_{X\lt Y}$, one sees that $$ \mathbb E(XZ)=\mathbb E(X^2;Y\lt X)+\mathbb E(XY;X\lt Y). $$ What is the value of the last term on the RHS? By symmetry, $\mathbb E(XY;X\lt Y)=\mathbb E(XY;Y\lt X)$ and the sum of these is $\mathbb E(XY)=\mathbb E(X)\mathbb E(Y)=0$ hence $\mathbb E(XY;X\lt Y)=0$. Thus, $$ \mathbb E(XZ)=\mathbb E(X^2F(X)), $$ where $F$ denotes the common CDF of $X$ and $Y$. Since $X$ is distributed as $-X$, $F(-X)=1-F(X)$ and $$ \mathbb E(X^2F(X))=\mathbb E((-X)^2F(-X))=\mathbb E(X^2(1-F(X))=\mathbb E(X^2)-\mathbb E(X^2F(X)). $$ This yields $$ \mathrm{cov}(X,\max(X,Y))=\tfrac12\mathrm{var}(X). $$ On the other hand, $\min(-X,-Y)=-\max(X,Y)$ hence, once again by symmetry, $$ \mathrm{cov}(X,\min(X,Y))=\tfrac12\mathrm{var}(X). $$ Edit: A much simpler proof is to note from the onset that, since $\max(X,Y)+\min(X,Y)=X+Y$, $\mathrm{cov}(X,\max(X,Y))+\mathrm{cov}(X,\min(X,Y))=\mathrm{cov}(X,X+Y)=\mathrm{var}(X)$, and that, by the symmetry of the common distribution of $X$ and $Y$ and the identity $\min(-X,-Y)=-\max(X,Y)$, $\mathrm{cov}(X,\max(X,Y))=\mathrm{cov}(X,\min(X,Y))$. These two elementary remarks yield the result and allow to skip nearly every computation.
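A Monte Carlo sanity check (my own sketch, assuming independent standard normals as in this answer) agrees with $\operatorname{cov}(X,\max(X,Y))=\tfrac12\operatorname{var}(X)=\tfrac12$:

```python
import random

# Estimate cov(X, max(X, Y)) for independent X, Y ~ N(0, 1).
random.seed(0)
n = 400_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [random.gauss(0, 1) for _ in range(n)]
zs = [max(x, y) for x, y in zip(xs, ys)]

mx = sum(xs) / n
mz = sum(zs) / n
cov = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / n
print(cov)   # close to 0.5
```

Replacing `max` with `min` gives essentially the same estimate, matching the symmetry argument above.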
The OP states $X,Y$ ~ $N(0,1)$, but doesn't specify whether $X$ and $Y$ are independent or dependent. Whereas the other posters assume independence, consider instead the more general problem that nests same, namely $(X,Y)$ ~ standard bivariate Normal, with joint pdf $f(x,y)$. The general solution to Cov[$X$, max$(X,Y)$] ... obtained here using the mathStatica / Mathematica combo ... is simply: [formula given as an image in the original post] The 'min' case is symmetrical, but here it is anyway: [formula given as an image in the original post]
I have a function of several variables defined by $k$ different couples. I want to invert it. I guess this is an inverse problem, but I don't know what to look for to solve it. Here is a more formal explanation of my problem: Let $f$ be a bijective function from $\mathbb R^n$ to $\mathbb R^n$. $f$ is characterized by $k$ different couples $[{(x_1,x_2,...,x_n),(y_1,y_2,...,y_n)}]_k$ verifying $f(x_1,x_2,...,x_n)=(y_1,y_2,...,y_n)$. For a given $(y_1,y_2,...,y_n)$, calculate $f^{-1}(y_1,y_2,...,y_n)$. I calculate many couples $[{(x_1,x_2,...,x_n),(y_1,y_2,...,y_n)}]_k$ with the finite element method, so I can only determine $y$ from $x$. But I know this is a bijection. I store these couples in a table, but I want to invert the table to obtain $x$ from $y$. What kind of method can be applied here? Thank you.
Step one is to rephrase your problem as "I have a function of several variables defined by $k$ couples, characterised by $$(x_1,\dots,x_n) = g(y_1,\dots,y_n)$$ and I want to evaluate $g$ at arbitrary inputs." The fact that $g$ is the inverse of another function $f$ is irrelevant. Now depending on what assumptions you want to make about $g$, there are several routes you could go down. If you know that the relations between the $x$'s and $y$'s are exact (i.e. there is no noise) and you have sufficient resolution, you can look at interpolation methods. The simplest method would be linear interpolation. If you want a smoother interpolation, you could consider bicubic interpolation ($n=2$) or tricubic interpolation ($n=3$) or their higher dimensional variants, but be aware that you will do more 'smoothing out' in higher dimensions. Alternatively, if there is noise in your data you could pick a functional form for $g$ (e.g. perhaps you have reason to think that it's linear, or gaussian, or...) and fit the parameters in order to minimize e.g. the least-squares error at the points you have data for. If you give some more info about the specific problem you're trying to solve, I will be able to give a more helpful answer.
You can rewrite this as a system of equations where: $g_i(y_1, y_2, ... y_n) - x_i = 0$ If each of these $g_i$'s is a polynomial, you can directly solve this problem using Buchberger's algorithm/Grobner basis. (See the wiki page: http://en.wikipedia.org/wiki/Gr%C3%B6bner_basis in the section labelled "Solving equations")
In my statistics book I encountered the task of calculating the maximum and minimum of $n$ independent random variables, which are uniformly distributed over $(0,1)$. Knowing the definition of the density function I get that $f_{\mathbb{X}}(x) = {1\over {b-a}}$, with $a=0$, $b=1$, which gives $f_{\mathbb{X}}(x) = {1\over {1-0}} = 1$. The distribution function then becomes, simply, $x$. For the maximum of the $n$ variables (let's call this $\mathbb{Z}$) I get $F_{\mathbb{Z}}(x) = \prod_{i=1}^{n} F_{\mathbb{X}_i}(x) = x^n$. For the minimum ($\mathbb{Y}$) I get $F_{\mathbb{Y}}(x) = 1-\prod_{i=1}^{n} \left(1- F_{\mathbb{X}_i}(x)\right) = 1-(1-x)^n$. So, to get the expected values I have two choices (which are really the same thing): integrate the density functions over $(0,1)$, or just take $F(1)-F(0)$. Either way, I get that $E(\mathbb{Z}) = F_{\mathbb{Z}}(1)-F_{\mathbb{Z}}(0) = 1^n - 0^n = 1$ and $E(\mathbb{Y}) = F_{\mathbb{Y}}(1)-F_{\mathbb{Y}}(0) = (1-(1-1)^n) - (1-(1-0)^n) = (1-0)-(1-1) = 1$. My book disagrees, claiming that the expected values are $E(\mathbb{Z}) = {n \over {n+1}}$ and $E(\mathbb{Y}) = {1\over n}$. Since I can't see how this is true, I'd simply like to know where I went wrong, and what I should've done?
For the expected value, you need to first differentiate the CDF, so you should have $f_Z(x)=nx^{n-1}$. Now you take the expected value and integrate over $(0,1)$: $$E(\mathbb{Z})=\int_0^1{x\,nx^{n-1}\,dx}=\int_0^1{nx^n\,dx}=\left.\frac{nx^{n+1}}{n+1}\right|_0^1=\frac{n}{n+1}$$
$f_Z(x)=nx^{n-1}$ then $EZ=\int_0^1 x nx^{n-1}dx=n\int_0^1x^ndx=n\left(\frac{x^{n+1}}{n+1}|_0^1\right)=\frac{n}{n+1}$ $f_Y(x)=n(1-x)^{n-1}$ then $EY=\int_0^1 x n(1-x)^{n-1}dx=n\int_0^1x(1-x)^{n-1}dx$ $=-\int_0^1xd(1-x)^{n}=-x(1-x)^n|_0^1+\int_0^1(1-x)^ndx=-\int_0^1(1-x)^nd(1-x)$ $=-\frac{(1-x)^{n+1}}{n+1}|_0^1=\frac{1}{n+1}$
The question is: A half cylinder with the square part on the $xy$-plane, and the length $h$ parallel to the $x$-axis. The position of the center of the square part on the $xy$-plane is $(x,y)=(0,0)$. $S_1$ is the curved portion of the half-cylinder $z=(r^2-y^2)^{1/2}$ of length $h$. $S_2$ and $S_3$ are the two semicircular plane end pieces. $S_4$ is the rectangular portion of the $xy$-plane Gauss' law: $$\iint_S\mathbf E\cdot \mathbf{\hat{n}}\,dS=\frac{q}{\epsilon_0}$$ $\mathbf E$ is the electric field $\left(\frac{\text{Newton}}{\text{Coulomb}}\right)$. $\mathbf{\hat{n}}$ is the unit normal vector. $dS$ is an increment of the surface area $\left(\text{meter}^2\right)$. $q$ is the total charge enclosed by the half-cylinder $\left(\text{Coulomb}\right)$. $\epsilon_0$ is the permitivity of free space, a constant equal to $8.854\times10^{-12}\,\frac{\text{Coulomb}^2}{\text{Newton}\,\text{meter}^2}$. The electrostatic field is: $$\mathbf{E}=\lambda(x\mathbf{i}+y\mathbf{j})\;\text{,}$$ where $\lambda$ is a constant. Use this formula to calculate the part of the total charge $q$ for the curved portion $S_1$ of the half-cylinder: $$\iint_S\mathbf E\cdot \mathbf{\hat{n}}\,dS=\frac{q}{\epsilon_0}=\iint_R\left\{-E_x[x,y,f(x,y)]\frac{\partial f}{\partial x} -E_y[x,y,f(x,y)]\frac{\partial f}{\partial y} +E_z[x,y,f(x,y)] \right\}\,dx\,dy$$ The goal is to find the total charge $q$ enclosed by the half-cylinder, expressed in terms of $\lambda$, $r$ and $h$. 
The solution should be: $$\pi r^2\lambda h\epsilon_0$$ This is what I've tried: First calculate Gauss' law for $S_1$: \begin{align} f(x,y)&=z=(r^2-y^2)^{1/2}=\sqrt{(r^2-y^2)} \\ \frac{\partial f}{\partial x}&=\frac12(r^2-y^2)^{-\frac12}\cdot 0=0 \\ \frac{\partial f}{\partial y}&=\frac12(r^2-y^2)^{-\frac12}\cdot -2y=-\frac{y}{\sqrt{(r^2-y^2)}}=-\frac yz \\ \\ \mathbf{E}&=\lambda(x\mathbf{i}+y\mathbf{j}) \\ E_x[x,y,f(x,y)]&=\lambda x \\ E_y[x,y,f(x,y)]&=\lambda y \\ E_z[x,y,f(x,y)]&=0 \\ \\ \text{length}&=h \\ \end{align} Using the formula $$\iint_R\left\{-E_x[x,y,f(x,y)]\frac{\partial f}{\partial x} -E_y[x,y,f(x,y)]\frac{\partial f}{\partial y} +E_z[x,y,f(x,y)] \right\}\,dx\,dy$$ we get: \begin{align} &\iint_R\left\{-\lambda x\cdot 0-\lambda y\cdot -\frac{y}{z} + 0\right\}\,dx\,dy \\ &=\iint_R\frac{\lambda y^2}{z}\,dx\,dy \\ &=\lambda\iint_R\frac{y^2}{\sqrt{r^2-y^2}}\,dx\,dy \\ \end{align} Since the length is $h$ and the length is parallel to the $x$-axis: \begin{align} &\lambda \int_R\int_0^h\frac{y^2}{\sqrt{r^2-y^2}}\,dx\,dy \\ &=\lambda\int_R\left[\frac{y^2x}{\sqrt{r^2-y^2}}\right]_0^h\,dy \\ &=\lambda\int_R\frac{y^2h}{\sqrt{r^2-y^2}}\,dy \\ \end{align} Subsitute: \begin{align} y&=r\sin\theta \\ \theta&=\arcsin\left(\frac1r y\right) \\ \frac{dy}{d\theta}&=\frac{d}{d\theta}\left(r\sin\theta\right)=r\cos\theta \\ dy&=r\cos(\theta)\,d\theta \\ \\ &\lambda\int\frac{hr^2\sin^2\theta}{\sqrt{r^2-r^2\sin^2\theta}}\cdot r\cos(\theta)\,d\theta \\ &=\lambda h\int\frac{r^3\sin^2\theta\cos\theta}{r\sqrt{1-\sin^2\theta}}\,d\theta \\ &=\lambda hr^2\int\frac{\sin^2\theta\cos\theta}{\sqrt{\cos^2\theta}}\,d\theta \\ &=\lambda hr^2\int\frac{\sin^2\theta\cos\theta}{\cos\theta}\,d\theta \\ &=\lambda hr^2\int\sin^2\theta\,d\theta \\ &=\lambda hr^2\int\frac{1-\cos2\theta}{2}\,d\theta \\ &=\frac12\lambda hr^2\int1-\cos2\theta\,d\theta \\ &=\frac12\lambda hr^2\int1\,d\theta-\int\cos2\theta\,d\theta \\ &=\frac12\lambda hr^2\left[\theta-\frac12\sin2\theta\right] \\ \\ \text{substitute back 
} \theta=\arcsin\left(\frac1r y\right)\text{:} \\ &=\frac12\lambda hr^2\left[\arcsin\left(\frac1r y\right)-\frac12\sin\left(2\arcsin\left(\frac1r y\right)\right)\right] \\ \text{the boundaries of }y\text{ are }-r\text{ and }r\text{:} \\ &=\frac12\lambda hr^2\left[\arcsin\left(\frac{y}{r}\right)-\frac12\sin\left(2\arcsin\left(\frac{y}{r}\right)\right)\right]_{-r}^r \\ &=\frac12\lambda hr^2\left[\left(\frac{\pi}{2}-0\right) - \left(-\frac{\pi}{2}-0\right)\right] \\ &=\frac12\pi\lambda hr^2 \end{align} Calculate Gauss' law for $S_2$ and $S_3$: The surfaces of $S_2$ and $S_3$ are equal. Since: $\bullet$ the position of the center of the square part on the $xy$-plane is $(x,y)=(0,0)$, the direction of the electrostatic field at both surfaces is opposite: $(\lambda x \mathbf{i})$, $\bullet$ and the unit normal vectors are in opposite direction, the sum of the two fluxes will not be equal to 0. The area of each of these surfaces is $\frac12 \pi r^2$. The electric field in the $x$-direction is $\lambda x\mathbf{i}$. $x$ for $S_2$ = $\frac12 h$. $x$ for $S_3$ = $-\frac12 h$. $\mathbf{\hat{n}}$ for $S_2$ = $\mathbf{i}$. $\mathbf{\hat{n}}$ for $S_3$ = $-\mathbf{i}$. Therefore for $S_2$: \begin{align} \mathbf{E}\cdot \mathbf{\hat{n}} \times \text{surface area}&=\lambda x\mathbf{i} \cdot \mathbf{i} \times \frac12 \pi r^2 \\ &=\lambda \frac12 h\mathbf{i} \cdot \mathbf{i} \times \frac12 \pi r^2 \\ &=\frac14 \pi\lambda hr^2 \\ \end{align} And for $S_3$: \begin{align} \mathbf{E}\cdot \mathbf{\hat{n}} \times \text{surface area}&=\lambda x\mathbf{i} \cdot -\mathbf{i} \times \frac12 \pi r^2 \\ &=\lambda (-\frac12 h)\mathbf{i} \cdot -\mathbf{i} \times \frac12 \pi r^2 \\ &=\frac14 \pi\lambda hr^2 \\ \end{align} Calculate Gauss' law for $S_4$: Since $S_4$ lies in the $xy$-plane, the electrostatic field $\mathbf{E}=\lambda(x\mathbf{i}+y\mathbf{j})\;\text{,}$ lies parallel to the surface, thus the result of Gauss' law is $0$. 
The net result is: \begin{align} \iint_S\mathbf E\cdot \mathbf{\hat{n}}\,dS&=\frac{q}{\epsilon_0} \\ &=\frac12 \pi\lambda hr^2 + \frac14 \pi\lambda hr^2 + \frac14 \pi\lambda hr^2 +0 \\ &=\pi\lambda hr^2 \\ \end{align} The total charge $q$ enclosed in the half-cylinder is thus: $$q=\pi\lambda hr^2 \epsilon_0$$. This solves the problem I was having.
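For what it's worth, the three fluxes can be checked numerically (a Python sketch; the values of $\lambda$, $r$, $h$ are arbitrary, and the curved-surface integral is evaluated after the same substitution $y=r\sin\theta$ used above, which removes the singularity at $y=\pm r$):

```python
import math

def midpoint(f, a, b, n=100_000):
    # Composite midpoint rule for a 1-D integral.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

lam, r, h = 1.3, 2.0, 5.0
# Curved surface S1: after y = r*sin(theta), the flux integral becomes
# lam * h * r^2 * integral of sin^2(theta) over (-pi/2, pi/2).
flux_S1 = lam * h * r**2 * midpoint(lambda t: math.sin(t)**2,
                                    -math.pi / 2, math.pi / 2)
# Each flat end at x = +/- h/2 contributes lam*(h/2) * (area pi r^2 / 2).
flux_ends = 2 * lam * (h / 2) * (math.pi * r**2 / 2)
total = flux_S1 + flux_ends  # S4 contributes 0 (E is parallel to it)
```

The total matches $\pi\lambda h r^2$ to numerical precision.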
The lower and upper boundaries for $y$ in $S_1$ should be $-r$ and $r$ instead of $0$ and $r$. I was thinking in polar coordinates instead of in Cartesian coordinates... I'll edit the original post accordingly
"the boundaries of $y$ are $0$ and $r$" There's your problem. It should be $-r$ and $r$.
I am trying to show that when having an $m\times n$ matrix where $n > m$ (more columns than rows), then $A^TA$ is not invertible, so I have set it up as follows: $A = \begin{bmatrix}a&b&2a\\c&d&2c\\\end{bmatrix}$ Then: $A^TA$ is: $ \begin{bmatrix}a&c\\b&d\\2a&2c\end{bmatrix} *\begin{bmatrix}a&b&2a\\c&d&2c\\\end{bmatrix}$ The resultant matrix is a 3x3 matrix with a bunch of letter terms. How can I show that this matrix is not invertible? I tried to calculate the determinant however it gets super messy? Thanks
$\operatorname{rank}(A^TA) \leq \operatorname{rank}(A) \leq m < n$, but $A^TA$ is $n\times n$, hence it is not invertible.
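A small numeric illustration of this (a Python sketch with hand-rolled $3\times3$ linear algebra; the particular $2\times3$ matrix is arbitrary): since the rank of $A^TA$ is at most $2$, the $3\times3$ Gram matrix must have zero determinant.

```python
def det3(M):
    # Determinant of a 3x3 matrix via cofactor expansion.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Any 2x3 matrix: A^T A is 3x3 with rank at most 2, so det(A^T A) = 0.
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
G = mat_mul(transpose(A), A)
```

No messy symbolic expansion needed: the determinant comes out exactly zero.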
As there are already nice answers why $A^T A$ is not invertible, let me give an answer which is more about how to compute such a determinant. There is a formula due to Cauchy and Binet, which reads for $A\in Mat(m\times n), B\in Mat(n\times m)$ $$ \det(AB) = \sum_{S\in \binom{[n]}{m}} \det(A_{[m],S}) \cdot \det(B_{S,[m]}),$$ where $$\binom{[n]}{m}=\{ S \subseteq \{1, \dots, n \} : \vert S \vert = m \}$$ and $$ A_{[m],S} = (A_{i,j})_{1\leq i \leq m, \ j\in S}, \quad B_{S,[m]} = (B_{i,j})_{i\in S, \ 1\leq j \leq m}.$$ From this one gets your claim as well, since $\binom{[n]}{m}=\emptyset$ for $m>n$.
I'm given a statement to prove: A rotation of π/2 around the z-axis, followed by a rotation of π/2 around the x-axis = A rotation of 2π/3 around (1,1,1) Where z-axis is the unit vector (0,0,1) and x-axis is the unit vector (1,0,0). I want to prove this statement using quaternions, however, I'm not getting the expected answer: The quaternion I calculate for the rotation of 2π/3 around (1,1,1) yields: $ [\frac{1}{2},(\frac{1}{2},\frac{1}{2},\frac{1}{2})] $ The quaternion I calculate for a rotation of π/2 around the z-axis followed by the rotation of π/2 around the x-axis yields: $ [\frac{1}{2},(\frac{1}{2},-\frac{1}{2},\frac{1}{2})] $ If I calculate the rotation π/2 around z-axis , followed by the rotation of π/2 around y-axis , then I get the equivalent quaternions I'm looking for. Is this an issue with the problem statement or am I simply making an error? Note : That I also tried to prove the same statement using rotation matrices instead of quaternions and received the same result.
I think that the claim is wrong as it was formulated. The given rotations correspond to the quaternions: $$ \begin{split} R_{\mathbf{z},\pi/2}\rightarrow e^{\mathbf{k}\pi/4}=\cos \dfrac{\pi}{4}+\mathbf{k}\sin \dfrac{\pi}{4}=\dfrac{\sqrt{2}}{2}+\mathbf{k}\dfrac{\sqrt{2}}{2}\\ R_{\mathbf{i},\pi/2}\rightarrow e^{\mathbf{i}\pi/4}=\cos \dfrac{\pi}{4}+\mathbf{i}\sin \dfrac{\pi}{4}=\dfrac{\sqrt{2}}{2}+\mathbf{i}\dfrac{\sqrt{2}}{2} \end{split} $$ so that: $$ R_{\mathbf{i},\pi/2}R_{\mathbf{z},\pi/2} \rightarrow (\dfrac{\sqrt{2}}{2}+\mathbf{i}\dfrac{\sqrt{2}}{2})(\dfrac{\sqrt{2}}{2}+\mathbf{k}\dfrac{\sqrt{2}}{2})=\dfrac{1}{2}(1+\mathbf{i}-\mathbf{j}+\mathbf{k}) $$ If we put this quaternion in exponential form we have: $$ e^{\mathbf{i}\pi/4}e^{\mathbf{k}\pi/4}=e^{\mathbf{u}\pi/3} $$ where $\mathbf{u}=\dfrac{\mathbf{i}-\mathbf{j}+\mathbf{k}}{\sqrt{3}}$, i.e. a rotation of $2\pi/3$ around an axis passing through $(1,-1,1)$, which is the same result found by @Dash and is different from a rotation of the same angle about an axis passing through $(1,1,1)$ that corresponds to the quaternion $$ R_{\mathbf{v},2\pi/3} \rightarrow e^{\mathbf{v}\pi/3} \qquad \mathbf{v}=\dfrac{\mathbf{i}+\mathbf{j}+\mathbf{k}}{\sqrt{3}} $$ And there is no way to obtain the stated result by inverting some rotation (i.e. swapping active/passive).
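One can confirm the composition with a few lines of code (a sketch using the Hamilton product and the active convention, where applying the $z$-rotation first composes as $q_x q_z$):

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

c = math.sqrt(2) / 2
q_z = (c, 0.0, 0.0, c)   # rotation by pi/2 about z
q_x = (c, c, 0.0, 0.0)   # rotation by pi/2 about x
# z-rotation applied first, then x-rotation: compose as q_x * q_z.
q = qmul(q_x, q_z)       # expect (1/2)(1 + i - j + k), axis along (1, -1, 1)
```

The result is $\tfrac12(1+\mathbf{i}-\mathbf{j}+\mathbf{k})$, i.e. the axis really is $(1,-1,1)$, not $(1,1,1)$.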
It looks like you are mixing up active v/s passive rotations in your calculations. In other words, you need to be consistent about whether a quaternion represents an operator that rotates a vector to a new position in the same coordinate frame, or represents a rotation of the frame itself, keeping the vector fixed with respect to its original frame. The two operations are inverses of each other. Once you resolve this, I'm sure you'll get a consistent answer.
I'm attempting to test the claim: "Every shuffled card deck is unique. A shuffled deck of cards will exist once and never again." Assumption: a perfect world where a deck of cards is perfectly random all of the time. 52 cards are used in the deck. The birthday paradox is easy enough to calculate for small numbers: 23 people and 365 birthdays, as used in the 50% examples. But how do you approach (or approximate) the birthday paradox for values like 52!? I understand 52! is a large (~226 bit) number but I would like to get a feel of the order of magnitude of the claim. Is it 1% or 0.00001%? The probability that $n$ shuffles are all distinct (no collision) would be: (52!)!/(52!^n*(52!-n)!) I understand the formula. But (52!)! is incomputable so where do I go from here? How to approach a problem like this? This is just for my own curiosity and not homework. If it can be done for a deck of cards I'd want to give it a try on collisions in crypto keys. (RSA, AES256 etc.)
A simple estimate: given $n$ random variables, independently and identically distributed evenly over $M$ states, the probability that some two of them are equal is at most $\frac{n(n-1)}{2M}$ . Why? Because that's the expected value for the number of pairs that are equal; there are $\binom{n}{2}$ pairs, each with a probability of $\frac1M$ of being equal. So then, if $n$ is significantly less than $\sqrt{M}$ , the probability of some two being equal is small. Now, we focus on the shuffling problem. How big is $52!$ , really? For that, we have Stirling's formula: $$n!\approx \frac{n^n}{e^n}\cdot\sqrt{2\pi n}$$ Take logarithms, and we get $n(\log n-\log e)+\frac12\log(2\pi n)$ . Calculating that in the spreadsheet I've got open already, I get a base 10 log of about $67.9$ , for about $8\cdot 10^{67}$ possible shuffled decks. So, then, if we take $10^{30}$ shuffles, that's a probability of less than $\frac{5\cdot 10^{59}}{8\cdot 10^{67}}< 10^{-8}$ (one in a hundred million) that any two of them match exactly. That $10^{30}$ - if you had a computer for every person on earth, each generating shuffled decks a billion times per second, the planet would be swallowed up by the sun before you got there.
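The two numbers in this estimate are easy to reproduce (a Python sketch; `math.lgamma(53)` gives $\ln(52!)$ accurately enough for this purpose, and $n=10^{30}$ is the hypothetical shuffle count from above):

```python
import math

# log10(52!) via the log-gamma function: lgamma(53) = ln(52!).
log10_M = math.lgamma(53) / math.log(10)
M = 10 ** log10_M  # ~8e67 possible deck orders

n = 1e30  # hypothetical number of shuffles ever performed
# Expected number of colliding pairs (also a union bound on collision
# probability): n(n-1)/(2M).
p_bound = n * (n - 1) / (2 * M)
```

This confirms $\log_{10}(52!)\approx 67.9$ and a collision bound of a few times $10^{-9}$ for $10^{30}$ shuffles.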
I thought about the exact same thing and solved it using a different method. My exact problem was: How many shuffles do you need to have a $50\%$ chance that at least one shuffle happened twice? I used a slightly different formula, found here. $n$ is the number of shuffles and $p(\text{same})$ equals $0.5$ in our case. $$n\approx \sqrt{ 2 \times 365 \ln\left(\frac{1}{1 - p(\text{same})}\right)}$$ Then I applied the formula for numbers other than $365$. Here $\frac{1}{1-p(\text{same})} = 2$, so the formula becomes: $$n\approx \sqrt{ 2\ln(2) \times M}$$ with $M$ the number of possibilities. For a birthday it's $365$, for a card deck it's $52!$. That gives $1.057 \times 10^{34}$, which is not that far from "one in a hundred million for $10^{30}$ shuffles"! Actually I didn't use the second formula, I used the first one and went with a much more empirical approach. I used the code given in the article, changing $M$ to increasingly big numbers and plotting the results in Excel ($n$ as a function of $M$) until I had a satisfying trend curve. That gave a square-root curve ($y \approx 1.1773\sqrt{x}$). I applied logarithms to $n$ and $M$, so that $\ln(n)$ as a function of $\ln(M)$ is linear. Then I used the trend line of that new function ($0.5x + 0.1632$) to get $\ln(n)$ for $x = \ln(52!)$; I also found $1.057 \times 10^{34}$. I now realize that I did not choose the easiest method...
I want to calculate the divergence of the Gravitational field: $$\nabla\cdot \vec{F}=\nabla\cdot\left( -\frac{GMm}{\lvert \vec{r} \rvert^2} \hat{e}_r\right )$$ in spherical coordinates. I know that in spherical coordinates: $$\begin{aligned} & x=r \sin\theta \cos \phi \\&y=r\sin\theta \sin \phi \\& z=r\cos\theta \end{aligned}$$ and the unit vector are: $$\begin{aligned} & e_r=\begin{pmatrix}\sin\theta\cos\phi\\\sin\theta \sin\phi\\\cos\theta \end{pmatrix} \\ & e_{\theta}=\begin{pmatrix}\cos\theta\cos\phi\\\cos\theta \sin\phi\\-\sin\theta \end{pmatrix}\\&e_{\phi}=\begin{pmatrix}-\sin\phi\\\cos\phi\\0\end{pmatrix}\end{aligned}$$ Now I need to convert my original vector field into spherical coordinates (this is the part I am not really sure about): $$\vec{F}=-\frac{GMm}{x^2+y^2+z^2} \hat{e}_x-\frac{GMm}{x^2+y^2+z^2} \hat{e}_y-\frac{GMm}{x^2+y^2+z^2} \hat{e}_z $$ transforming the coordinates: $x^2+y^2+z^2=(r\sin\theta\cos\phi)^2+(r\sin\theta\sin\phi)^2+(r\cos\theta)^2=r^2$ $$\implies\vec{F}=\frac{-GMm}{r^2}\left(\hat{e}_x +\hat{e}_y +\hat{e}_z \right )$$ How can I transform the unit vectors now? Do I just replace them by the spherical unit verctors? Is there a short really cool way to calculate the divergence of this vector field? I know that the answer should be zero except at $r=0$ the divergence should be undefined.
Since there's only $r$ dependence, \begin{align*} \nabla \cdot \mathbf{F} &= \frac{1}{r^{2}} \frac{\partial}{\partial r} (r^{2} F_{r}) \\ &= \frac{1}{r^{2}} \frac{\partial}{\partial r} (-GMm) \\ &= 0 \end{align*} for $\mathbf{r}\in \mathbb{R}^{3} \backslash \{ \mathbf{0} \}$. May refer to this
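A crude finite-difference check of this, away from the origin (a Python sketch in Cartesian coordinates, dropping the constant $GMm$; the evaluation point and step size are arbitrary):

```python
def F(x, y, z):
    # Inverse-square field -r_hat / r^2 = -(x, y, z) / r^3 (constants dropped).
    r3 = (x*x + y*y + z*z) ** 1.5
    return (-x / r3, -y / r3, -z / r3)

def divergence(x, y, z, h=1e-5):
    # Central-difference approximation of div F at (x, y, z).
    dFx = (F(x + h, y, z)[0] - F(x - h, y, z)[0]) / (2 * h)
    dFy = (F(x, y + h, z)[1] - F(x, y - h, z)[1]) / (2 * h)
    dFz = (F(x, y, z + h)[2] - F(x, y, z - h)[2]) / (2 * h)
    return dFx + dFy + dFz

d = divergence(1.0, 2.0, -0.5)  # any point away from the origin
```

The numerical divergence vanishes to within the finite-difference error, consistent with $\nabla\cdot\mathbf F=0$ for $\mathbf r\neq\mathbf 0$.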
$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\Li}[1]{\,\mathrm{Li}} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \color{#f00}{-\nabla\cdot\pars{{1 \over r^{2}}\,{\vec{r} \over r}}} & = -\nabla\cdot\nabla\pars{-\,{1 \over r}} = \nabla^{2}\pars{1 \over r} = \color{#f00}{-4\pi\, \delta\pars{\vec{r}}} \\[5mm] \mbox{because}\ \color{#f00}{\int_{V} \bracks{-\nabla\cdot\pars{{1 \over r^{2}}\,{\vec{r} \over r}}}\dd^{3}\vec{r}} & = -\int_{S}{1 \over r^{2}}\,{\vec{r} \over r}\cdot\dd\vec{\mathsf{S}} = -\int\dd\Omega_{\vec{r}} = \color{#f00}{-4\pi}\,,\qquad\vec{0} \in V \end{align}
I'm a physicist with no clue how to calculate $S = \sum_{k=1}^{\infty} \frac{\cos(kx)}{k^2}$. One handbook says the answer is $\frac{1}{12}(3x^2 - 6 \pi x + 2 \pi^2)$ on the interval $0 \leq x \leq 2\pi$, which would give the value everywhere since the sum is periodic in $2\pi$. Altland's Condensed Matter Field Theory gives the value as $\frac{\pi^2}{6} - \frac{\pi |x|}{2} + \frac{x^2}{4} + \dots$, giving no domain and with a weird absolute value, implying the series is valid everywhere and has higher powers. But how can this be correct? The series is periodic and those first three terms give the complete answer on $0 \leq x \leq 2\pi$, so trying to define the result everywhere with an infinite power series seems senseless to me. Anyway, can anybody shed light on how to perform the summation? I've tried writing $S$ as a contour integration, like so: $\frac{1}{2\pi i} \oint dz g(z) \frac{\cos(i z x)}{-z^2} $, where $g(z) = \frac{\beta}{exp(\beta z) - 1}$, a counting function with simple poles at $z = i k$, $k = ..., -2, -1, 0, 1, 2, \dots$, and the contour contains all the poles of $g(z)$ for $k = 1, 2, 3, ...$. Now, this is not my expertise, but I want to learn. The trick now is to pick a different contour (possibly going off to infinity), such that the integral can be performed. I see that the product $g(z) \frac{\cos(i z x)}{-z^2}$ goes to zero at infinity, but I cannot see how to deform the contour such that the integral becomes tractable.
I had evaluated this exact series a long time ago and just stumbled upon this question and noticed that this method wasn’t given, so here is the way I evaluated it. Define $$F(x) = \sum_{n=1}^{\infty} \frac{\cos (n x)}{n^2}$$ where $x \in \mathbb{R}$ . Since $F(x)$ is $2\pi$ -periodic, WLOG, we may confine ourselves to $0<x<2\pi$ . By a simple calculation: \begin{align} F(x) - F(0) &= \int_{0}^{x} \left(-\sum_{n=1}^{\infty} \frac{\sin (n t)}{n}\right)\, dt \\ &= \Im \int_{0}^{x} \left(-\sum_{n=1}^{\infty} \frac{e^{i n t}}{n} \right) \, dt \\ &=\Im \int_{0}^{x} \ln(1-e^{i t}) \, dt \\ &= \Im \int_{0}^{x} \ln \left(2\sin\left(\frac{t}{2}\right)e^{i(t-\pi)/2}\right) \, dt \\ &=\Im \int_{0}^{x} \left[ \ln \left(2\sin\left(\frac{t}{2}\right)\right)+\frac{i(t-\pi)}{2}\right]\, dt \\ &= \int_{0}^{x} \frac{t-\pi}{2} \, dt = \frac{x^2}{4}-\frac{\pi x}{2}\end{align} Since $F(0)=\frac{\pi^2}{6}$ , this implies that $$\boxed{\sum_{n=1}^{\infty} \frac{\cos (n x)}{n^2} = \frac{x^2}{4}-\frac{\pi x}{2}+\frac{\pi^2}{6}}$$ For self-containment, here is a proof that $F(0)=\zeta(2)=\frac{\pi^2}{6}$ via this method: Notice $\cos(\pi n) = (-1)^{n}$ for $n \in \mathbb{N}$ . Recall the trivial property that $$\eta (s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s} = (1-2^{1-s})\zeta(s)$$ Take $x \to \pi$ : $$-\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^2} - \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{4} - \frac{\pi^2}{2} = -\frac{\pi^2}{4}$$ $$\implies \zeta (2) (2^{-1}-1) - \zeta(2) = -\frac{\pi^2}{4} \implies -\frac{3}{2} \zeta(2) = -\frac{\pi^2}{4}$$ $$\implies \zeta(2) = \frac{\pi^2}{6}$$ $\square$
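A quick numeric check of the boxed formula (a Python sketch; the evaluation point and truncation level are arbitrary, and the partial sum converges roughly like $1/N$):

```python
import math

def partial_sum(x, N=200_000):
    # Partial sum of cos(kx)/k^2 up to k = N.
    return sum(math.cos(k * x) / k**2 for k in range(1, N + 1))

def closed_form(x):
    # Valid for 0 <= x <= 2*pi.
    return x**2 / 4 - math.pi * x / 2 + math.pi**2 / 6

x = 1.7
err = abs(partial_sum(x) - closed_form(x))
```

The discrepancy is on the order of the truncation error, far below $10^{-4}$.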
This is not a very rigorous answer at all. Let us consider $$I=\sum_{k=1}^{\infty} \frac{\cos(kx)}{k^2}\qquad ,\qquad J =\sum_{k=1}^{\infty} \frac{\sin(kx)}{k^2}$$ $$I+iJ=\sum_{k=1}^{\infty} \frac{e^{ikx}}{k^2}=\sum_{k=1}^{\infty} \frac{(e^{ix})^k}{k^2}=\text{Li}_2\left(e^{i x}\right)$$ where the polylogarithm function appears. So, $$I=\Re\left(\text{Li}_2\left(e^{i x}\right)\right)=\frac{1}{2} \left(\text{Li}_2\left(e^{+i x}\right)+\text{Li}_2\left(e^{-i x}\right)\right)$$ $$J=\Im\left(\text{Li}_2\left(e^{i x}\right)\right)=\frac{1}{2} i \left(\text{Li}_2\left(e^{-i x}\right)-\text{Li}_2\left(e^{+i x}\right)\right)$$ Being stuck at this point, I used brute force: generating a table of $201$ equally spaced values $(0\leq x \leq 2 \pi)$, I performed a quadratic regression which led to $$I=0.25 x^2-1.5708 x+1.64493$$ for a residual sum of squares equal to $1.11\times 10^{-27}$ which means a perfect fit. The second coefficient is obviously $\frac \pi 2$; concerning the third, if $x=0$ or $x=2\pi$, $I=\frac{\pi^2} 6$. So $$I=\frac 14 x^2-\frac \pi 2 x+\frac{\pi^2} 6$$ which is the formula from the handbook. I hope and wish that you will get some better-founded answers (like you, I am a physicist and not a real mathematician).
There are 10 people, 4 women and 6 men. Among the men, 3 are left-handed and among the women 2 are left-handed writers. Two people are chosen without replacement. Calculate the probability that the sample contains at least a right-handed person or at least one woman. I'm trying to solve the following question but I really don't know how to start it! Can someone help me? I believe I need to use combinations as I can choose a right-handed person in the first or second draw (the same for the woman) but I'm stuck in how to compute it. I know that the asked probability is P $=\frac{14}{15} $
Hint: A sample that does not contain "at least one right-handed person or at least one woman" consists only of left-handed men.
We can attempt to do this using combinatorics, the Inclusion-Exclusion Principle, and the complement rule of probability. $$P(\geq 1\ right\ handed\ \cup\ \geq 1\ women) = P(\geq 1\ right\ handed) + P(\geq 1\ women) - P(\geq 1\ right\ handed\ \cap \ \geq 1\ women)$$ Where we have used the Inclusion-Exclusion Principle to get the above equality. Now we'll find each of the terms on the right hand side. We'll use the complement rule of probability to achieve: $$P(\geq 1\ right\ handed) = 1 - P(0 \ right\ handed)$$ $$P(\geq 1\ right\ handed) = 1 - \frac {5 \choose 2}{10 \choose 2} = \frac {7}{9}$$ Where $\frac {5 \choose 2}{10 \choose 2}$ comes from having a total of ${10 \choose 2}$ groups of 2 and ${5 \choose 2}$ comes from choosing 2 people only of the left handed people. We can use the same reasoning to get $$P(\geq 1\ women) = 1 - P(0\ women)$$ $$P(\geq 1\ women) = 1 - \frac{6 \choose 2}{10 \choose 2} = \frac{2}{3}$$ Now we need to take care of over counting which is why we have the subtracting term in our first equation. $$P(\geq 1\ right\ handed\ \cap \ \geq 1\ women) = 1 - P(0\ right\ handed\ \cup \ 0 \ women) $$ $$P(0\ right\ handed\ \cup \ 0 \ women) = P(0\ right\ handed) + P(0 \ women) - \\P(0\ right\ handed\ \cap \ 0 \ women) $$ $$P(0\ right\ handed\ \cup \ 0 \ women) = \frac {5 \choose 2}{10 \choose 2} + \frac {6 \choose 2}{10 \choose 2} - \frac {3 \choose 2}{10 \choose 2} = \frac {22}{45}$$ Putting everything together now, $$P(\geq 1\ right\ handed\ \cup\ \geq 1\ women) = P(\geq 1\ right\ handed) + P(\geq 1\ women) - P(\geq 1\ right\ handed\ \cap \ \geq 1\ women)$$ $$P(\geq 1\ right\ handed\ \cup\ \geq 1\ women) = \frac {7}{9} + \frac {2}{3} - \frac {23}{45} = \frac {42}{45} = \frac{14}{15}$$
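Since the sample space is tiny, the whole thing can also be verified by brute-force enumeration (a Python sketch using exact rational arithmetic; the encoding of the ten people is the only assumption):

```python
from fractions import Fraction
from itertools import combinations

# 10 people as (sex, handedness): 3 left-handed men, 3 right-handed men,
# 2 left-handed women, 2 right-handed women.
people = ([("M", "L")] * 3 + [("M", "R")] * 3 +
          [("W", "L")] * 2 + [("W", "R")] * 2)

pairs = list(combinations(people, 2))  # all C(10, 2) = 45 samples
good = [p for p in pairs
        if any(h == "R" for _, h in p) or any(s == "W" for s, _ in p)]
prob = Fraction(len(good), len(pairs))
```

The enumeration gives $42/45 = 14/15$, agreeing with both the inclusion-exclusion computation and the complement argument.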
Let quadrilateral $ABCD$ be inscribed in a circle whose center is $O$, let $AC$ and $BD$ intersect at $P$, and let $Q$ be a point on the straight line $OP$. The circumcenters of $\Delta QAB$, $\Delta QBC$, $\Delta QCD$, $\Delta QDA$ are $O_1$, $O_2$, $O_3$, $O_4$ respectively. Then how to prove the concurrence of the straight lines $O_1 O_3$, $O_2 O_4$, $OP$? ($O$ does not coincide with $P$.) I'm sorry that I can't provide any useful ideas. (I have tried to calculate by brute force through trigonometric functions, but it only made me crazy.) And I'm very sorry for my possible wrong grammar and strange expressions because I'm still an English learner and my first language is very different from the English system. I'd appreciate it if someone could share his ideas about the question.
Here's a slick inversive proof. Let $(QAB)$ meet $(QCD)$ again at $X$ , let $(QBC)$ meet $(QDA)$ again at $Y$ . It suffices to show that the centre of $(QXY)$ (which is $O_1O_3\cap O_2O_4$ ) lies on $OP$ . First, note that $(QAC)$ and $(QBD)$ meet again on line $OP$ . Indeed, let $(QAC)$ meet $OP$ again at $R$ . Then $PQ\cdot PR=PA\cdot PC=PB\cdot PD$ , which implies that $R$ lies on $(QBD)$ too. Now invert centre $Q$ with arbitrary radius, and denote inverses with $'$ . The line $OP$ is fixed, and $ABCD$ maps to another cyclic quadrilateral $A'B'C'D'$ , whose center $J$ still lies on $OP$ . Since $(QAC)$ and $(QBD)$ meet on $OP$ , we know that $R'=A'C'\cap B'D'$ is on $OP$ . Circles $(QAB)$ , $(QCD)$ map to lines $A'B'$ , $C'D'$ respectively, so $X'=A'B'\cap C'D'$ . Similarly, $Y'=B'C'\cap D'A'$ . But by Brokard's theorem, we know that $JR'\perp X'Y'$ , i.e. $OP\perp X'Y'$ . This implies that the centre of $(QXY)$ lies on $OP$ , as desired.
My friend has given his answer. I think I should present it. Let $E=O_1O_3\cap O_2O_4$, and $F=AB\cap CD$, $G=AD\cap BC$. Let $\Gamma$ be the circle whose centre is $E$ and whose radius is $EQ$. (If $E$ coincides with $Q$, then let $\Gamma$ be a null circle.) And $\Gamma$ meets $\bigcirc O_1$ again at $K$. Then $FB\cdot FA=FC\cdot FD$, which implies that $F$ is on the radical axis of $\bigcirc O_1$ and $\bigcirc O_3$. So $FQ\perp O_1O_3$, which means $FQ\perp O_1E$. So $F$ is on the radical axis of $\bigcirc O_1$ and $\Gamma$. Then we have that $F,K,Q$ are on the same line, which implies that $FK\cdot FQ=FB\cdot FA$. Thus $F$ is on the radical axis of $\bigcirc O$ and $\Gamma$. Similarly, $G$ is on the radical axis of $\bigcirc O$ and $\Gamma$. So the straight line $FG$ is the radical axis of $\bigcirc O$ and $\Gamma$. Therefore, $OE\perp FG$. And by Brokard's theorem, we know that $OP\perp FG$. Thus, $O,E,P$ are collinear. So, the straight lines $O_1O_3$, $O_2O_4$ and $OP$ are concurrent at $E$.
Two players have one unfair coin; the probability of getting a head is 2/3. The first person (A) throws three times. The second (B) tosses the coin until he gets tails. a) What is the probability that the first will throw more heads than the second? b) What is the average number of heads thrown by the two together? Can you help me understand how I am supposed to compare the two players' head counts, and how do I calculate the average number of heads thrown by the two? Do I have to calculate the average number of heads for every assumption?
The easiest way to do this is by defining appropriate random variables: $Y_1$ being the number of heads the first person has thrown, and $Y_2$ being the number of heads the second person has thrown. Then by the given information: $Y_1\sim \text{Bin}(3,\frac{2}{3})$ and $Y_2\sim \text{Geom}(\frac{1}{3})$ (counting heads before the first tail). I think there is an implicit assumption that $Y_1$ and $Y_2$ are independent. Then the desired probability is: $$ \mathbb{P}(Y_1> Y_2)= \mathbb{P}(Y_1=1,Y_2=0)+ \mathbb{P}(Y_1=2,Y_2=0)+ \mathbb{P}(Y_1=2,Y_2=1) + \mathbb{P}(Y_1=3,Y_2=0) +\mathbb{P}(Y_1=3,Y_2=1)+ \mathbb{P}(Y_1=3,Y_2=2)$$ Also, when dealing with probability the average number is usually the expectation, and they are probably referring to the expectation: $$ \mathbb{E}[Y_1+Y_2] $$
Player A has Binomial Distribution: P(A) distribution = $({1\over3}+{2\over3})^3 = {1\over27} + {2\over9} + {4\over9} + {8\over27} = 1$ Player B has Geometric Distribution: P(B) distribution = ${1/3 \over 1-2/3} = {1\over3} + {2\over9} + {4\over27} + {8\over81} + \cdots = 1$ P(B<1) = ${1\over3}$ P(B<2) = ${1\over3} + {2\over9} = {5\over9}$ P(B<3) = ${5\over9} + {4\over27} = {19\over27}$ $\begin{align} \text{P(A > B)} &= \text{P(A=1, B<1) + P(A=2, B<2) + P(A=3, B<3)} \cr &= {2\over9}({1\over3}) + {4\over9}({5\over9}) + {8\over27}({19\over27}) \cr &= {2\over27} + {20\over81} + {152\over729} \cr &= {386\over729} \end{align}$ Thus, P(A has more heads than B) = ${386\over729} ≈ 53\%$
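The value $386/729$ can be confirmed exactly in a few lines (a Python sketch with `fractions.Fraction`, modelling B's head count as the number of heads before the first tail):

```python
from fractions import Fraction
from math import comb

p = Fraction(2, 3)  # probability of heads

def binom_pmf(a, n=3):
    # P(A = a) for A ~ Bin(3, 2/3)
    return comb(n, a) * p**a * (1 - p)**(n - a)

def geom_pmf(k):
    # B = number of heads before the first tail: P(B = k) = p^k (1 - p)
    return p**k * (1 - p)

# P(A > B) = sum over a of P(A = a) * P(B < a)
prob = sum(binom_pmf(a) * sum(geom_pmf(k) for k in range(a))
           for a in range(4))
```

Exact arithmetic gives $386/729 \approx 53\%$, matching the computation above.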
Show that origin is globally asymptotically stable. $$\begin{eqnarray} x' &=& −(x + y)^3\\ y' &=& (x − y)^3 \end{eqnarray}$$ I know to prove that $V'(x)$ has to be negative which I can prove. However, I can't seem to figure out how to get $V(x)$ . Can anyone put me in right direction to how to calculate $V(x)$ for it.
As @Cesareo wrote, after the change of variables $$ u = x + y\\ v = x - y $$ the system has the form $$ \left\{\begin{array}{lll} \dot u &=& v^3-u^3\\ \dot v &=& -(v^3+u^3). \end{array}\right. $$ It can be seen that a Lyapunov function is $V(u,v)=u^4+v^4$: its derivative $$ \dot V= 4u^3\dot u+4v^3\dot v =4u^3(v^3-u^3)-4v^3(v^3+u^3) $$ $$ =-4u^6-4v^6$$ is negative definite. Hence, $V(x,y)=(x+y)^4+(x-y)^4$.
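The algebra for $\dot V$ is easy to double-check numerically (a Python sketch sampling random points; it verifies both the identity $\dot V=-4u^6-4v^6$ and its negativity away from the origin):

```python
import random

def Vdot(u, v):
    # dV/dt along trajectories of u' = v^3 - u^3, v' = -(v^3 + u^3)
    # for V(u, v) = u^4 + v^4.
    du = v**3 - u**3
    dv = -(v**3 + u**3)
    return 4 * u**3 * du + 4 * v**3 * dv  # should equal -4u^6 - 4v^6

rng = random.Random(1)
pts = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(1000)]
identity_ok = all(abs(Vdot(u, v) - (-4*u**6 - 4*v**6)) < 1e-6
                  for u, v in pts)
negative_ok = all(Vdot(u, v) < 0 for u, v in pts)
```

Every sampled point satisfies the identity and has $\dot V<0$, as the negative-definiteness argument requires.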
As far as I am aware there are no general steps for finding Lyapunov functions, but for low-dimensional systems one can look at the stream plot and try to find a contour line such that all stream lines cross that contour pointing inwards. This contour is defined by setting $V(x,y)$ equal to some constant. Often the Lyapunov function $V(x,y)=x^2+y^2$ can be a good starting guess, which would yield a circle as contour. If this doesn't work one could increase one or more of the powers from two to four, six, etc., which would make the contour more squarish, and/or perform a linear coordinate transformation, which can rotate and squash the contour. These alterations from the starting guess are usually enough when the time derivatives are polynomials, as is the case in your example.
I have $X$ ~ $U(-1,1)$ and $Y = X^2$ random variables, I need to calculate their covariance. My calculations are: $$ Cov(X,Y) = Cov(X,X^2) = E((X-E(X))(X^2-E(X^2))) = E(X X^2) = E(X^3) = 0 $$ because $$ E(X) = E(X^2) = 0 $$ I'm not sure about the $X^3$ part, are my calculations correct?
$\begin{align} \mathsf {Cov}(X,Y) & = \mathsf{Cov}(X,X^2) \\[1ex] & = \mathsf E((X-\mathsf E(X))\;(X^2-\mathsf E(X^2))) \\[1ex] & = \mathsf E(X^3-X^2\mathsf E(X)-X\mathsf E(X^2)+\mathsf E(X)\mathsf E(X^2)) & \text{ by expansion} \\[1ex] & = \mathsf E(X^3)-\mathsf E(X^2)\mathsf E(X)-\mathsf E(X)\mathsf E(X^2)+\mathsf E(X)\mathsf E(X^2) & \text{ by linearity of expectation} \\[1ex] & = \int_{-1}^1 \tfrac 1 2 x^3\operatorname d x -\int_{-1}^1 \tfrac 1 2 x^2\operatorname d x\cdot\int_{-1}^1 \tfrac 1 2 x\operatorname d x & \text{ by definition of expectation} \\[1ex] & = 0 \end{align}$ Reason The integrals of the odd functions are both zero over that domain. $\;\mathsf E(X^3)=\mathsf E(X) = 0$. Note that $\;\mathsf E(X^2) = \int_{-1}^1 \tfrac 12 x^2 \operatorname d x = \tfrac 1 3$
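A Monte Carlo sanity check (a Python sketch; sample size and seed are arbitrary) shows the sample covariance of $X$ and $X^2$ hovering near zero:

```python
import random

def cov_x_x2(trials=200_000, seed=42):
    # Monte Carlo estimate of Cov(X, X^2) for X ~ U(-1, 1).
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(trials)]
    mx = sum(xs) / trials
    mx2 = sum(x * x for x in xs) / trials
    return sum((x - mx) * (x*x - mx2) for x in xs) / trials

c = cov_x_x2()  # theory: E(X^3) - E(X)E(X^2) = 0
```

Note this is a standard example of variables that are uncorrelated yet clearly dependent.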
We know it is $E(X^3)$, so: $$E(X^3)=\int_{-1}^{1}x^3 f(x)\,dx=\int_{-1}^{1}\tfrac12 x^3\,dx=0$$ So it is correct.
Calculate the sum of the series: $$S = \sum_{1\leq a<b<c \\a,b,c\in \mathbb{N}} \frac{1}{2^a 3^b 5^c}$$ My attempt: $$S = \sum_{1\leq a<b<c \\a,b,c\in \mathbb{N}} \frac{1}{2^a 3^b 5^c} = \sum _{c=3}^{\infty } \sum _{b=2}^{c-1} \sum _{a=1}^{b-1} \frac{1}{2^a 3^b 5^c}$$ Is it equal? What's next? From Wolfram Mathematica I know that $\sum _{c=3}^{\infty } \sum _{b=2}^{c-1} \sum _{a=1}^{b-1} \frac{1}{2^a 3^b 5^c}= \frac{1}{1624}$.
\begin{eqnarray*} \sum_{1 \leq a < b < c} \frac{1}{2^a 3^b 5^c} = \sum_{a=1}^{\infty} \frac{1}{2^a} \sum_{ b=a+1}^{\infty} \frac{1}{3^b} \sum_{c=b+1}^{\infty} \frac{1}{5^c} \end{eqnarray*} \begin{eqnarray*} = \sum_{a=1}^{\infty} \frac{1}{2^a} \sum_{ b=a+1}^{\infty} \frac{1}{3^b} \frac{1}{5^b \times 4} \end{eqnarray*} \begin{eqnarray*} = \sum_{a=1}^{\infty} \frac{1}{2^a} \frac{1}{15^a \times 14 \times 4} \end{eqnarray*} \begin{eqnarray*} =\color{red}{\frac{1}{29 \times 14 \times 4}} =\frac{1}{1624}. \end{eqnarray*}
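Truncating the triple sum reproduces $1/1624$ to machine precision, since the terms decay geometrically (a short Python sketch; the cutoff $c\le 60$ is arbitrary but more than enough):

```python
# Truncated triple sum over 1 <= a < b < c; the tail beyond c = 60 is
# astronomically small, so the truncation error is invisible in doubles.
S = sum(1 / (2**a * 3**b * 5**c)
        for c in range(3, 61)
        for b in range(2, c)
        for a in range(1, b))
```

The numeric value agrees with the closed form $\frac{1}{29\times14\times4}=\frac{1}{1624}$.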
$$ S= \sum _{c=3}^{\infty } \sum _{b=2}^{c-1} \sum _{a=1}^{b-1} \frac{1}{2^a 3^b 5^c} = \sum _{c=3}^{\infty } \sum _{b=2}^{c-1} \left(1-\frac{1}{2^{b-1}}\right) \frac{1}{3^b5^c} = \sum _{c=3}^{\infty } \sum _{b=2}^{c-1}\frac{1}{3^b5^c}-2\sum _{c=3}^{\infty } \sum _{b=2}^{c-1}\frac{1}{6^b5^c}, $$ let us say $S_1-2S_2$. Then $$ S_1=\frac{1}{9}\sum _{c=3}^{\infty } \sum _{b=0}^{c-3}\frac{1}{3^b5^c}=\frac{1}{6}\sum _{c=3}^{\infty }\left(1-\frac{1}{3^{c-2}}\right)\frac{1}{5^c}=\frac{2}{3\cdot 5^2}-\frac{1}{6}\sum _{c=3}^{\infty }\frac{1}{3^{c-2}5^c} $$ hence $$ S_1=\frac{2}{3\cdot 5^2}-\frac{1}{6\cdot 3\cdot 5^3}\sum _{c=3}^{\infty }\frac{1}{15^{c-3}}=\frac{2}{3\cdot 5^2}-\frac{1}{14\cdot 6\cdot 5^2}. $$ You can calculate $S_2$ similarly.
Let $V$ be the region in $\mathbb{R}^3$ satisfying inequalities $x^2+y^2 \le 1, 0 \le z \le 1$. Sketch $V$ and calculate: $$\displaystyle \int \int \int_{V}(x^2+y^2+z^2)\,dV$$ I wrote the integral as $\displaystyle \int \int \int_{V}(x^2+y^2+z^2)\,dV = 4 \int_0^1 \int_0^{\sqrt{1-y^2}}\int_{0}^{1}(x^2+y^2+z^2)\,{dz}\,{dx}\,{dy}$. Now I want to use polar coordinates for $x$ and $y$, so I let $x = r\cos{\theta}, y= r\sin{\theta}$. But what's the upper bound $\sqrt{1-y^2}$ in polar coordinates? I can't reduce it to something not dependent on $r$.
Probably it is faster to avoid polar coordinates and just exploit the simmetries of the cylinder $V$: $$\begin{eqnarray*}\iiint_V (x^2+y^2+z^2)\,d\mu &=& \iiint_V (x^2+y^2)\,d\mu + \iiint_V z^2\,d\mu\\&=&\iint_{x^2+y^2\leq 1}(x^2+y^2)\,dx\,dy+\pi\int_{0}^{1}z^2\,dz\\&=&\int_{0}^{1}\rho^2\cdot2\pi\rho\,d\rho+\frac{\pi}{3} \\&=&\color{red}{\frac{5\pi}{6}}.\end{eqnarray*}$$
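The value $5\pi/6$ is easy to confirm with a crude midpoint rule in cylindrical coordinates (a Python sketch; the grid size is arbitrary):

```python
import math

def cylinder_integral(n=120):
    # Midpoint rule in cylindrical coordinates:
    # integrand (x^2 + y^2 + z^2) = r^2 + z^2, volume element r dz dr dtheta.
    total = 0.0
    dr = 1.0 / n
    dz = 1.0 / n
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            z = (j + 0.5) * dz
            total += (r*r + z*z) * r * dz * dr
    return 2 * math.pi * total  # the theta integral is trivial

val = cylinder_integral()
```

The result matches $\frac{5\pi}{6}\approx 2.618$ to the accuracy of the quadrature.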
Notice that the region is a cylinder, so I would use cylindrical coordinates. You can describe this region equivalently as $0\leq r\leq 1$, $0\leq z \leq1$, $0\leq\theta\leq2\pi$. In these coordinates the integrand is $x^2+y^2+z^2=r^2+z^2$ and the volume element is $r\,dz\,dr\,d\theta$, so you can take the triple integral as $$\int_{0}^{2\pi} \int_{0}^{1} \int_{0}^{1} (r^2+z^2)\,r \ dz\,dr\,d\theta$$ and solve from there.
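Either computation can be checked numerically; a midpoint-rule sketch in cylindrical coordinates (the $\theta$ integral contributes a plain factor of $2\pi$):

```python
import math

# Midpoint rule for  iiint_V (x^2+y^2+z^2) dV  over the cylinder
# r <= 1, 0 <= z <= 1: integrand r^2 + z^2, volume element
# r dz dr dtheta; the theta integral is just a factor of 2*pi.
n = 400
h = 1.0 / n
total = 0.0
for i in range(n):
    r = (i + 0.5) * h
    for j in range(n):
        z = (j + 0.5) * h
        total += (r * r + z * z) * r * h * h
total *= 2 * math.pi

print(total, 5 * math.pi / 6)
```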
stem_mix_good_quality
Number of terms in the expansion of $$\left(1+\frac{1}{x}+\frac{1}{x^2}\right)^n$$ $\bf{My\; Try::}$ We can write $$\left(1+\frac{1}{x}+\frac{1}{x^2}\right)^n=\frac{1}{x^{2n}}\left(1+x+x^2\right)^n$$ Now we have to calculate the number of terms in $$(1+x+x^2)^n$$ and divide each term by $x^{2n}$. So $$(1+x+x^2)^n = \binom{n}{0}+\binom{n}{1}(x+x^2)+\binom{n}{2}(x+x^2)^2+......+\binom{n}{n}(x+x^2)^n$$ and dividing each term by $x^{2n}$ we get the number of terms $\displaystyle = 1+2+3+.....+n+1 = \frac{(n+1)(n+2)}{2}$. Is my solution right? If not, how can I calculate it? Actually, I don't have the solution to this question. Thanks
When you expand $(1 + x + x^2)^n$, the highest degree is $2n$, and the lowest is $0$. None of the coefficients vanishes (they are all positive), and it can be seen easily that there is one term of each degree between $0$ and $2n$. So there should be $2n + 1$ terms. Cheers,
The number of terms in the expansion of $$\left(1+\frac1{x}+\frac1{x^2}\right)^n=\dfrac{(1+x+x^2)^n}{x^{2n}}$$ will be the same as in $(1+x+x^2)^n$. As $a^3-b^3=(a-b)(a^2+ab+b^2),$ $$(1+x+x^2)^n=\left(\dfrac{1-x^3}{1-x}\right)^n =(1-x^3)^n(1-x)^{-n}$$ The lowest & the highest powers of $x$ in $(1+x+x^2)^n$ are $0,2n$ respectively, and since all coefficients of $(1+x+x^2)^n$ are positive, every power of $x$ between $0$ and $2n$ actually occurs (compare with the Binomial Series expansion of $1\cdot(1-x)^{-n}$ in $(1-x^3)^n(1-x)^{-n}$). Hence there are $2n+1$ terms.
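A quick check (a minimal sketch via coefficient-list convolution) confirms the count $2n+1$, and shows that the $\frac{(n+1)(n+2)}{2}$ guess already fails at $n=2$:

```python
def poly_pow_terms(n):
    """Number of nonzero coefficients of (1 + x + x^2)^n,
    computed by repeated convolution of coefficient lists."""
    coeffs = [1]
    base = [1, 1, 1]
    for _ in range(n):
        new = [0] * (len(coeffs) + len(base) - 1)
        for i, a in enumerate(coeffs):
            for j, b in enumerate(base):
                new[i + j] += a * b
        coeffs = new
    return sum(1 for c in coeffs if c != 0)

print([poly_pow_terms(n) for n in range(1, 6)])   # 3, 5, 7, 9, 11
```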
stem_mix_good_quality
For reference: Calculate the area of triangle $ABC$, if $ED = 16$, $AB = 10$ and $\angle D = 15^\circ$ (Answer: $20$) My progress: I didn't get much. $\triangle ECD\ (15^\circ, 75^\circ) \implies EC = 4(\sqrt6-\sqrt2), CD = 4(\sqrt 6+\sqrt 2)$ Incenter Th. $\triangle ABD: \frac{AC}{CI}=\frac{10+BD}{16+EA}\\ S_{CDE} = \frac12\cdot 4(\sqrt6-\sqrt2)\cdot 4(\sqrt6+\sqrt2) = 32$ I thought about closing the triangle $ABD$, but as it's just an arbitrary triangle, I didn't see much of an alternative
The key is to recognize that since $\overline{AC}$ bisects $\angle DAB$, the altitudes from $C$ to $\overline{DE}$ and $\overline{AB}$ are congruent. Then since $AB$ is $5/8$ times $DE$, the area of $\triangle ABC$ must be $5/8$ times the area of $\triangle CDE$. You already have the area of $\triangle CDE$, so the area of $\triangle ABC$ follows.
Reflect $B$ across $AC$; we get a new point $B'$ on $AD$. Clearly triangles $ABC$ and $AB'C$ are congruent, so we have to find the area of the latter one. But this is just $AB'\cdot CF/2$. So we need to find $CF$. Let $G$ be the midpoint of $ED$; then $\angle EGC = 30^{\circ}$ and $GC = 8$. Since triangle $CFG$ is half of an equilateral triangle we have $$CF = {CG \over 2}=4$$
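Both answers can be cross-checked with the exact side lengths from the question (assuming, as above, the right angle at $C$):

```python
import math

EC = 4 * (math.sqrt(6) - math.sqrt(2))
CD = 4 * (math.sqrt(6) + math.sqrt(2))

area_CDE = EC * CD / 2       # right angle at C
area_ABC = area_CDE * 5 / 8  # AB/DE = 10/16, equal altitudes from C

print(area_CDE, area_ABC)    # 32 and 20, up to rounding
```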
stem_mix_good_quality
I'm trying to calculate $\prod_k{p_k}$ where $p_k$ are (potentially) very high probabilities of independent, zero-mean, standard normal random variables and $k>100$. However, I'm running into numerical problems using MATLAB (although the same problem occurs in Python/Scipy). Let's say $x=30$, then normcdf(x) returns 1, which is not precise. However, if I use normcdf(-x) (or normcdf(x,'upper') ) instead, I get a value of 4.906713927148764e-198. I was hoping that I could then take 1 minus this value to get a more accurate probability. Unfortunately, the result gets rounded, as soon as I apply the subtraction: >> normcdf(-x) ans = 4.906713927148764e-198 >> 1-(1-normcdf(-x)) ans = 0 Is there any way to work around this issue?
If you really need to compute such tiny probabilities, one way is to use symbolic math and/or variable precision arithmetic. For example, using the vpa function in the Symbolic Math toolbox: X = sym(-300); P = normcdf(X,0,1) P2 = vpa(P) which returns P = erfc(150*2^(1/2))/2 P2 = 7.449006262775352900552391145102e-19547 Of course converting this result back to floating-point ( double(P2) ) results in zero, as P2 is less than eps(realmin) . However, it's possible that if you do your calculations in variable precision and convert back to floating-point at the end you may be able to gain a bit more accuracy. Just check to make sure that you're not wasting compute cycles.
In general, in floating-point arithmetic, $$1+\epsilon = 1$$ whenever $|\epsilon|$ is smaller than about half the machine epsilon $\epsilon_{\textrm{mach}}$ (about $2.2\times 10^{-16}$ for the 64-bit doubles MATLAB uses). I also question the need to compute a probability to 198 decimal places. There are about $10^{80}$ atoms in the universe. I cannot possibly imagine the need to compute probabilities so precisely that they could be used to predict something at the atomic level in two standard and one somewhat smallish universe.
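The same cancellation is easy to reproduce in Python, which (like MATLAB) uses IEEE 754 double precision; the standard workaround is to carry the tail probability, or its logarithm, instead of a value indistinguishable from 1:

```python
import math

x = 30.0

# Upper-tail probability P(Z > 30): about 4.9e-198, comfortably
# representable as a double even though it is astronomically small.
p_tail = math.erfc(x / math.sqrt(2)) / 2
print(p_tail)

# Round-tripping through 1 destroys it completely:
print(1 - (1 - p_tail))          # 0.0

# Stable log of the lower-tail probability P(Z <= 30):
log_cdf = math.log1p(-p_tail)    # ~ -p_tail, without ever forming 1 - p_tail
print(log_cdf)
```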
stem_mix_good_quality
I have been working on fitting a plane to 3d points and then calculating the perpendicular distance between each point and the plane using Matlab. So far I can find the plane equation in $Ax+By+Cz+D = 0$ form and calculate the distance using $\frac{Ax_0+By_0+Cz_0+D}{\sqrt{A^2+B^2+C^2}}$. However, in some Matlab codes the plane is defined as $Ax+By+C = z$, which seems quite different from the above plane equation. Even though I did some research about the difference between these equations, I could not find any satisfactory answer. Could you please explain to me the difference between these two plane definitions, and could you please tell me the distance between a point and the plane $Ax+By+C = z$? I am looking forward to hearing from you. Thanks in advance
The two plane equations are nearly equivalent since you can change one into the other with some algebraic manipulation. Starting with the new equation with lowercase coefficients for clarity: $$ax + by + c = z$$ Subtract $z$ from both sides: $$ax + by - z + c = 0$$ This has the same form as the equation you've been using. By matching up coefficients $$Ax + By + Cz + D = 0$$ we can see that $A=a$ , $B=b$ , $C=-1$ , and $D=c$ . So, the distance formula becomes $$distance = \frac{Ax_0 + By_0 + Cz_0 + D}{\sqrt{A^2 + B^2 + C^2}}=\frac{ax_0 + by_0 - z_0 + c}{\sqrt{a^2 + b^2 + 1}}$$ Going the other way, starting from $$Ax + By + Cz + D = 0$$ Subtract $Cz$ from both sides $$Ax + By + D = -Cz$$ Divide by $-C$ $$-\frac A C x - \frac B C y - \frac D C = z$$ Matching up coefficients with $$ax + by + c = z$$ results in $a=-\frac A C$ , $b = -\frac B C$ , and $c = -\frac D C$ . Notice that since we divided by $C$ to get this other formula, this form of the plane equation cannot represent planes where $C=0$ , which are parallel to the $z$ -axis. So, the second form of the equation describes only a subset of planes compared to the first.
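The correspondence is easy to check numerically. A small sketch (helper names are mine) computing the distance from both parameterizations; note the absolute value in the numerator, which gives an unsigned distance rather than a signed offset:

```python
import math

def dist_general(A, B, C, D, p):
    """Unsigned distance from point p to the plane Ax + By + Cz + D = 0."""
    x0, y0, z0 = p
    return abs(A * x0 + B * y0 + C * z0 + D) / math.sqrt(A**2 + B**2 + C**2)

def dist_graph(a, b, c, p):
    """Unsigned distance from point p to the plane z = ax + by + c,
    i.e. ax + by - z + c = 0."""
    x0, y0, z0 = p
    return abs(a * x0 + b * y0 - z0 + c) / math.sqrt(a**2 + b**2 + 1.0)

# z = 2x - 3y + 1  corresponds to  A=2, B=-3, C=-1, D=1.
p = (1.0, 2.0, 3.0)
print(dist_general(2, -3, -1, 1, p), dist_graph(2, -3, 1, p))
```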
I think it may be a glitch or something. As far as I know, the equation of a plane in 3D is $Ax+By+Cz=D$, where $(A,B,C)$ is a normal vector of the plane. Here I think that the $z$ should be on the left side with a minus sign, and similarly the $c$ on the right side. That makes the equation $Ax+By-z=-C$, whose normal vector is $(A,B,-1)$.
stem_mix_good_quality
$$\begin{vmatrix} a^2+\lambda^2 & ab+c\lambda & ca-b\lambda \\ ab-c\lambda & b^2+\lambda^2& bc+a\lambda\\ ca+b\lambda & bc-a\lambda & c^2+\lambda^2 \end{vmatrix}\cdot\begin{vmatrix} \lambda & c & -b\\ -c& \lambda & a\\ b & -a & \lambda \end{vmatrix}=(1+a^2+b^2+c^2)^3.$$ Then the value of $\lambda$ is: options: (a)$\; 8\;\;$ (b) $\; 27\;\;$ (c)$ \; 1\;\;$ (d) $\; -1\;\;$ When I first saw the question, I tried multiplying these two determinants, but this is a very tedious task. So I want a better method by which we can easily calculate the value of $\lambda$. Please explain it to me in detail.
Since this is a multiple choice question, and the two sides are meant to be equal as functions of $a,b,c$, it remains true when you substitute $a=b=c=0$, giving $\lambda^9 = 1$ and hence $\lambda = 1$.
Compute some cofactors of the second matrix. Compare with corresponding elements of the first matrix. This should lead to a solution without a lot of brute force. Added: What I had in mind was the following. Let $$M_1=\begin{bmatrix} a^2+\lambda^2 & ab+c\lambda & ca-b\lambda \\ ab-c\lambda & b^2+\lambda^2& bc+a\lambda\\ ca+b\lambda & bc-a\lambda & c^2+\lambda^2 \end{bmatrix}$$ and let $$M_2=\begin{bmatrix} \lambda & c & -b\\ -c& \lambda & a\\ b & -a & \lambda \end{bmatrix}$$ Then the equation amounts to $\det M_1\cdot \det M_2=(1+a^2+b^2+c^2)^3.$ It's quick to compute the cofactors of $M_2$ since they're all $2\times 2$ determinants. You'll find that $M_1$ is the cofactor matrix of $M_2.$ So $M_1^TM_2=(\det M_2)I$ and therefore $$\det M_1\cdot\det M_2=\det(M_1^TM_2)=\det((\det M_2)I)=(\det M_2)^3.$$ The determinant of $M_2$ is also quick to compute; it's $\lambda(\lambda^2+a^2+b^2+c^2).$ Now equate the cube of this expression to $(1+a^2+b^2+c^2)^3.$
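Both observations (and the multiple-choice shortcut) are easy to confirm numerically; a minimal sketch with a hand-rolled $3\times3$ determinant, checking the identity at $\lambda=1$ for arbitrary $a,b,c$:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def sides(a, b, c, lam):
    """Return (det M1 * det M2, (1+a^2+b^2+c^2)^3); they agree for lam = 1."""
    M1 = [[a * a + lam * lam, a * b + c * lam, c * a - b * lam],
          [a * b - c * lam, b * b + lam * lam, b * c + a * lam],
          [c * a + b * lam, b * c - a * lam, c * c + lam * lam]]
    M2 = [[lam, c, -b],
          [-c, lam, a],
          [b, -a, lam]]
    return det3(M1) * det3(M2), (1 + a * a + b * b + c * c) ** 3

lhs, rhs = sides(0.7, -1.3, 2.1, 1.0)
print(lhs, rhs)
```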
stem_mix_good_quality
I need to calculate, or at least to find a good estimate, for the following sum $$ \sum_{n_1+n_2 + \dots + n_k = N}\frac{1}{(n_1+1)(n_2+1)\dots (n_k+1)},\quad (1)$$ where $n_i \ge 1$ . These numbers represent volumes of particular hyperpyramids in a hypercube, therefore the title of the question. Update: The motivation section below contains an arithmetic error, but the required sum seems to appear nonetheless in an updated formula, and I also guess that this kind of sum may appear quite naturally in all sorts of tasks. Motivation: I have independent equidistributed $\mathbb{R}$ -valued random variables $\xi_1,\dots,\xi_{N+1}$ with $P(\xi_i = \xi_j) = 0$ . Denote by $\Diamond_i$ either $<$ or $>$ . Then, provided $\sharp\{\Diamond_i \text{ is} >\} = k$ the probability of the event $$P\left(\xi_1\Diamond_1\xi_2, \xi_3\Diamond_2\max(\xi_1,\xi_2),\dots, \xi_{N+1}\Diamond_N\max(\xi_1,\dots,\xi_{N})\right) = \frac{1}{(n_1+1)(n_2+1)\dots (n_k+1)}, \quad(2)$$ where $n_1 + \dots + n_k = N$ and $n_i \ge 1$ and correspond to the places where $\Diamond_i$ is a $>$ . By design, all events of the form $\sharp\{\Diamond_i \text{ is} >\} = k$ are mutually exclusive, so $P(\sharp\{\Diamond_i \text{ is} >\} = k)$ is the sum of all possible events of the from $(2)$ , which gives $(1)$ . Extended task : What I am actually about to calculate is $P(\sharp\{\Diamond_i \text{ is} >\} \le k)$ , which thus gives a formula $$\sum_{l=1}^k \sum_{n_1+n_2 + \dots + n_l = N}\frac{1}{(n_1+1)(n_2+1)\dots (n_l+1)}.\quad (3)$$ This formula, though more complex, may have some nice cancellations in it, perhaps.
Note that $$ \left( {\ln \left( {\frac{1}{{1 - x}}} \right)} \right)^{\,m} = \sum\limits_{0\, \le \,k} {\frac{m!}{k!}{k\brack m}\,x^{\,k} } $$ where the square brackets indicate the (unsigned) Stirling N. of 1st kind . From that one obtains \begin{align*} {n\brack m} &=\frac{n!}{m!}\sum_{\substack{1\,\leq\,k_j\\k_1\,+\,k_2\,+\,\cdots\,+\,k_m\,=\,n}}\frac{1}{k_1\,k_2\,\cdots\,k_m} \\&=\frac{n!}{m!}\sum_{\substack{0\,\leq\,k_j\\k_1\,+\,k_2\,+\,\cdots\,+\,k_m\,=\,n-m}}\frac{1}{(1+k_1)(1+k_2)\cdots(1+k_m)} \end{align*} which is an alternative definition for such numbers. In the referenced link you can also find the asymptotic formulation.
The sum in $(1)$ is equal to the coefficient of $x^N$ in $$\Big(\sum_{n=1}^{\infty}\frac{x^n}{n+1}\Big)^k=\Big(\frac{-x-\ln(1-x)}{x}\Big)^k.$$ This alone can already be used for computations. A closer look at $$\Big(\frac{-x-\ln(1-x)}{x^2}\Big)^k=\sum_{n=0}^{\infty}a_{n,k}x^n$$ (the sum in $(1)$ is thus $a_{N-k,k}$ ) reveals a better-to-use recurrence $$a_{n,k}=\frac{k}{n+2k}\sum_{m=0}^{n}a_{m,k-1}.\qquad(k>0)$$ This can also be used for estimates and asymptotic analysis (if needed).
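The recurrence can be cross-checked against a direct enumeration of compositions, using exact rational arithmetic (a minimal sketch):

```python
from fractions import Fraction
from itertools import product

def brute(N, k):
    """Direct sum over compositions n_1 + ... + n_k = N with n_i >= 1."""
    total = Fraction(0)
    for comp in product(range(1, N - k + 2), repeat=k):
        if sum(comp) == N:
            term = Fraction(1)
            for n in comp:
                term /= n + 1
            total += term
    return total

def via_recurrence(N, k):
    """a_{n,k} = k/(n+2k) * sum_{m<=n} a_{m,k-1}; the target sum is a_{N-k,k}."""
    n_max = N - k
    a = [Fraction(1)] + [Fraction(0)] * n_max   # a_{n,0} = [n == 0]
    for kk in range(1, k + 1):
        prefix = Fraction(0)
        new = []
        for n in range(n_max + 1):
            prefix += a[n]
            new.append(Fraction(kk, n + 2 * kk) * prefix)
        a = new
    return a[n_max]

print(brute(7, 3), via_recurrence(7, 3))
```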
stem_mix_good_quality
So is there any common method to solve this kind of problem? Calculate $$\lim_{x\to +\infty} \left( \frac{\ln^4 x}{4}-\int_0^x \frac{\ln^3 t}{1+t} \, dt \right)$$ Before this I have seen a similar problem which was solved using the digamma function or the beta function. So does that mean we can always use special functions to solve them?
Note that $$\frac{\ln^4 x}{4}=\int_{1}^{x}\frac{\ln^3 t}{t}dt.$$ So \begin{align*} \frac{\ln^4 x}{4}-\int_0^x \frac{\ln^3 t}{1+t}dt &=\int_{1}^{x}\frac{\ln^3 t}{t}dt-\int_0^x \frac{\ln^3 t}{1+t}dt\\ &=\int_{1}^{x}\frac{\ln^3 t}{t}dt-\int_0^1 \frac{\ln^3 t}{1+t}dt-\int_1^x \frac{\ln^3 t}{1+t}dt\\ &=\int_{1}^{x}\frac{\ln^3 t}{t(1+t)}dt-\int_0^1 \frac{\ln^3 t}{1+t}dt. \end{align*} Then $$\lim_{x\to +\infty} \left( \frac{\ln^4 x}{4}-\int_0^x \frac{\ln^3 t}{1+t}dt \right) =\int_{1}^{+\infty}\frac{\ln^3 t}{t(1+t)}dt-\int_0^1 \frac{\ln^3 t}{1+t}dt.$$ By the variable substitution $t=\dfrac{1}{s}$, $$\int_{1}^{+\infty}\frac{\ln^3 t}{t(1+t)}dt=-\int_0^1 \frac{\ln^3 s}{1+s}ds.$$ Hence, $$\lim_{x\to +\infty} \left( \frac{\ln^4 x}{4}-\int_0^x \frac{\ln^3 t}{1+t}dt \right) =-2\int_0^1 \frac{\ln^3 t}{1+t}dt\ \left(=\frac{7\pi^4}{60}\right).$$ So our main goal is to calculate the improper integral $$\color{blue}{\int_0^1 \frac{\ln^3 t}{1+t}dt\ \left(=-\frac{7\pi^4}{120}\right)}.$$ Here is a proof: \begin{align*} I&=\int_0^1\frac{-\ln^3t}{1+t}dt\cr &=-\sum_{n=0}^\infty \int_{0}^1(-t)^n \ln^3 t\,dt\cr &=-\sum_{k=0}^\infty \int_{0}^1(-t)^{2k} \ln^3 t\,dt-\sum_{k=0}^\infty \int_{0}^1(-t)^{2k+1} \ln^3 t\,dt\cr &=\color{red}{-\sum_{k=0}^\infty \int_{0}^1t^{2k} \ln^3 t\,dt+\sum_{k=0}^\infty \int_{0}^1 t^{2k+1} \ln^3 t\,dt}\cr &\color{red}{=\sum_{k=0}^\infty\frac{6}{(2k+1)^4}-\sum_{k=0}^\infty\frac{6}{(2k+2)^4}}\cr &=\sum_{k=0}^\infty\frac{6}{(2k+1)^4}+\sum_{k=0}^\infty\frac{6}{(2k+2)^4}-2\sum_{k=0}^\infty\frac{6}{(2k+2)^4}\cr &=6\zeta(4)-\frac{6}{8}\zeta(4)\cr &=\frac{21}{4}\zeta(4)\cr &=\frac{7\pi^4}{120}. \end{align*} Indeed, we use the following result (the red part): the change of variables $x=e^{-t}$ shows that \begin{align*}\color{red}{ \int_0^1x^n\ln^p(1/x)\,dx} &\color{red}{=\int_0^\infty e^{-(n+1)t}t^pdt}\\ &\color{red}{=\frac{1}{(n+1)^{p+1}}\int_0^\infty e^{-u}u^pdu}\\ &\color{red}{=\frac{\Gamma(p+1)}{(n+1)^{p+1}}.} \end{align*}
By using Puiseux series I get that $\int_{0}^{x}\frac{\ln^3 t}{1+t} dt=\left(\frac{\ln^4 x}{4}-\frac{7\pi^4}{60}\right)+O\!\left(\frac{1}{x}\right)$, so the limit should be $\frac{7\pi^4}{60}$.
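The value $7\pi^4/60$ is easy to confirm numerically from the alternating series used in the computation above (a minimal sketch):

```python
import math

# integral_0^1 ln^3(t)/(1+t) dt
#   = sum_{n>=0} (-1)^n * integral_0^1 t^n ln^3(t) dt
#   = sum_{n>=0} (-1)^n * (-6)/(n+1)^4  =  -7*pi^4/120,
# and the limit in the question is -2 times this, i.e. 7*pi^4/60.
s = sum((-1) ** n * (-6.0) / (n + 1) ** 4 for n in range(200000))

print(s, -7 * math.pi ** 4 / 120)
print(-2 * s, 7 * math.pi ** 4 / 60)
```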
stem_mix_good_quality
Having difficulty with the following homework assignment: $$ \begin{array}{c|c|c|c} \text{Test} & \text{Disease} & \text{No Disease} & \\ \hline \\ + & 436 & 5 & 441\\ - & 14 & 495 & 509\\ \hline \\ & 450 & 500 & 950 \end{array}$$ What is the probability that a randomly selected individual will have an erroneous test result? That is, what is the probability that an individual has the disease and receives a negative test result ($-$) or the individual does not have the disease and receives a positive test result ($+$)? I thought the answer should be: $$\begin{align} a &= P(-\text{ and }D) + P(+\text{ and }D)\\ & = P(-) \cdot P(D) + P(+)\cdot P(ND)\\ & = P(-)\cdot\frac{450}{950} + P(+)\cdot\frac{500}{950} \end{align}$$ How do you calculate $P(-)$ and $P(+)$? The answer is given as $0.010021$. Any help is appreciated. Thank you!
The total number of individuals is $N=950$. A test may be erroneous, either because the patient has no disease and the test is positive, and there are $N_{+,ND}=5$ such individuals, or because the patient has the disease and the test is negative, and there are $N_{-,D}=14$ such individuals. The total number of individuals with an erroneous test is $N_e=N_{+,ND}+N_{-,D}=5+14=19$, hence the probability that a randomly selected individual will have an erroneous test result is $$ N_e/N=19/950=0.02. $$ Your solution uses $P(-\,\text{and}\,D)=P(-)P(D)$ and $P(+\,\text{and}\,ND)=P(+)P(ND)$. These identities are not true here. They would hold if and only if the result to the test and the health of the individuals were independent. The rational number one can think about, which is closest to the result $0.010021$ which you mention, seems to be $5/499=0.01002004$, but even this number does not round up to $0.010021$, and, anyway, I have no idea what erroneous reasoning could produce this answer from the data $436$, $5$, $14$ and $495$.
I've played with the numbers and noticed that the total number of patients getting erroneous results is $19$ $(=14+5)$, and $950$ divided by $19$ is $50$. I have no idea what to do with that; maybe you or someone else will have, or it may be only a coincidence!
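The counting argument above is one line of arithmetic on the table's cells:

```python
false_negatives = 14   # disease, test negative
false_positives = 5    # no disease, test positive
total = 950

p_error = (false_negatives + false_positives) / total
print(p_error)   # 19/950 = 0.02
```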
stem_mix_good_quality
Let $f:\Bbb R\to \Bbb R$ be a continuous function and $a,b\in \Bbb R$, $a<b$. Calculate $\int_a^b f(x)\,dx$ if $f(tx)\ge f(x)$ for any $t>0$ and $x\in \Bbb R$. I obtained just that $f(x)\ge f(0)$, but I do not think it is helpful.
We have \begin{align} f(-3) &= mn + 2m + 5n + 2 > 0, \\ f(-3/2) &= \frac{-20mn + 26m - 4n + 5}{16} < 0,\\ f(0) &= mn - m - n - 7 > 0, \\ f(m) &= -m^3 n+m^3-5 m^2 n+3 m^2-7 m n-m-n-7 < 0, \\ f(2m + 2n) &= 8 m^4+28 m^3 n+32 m^2 n^2+12 m n^3+28 m^3+78 m^2 n+66 m n^2\\ &\quad +16 n^3+26 m^2+51 m n+24 n^2-m-n-7\\ & > 0. \end{align} (Note: Simply letting $m = 4 + s, n = 4 + t$ for $s, t \ge 0$, all the inequalities are obvious.) Thus, $f(x) = 0$ has four real roots $x_1, x_2, x_3, x_4$ located in the following intervals respectively $$x_1 \in (-3, -3/2), \quad x_2 \in (-3/2, 0), \quad x_3 \in (0, m), \quad x_4 \in (m, 2m+2n).$$ We need to prove that $x_1 + x_2 < -3$ or $x_2 < -3 - x_1$ . Since $-3 - x_1 \in (-3/2, 0)$ , it suffices to prove that $f(-3-x_1) > 0$ . Since $f(x_1) = 0$ , it suffices to prove that $f(-3-x_1) - f(x_1) > 0$ that is $$(2x_1 + 3)\Big((m+2n)x_1^2 + 3(m+2n)x_1 + m+2n + 3\Big) > 0.$$ It suffices to prove that $$x_1 > - \frac{3}{2} - \sqrt{\frac{5m + 10n - 12}{4m + 8n}}.$$ It suffices to prove that $$f\left(- \frac{3}{2} - \sqrt{\frac{5m + 10n - 12}{4m + 8n}}\right) > 0$$ that is $$\frac{m^3+2 m^2 n+2 m n^2+4 n^3-m^2-10 m n-16 n^2+3 m+6 n+9}{(m+2 n)^2} > 0$$ which is true. (Note: Simply letting $m = 4 + s, n = 4 + t$ for $s, t\ge 0$, this inequality is obvious.) We are done. For @Math_Freak: The Maple code for the last two equations is given by

f := x^4+(-m-2*n+6)*x^3+(m*n-5*m-8*n+10)*x^2+(3*m*n-7*m-8*n)*x+m*n-m-n-7
x := -3/2+Q
f1 := collect(expand(f), Q)
f2 := subs({Q^2 = (5*m+10*n-12)/(4*m+8*n), Q^3 = (5*m+10*n-12)*Q/(4*m+8*n), Q^4 = ((5*m+10*n-12)/(4*m+8*n))^2}, f1)
factor(f2)
Let $\alpha_{i,m,n}$ be the $i^\text{th}$ root of $f(x;m,n)$ in ascending order. It is shown that for $m>4,n>4$ $$ \begin{align} &\alpha_{1,\infty,\infty}<\alpha_{1,m,n}<\alpha_{1,4,4}\\ &\alpha_{2,4,4}<\alpha_{2,m,n}<\alpha_{2,\infty,\infty} \end{align} $$ Numerically, $$ \begin{align} &-2.618\ldots<\alpha_{1,m,n}<-2.476\ldots\\ &-0.621\ldots<\alpha_{2,m,n}<-0.3819\ldots \end{align} $$ leading to the bound $$ \alpha_{1,m,n}+\alpha_{2,m,n}<-2.858\ldots $$ The proof is via observing the change of sign of $f$ at the boundaries of the intervals $(\alpha_{1,\infty,\infty}~,\alpha_{1,4,4})$ and $(\alpha_{2,4,4}~,\alpha_{2,\infty,\infty})$, as tabulated below: $$\begin{array}{c|c} x & \operatorname{sgn}(f(x,m,n)) \\ \hline \alpha_{1,\infty,\infty} & +1 \\ \alpha_{1,4,4} & -1 \\ \alpha_{2,4,4} & -1 \\ \alpha_{2,\infty,\infty} & +1 \end{array}$$ The infinity roots are $(-3\pm\sqrt{5})/2$.
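Neither answer is easy to eyeball, so here is a quick numeric spot-check for one admissible pair, $m=n=5$ (the polynomial is taken from the Maple code above; plain bisection on the bracketing intervals):

```python
def make_f(m, n):
    def f(x):
        return (x**4 + (-m - 2 * n + 6) * x**3
                + (m * n - 5 * m - 8 * n + 10) * x**2
                + (3 * m * n - 7 * m - 8 * n) * x
                + (m * n - m - n - 7))
    return f

def bisect(f, lo, hi, iters=200):
    """Plain bisection; assumes f changes sign on [lo, hi]."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

m, n = 5.0, 5.0
f = make_f(m, n)
x1 = bisect(f, -3.0, -1.5)   # root in (-3, -3/2)
x2 = bisect(f, -1.5, 0.0)    # root in (-3/2, 0)
print(x1, x2, x1 + x2)
```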
stem_mix_good_quality
In 3 days I have a national exam, so I am trying to work hard. I have one question from this link http://www.naec.ge/images/doc/EXAMS/exams-2011-gat-5-ivlisi-.pdf (see problem 76). Because it is written in Georgian, I will translate it. The problem is the following: as you see, a right parallelepiped which has width = 10 cm and height = 6 cm is divided into 6 equal right parallelepipeds, and we are asked to find the volume of each one. Surely one very simple method is to find the length, calculate the volume of the original, and then divide by 6. I think that for the original parallelepiped, width = 10, height = 6, and length = 10 (from the figure it seems that the base is a square), so the volume of each will be 100 cm^3, but I am not sure. Please help me.
Since all of them are identical, from the image you can conclude that the height of each parallelepiped is $\frac{6}{3}=2$ and the width is $10$ and depth $\frac{10}{2}=5$. Hence the volume of each parallelepiped will be $2\cdot10\cdot5=100$ cm$^3$.
You were right to conclude that the full parallelepiped is $10 \times 10\times 6$. It follows from that, exactly as you wrote, that each piece has volume $100$ cm$^3$. You did not have the "wrong way", you had another right way. So your academic year is about over? Over the last few months, I have enjoyed trying to figure out problems in Georgian that you did not translate.
stem_mix_good_quality
I need to calculate this improper integral. $$\int_{1}^{\infty} \sin \left( \sin\left( \frac {1}{\sqrt{x}+1} \right) \right) dx$$ How do I prove that $$ \sin \left( \sin\left( \frac {1}{\sqrt{x}+1} \right) \right) $$ is asymptotically equivalent to $$ \frac{1}{\sqrt{x}} $$ as $x\rightarrow \infty$, and by the p-test that the integral diverges?
You don't need an asymptotic equivalence. Since for any $y\in[0,\pi/2]$ $$\sin y\geq\frac{2y}{\pi}$$ holds by the concavity of the sine function, $$\int_{N}^{+\infty}\sin\sin\frac{1}{\sqrt{x}+1}\,dx \geq \frac{4}{\pi^2}\int_{N}^{+\infty}\frac{dx}{\sqrt{x}+1}$$ holds for any $N$ big enough, hence the starting integral is divergent.
Yes, as $x\rightarrow \infty\;$ you have the asymptotic relations $$\sin \left( \sin\left( \frac {1}{\sqrt{x}+1} \right) \right) \sim \sin \left( \sin\left( \frac {1}{\sqrt{x}} \right) \right) \sim \sin\left( \frac {1}{\sqrt{x}} \right) \sim \frac {1}{\sqrt{x}} $$ because for $z\rightarrow 0\;$ you have $\sin(z) \sim z$; and therefore the integral $$\int_{1}^{\infty} \sin \left( \sin\left( \frac {1}{\sqrt{x}+1} \right) \right) dx$$ diverges. You can get a better estimate with a CAS, e.g. $$\sin \left( \sin\left( \frac {1}{\sqrt{x}+1} \right) \right) = \sqrt\frac{1}{x}-\frac{1}{x}+\frac{2}{3}\left(\frac{1}{x}\right)^{3/2}+O\left(\frac{1}{x^2}\right)$$
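The asymptotic equivalence is easy to see numerically: $\sqrt{x}\,\sin\sin\frac{1}{\sqrt x+1}\to 1$, so the integrand behaves like $x^{-1/2}$ and the $p$-test (with $p=\frac12\le 1$) gives divergence. A tiny sketch:

```python
import math

def integrand(x):
    return math.sin(math.sin(1.0 / (math.sqrt(x) + 1.0)))

for x in [1e2, 1e4, 1e6, 1e8]:
    print(x, integrand(x) * math.sqrt(x))   # ratios creep up toward 1
```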
stem_mix_good_quality
I once saw a function for generating successively more-precise square root approximations, $f(x) = \frac{1}{2} ({x + \frac{S}{x}})$ where S is the square for which we are trying to calculate $\sqrt S$ . And the function works really well, generating an approximation of $\sqrt 2 \approx f^3(1) = \frac{577}{408} \approx 1.414215$ . This fascinated me, so I tried extending the same logic to further radicals, starting with cube roots. My first guess was $f_2(x) = \frac{1}{3} ({x + \frac{S}{x}})$ , but when I tried an approximation for $\sqrt[3] 3$ , I got $\sqrt[3] 3 \approx f_2^2(1) = \frac{43}{36} \approx 1.194444$ , which is a far cry from $\sqrt[3] 3 \approx Google(\sqrt[3] 3) \approx 1.44225$ . How can I extend this logic for $n^{a\over b}$ where $ b > 2$ ? Was I accurate all-along and just needed more iterations? Or is the presence of $\frac{1}{2}$ in $f(x) $ and in $n^\frac{1}{2}$ a coincidence? Disclaimer: I am not educated in calculus.
Let me tell you how to differentiate a polynomial: multiply the exponent by the coefficient and reduce the exponent by $1$. For example, if $f(x)=5x^3+7$, then $f'(x)=15x^2$ [we multiplied $3$ by $5$ to get $15$, and we reduced the exponent by $1$: it was $3$, then it became $2$]. The $7$ just cancelled because it is a constant. Another example: if $f(x)=x^5-8$, then $f'(x)=5x^4$ [we multiplied the exponent $5$ by the coefficient $1$ to get $5$, and we reduced the exponent from $5$ to $4$]. The $8$ is a constant, so it is cancelled. Now you want to approximate $\sqrt[3]{3}$. This means you want to find a number which, if you cube it, gives $3$; therefore you want to solve the equation $x^3=3$. Moving all terms to the left we get $x^3-3=0$; denote the left-hand side by $f(x)$. You need to approximate the root of the equation $x^3-3=0$. Let $f(x)=x^3-3$, therefore $f'(x)=3x^2$ [as you know now]. Newton's method for approximating a root is: $x_n=x_{n-1}-\frac{f(x_{n-1})}{f'(x_{n-1})}$, where $x_0$ is the initial guess. Let $x_0=1.5$. Then $x_1=1.5-\frac{f(1.5)}{f'(1.5)}=1.5-\frac{1.5^3-3}{3\times1.5^2}=1.5-\frac{3.375-3}{3\times2.25}=1.5-\frac{0.375}{6.75}=1.44444$. Now you have $x_1=1.44444$; you can calculate $x_2$ in the same way: $x_2=1.44444-\frac{f(1.44444)}{f'(1.44444)}=1.44444-\frac{1.44444^3-3}{3\times1.44444^2}=1.44444-\frac{3.01369-3}{3\times2.08641}=1.44444-\frac{0.01369}{6.25923}=1.44225$. Here it is already a good approximation. If it is not yet a good approximation, just find $x_3$ or $x_4$, or continue until you reach a good approximation.
Suppose you want to approximate $\sqrt[7]{5}$ (the seventh root of five). Then you want to find a number which, raised to the power $7$, gives $5$; this means you want to solve the equation $x^7=5$. Moving terms to the left we get $x^7-5=0$. Now $f(x)=x^7-5$ and $f'(x)=7x^6$. Let the initial guess be $x_0=1.2$. So $x_1=1.2-\frac{f(1.2)}{f'(1.2)}=1.2-\frac{1.2^7-5}{7\times1.2^6}=1.26778$. Again, $x_2=1.26778-\frac{f(1.26778)}{f'(1.26778)}=1.26778-\frac{1.26778^7-5}{7\times1.26778^6}=1.2585$. Again, find $x_3,x_4,\dots$ until you reach sufficient accuracy.
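The iterations above can be reproduced with a few lines of Python (a minimal sketch of Newton's method for $f(x)=x^k-S$):

```python
def newton_root(S, k, x0, steps):
    """Newton's method for f(x) = x^k - S, i.e. the k-th root of S."""
    x = x0
    for _ in range(steps):
        x = x - (x**k - S) / (k * x**(k - 1))
    return x

print(newton_root(3, 3, 1.5, 1))   # ~1.44444
print(newton_root(3, 3, 1.5, 2))   # ~1.44225
```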
Please see the link description at the very bottom. Let's first derive a formula for calculating square roots manually without calculus (and later for more general cases of various roots). Let's denote our numbers as follows $\,\sqrt{S}=x_1+a\,$ where $\,x_1\,$ is the first approximate value by our choice and $\,a\,$ is a tolerance (an error of approximation). Therefore, after squaring we obtain $\,S=x_1^2+2\,x_1a+a^2.\,\,$ We choose $\,x_1\,$ to be much greater than $\,a\,$ so that we can cast away $\,a^2\,$ for the approximation: $$\,S\approx x_1^2+2\,x_1a\,,\quad a\approx\frac{S-x_1^2}{2\,x_1}$$ As we previously denoted $\,\sqrt{S}=x_1+a\,$ : $$\,\sqrt{S}=x_1+a\approx x_1+\frac{S-x_1^2}{2\,x_1}=\frac{2\,x_1^2+S-x_1^2}{2\,x_1}=\frac{x_1^2+S}{2\,x_1}\,$$ We use the same technique for each step ( $a$ gets smaller and smaller), and at the step $\,n+1\,$ we get a more general expression for our formula: $$\sqrt{S}\approx x_{n+1}=\frac{x_n^2+S}{2\,x_n}\quad \text{or} \quad \sqrt{S}\approx\frac{1}{2}\bigg(x_n+\frac{S}{x_n}\bigg)$$ Now let's derive such a formula for the cube root. $\,\sqrt[3]{S}=x_1+a\,$ where $\,x_1\,$ is the first approximate value by our own choice and $\,a\,$ is a tolerance (an error of approximation). 
By raising to the third power we obtain $\,S=x_1^3+3\,x_1^2a+3\,x_1a^2+a^3.\,\,$ Again, we choose $\,x_1\,$ to be much greater than $\,a\,$ so that we can discard $\,a^2\,$ and $\,a^3\,$ for our approximation: $$\,S\approx x_1^3+3\,x_1^2a\,,\quad a\approx\frac{S-x_1^3}{3\,x_1^2}$$ As we previously denoted $\,\sqrt[3]{S}=x_1+a\,$ : $$\,\sqrt[3]{S}\approx x_2=x_1+a= x_1+\frac{S-x_1^3}{3\,x_1^2}=\frac{3\,x_1^3+S-x_1^3}{3\,x_1^2}=\frac{2\,x_1^3+S}{3\,x_1^2}\,$$ Similarly for $\,x_{n+1}\,$ we get $$\,\sqrt[3]{S}\approx x_{n+1}=x_n+\frac{S-x_n^3}{3\,x_n^2}=\frac{3\,x_n^3+S-x_n^3}{3\,x_n^2}=\frac{2\,x_n^3+S}{3\,x_n^2}\,$$ So, $$\,\sqrt[3]{S}\approx x_{n+1}=\frac{2\,x_n^3+S}{3\,x_n^2}\,$$ In the same way we can derive the formula for the $k$ -th root of $S$ : $$\,\sqrt[k]{S}\approx x_{n+1}=\frac{(k-1)\,x_n^k+S}{k\,x_n^{k-1}}\,$$ Unlike the general formula we have just derived, there's an even more general formula (Newton's binomial) for $(1+a)^x$ where $x$ is any fractional or negative number. For positive integer powers Newton's binomial is finite, otherwise it is an infinite series. Here is a link (please go back to the very top of this answer) illustrating a far more general case. However, I want to express that it is important to be able to derive formulas and understand all underlying procedures and derivations rather than plugging in numbers and hoping they will fit by hook or by crook after some attempts.
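The general update $x_{n+1}=\frac{(k-1)\,x_n^k+S}{k\,x_n^{k-1}}$ derived above is easy to exercise; run with exact rationals it even reproduces the $\frac{577}{408}$ convergent for $\sqrt2$ mentioned in the question:

```python
from fractions import Fraction

def kth_root_step(x, S, k):
    """One step of x_{n+1} = ((k-1) x^k + S) / (k x^(k-1))."""
    return ((k - 1) * x**k + S) / (k * x**(k - 1))

# Exact rational arithmetic for sqrt(2), three steps from x_0 = 1:
x = Fraction(1)
for _ in range(3):
    x = kth_root_step(x, Fraction(2), 2)
print(x)   # 577/408

# The same update for the 7th root of 5, in floating point:
y = 1.2
for _ in range(6):
    y = kth_root_step(y, 5.0, 7)
print(y)
```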
stem_mix_good_quality
Suppose we have a random graph $G(n,p)$ with $n$ vertices and, independently, each edge present with probability $p$. Calculating its expected number of isolated vertices proves quite easy: the chance of a single vertex being isolated is $(1-p)^{n-1}$, so by linearity of expectation, the expected number of isolated vertices equals $n\times(1-p)^{n-1}$. However, I am tasked to calculate the variance of this number, or at least a decent approximation of it, without any idea how to proceed.
I think indicators are easier to work with, as opposed to generating functions, no? Let $(I_i:1\leqslant i \leqslant n)$ be a sequence of Bernoulli random variables, where $I_i=1$ if and only if vertex $i$ is isolated. Then, $\mathbb{E}[I_i]= (1-p)^{n-1}\triangleq r$. Now, let $N=\sum_{i =1}^n I_i$, the number of isolated vertices. Then, $$ {\rm var}(N) = \sum_{i=1}^n {\rm var}(I_i) + 2\sum_{i <j}{\rm cov}(I_i,I_j) = n\,{\rm var}(I_i)+ n(n-1)\,{\rm cov}(I_i,I_j). $$ Now, ${\rm var}(I_i)=\mathbb{E}[I_i^2]-\mathbb{E}[I_i]^2 = r-r^2=(1-p)^{n-1}(1-(1-p)^{n-1})$. Next, ${\rm cov}(I_i,I_j)=\mathbb{E}[I_iI_j]-\mathbb{E}[I_i]\mathbb{E}[I_j] = \mathbb{E}[I_iI_j]-(1-p)^{2n-2}$. For the first object, note that $I_iI_j=1$ if and only if $I_i=I_j=1$, and $0$ otherwise. Note that $\mathbb{P}(I_iI_j =1)= (1-p)^{2n-3}$: the probability that vertices $i$ and $j$ are both isolated is the probability that there are no edges from the remaining $(n-2)$ vertices to $\{i,j\}$, and no edge between $i$ and $j$, i.e. $2(n-2)+1=2n-3$ edges are all absent. Since the edges are independent, we conclude. Thus, the answer is $$ n(1-p)^{n-1}(1-(1-p)^{n-1}) + n(n-1)p(1-p)^{2n-3}. $$
Let $P_{n,k}$ be the probability of exactly $k$ isolated vertices in $G(n,p)$. Looking at what happens when we add a new vertex gives: $$ P_{n+1,k}=q^n P_{n,k-1} + (1-q^{n-k})q^k P_{n,k} + \sum_{i=1}^{n-k}\binom{k+i}{i}p^iq^kP_{n,k+i} $$ where $q=1-p$ as usual. The first term is the new vertex being isolated. The second term is the new vertex not being isolated while there are already $k$ isolated vertices in the $G(n,p)$ we started from (so there is an edge from vertex $n+1$ to one of the $n-k$ non-isolated vertices, which gives the $1-q^{n-k}$ factor, and $n+1$ cannot join any of the $k$ isolated vertices in $[n]$, giving the other factor $q^k$). The sum is for starting with a graph with $k+i$ isolated vertices, the new vertex being a neighbour of exactly $i$ of these. Using this recurrence, you can show the probability generating function of the number of isolated vertices $$ G_n(z):=\sum_{k=0}^n P_{n,k}z^k $$ satisfies $$ G_n(z)=q^{n-1}(z-1)G_{n-1}(z)+G_{n-1}(1+q(z-1)). $$ This has the closed form solution $$ G_n(z)=\sum_{k=0}^n\binom{n}{k}q^{nk-\binom{k+1}{2}}(z-1)^k $$ (the $k$ chosen vertices being isolated requires $k(n-k)+\binom{k}{2}=nk-\binom{k+1}{2}$ edges to be absent) and so you obtain $$ \operatorname{Var}[\#\text{isolated vertices}]=nq^{n-1}((1-q^{n-1})+(n-1)pq^{n-2}). $$
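Both derivations can be checked exactly by enumerating every graph on a few vertices with rational arithmetic (a minimal sketch):

```python
from fractions import Fraction
from itertools import combinations, product

def exact_var_isolated(n, p):
    """Exact Var(# isolated vertices) in G(n, p) by enumerating all graphs."""
    edges = list(combinations(range(n), 2))
    q = 1 - p
    mean = Fraction(0)
    mean_sq = Fraction(0)
    for present in product([0, 1], repeat=len(edges)):
        prob = Fraction(1)
        degree = [0] * n
        for (u, v), bit in zip(edges, present):
            prob *= p if bit else q
            if bit:
                degree[u] += 1
                degree[v] += 1
        k = sum(1 for d in degree if d == 0)
        mean += prob * k
        mean_sq += prob * k * k
    return mean_sq - mean**2

def closed_form(n, p):
    q = 1 - p
    return n * q**(n - 1) * (1 - q**(n - 1)) + n * (n - 1) * p * q**(2 * n - 3)

p = Fraction(1, 3)
print(exact_var_isolated(4, p), closed_form(4, p))
```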
stem_mix_good_quality
As we know, triangular numbers are a sequence defined by $\frac{n(n+1)}{2}$, and its first few terms are $1,3,6,10,15...$ Now I want to calculate the sum of the sum of triangular numbers. Let's define $$a_n=\frac{n(n+1)}{2}$$ $$b_n=\sum_{x=1}^na_x$$ $$c_n=\sum_{x=1}^nb_x$$ And I want an explicit formula for $c_n$. After some research, I found the explicit formula $b_n=\frac{n(n+1)(n+2)}{6}$. Seeing the patterns from $a_n$ and $b_n$, I figured the explicit formula for $c_n$ would be $\frac{n(n+1)(n+2)(n+3)}{24}$ or $\frac{n(n+1)(n+2)(n+3)}{12}$. Then I tried to plug into those two potential equations. If $n=1$, $c_n=1$, $\frac{n(n+1)(n+2)(n+3)}{24}=1$, $\frac{n(n+1)(n+2)(n+3)}{12}=2$. Thus we know for sure that the second equation is wrong. If $n=2$, $c_n=1+4=5$, $\frac{n(n+1)(n+2)(n+3)}{24}=5$. Seems correct so far. If $n=3$, $c_n=1+4+10=15$, $\frac{n(n+1)(n+2)(n+3)}{24}=\frac{360}{24}=15$. Overall, for the terms that I tried, the formula above seems to have worked. However, I cannot prove, or explain, why that is. Can someone prove (or disprove) my result above?
The easiest way to prove your conjecture is by induction. You already checked the case $n=1$ , so I won’t do it again. Let’s assume your result is true for some $n$ . Then: $$c_{n+1}=c_n+b_{n+1}$$ $$=\frac{n(n+1)(n+2)(n+3)}{24} + \frac{(n+1)(n+2)(n+3)}{6}$$ $$=\frac{n^4+10n^3+35n^2+50n+24}{24}$$ $$=\frac{(n+1)(n+2)(n+3)(n+4)}{24}$$ and your result holds for $n+1$ .
One approach is to calculate $5$ terms of $c_n$ , recognize that it's going to be a degree-4 formula, and then solve for the coefficients. Thus: $$c_1 = T_1=1 \\ c_2 = c_1 + (T_1+T_2) = 5 \\ c_3 = c_2+(T_1+T_2+T_3) = 15 \\ c_4 = c_3 + (T_1+T_2+T_3+T_4) = 35 \\ c_5 = c_4 + (T_1+T_2+T_3+T_4+T_5) = 70$$ Now we can find coefficients $A,B,C,D,E$ so that $An^4+Bn^3+Cn^2+Dn+E$ gives us those results when $n=1,2,3,4,5$ . This leads to a linear system in 5 unknowns, which we can solve and obtain $A=\frac1{24},B=\frac14,C=\frac{11}{24},D=\frac14,E=0$ . Thus taking a common denominator, we have $$c_n=\frac{n^4+6n^3+11n^2+6n}{24}=\frac{n(n+1)(n+2)(n+3)}{24}$$ So that agrees with your result. Another way is to use the famous formulas for sums of powers. Thus, we find $b_n$ first: $$b_n = \sum_{i=1}^n \frac{i(i+1)}{2} = \frac12\left(\sum i^2 + \sum i\right) = \frac12\left(\frac{n(n+1)(2n+1)}{6}+\frac{n(n+1)}{2}\right)\\ =\frac{n^3+3n^2+2n}{6}$$ Now, we find $c_n$ : $$c_n = \sum_{i=1}^n \frac{i^3+3i^2+2i}{6}=\frac16\sum i^3 + \frac12\sum i^2 + \frac13\sum i \\ = \frac16\frac{n^2(n+1)^2}{4} + \frac12\frac{n(n+1)(2n+1)}{6} + \frac13\frac{n(n+1)}{2} \\ = \frac{n^4+6n^3+11n^2+6n}{24}=\frac{n(n+1)(n+2)(n+3)}{24}$$ So we have confirmed the answer 2 different ways. As is clear from the other solutions given here, there are other ways as well.
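All of this is quick to confirm by brute force:

```python
def a(n):
    return n * (n + 1) // 2          # triangular numbers

def b(n):
    return sum(a(x) for x in range(1, n + 1))

def c(n):
    return sum(b(x) for x in range(1, n + 1))

print([c(n) for n in range(1, 6)])   # [1, 5, 15, 35, 70]
```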
stem_mix_good_quality
In the book I am reading, the writer often simplifies the answers. Take the following example. I calculated an answer of $\sqrt{72}$, while the book has the answer $6\sqrt{2}$. Now these two answers are exactly the same. I would like to know how to get from $\sqrt{72}$ to $6\sqrt{2}$; how does one calculate this? Is there a formula you use or a certain method which I am not aware of?
You factor $72=2^3\cdot 3^2$. Then you take the highest even power of each prime, so $72=(2\cdot 3)^2\cdot 2$ You can then pull out the square root of the product of the even powers. $\sqrt {72} = \sqrt{(2\cdot 3)^2\cdot 2}=(2\cdot 3)\sqrt 2=6 \sqrt 2$
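The same idea, pulling the largest square factor out from under the radical, can be sketched in code (function name is mine):

```python
def simplify_sqrt(n):
    """Write sqrt(n) as k*sqrt(m) with m square-free, by pulling out square factors."""
    k, m = 1, n
    d = 2
    while d * d <= m:
        while m % (d * d) == 0:   # pull d^2 out from under the radical
            m //= d * d
            k *= d
        d += 1
    return k, m                   # sqrt(n) == k * sqrt(m)

assert simplify_sqrt(72) == (6, 2)   # sqrt(72) = 6*sqrt(2)
```

For example `simplify_sqrt(50)` returns `(5, 2)`, matching $\sqrt{50}=5\sqrt2$.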
$\forall(a,b, \gamma)\in \mathbb{R}_+^2\times \mathbb{R}$: $$a^\gamma b^\gamma =(ab)^\gamma$$ So, taking $\gamma=\tfrac12$, $\sqrt{ab} = \sqrt{a}\sqrt{b}$, and here: $$\sqrt{72} = \sqrt{36\times2} = \sqrt{36}\sqrt{2} = 6\sqrt{2}$$
I have an exam on calculus next week and I'm confused with the usage of simple and double integrals. As far as I know, if you want to calculate the area under the curve of a function $f(x)$ between $a$ and $b$, you have to calculate an integral between those two points: $$\int_{a}^{b}f(x)dx$$ If you want to calculate the integral between those points having as upper and lower boundaries two functions, namely $f(x)$ and $g(x)$, you have to calculate the integral of the subtraction of those functions: $$(1)\int_{a}^{b}(f(x)-g(x))dx$$ where $f(x)$ is the upper function and $g(x)$ the lower one. However, I'm confused with double integrals because I've been told that by calculating a double integral of a function, you're calculating the volume under the surface defined by that function. It makes sense to me, but I've seen some examples in which my teacher calculates the area between one or more functions by using a double integral. Another thing that I don't understand is: if I'm calculating the double integral of a function inside a given domain, what am I really calculating? For example, if I'm calculating $\iint_{D}\sin (y^{3})\text{dx dy}$ in the following domain: where $y=\sqrt{x}$ and $x\in [0,1]$. What will that integral give me? In relation to that, if I just simply calculate $\iint_{D}dxdy$ , how will it make a difference? Will I calculate the whole area of D? And my last question has to do with the formula (1). As I previously said, sometimes my teacher uses double integrals to calculate the area between two functions. Then, don't I need to use formula (1)? Are they perhaps equivalent?
For example, given the domain: $$D=\left \{ (x,y)\in \mathbb{R}^{2}:(y-x^{2}-1)(y-2x^{3})<0,x\in[0,2] \right \}$$ In order to calculate its area, she would split the problem into double integrals, one for $y-x^{2}-1=0$ and another one for $y-2x^{3}=0$, and find the points where they intersect: Like this: $$\iint_{D}dxdy = \iint_{D_{1}}dxdy + \iint_{D_{2}}dxdy$$ Can't I use my formula (1) in this case? Thank you so much in advance.
The area of $D$ can be calculated by both $\iint_D dx dy$ and by $\int_0^1 (1-\sqrt{x}) dx$. The way I like to think about it is in terms of "dimensional analysis". In the double integral, you're integrating $1$ across the domain of $D$, so it gives you the area of $D$. In essence, that $dx dy$ bit tells you that you are integrating two dimensions, kinda like multiplying two dimensions. So it gives you an area. Similarly, you have the height $(1-\sqrt{x})$ for a given infinitesimal length $dx$. When multiplied and added together, you get an area, because a length times a width is an area. Another way I like to think about it is by thinking about putting little spaghetti strands for $dx$ or $dy$. If you do it the "double-integral" way, you're standing up spaghetti strands of size $1$ on each tiny $dxdy$ square. The resulting volume is the area of $D$ times 1 "spaghetti height". For the single integral, you lay down spaghetti of length $(1- \sqrt{x_0})$ at $x = x_0$, doing that in each part of $D$. The resulting area is the area of $D$. So, they are two different integrals, with two different interpretations. The way you want to represent it largely depends on the problem you're trying to solve, the techniques you know, and how you want to communicate your results. EDIT: Using the spaghetti method, imagine spaghetti of height $\sin(y^3)$ for each square, so it would be $y$-dependent. You can sorta see the wavy-ness of the sin wave differ with $y$. You do need to calculate $\iint_D \sin(y^3) dy dx$, which is different from $\iint_D dy dx =$ area of $D$. You can use equations as your endpoints; I usually set it up like $$\int_0^1\int_{1-\sqrt{x}}^1 \sin(y^3) dy dx$$ and then I solve it inside-out.
By a double integral in the form $$\iint_{D}f(x,y)dxdy$$ we are evaluating the (signed) volume between the surface $z=f(x,y)$ and the x-y plane. Then by the following double integral $$\iint_{D}1\cdot dxdy$$ we are evaluating the volume of a cylinder with base the domain $D$ and height $1$, which corresponds (numerically) to the area of the domain $D$. For the last question, yes, the integral is additive, thus we can divide it as $$\iint_{D}dxdy = \iint_{D_{1}}dxdy + \iint_{D_{2}}dxdy$$ and we can also use (1) splitting the integral accordingly.
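Both claims can be checked numerically for the domain $D$ from the question, bounded by $y=\sqrt{x}$, $y=1$, $x\in[0,1]$; a rough midpoint-rule sketch (grid size chosen arbitrarily):

```python
from math import sqrt

# Midpoint-rule check that both integrals give the area of
# D = {0 <= x <= 1, sqrt(x) <= y <= 1}, namely 1/3.
N = 400
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]

# Single integral of the height 1 - sqrt(x):
single = sum((1 - sqrt(x)) * h for x in xs)

# Double integral of 1 over D, counting grid midpoints inside D:
double = 0.0
for x in xs:
    sx = sqrt(x)
    for j in range(N):
        if (j + 0.5) * h > sx:
            double += h * h

assert abs(single - 1/3) < 1e-3
assert abs(double - 1/3) < 1e-2
```

The double integral is cruder (it only classifies grid cells as in or out), but both converge to the same area as the grid is refined.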
I read this formula in some book but it didn't provide a proof so I thought someone on this website could figure it out. What it says is: If we consider 3 non-concurrent, non parallel lines represented by the equations : $$a_1x+b_1y+c_1=0$$ $$a_2x+b_2y+c_2=0$$ $$a_3x+b_3y+c_3=0$$ Then the area of the triangle that these lines will enclose is given by the magnitude of : $$\frac{det\begin{bmatrix}a_1 & b_1 & c_1\\a_2 & b_2 & c_2\\a_3 & b_3 & c_3\end{bmatrix}^2}{2C_1C_2C_3}$$ Where $C_1,C_2,C_3$ are the co-factors of $c_1,c_2,c_3$ respectively in the above matrix. What I'm wondering is, where did this come from? And why isn't it famous? Earlier we had to calculate areas by finding the vertices and all but this does it in a minute or so and thus deserves more familiarity.
Clearly, we can scale the coefficients of a given linear equation by any (non-zero) constant and the result is unchanged. Therefore, by dividing-through by $\sqrt{a_i^2+b_i^2}$, we may assume our equations are in "normal form": $$\begin{align} x \cos\theta + y \sin\theta - p &= 0 \\ x \cos\phi + y \sin\phi - q &= 0 \\ x \cos\psi + y \sin\psi - r &= 0 \end{align}$$ with $\theta$, $\phi$, $\psi$ and $p$, $q$, $r$ (and $A$, $B$, $C$ and $a$, $b$, $c$) as in the figure: Then $$C_1 = \left|\begin{array}{cc} \cos\phi & \sin\phi \\ \cos\psi & \sin\psi \end{array} \right| = \sin\psi\cos\phi - \cos\psi\sin\phi = \sin(\psi-\phi) = \sin \angle ROQ = \sin A$$ Likewise, $$C_2 = \sin B \qquad C_3 = \sin C$$ Moreover, $$D := \left|\begin{array}{ccc} \cos\theta & \sin\theta & - p \\ \cos\phi & \sin\phi & - q \\ \cos\psi & \sin\psi & - r \end{array}\right| = - \left( p C_1 + q C_2 + r C_3 \right) = - \left(\;p \sin A + q \sin B + r \sin C\;\right)$$ Writing $d$ for the circumdiameter of the triangle, the Law of Sines tells us that $$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = d$$ Therefore, $$\begin{align} D &= - \left( \frac{ap}{d} + \frac{bq}{d} + \frac{cr}{d} \right) \\[4pt] &= -\frac{1}{d}\left(\;ap + b q + c r\;\right) \\[4pt] &= -\frac{1}{d}\left(\;2|\triangle COB| + 2|\triangle AOC| + 2|\triangle BOA| \;\right) \\[4pt] &= -\frac{2\;|\triangle ABC|}{d} \end{align}$$ Also, $$C_1 C_2 C_3 = \sin A \sin B \sin C = \frac{a}{d}\frac{b}{d}\sin C= \frac{2\;|\triangle ABC|}{d^2}$$ Finally: $$\frac{D^2}{2C_1C_2C_3} = \frac{4\;|\triangle ABC|^2/d^2}{4\;|\triangle ABC|/d^2} = |\triangle ABC|$$
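As a quick numerical check of the determinant formula, here is a concrete triangle of my own choosing (not from the thread): the lines $x+y-4=0$, $x-y=0$, $y-1=0$ meet at $(2,2)$, $(3,1)$, $(1,1)$, a triangle of area $1$.

```python
def det3(m):
    # 3x3 determinant by cofactor expansion along the first row.
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

L = [(1, 1, -4), (1, -1, 0), (0, 1, -1)]   # rows (a_i, b_i, c_i)
D = det3(L)

# Cofactors of c1, c2, c3 (third-column entries, with signs (-1)^(i+3)).
C1 = L[1][0] * L[2][1] - L[1][1] * L[2][0]        # +minor of (1,3)
C2 = -(L[0][0] * L[2][1] - L[0][1] * L[2][0])     # -minor of (2,3)
C3 = L[0][0] * L[1][1] - L[0][1] * L[1][0]        # +minor of (3,3)

area = abs(D**2 / (2 * C1 * C2 * C3))
assert abs(area - 1.0) < 1e-9    # matches the vertex-based area
```

Here $D=-2$, $C_1C_2C_3=2$, so the formula gives $4/4=1$ as expected.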
Here is a proof of a special case. Let the three lines be $L_1,L_2,L_3$. A special case is $L_1$ and $L_2$ as the $x$- and $y$-axes respectively, and $L_3$ of the form $\dfrac xa+\dfrac yb=1$. Then the three lines form a right triangle with area $\dfrac{1}{2} ab$. On the other hand, the cofactors of the third column carry the signs $(-1)^{i+3}$, so $C_1=-a^{-1}$, $C_2=-b^{-1}$, $C_3=1$, and the formula above gives an area of $$ \frac{1}{2 C_1 C_2 C_3}\left|\begin{matrix}1&0&0\\0&1&0\\a^{-1} & b^{-1} & -1\end{matrix}\right|^2=\frac{(-1)^2}{2(-a^{-1})(-b^{-1})(1)}=\frac12 ab$$ which matches the result above. This may seem a good deal weaker than the general case. But we can map any generic set of lines to the set here by the following transformations: Translate one of the intersections to the origin, rotate the triangle into the upper half-plane such that one of its lines becomes the $x$-axis, and finally do a horizontal shear to make one of the remaining lines the $y$-axis. So all that should remain is to show that the above formula is preserved by these transformations---which, I'll confess, I don't know how to do off the top of my head. So as yet this is an incomplete proof of the general case.
My brother gave me the following problem: Let $f:[1;\infty)\to[1;\infty)$ be such that for $x≥1$ we have $f(x)=y$ where $y$ is the unique solution of $y^y=x$. Then calculate: $$ \int_0^e f(e^x)dx $$ I couldn't figure out anything helpful, so any help is highly appreciated. I think he got the problem from some contest, but I don't know which.
Substitute $x=u\ln u$. Note that $1\ln 1 = 0$ and $e\ln e = e$. Thus $$ \int_0^e f(e^x) \,dx = \int_1^e f(e^{u\ln u}) (1+\ln u)\,du = \int_1^e u(1+\ln u)\,du \\ = \int_1^e (u + u\ln u)\,du = \left[\tfrac14 u^2 + \tfrac12 u^2\ln u\right]_1^e = \tfrac34 e^2 - \tfrac14 $$
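The closed form $\tfrac34 e^2-\tfrac14$ can also be confirmed by brute force, solving $y\ln y = x$ by bisection at each quadrature node (all numerical choices below are mine):

```python
from math import e, log

def f_of_exp(x):
    """Solve y*log(y) = x (i.e. y^y = e^x) for y >= 1 by bisection."""
    lo, hi = 1.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if mid * log(mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Midpoint rule for the integral of f(e^x) over [0, e].
N = 2000
h = e / N
total = sum(f_of_exp((i + 0.5) * h) * h for i in range(N))

assert abs(total - (0.75 * e**2 - 0.25)) < 1e-3
```

The bisection bracket $[1,10]$ is safe here because the integrand never exceeds $f(e^e)=e$.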
First of all we have to solve $y^y=x$ for $y$. Using techniques from here , we get $$f(x)=y=\frac{\ln x}{W(\ln x)},$$ where $W$ is the Lambert W function . Now if $x>0$, then $$f(e^x)=\frac{x}{W(x)}.$$ From here we have to know that $$\int \frac{x}{W(x)} \, dx = \frac{x^2(2W(x)+1)}{4W^2(x)} + C.$$ This integral comes from a tricky substitution. Between the limits $0$ and $e$ it has the value $$\int_0^e f(e^x) \, dx = \int_0^e \frac{x}{W(x)} \, dx = -\frac{1}{4}+\frac{3}{4}e^2.$$
I was asked to find the derivative of $\arccos x$ using the definition of the derivative. I know I have to form this limit. $f^{'}(c)= $ $\displaystyle{\lim_{h\to0}\dfrac{f(h+c)-f(c)}{h}}$ or $f^{'}(c)= $ $\displaystyle{\lim_{x\to c}\dfrac{f(x)-f(c)}{x-c}}$ where $-1<c<1$ (the two limits are actually the same). I formed the first limit which is $\displaystyle{\lim_{h\to0}\dfrac{\arccos(h+c)-\arccos(c)}{h}}$ and the second limit which is $\displaystyle{\lim_{x\to c}\dfrac{\arccos(x)-\arccos(c)}{x-c}}$ I tried to use this equation: $$\arccos(x)+\arccos(y)=\arccos\left(xy-\sqrt{(1-x^2)(1-y^2)}\right) $$ but I failed, and apart from that, I have literally NO idea how to calculate these limits.
Hint: It's much easier to find the derivative of $\arcsin(x)$ and then use the property $\arcsin(x) + \arccos(x) = \frac \pi 2 \implies \frac{d}{dx}\arcsin(x) = -\frac{d}{dx}\arccos(x)$ This answer nicely illustrates how to find the derivative of $\arcsin(x)$ by ab-initio methods.
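Whichever derivation you use, the end result $\frac{d}{dx}\arccos x=-\frac{1}{\sqrt{1-x^2}}$ is easy to spot-check with a central difference quotient (step size is my choice):

```python
from math import acos, sqrt

# Central-difference check of d/dx arccos(x) = -1/sqrt(1 - x^2)
# at a few interior points of (-1, 1).
h = 1e-6
for x in [-0.9, -0.5, 0.0, 0.3, 0.8]:
    numeric = (acos(x + h) - acos(x - h)) / (2 * h)
    exact = -1 / sqrt(1 - x**2)
    assert abs(numeric - exact) < 1e-6
```

Near the endpoints $\pm1$ the derivative blows up, so the check is only meaningful on the open interval.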
Recast the terms of the difference quotient into $\arcsin$ using $\arccos x = \frac{\pi}{2}-\arcsin x$ to facilitate the inequalities below. $$\begin{align} \frac{\Delta \arccos x}{\Delta x} &= \frac{\arcsin x+\arcsin\left(-\left(x+h\right)\right)}{h} \\ &=\frac{\arcsin u}{h}\tag{*} \end{align}$$ where $u=x\sqrt{1-\left(x+h\right)^{2}}-\left(x+h\right)\sqrt{1-x^{2}}$ . Given $\tan x > x$ for $x\in (0,1)$ , we have $\tan x = \dfrac{\sin x}{\sqrt{1-\sin^2 x}}>x$ and thus, $\sin x > \dfrac{x}{\sqrt{1+x^2}}$ . With the well known identity $x > \sin x $ , this gives both upper and lower bounds for sine. Since a function and its inverse are symmetric across $y=x$ , we can reflect sine and its bounds across the line $y=x$ to give an inequality of their inverses $$\begin{align} \frac{|x|}{|\sqrt{1-x^2}|}&> |\arcsin x|> |x| \\ \frac{\left|u\right|}{\left|h\right|}\cdot \frac1{\sqrt{1-u^{2}}}&> \left|\frac{\Delta \arccos x}{\Delta x}\right|> \frac{\left|u\right|}{|h|}\tag{by *} \end{align}$$ As $h\to 0$ , we have $u\to 0$ , so $\dfrac{1}{\sqrt{1-u^2}}\to 1$ and by the squeeze theorem, $\dfrac{\mathrm{d}}{\mathrm{d}x}\arccos x = \pm \lim\limits_{h\to0}\dfrac{\left|u\right|}{|h|}$ . From here, it suffices to "rationalise" the expression using the difference of squares and the conjugate, $u^*=\left(x\sqrt{1-\left(x+h\right)^{2}}+\left(x+h\right)\sqrt{1-x^{2}}\right)$ . Lastly, correct the sign. $$\begin{align} \dfrac{\mathrm{d}}{\mathrm{d}x}\arccos x &=\pm \lim \frac{|u|}{|h|}\cdot \frac{|u^*|}{|u^*|} \\ &=\pm \lim \frac{|h||-h-2x|}{|h||u^*|} \\ &\to\frac{-1}{\sqrt{1-x^2}} \end{align} $$ .
Every week I hop on a treadmill and figure out how long I need to run. I am given the pace: Pace = 7:41/mile = (7 minutes + 41 seconds) per mile I need to add this up to calculate how long I should run to run 1.5 miles. I use the formula 7:41 + (7:41/2) = ? 1.5 mile I find this somewhat difficult to calculate in my head, especially while I am starting my warmup. Converting to seconds doesn't make it any easier. Do you have any suggestions as to how I can do this more efficiently?
I understand your question as this: "How do I efficiently divide numbers by $2$ in sexagesimal (base 60)?" Suppose you have $a*60+b$ as your time. In your case, $a=7$, $b=41$. To divide by two, just do it the way you normally would, but carrying is by $60$ instead of $10$. (Base $60$ instead of base $10$) Divide $a$ by two. If it is even, no problem. If it is odd, then you "carry" a 60 over to $b$. So when dividing $7:41$, we get $3: \frac{41+60}{2}$. Then you just divide $b$ by $2$ (or $b+60$ if we had to carry). So to divide $7:41$ by two, we get $3:50.5$. Lets try another. How about $16:56$? Well the first term is even so we just get $8:28$. What about $27:32$? Well, the first part will be $13$, we add $60$ to $32$ to get $92$, then divide this by two, so the answer is $13:46$. You try one: What is $9:51$ divided by two? (answer at end) I hope this was helpful. Important note: Notice that this works for other numbers besides just dividing by $2$. Dividing by any number in base $60$ is the same, we just carry $60$'s instead of $10$'s. Even more generally, notice this works for any base $b$. Division base $b$ is just done by carrying $b$ instead of $10$. Answer: $9:51$ divided by two is $4:55.5$.
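The carry-by-60 halving rule can be written as a tiny routine (function name is mine):

```python
def halve(minutes, seconds):
    """Halve a min:sec time by base-60 long division (carry 60, not 10)."""
    carry = minutes % 2                  # odd minute -> carry 60 seconds down
    return minutes // 2, (seconds + 60 * carry) / 2

assert halve(7, 41) == (3, 50.5)
assert halve(16, 56) == (8, 28.0)
assert halve(27, 32) == (13, 46.0)
assert halve(9, 51) == (4, 55.5)
```

Adding `halve(7, 41)` back to `7:41` then reproduces the `11:31.5` total for 1.5 miles.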
My answer is going to be more specific to the calculation you're doing (a + 1/2 * a), so if you have the pace 7:41 and you want to find 7:41 + 1/2 * 7:41. First you do 7/2 = 3.5, and add it to the original time 7:41+3.5 = 10.5:41, then if necessary, normalize the .5 to 10:71. Second you add the seconds 10:71 + 41/2 ≈ 10:91 (rounding 20.5 down to 20). Finally, normalize it to 11:31. An example run of it in your head:

 A     B    C
 7     ...  ...   (read the minute from the panel)
 7     3.5  ...   (divide by 2)
 10.5  ...  ...   (add it to the minute)
 10.5  41   ...   (read the panel again for the second)
 10    71   ...   (normalize the .5 by adding 30 to the seconds)
 10    71   41    (read the panel again for the second)
 10    71   20    (divide by 2, rounding down)
 10    91   ...   (add to the seconds)
 11    31   ...   (normalize)

This might be easier for some people than doing a base-60 division first, as the steps are IMO simpler to work with in (my) head. So the algorithm is basically: Read the time. Add the minute part. Add the seconds part.
This is a kind of abstract question regarding the mechanisms and logic of mathematics. First, let me try to explain what I tried to convey with the topic title. Let's say I have a value that gets decreased to a lower one, and I want to calculate the percentage difference, like for example 13 goes down to 8. 13 - 8 = 5 So now I would divide the difference of 5 by the original value of 13, which is what the topic is about. 5 / 13 = 0.3846 And then of course I'd multiply the 0.3846 by 100 to get the proper percentage difference between 13 and 8. 0.3846 * 100 = 38.46 At which point I know the percentage difference is 38.46. But the part that I really don't understand is that there must be a logical reason for why it makes sense to divide the difference of 5 by the original value of 13. I can understand we do it because it works, but I don't understand why exactly it works. I hope this question makes sense, basically I'm trying to say that on an intuitive level or a logical reasoning level, I can't seem to understand why the difference is divided by the original value, other than "it works because reasons".
What does it mean to say something like "the price of the item was reduced by $20\%$"? This is just a meaningless string of words until we assign it a meaning that people who use these words can agree upon. The agreed-upon meaning happens to be that the reduced price of the item is $P - \frac{20}{100} P$, where $P$ was the original price. The quantity $\frac{20}{100}P$ is $20\%$ of $P$, which is the amount by which the price was reduced. More generally, if an initial quantity has value $P$ and is reduced by $x\%$, the reduced quantity, let's name it $P'$, has value $$P' = P - \frac{x}{100} P. \tag1$$ Using the numbers in the example in the question, the initial value of the quantity is known to be $13$, and the reduced value is known to be $8$. At that point in the calculation we have not determined the percentage amount of the reduction, but if we say it is an $x\%$ reduction, then we can set the original quantity $P$ to $13$ and the reduced quantity $P'$ to $8$ in Equation $(1)$, so we know that $$ 8 = 13 - \frac{x}{100}\times 13. \tag2 $$ This equation implies $$ \frac{x}{100}\times 13 = 13 - 8 = 5, $$ which implies $$ x = \frac{5}{13}\times 100 = 38\frac{6}{13}. \tag3 $$ Therefore the percentage reduction is $38\frac{6}{13}\%$, which is approximately $38.46\%$. The reason we have division by $13$ in Equation $(3)$ is because the definition of "reduce by $x\%$" means that Equation $(2)$ is true, and the division by $13$ is one step of a correct method to solve for $x$ when Equation $(2)$ is true.
The other two answers are far too long-winded. Your question has a very simple answer. The reason you divide 5 by 13 is that we measure percentage change relative to the initial value. Here is another example. Suppose my bank account has \$100 in it. I then buy a shirt for \$25, so my account now has only \$75 in it. By what percent did my account balance decrease? What this question really means is: the actual decrease is what percent of the initial value? In this example, the answer is 25%. The actual decrease is \$25, and the starting balance was \$100. The percentage change measures the actual change relative to the starting point.
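In code, the convention of measuring the change relative to the initial value looks like this (a sketch, names mine):

```python
def percent_decrease(initial, final):
    """Percentage change, measured relative to the initial value."""
    return (initial - final) / initial * 100

assert percent_decrease(100, 75) == 25.0          # the bank-account example
assert abs(percent_decrease(13, 8) - 38.46) < 0.01  # the 13 -> 8 example
```

Dividing by `initial` rather than `final` is exactly the choice the answers above are explaining: the same \$25 drop would be a larger percentage of a smaller starting balance.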
I know that modern sciences have many many applications for the number PI, many of them outside of geometry, but I do not understand what practical applications had this constant in the ancient world. What motivated the Greeks, Babylonians and the Egyptians to try to calculate this number?
Pi appears in equations for volumes of regular solids, as well as in the area of a circle, among many other locations in mathematics. Knowing how to find these geometric pieces of information was valuable to ancient civilizations due to the precision required in their many construction projects of scales varying from small to colossal. For example, knowing how much stone is needed to construct a pillar of certain dimensions requires knowledge of pi.
If you are interested in the volume of a heap of wheat (which looks like a cone) or in the content of a granary (which may look like a cylinder), then you need approximations of $\pi$. The Babylonians used $\pi = 3$ in plane geometry, and the approximation $\pi = 3 \frac18$ only occurs in connection with computing the volume of solids.
Please calculate the limit $\lim\limits_{t \to 0+} {1\over 2}\cdot \left({\pi \over t}\cdot\frac{1+e^{-2\pi t}}{1 - e^{-2\pi t}} - {1\over t^2}\right)$ and provide the corresponding procedure. The answer is $\pi^2 \over 6$ . I tried L'Hospital's Rule but I failed. The following is not necessary to read: The background of this problem: this problem arose from a problem related to the Fourier Transform. I tried to use Poisson's summation formula to get $\sum\limits_{n = -\infty}^\infty \frac{1}{n^2 + t^2} = {\pi \over t}\cdot\frac{1+e^{-2\pi t}}{1 - e^{-2\pi t}}$ . (This can be proved by letting $f(x) = \frac{1}{x^2+t^2}$ , where $t>0$ is a parameter. Then $\hat{f}(n) = {\pi \over t}\cdot e^{-2\pi |n| t}$ .) Then $\sum\limits_{n = 1}^\infty {1\over n^2} = \frac{\pi^2}{6} $ should be a corollary.
By L'Hospital: With $u=\pi t$ , the expression is $$\frac{\pi^2}2\left(\frac1{u\tanh u}-\frac1{u^2}\right)=\frac{\pi^2}2\left(\frac{u-\tanh u}{u^2\tanh u}\right).$$ As $\dfrac{\tanh u}u$ tends to one, we can replace the denominator by $u^3$ . Then $$\lim_{u\to0}\frac{u-\tanh u}{u^3}=\lim_{u\to0}\frac{1-1+\tanh^2u}{3u^2}=\frac13.$$ Finally, $$\frac{\pi^2}6.$$
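As a numerical sanity check, evaluating the original expression at a small $t$ indeed lands near $\pi^2/6\approx 1.6449$ (the sample point is my choice):

```python
from math import exp, pi

def g(t):
    # 1/2 * (pi/t * coth(pi*t) - 1/t^2), with coth written via exponentials
    # exactly as in the question.
    coth = (1 + exp(-2 * pi * t)) / (1 - exp(-2 * pi * t))
    return 0.5 * (pi / t * coth - 1 / t**2)

# As t -> 0+ the value approaches pi^2/6.
assert abs(g(1e-3) - pi**2 / 6) < 1e-4
```

Note the heavy cancellation: both terms are of order $1/t^2$, so pushing $t$ much smaller eventually loses all the significant digits in double precision.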
Expanding @labbhattarcharjee's comment, we use $\coth x\approx\frac{1}{x}+\frac{x}{3}$ to write the limit as $$\lim_{t\to0^+}\frac12\left(\frac{\pi}{t}\coth(\pi t)-\frac{1}{t^2}\right)=\lim_{t\to0^+}\frac12\left(\frac{\pi}{t}\frac{1}{\pi t}+\frac{\pi}{t}\frac{\pi t}{3}-\frac{1}{t^2}\right)=\frac{\pi^2}{6}$$ (the first term cancels the last one).
I have a smooth function that takes as input a Brownian motion $B_t$ . My question is how does one find the time derivative of the expectation? In other words, how do you calculate $\frac{d}{dt} \mathbb{E} f(B_t)$ .
We know that: $B_t \sim N(0,t)$ Thus: $$\mathbb{E}[f(B_t)] = \dfrac{1}{\sqrt{2\pi t}}\int_{-\infty}^\infty f(x)e^{\frac{-x^2} {2t}}dx$$ To calculate $\dfrac{d}{dt}\mathbb{E}[f(B_t)]$ , use the following property: $$\dfrac{d}{dt}\left(\int_{a(t)}^{b(t)}g(x,t) dx\right) = \int_{a(t)}^{b(t)}\dfrac{\partial g(x,t)}{\partial t} dx+ b'(t)g(b(t),t)-a'(t)g(a(t),t)$$ Therefore, $$\begin{align} \dfrac{d}{dt}\mathbb{E}[f(B_t)] &= \dfrac{d}{dt}\left(\dfrac{1}{\sqrt{2\pi t}}\int_{-\infty}^\infty f(x)e^{\frac{-x^2} {2t}}dx\right)\\ &= -\dfrac{1}{2t\sqrt{2\pi t}}\int_{-\infty}^\infty f(x)e^{\frac{-x^2} {2t}}dx+\dfrac{1}{2t^2\sqrt{2\pi t}}\int_{-\infty}^\infty x^2f(x)e^{\frac{-x^2} {2t}}dx \end{align}$$
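The differentiated formula can be spot-checked with $f(x)=x^2$, for which $\mathbb{E}[f(B_t)]=t$ and the $t$-derivative is exactly $1$; the crude midpoint quadrature below is my own sketch:

```python
from math import exp, pi, sqrt

def gauss_int(g, t, N=100000, L=12.0):
    """Midpoint rule for integral of g(x) exp(-x^2/(2t)) / sqrt(2 pi t) over [-L, L]."""
    h = 2 * L / N
    s = 0.0
    for i in range(N):
        x = -L + (i + 0.5) * h
        s += g(x) * exp(-x * x / (2 * t))
    return s * h / sqrt(2 * pi * t)

t = 1.0
f = lambda x: x * x
# The two-integral expression derived above:
deriv = (-1 / (2 * t)) * gauss_int(f, t) + (1 / (2 * t**2)) * gauss_int(lambda x: x * x * f(x), t)
assert abs(deriv - 1.0) < 1e-5   # d/dt E[B_t^2] = d/dt t = 1
```

Here the first integral is $\mathbb{E}[B_t^2]=t$ and the second is $\mathbb{E}[B_t^4]=3t^2$, so the expression evaluates to $-\tfrac12+\tfrac32=1$ as expected.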
$\mathbb{E}[ f(B_t) ] = \frac{1}{ \sqrt{ 2 \pi t}} \int_{\mathbb{R}} f(x) e^{-x^2/(2t)} dx =: g(t)$ Note how all that matters is the pdf at time $t$ . You can now differentiate $g(t)$ using the product rule plus differentiation under the integral sign. It is definitely not the same thing as $\mathbb{E} [ (d/dt) f(B_t)]$ ; that latter expression doesn't make sense.
How to calculate the probability that four specific, distinct numbers from the range 1 - 3000 occur at least once in a fixed sample of 400 random numbers from the range 1-3000? The numbers in the sample can repeat as they were randomly generated. My intuition would be that it is basically a set of four "scans" of the 400 numbers, so the probability of hitting the 1/3000 searched number in each of the scans is roughly 400/3000 = 2/15. This would give the total probability count as (2/15)x(2/15)x(2/15)x(2/15) = 16/50625 = 0.000316. However, I'm not sure if this accounts (and if it should account) for the fact that it is a fixed sample so it's not "re-rolled" for each of the four scans. Thanks for any advice.
Use inclusion/exclusion principle: Include the number of combinations with at least $\color\red0$ missing values: $\binom{4}{\color\red0}\cdot(3000-\color\red0)^{400}$ Exclude the number of combinations with at least $\color\red1$ missing values: $\binom{4}{\color\red1}\cdot(3000-\color\red1)^{400}$ Include the number of combinations with at least $\color\red2$ missing values: $\binom{4}{\color\red2}\cdot(3000-\color\red2)^{400}$ Exclude the number of combinations with at least $\color\red3$ missing values: $\binom{4}{\color\red3}\cdot(3000-\color\red3)^{400}$ Include the number of combinations with at least $\color\red4$ missing values: $\binom{4}{\color\red4}\cdot(3000-\color\red4)^{400}$ Finally, divide by the total number of combinations, which is $3000^{400}$: $$\frac{\sum\limits_{n=0}^{4}(-1)^n\cdot\binom{4}{n}\cdot(3000-n)^{400}}{3000^{400}}\approx0.000239$$
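The alternating sum is quick to evaluate in floating point; comparing it with the naive independence approximation $\left(1-(2999/3000)^{400}\right)^4$ is my own addition:

```python
from math import comb

# Inclusion-exclusion sum from above, as a probability.
p = sum((-1) ** n * comb(4, n) * ((3000 - n) / 3000) ** 400 for n in range(5))

# Treating the four target numbers as independent gives nearly the same value,
# since missing one number barely changes the chances of the others.
approx = (1 - (2999 / 3000) ** 400) ** 4

assert 0.0002 < p < 0.0003
assert abs(p - approx) < 5e-5
```

The exact inclusion-exclusion value and the independence shortcut agree to several decimal places here, which explains why the question's rough $(2/15)^4$ estimate was already the right order of magnitude.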
Label the $4$ numbers and let $E_{i}$ denote the event that number with label $i\in\left\{ 1,2,3,4\right\} $ does not occur in the sample. Then you are looking for $\Pr\left(E_{1}^{c}\cap E_{2}^{c}\cap E_{3}^{c}\cap E_{4}^{c}\right)=1-\Pr\left(E_{1}\cup E_{2}\cup E_{3}\cup E_{4}\right)$. With inclusion/exclusion and symmetry we find that this equals: $$1-4\Pr\left(E_{1}\right)+6\Pr\left(E_{1}\cap E_{2}\right)-4\Pr\left(E_{1}\cap E_{2}\cap E_{3}\right)+\Pr\left(E_{1}\cap E_{2}\cap E_{3}\cap E_{4}\right)$$ Can you take it from here?
I have no idea where to start on this question or where to begin to find the answer. And I'm not sure how to calculate the probability. Here is the full question: The number of chocolate chips in an 18-ounce bag of chocolate chip cookies is normally distributed with a mean of 1238 chocolate chips and a standard deviation of 122 chocolate chips. What is the probability that a randomly selected 18-ounce bag of chocolate chip cookies contains fewer than 1300 chocolate chips? Thanks for your help
Let $X$ be the number of chips in a randomly chosen bag. Then $$X\sim N(\mu,\sigma^2)$$ where $\mu$ is the mean and $\sigma$ is the standard deviation. A basic fact about normal distribution is that then $$Z=\frac{X-\mu}{\sigma}$$ has a standard normal distribution. Now $$P(X<1300)=P\Bigl(\frac{X-\mu}{\sigma}<\frac{1300-\mu}{\sigma}\Bigr) =P\Bigl(Z<\frac{1300-\mu}{\sigma}\Bigr)\ .$$ See if you can fill in the values of $\mu$ and $\sigma$ and look up the required probability in a standard normal distribution table.
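Once the problem is reduced to a standard normal probability, the table lookup can be replaced by the error function, using the standard identity $\Phi(z)=\tfrac12\left(1+\operatorname{erf}(z/\sqrt2)\right)$:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X < x) for X ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

p = normal_cdf(1300, 1238, 122)   # z = 62/122 ~ 0.508
assert abs(p - 0.694) < 0.001
```

So a randomly chosen bag has roughly a 69% chance of containing fewer than 1300 chips.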
The density of probability following a normal distribution is : $$ f(x)=\frac{1}{\sigma\sqrt{2\pi}}exp{\frac{-(x-\mu)^2}{2\sigma^2}} $$ Here $\mu=1238$ and $\sigma=122$ You want to know what's the probability for a chocolate bag to have less than 1300 chocolate chips so : $$ p(X\leq 1300)=\int_{-\infty}^{1300}\frac{1}{\sigma\sqrt{2\pi}}exp{\frac{-(x-\mu)^2}{2\sigma^2}} ~dx $$ For more information, click here .
What is a simple formula to find 2 intermediate values between 2 known values? f(1)=a, f(2)=?, f(3)=?, f(4)=b If there would be only 1 unknown, then it would be mean ((a+b)/2), but what to do when there are 2 intermediate values to calculate?
Define $$f (x) := a + \left(\frac{b-a}{3}\right) (x-1)$$ and then evaluate $f (2)$ and $f (3)$. You should obtain $$f (2) = \frac{2 a + b}{3}, \quad{} f (3) = \frac{a + 2 b}{3}$$ which are weighted averages of $a$ and $b$. If you had $4$ known values, you would be able to use a cubic interpolating polynomial, but since you only have $2$, you must use an affine polynomial.
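As a sketch in code (function name is mine):

```python
def interpolate(a, b):
    """Two evenly spaced intermediate values between f(1)=a and f(4)=b,
    using the affine interpolant f(x) = a + (b-a)/3 * (x-1)."""
    f = lambda x: a + (b - a) / 3 * (x - 1)
    return f(2), f(3)

assert interpolate(1, 4) == (2.0, 3.0)
assert interpolate(0, 9) == (3.0, 6.0)
```

Each result is the weighted average from the answer: $(2a+b)/3$ and $(a+2b)/3$.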
The four numbers are: $a=\frac{3a+0}3, \frac{2a+b}3, \frac{a+2b}3, \frac{0+3b}3=b$
I have a geodesic (connecting points A and B) on a sphere. There's another point C on the same sphere. So, I know distance AB. If necessary, AC and BC can also be computed. I need to find the shortest distance between the point C and the geodesic line AB. My initial attempt was to use an idea like Lagrange multipliers to find the shortest arc length; however, it is difficult to solve the differential equation. How can I calculate the shortest distance between C and AB? Thanks.
It's just the chain rule, but it's kind of hidden. In the calculus of variations, like in a lot of calculus, people are really lax about specifying what depends on what. To clarify what's going on here I'm going to introduce some intermediate variables $X$, $Y_1$, and $Y_2$ (which are all functions of $x$) defined as $X(x) = x$ $Y_1(x) = y(x)$ $Y_2(x) = y'(x)$ And instead of writing $F(y, y', x)$ we will write it as $F(Y_1, Y_2, X)$. Now let's look at how $F$ depends on $x$. $x$ maps to a triple $(Y_1, Y_2, X)$ by the above rules, and these map to F by whatever rule we have for $F$. Symbolically $$F(x) = F(Y_1(x), Y_2(x), X(x)).$$ So writing out the chain rule for a function with three intermediate variables we get $$\frac{d F}{d x} = \frac{\partial F}{\partial X} \frac{d X}{d x} + \frac{\partial F}{\partial Y_1} \frac{d Y_1}{d x} + \frac{\partial F}{\partial Y_2} \frac{d Y_2}{d x}$$ Now looking back at the way that we defined our intermediate variables we can make following simplifications $$\frac{d X}{d x} = 1$$ $$\frac{d Y_1}{d x} = \frac{d}{d x} y = y'$$ $$\frac{d Y_2}{d x} = \frac{d}{d x} y' = y''.$$ Substitute these into the chain rule and go back to treating $F$ as a function of $x$, $y$, and $y'$ instead of $X$, $Y_1$, and $Y_2$ and you get the formula in your question. (The notes make it a bit harder to see by using one order when writing $F$ as a function of $x$, $y$, and $y'$, and a different order for writing the terms of the chain rule, but the order of the terms added together for the chain rule actually doesn't matter.) Oh, and you can always do this; it doesn't depend on $F$ being independent of $x$. That's used in the later part of the solution.
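The three-variable chain rule above can be verified numerically for sample choices of $y$ and $F$ (both picked by me purely for illustration):

```python
from math import sin, cos

# Finite-difference check of
#   dF/dx = dF/dX + dF/dY1 * y' + dF/dY2 * y''
# for the sample choices y(x) = sin(x), F(Y1, Y2, X) = Y1^2 * Y2 + X * Y1.
F = lambda y1, y2, x: y1**2 * y2 + x * y1
y, yp, ypp = sin, cos, lambda x: -sin(x)

x0, h = 0.7, 1e-6
# Total derivative of x -> F(y(x), y'(x), x), via central difference:
total = (F(y(x0 + h), yp(x0 + h), x0 + h) - F(y(x0 - h), yp(x0 - h), x0 - h)) / (2 * h)

# The three partial-derivative terms, computed by hand for this F:
dFdX = y(x0)                         # partial wrt the explicit x slot
dFdY1 = 2 * y(x0) * yp(x0) + x0      # partial wrt Y1
dFdY2 = y(x0)**2                     # partial wrt Y2
chain = dFdX + dFdY1 * yp(x0) + dFdY2 * ypp(x0)

assert abs(total - chain) < 1e-8
```

The two numbers agree to within the finite-difference error, which is exactly what the chain-rule identity predicts.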
Well...It seems that F doesn't explicitly depend on x while y does and ${\partial F}\over{\partial x}$ can be reformulated as according to $y^{'}{{\partial F}\over{\partial y}}$
I want to calculate the probability of getting heads at least $5$ times in a row from $30$ throws. Probability of getting a head is $1/2$ . Now, probability of getting 2 heads in a row is $1/4$ , etc. My idea was to do it like a normal probability etc. N = $2^{30}$ and $n=30, r=5$ $nCr = 142506$ $nCr/N = 0.0001327$ The problem is that this is the answer for exactly $5$ heads from $30$ throws, no? Is there a better way than to calculate it one by one?
I give here a possible approach. Probably, it is not the fastest in your case, but it is an example of a quite powerful method (transfer matrix approach) that has many applications (for instance in statistical physics). Let us denote as $P_N$ probability of finding at least $5$ consecutive successes in $N$ trials. Let us assume that the probability of a success is $\gamma$ . If $N=5$ , clearly you have that $P_5 = \gamma^5$ . Now, what happen if $N=6$ ? Well, we have to consider the probability of having $5$ successes in the first $5$ trials. Then you have the case in which the first attempt was a failure, and the next $5$ were successful. Equivalently, this mean that the last $4$ attempts of the first $5$ trials were successful (although the first attempt wasn't), and than the $6$ -th trial was successful as well. Now, I will denote as $+$ a success and as $-$ a failure. I will call $Q_5^4$ the probability of having the following $5$ trials: $-++++$ . From the previous discussion we have that $$P_6 = P_5 + \gamma Q^4_5\,.$$ Now, let us generalise this notation. We call $Q_N^n$ (with $N\geq 5$ and $n\in\{0, 1,2,3,4\}$ ) the probability that, given $N$ attempts, there is no $5$ consecutive successful trials, the last $n$ trials were successful, and the $(n+1)$ -th last attempt failed. For instance, $Q^3_7$ will be the probability of having one event in the form $\dots -+++$ , composed of $7$ total trials and without $5$ successes in a row. 
It is not hard to see that $$Q^4_6=Q^3_5\gamma\,,$$ $$Q^3_6=Q^2_5\gamma\,,$$ $$Q^2_6=Q^1_5\gamma\,,$$ $$Q^1_6 = Q^0_5\gamma\,,$$ and $$Q^0_6 = (1-P_5)(1-\gamma) = (Q^4_5+Q^3_5+Q^2_5+Q^1_5+Q^0_5)(1-\gamma)\,.$$ Now, let us define the vector $$R_N = \begin{pmatrix}P_N\\Q_N^4\\ Q_N^3\\ Q_N^2\\ Q_N^1\\ Q_N^0\end{pmatrix}\,.$$ Now, it is not hard to check that $R_6 = \Gamma R_5$ , where $$\Gamma = \begin{pmatrix} 1 & \gamma & 0 & 0 & 0 & 0 \\ 0 & 0 & \gamma & 0 & 0 & 0 \\ 0 & 0 & 0 & \gamma & 0 & 0 \\ 0 & 0 & 0 & 0 & \gamma & 0 \\ 0 & 0 & 0 & 0 & 0 & \gamma \\ 0 & 1-\gamma & 1-\gamma & 1-\gamma & 1-\gamma & 1-\gamma \end{pmatrix}\,.$$ Actually, it is not hard to see that this is true for all $N\geq 5$ , in the sense that $$R_{N+1} = \Gamma R_N\,.$$ Now, you can compute exactly $R_5$ and you get $$R_5 = \begin{pmatrix} \gamma^5\\(1-\gamma)\gamma^4\\(1-\gamma)\gamma^3\\(1-\gamma)\gamma^2\\(1-\gamma)\gamma\\1-\gamma\end{pmatrix}\,.$$ You are actually interested in $P_{30}$ with $\gamma=1/2$ . This can be obtained by computing $R_{30}$ . You have $$R_{30} = \Gamma^{25} R_5\,.$$ I evaluated this expression with Mathematica and I got $$R_{30}\simeq \begin{pmatrix}0.3682\\ 0.0215\\ 0.0423\\0.0831\\0.1635\\0.3214\end{pmatrix}$$ which means that $P_{30}\simeq 0.3682$ .
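The whole transfer-matrix computation also fits in a few lines of plain Python (no Mathematica needed; variable names are mine):

```python
# Transfer-matrix iteration from the answer above, with gamma = 1/2.
g = 0.5

Gamma = [
    [1, g,     0,     0,     0,     0    ],
    [0, 0,     g,     0,     0,     0    ],
    [0, 0,     0,     g,     0,     0    ],
    [0, 0,     0,     0,     g,     0    ],
    [0, 0,     0,     0,     0,     g    ],
    [0, 1 - g, 1 - g, 1 - g, 1 - g, 1 - g],
]

# R_5 = (P_5, Q_5^4, Q_5^3, Q_5^2, Q_5^1, Q_5^0):
R = [g**5, (1 - g) * g**4, (1 - g) * g**3, (1 - g) * g**2, (1 - g) * g, 1 - g]

for _ in range(25):          # R_30 = Gamma^25 R_5
    R = [sum(Gamma[i][j] * R[j] for j in range(6)) for i in range(6)]

assert abs(sum(R) - 1.0) < 1e-12   # components remain a probability distribution
assert abs(R[0] - 0.3682) < 1e-3   # P_30, matching the quoted value
```

Each column of $\Gamma$ sums to $1$, so total probability is conserved at every step, which is a handy built-in sanity check on the matrix.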
The answer involves the 5-step Fibonacci sequence (nicknamed the Pentanacci sequence, http://oeis.org/A001591 ). Each of the prior 5 numbers tracks how many throw sequences end with 0, 1, 2, 3, 4 heads in a row, and their sum counts the sequences that haven't yet produced 5 in a row; the complement of that yields the answer. See the following discussion of 2 and 3 heads in a row: https://wizardofvegas.com/forum/questions-and-answers/math/14915-fibonacci-numbers-and-probability-of-n-consecutive-losses/ Let me know if you just need this hint as an approach or if you need further detail.
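For what it's worth, here is a sketch of that counting argument in Python (names mine): $a(n)$ counts the length-$n$ coin-flip sequences with no run of $5$ heads, built with the Pentanacci recurrence, and the complement gives the probability.

```python
# a(n) = number of length-n binary sequences with no run of 5 heads,
# satisfying a(n) = a(n-1) + a(n-2) + a(n-3) + a(n-4) + a(n-5) for n >= 5.
def no_run_of_5(n):
    a = [1, 2, 4, 8, 16]          # a(0)..a(4): a run of 5 is impossible yet
    for _ in range(5, n + 1):
        a.append(sum(a[-5:]))
    return a[n]

# P(at least 5 heads in a row in 30 fair flips)
p = 1 - no_run_of_5(30) / 2**30
print(round(p, 4))  # 0.3682
```

This agrees with the transfer-matrix value in the other answer, which is a good cross-check of both approaches.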
I'm trying to solve the problem in the image but having trouble formulating the problem mathematically: There are $20$ identical laptops on a trolley, out of which $12$ have a hard disk with capacity $160$ GB and $8$ with a capacity of $320$ GB. A teacher randomly takes two laptops from the trolley. A student then takes a laptop from the trolley to complete a project. Given that the student took a laptop with $160$ GB, find the probability that the teacher took both laptops with $320$ GB. so any help will be appreciated. I've tried the following: let $A$ denote the event of choosing a 160GB laptop, and $B$ denote the event of choosing a 320GB laptop. Then $P(A)= 12/20, P(B)=8/20.$ Next, let $C$ denote the event of choosing 2 laptops, regardless of their memory (160GB or 320GB). Then $P(C)=2/20$ (right?) Then $B \cap C$ will denote the event of choosing two 320GB laptops (right?). But then how do I calculate $P(B \cap C)?$ Also, how do I calculate the probability of $D:=$ choosing two 160GB laptops and $P(A|D)$ ? I understand that at the end we'll need to calculate $P(D|A)$ , and hence we'll need to use the formula $P(D|A)= \frac{P(D \cap A)}{P(A)}= 20/12 * P(D \cap A)=20/12 * P(A|D) P(D).$ Here's where I'm confused: how to I calculate $ P(A|D), P(D)?$
Let the variable $L_1$ be $0$ or $1$ depending on whether the first laptop taken is small or big. Let the variable $L_2$ be $0$ or $1$ depending on whether the second laptop taken is small or big. Let the variable $L_3$ be $0$ or $1$ depending on whether the third laptop taken is small or big. The question is to compute $P(L_1 = 1, \, L_2 = 1 \,|\, L_3 = 0)$ . $$P(L_1 = 1, \, L_2 = 1 \,|\, L_3 = 0) = P(L_1 = 1, \, L_2 = 1, \, L_3 = 0) / P(L_3 = 0) $$ You can probably figure out how to compute the numerator of the right hand side. To compute the denominator, note that the three laptops chosen are a uniform random trio. Therefore each laptop in the chosen trio has the same claim to be small or big as the other two in the trio. Therefore $P(L_3 = 0) = P(L_1 = 0) = 12/20$ . That the three laptops chosen are a random trio arises from a more general phenomenon. If you consider all size $N$ subsets of a set, and choose one of those subsets uniformly at random, it's the same as if you chose $N$ elements one at a time uniformly. You can show this using induction.
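Following this setup, the numerator and denominator can be computed in exact arithmetic. This sketch (names mine) enumerates the type sequences of the three draws without replacement:

```python
from fractions import Fraction

def seq_prob(types, small=12, big=8):
    # Probability that the first three laptops drawn have the given types
    # (0 = 160 GB, 1 = 320 GB), drawn without replacement.
    p = Fraction(1)
    s, b = small, big
    for t in types:
        total = s + b
        if t == 0:
            p *= Fraction(s, total); s -= 1
        else:
            p *= Fraction(b, total); b -= 1
    return p

num = seq_prob((1, 1, 0))                    # P(L1=1, L2=1, L3=0)
den = sum(seq_prob((a, b, 0))                # P(L3=0), summed over L1, L2
          for a in (0, 1) for b in (0, 1))
print(den)        # 3/5, confirming P(L3=0) = 12/20
print(num / den)  # 28/171
```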
I don't think the student picking a laptop with a 160 GB hard drive after the fact affects the probability of the teacher choosing two laptops with 320 GB hard drives. If the question was what is the probability of the teacher choosing two 320 GB laptops and then a student choosing a 160 GB, that would be a different story. But, let's focus on the question at hand. Let A represent the event of choosing a laptop with a 320 GB hard drive on the first try and B represent the event of choosing a laptop with a 320 GB hard drive on the second try. The probability of choosing 320 GB on the first try and 320 GB on the second try is $$P(A\cap B) = P(B|A)P(A).$$ The probability of A is calculated as follows: $$P(A) = \frac{8}{20} = \frac{2}{5}.$$ Since there are 7 laptops with 320 GB hard drives remaining of 19 total laptops, the probability of B given A is calculated as follows: $$P(B|A) = \frac{7}{19}$$ . Therefore, the probability of both A and B occurring is: $$P(A\cap B) = P(B|A)P(A) = (\frac{7}{19})(\frac{2}{5}) = \frac{14}{95}.$$
I am facing difficulties in calculating the Moore-Penrose pseudoinverse of a positive semidefinite matrix $A$, where $A$ is self-adjoint and $\langle A u, u \rangle \geq 0$ for all $u \in \mathcal{H}$, where $\mathcal{H}$ is a complex Hilbert space. For example, $$A = \begin{bmatrix} 1&-1\\ -1&1\end{bmatrix}$$ is a positive semidefinite matrix. How do I calculate the Moore-Penrose pseudoinverse of $A$?
Computing the singular value decomposition (SVD) of symmetric, rank-$1$ matrix $\rm A$, $$\mathrm A = \begin{bmatrix} 1 & -1\\ -1 & 1\end{bmatrix} = \begin{bmatrix} 1\\ -1\end{bmatrix} \begin{bmatrix} 1\\ -1\end{bmatrix}^\top = \left( \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1\\ -1 & 1\end{bmatrix} \right) \begin{bmatrix} 2 & 0\\ 0 & 0\end{bmatrix} \left( \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1\\ -1 & 1\end{bmatrix} \right)^\top$$ Hence, the pseudoinverse of $\rm A$ is $$\mathrm A^+ = \left( \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1\\ -1 & 1\end{bmatrix} \right) \begin{bmatrix} \frac12 & 0\\ 0 & 0\end{bmatrix} \left( \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1\\ -1 & 1\end{bmatrix} \right)^\top = \color{blue}{\frac14 \mathrm A}$$
Given a rank-one matrix of any dimension $$\eqalign{ &A = xy^H \quad\;{\rm where}\; &x\in{\mathbb C}^{m\times 1},\;y\in{\mathbb C}^{n\times 1} }$$ there is a closed-form expression for its Moore-Penrose inverse $$ A^+ = \frac{yx^H}{(x^Hx)(y^Hy)}$$ $A$ is obviously a rank-one matrix, therefore $$\eqalign{ x &= y= z= \left(\begin{array}{rr}1\\-1\end{array}\right), \quad z^Hz = 2 \\ A^+ &= \frac{zz^H}{(2)(2)} = \frac{A}{4} \\ }$$
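Both answers can be checked numerically. Assuming NumPy is available, `np.linalg.pinv` should return $A/4$ for this matrix:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
A_plus = np.linalg.pinv(A)

print(np.allclose(A_plus, A / 4))                # True, matching both answers
# Two of the four Moore-Penrose conditions, checked numerically:
print(np.allclose(A @ A_plus @ A, A))            # True
print(np.allclose(A_plus @ A @ A_plus, A_plus))  # True
```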
Let: $$f: (x,y) \mapsto \frac{x^2y}{x^2+y^2}$$ and $\:f(0,0)=0$ Its partial derivatives (for $(x,y)\neq(0,0)$) are: $$\frac{\partial f}{\partial x} (x,y)=\frac{2xy^3}{(x^2+y^2)^2}$$ $$\frac{\partial f}{\partial y} (x,y)=\frac{x^2(x^2-y^2)}{(x^2+y^2)^2}$$ How can I prove that it has partial derivatives at $0$ and calculate them? Would showing that $f$ is continuous at $0$ and stating that the partial derivatives at $0$ are also null suffice? Or should I try showing that the partial derivatives are equal at $0$?
It is actually clear when I write down the weak form: \begin{align} -\int_\Omega\Delta w\ v&=\int_\Omega fv+\int_\Omega\Delta G\ v\\ \int_\Omega\nabla w\nabla v&=\int_\Omega fv-\int_\Omega\nabla G\nabla v\\ \int_\Omega\nabla w\nabla v&=\int_\Omega(fv-\nabla G\nabla v)\ \ \ \forall v\in H^1_0 \end{align} So I have an equation of the form \begin{equation} a(w,v)=F(v)\ \ \ \forall v\in H^1_0, \end{equation} where \begin{align} a(w,v)&=\int_\Omega\nabla w\nabla v\\ F(v)&=\int_\Omega(fv-\nabla G\nabla v) \end{align} The ellipticity (coercivity) of $a(\cdot,\cdot)$ follows from the Friedrichs inequality, which I can now use, because $w\in H^1_0(\Omega)$. Boundedness is trivial in this case. As for the functional $F$, it is also bounded, because $f,\ \nabla G\in L^2(\Omega)$. My mistake was assuming that I need $G\in L^2(\Omega)$, but as it turns out, I only need $\nabla G\in L^2(\Omega)$, which is satisfied because $G\in H^1(\Omega)=W^{1,2}(\Omega)$. Now I can use the Lax-Milgram theorem to show that there is a unique $w\in H^1_0$ such that \begin{equation} a(w,v)=F(v)\ \ \ \forall v\in H^1_0, \end{equation} and the solution to the equation is then $u=w+G$.
You have $F \in H^{-1}(\Omega)$, and you can use the Lax-Milgram Theorem to get existence and uniqueness (actually it's just the Riesz Representation Theorem in this case).
Is there a way to calculate the expected number of unique states visited in a Markov chain after n steps? For example, say I have 4 states {A,B,C,D} with transition probabilities 1/4 across the board; i.e., the transition matrix is uniformly 1/4. The initial condition is {0,1,0,0}. What is the expected number of states visited after 3 steps? This is a simplified example. The problem I'm working on is much more complicated (many more states and steps, and heterogeneity in transition probabilities). So I'm looking for a general solution. I thought that I could do something like this: E[unique states visited] = P(visited A) + P(visited B) + P(visited C) + P(visited D) = 1-P(haven't visited A) + 1-P(haven't visited B) + 1-P(haven't visited C) + 1-P(haven't visited D) = 1-(1-P(A,step 1))(1-P(A,step 2))(1-P(A,step 3)) + ... But this gives me the wrong answer - I'm guessing because P(A,step 1) is not independent of P(A,step 2), etc.
For this problem in particular, at least, we can find the solution. Let $p(k)$ be the probability that you transition to a state you haven't visited yet, given that there are $k$ states left that you haven't visited yet. In the case of this problem, $p(k) = \frac{k}{4}$. For the sake of choosing a convention, I'll suppose that you are considered to have already visited the state you start on. Let $E(k, t)$ be the expected number of states you will have visited given that there are $k$ states that you haven't visited yet, and you have $t$ transitions remaining. The base case is that $E(k, 0) = 4-k$. Now, let's consider the recursive case. If your current number of unvisited states is $k$, you will with probability $\frac{k}{4}$ transition to a new state, so that $k\mapsto k-1$. Otherwise, with probability $1-\frac{k}{4}$, you will visit a state that you have already visited. So $E(k, t) = \frac{k}{4} E(k-1, t-1) + \frac{4-k}{4} E(k, t-1).\qquad(t>0)$ If you solve this recurrence relation for $k=3$ initial unvisited states and $t=3$ steps, you get $175/64 \approx 2.73$ as the expected number of states visited. If you consider the initial state to be unvisited as well ($k=4$), the number drops to $148/64 \approx 2.3125$. For Markov models with more complicated probabilities, I can only think of a brute-force solution: starting from a Markov chain with $n$ states, create a new Markov chain where each new state represents a subset of states that you've visited so far, along with the state you're currently on. You can compute the transition probabilities of this new chain from the probabilities of the old chain; then you can compute the expected number of visited states in the old chain from the expected final state of this new chain.
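The recurrence is easy to evaluate exactly with memoisation. This sketch (names mine) reproduces both quoted values in exact arithmetic:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def E(k, t):
    # Expected total states visited, given k unvisited states and t steps
    # left, for the 4-state chain with uniform transition probability 1/4.
    if t == 0:
        return Fraction(4 - k)
    p_new = Fraction(k, 4)                      # chance of hitting a new state
    return p_new * E(k - 1, t - 1) + (1 - p_new) * E(k, t - 1)

print(E(3, 3))  # 175/64, counting the start state as already visited
print(E(4, 3))  # 37/16 (= 148/64), counting the start state as unvisited
```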
Let us consider some special cases where it is easy to determine the correct value, starting from an atomic distribution (one 1 and the rest 0): after 0 steps the expected number of states visited should always be 1, and after 1 step the expected number of states visited should be 1 + 1 - (chance to stay). Assuming $\bf v$ is our initial state vector, $\bf P$ is our transition probability matrix, $\bf 1$ is the column vector of ones, and the product sign denotes the Schur/Hadamard (element-wise) product, a method which fulfills these is to take $${\bf 1}^T \left( {\bf 1}- \prod_{i=0}^n\left({\bf 1}-{\bf P}^i{\bf v}\right) \right)$$ This is a matrix way to write a sum over a vector, since the scalar product with the $\bf 1$ vector is a sum. I think this does something similar to what you meant in the question. For our example: $${\bf P} = \left[\begin{array}{cccc}0.25&0.25&0.25&0.25\\0.25&0.25&0.25&0.25\\0.25&0.25&0.25&0.25\\0.25&0.25&0.25&0.25\end{array}\right], {\bf 1} = \left[\begin{array}{c}1\\1\\1\\1\end{array}\right], {\bf v} = \left[\begin{array}{c}0\\1\\0\\0\end{array}\right]$$ After 0 steps ($n=0$): $1.00$ - we always start somewhere. After 1 step ($n=1$): $1.75$ - the $0.25$ missing to reach $2.00$ is because the chance to stay is $25\%$. After 3 steps ($n=3$): $2.7344 \approx 2.73$, as in the other answer by @user326210. After 16 steps ($n=16$): $3.9699<4$ - reasonable, since we can't visit more states than there exist! The first two and the last make sure it passes our sanity checks, and the third conforms with the previous, more theory-focused answer by @user326210. This is of course by no means a proof, more of a practical contribution.
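Here is a sketch of this formula in NumPy (names mine). I use $P\mathbf v$ for the step update, which agrees with $P^T\mathbf v$ here because the example matrix is symmetric:

```python
import numpy as np

def expected_unique(P, v, n):
    # E = 1^T (1 - prod_i (1 - P^i v)), Hadamard product over steps i = 0..n
    not_visited = np.ones(len(v))
    dist = np.asarray(v, dtype=float)
    for _ in range(n + 1):            # steps 0, 1, ..., n
        not_visited *= 1.0 - dist     # elementwise (Schur/Hadamard) product
        dist = P @ dist               # P symmetric here, so P v = P^T v
    return float(np.sum(1.0 - not_visited))

P = np.full((4, 4), 0.25)
v = np.array([0.0, 1.0, 0.0, 0.0])
for n in (0, 1, 3, 16):
    print(n, round(expected_unique(P, v, n), 4))
# expected values: 1.0, 1.75, 2.7344, 3.9699
```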
I need help with the following problem. "Let $C : y^2 = x^3 − 5x^2 + 6x$ be a cubic curve with the standard group law. Find a meromorphic function on $C$ having the pole of order two at $B=(1,\sqrt{2})$ and one of the zeros at $A=(0,0)$." If $C$ is given as $\mathbb{C}/\Lambda$, I can construct the associated Weierstrass's $\wp$ function and use Abel's theorem to construct a meromorphic function with prescribed poles and zeroes. Unfortunately, I couldn't use that in the problem above because I cannot calculate two things I would additionally need: the periods of the $\wp$ function, and the Abel-Jacobi map.
Let $A_M$ be the set $[0,M]\times [0, M]$ and $I_M=\int_{A_M}{f}$. From Fubini's Theorem \begin{align} I_M&=\int_{y=0}^{y=M}\left[\int_{x=0}^{x=M}(x+y)e^{-(x+y)}dx\right]dy \\ &=\int_{y=0}^{y=M}\left[-(x+y+1)e^{-(x+y)}\right]_{x=0}^{x=M}dy \\ &=\int_{y=0}^{y=M}\left[(y+1)e^{-y}-(M+y+1)e^{-(M+y)}\right]dy \\ &=\left[-(y+2)e^{-y}+(M+y+2)e^{-(M+y)}\right]_{y=0}^{y=M} \\ &=-(M+2)e^{-M}+2+2(M+1)e^{-2M}-(M+2)e^{-M} \\ &=2-2(M+2)e^{-M}+2(M+1)e^{-2M} \end{align} Now, \begin{align} \int_A{f}&=\lim_{M\rightarrow \infty}{I_M} \\ &=\lim_{M\rightarrow \infty}{\left[2-2(M+2)e^{-M}+2(M+1)e^{-2M}\right]} \\ &=2. \end{align}
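Plugging the closed form for $I_M$ into a few values of $M$ makes the convergence visible (a small numerical sketch, not part of the original computation):

```python
import math

def I_M(M):
    # Closed form for the truncated integral derived above
    return 2 - 2 * (M + 2) * math.exp(-M) + 2 * (M + 1) * math.exp(-2 * M)

for M in (1, 5, 10, 20, 40):
    print(M, I_M(M))   # increases toward 2 as M grows
```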
HINT: Rewrite: $$ \int_{0}^{\infty} \int_{0}^{\infty} (x+y)e^{-x-y} dx dy =\int_{0}^{\infty} e^{-y} \int_{0}^{\infty} (x+y)e^{-x} dx dy\tag1 $$ Note that $$ \int_0^\infty z^n e^{-z}\ dz=n!\tag2 $$ Use $(2)$ to evaluate $(1)$.
My question may seem naive, but I couldn't find its answer in books, websites, etc. Assume that I want to numerically calculate the absolute value of the following integral $$I = \int_0^T\exp(if(t))\,\mathrm dt,$$ where $f(t)$ is a real function of $t$ . Which one of the following is the answer? $\quad|I|^2 = I\cdot I^*$ $\quad|I|^2 = \displaystyle\int_0^T\mathrm dt\int_0^t\mathrm dt'\exp(-if(t))\exp(if(t'))$ Any comment or help would be highly appreciated.
Let $g(x)=\log(x+\sqrt{x^2+1})$ . Note that $$f(x)=\frac{e^x-e^{-x}}{2}=\frac{e^{2x}-1}{2e^x}$$ Thus $$\begin{align} f(g(x))&=\frac{e^{2\log(x+\sqrt{x^2+1})}-1}{2e^{\log(x+\sqrt{x^2+1})}} \\ &=\frac{(x+\sqrt{x^2+1})^2-1}{2(x+\sqrt{x^2+1})} \\ &=\frac{x^2+2x\sqrt{x^2+1}+(x^2+1)-1}{2(x+\sqrt{x^2+1})} \\ &=\frac{2x(x+\sqrt{x^2+1})}{2(x+\sqrt{x^2+1})} \\ &=x \end{align}$$ Do not forget to also check $g(f(x))=x$ . Then we can guarantee that $g=f^{-1}$ . $\textbf{Edit: }$ I add another way to find the inverse function of $f$ . Recall that $y=f(x)\,\Leftrightarrow \, x=f^{-1}(y)$ . Then, set $$y=\frac{e^x-e^{-x}}{2}$$ (we want to isolate the $x$ from here) so: $$y=\frac{e^{2x}-1}{2e^x}$$ $$2ye^x=e^{2x}-1$$ $$(e^x)^2-2y(e^x)-1=0$$ then, we use the general formula to solve quadratic equations and gives us $$e^x=\frac{2y \pm \sqrt{4y^2+4}}{2}=y \pm \sqrt{y^2+1}$$ we are left with the positive option (which is obtained with $+$ ) because $e^x>0$ for all $x$ . Thus $$e^x=y+\sqrt{y^2+1}$$ and then $$x=\log\big(y+\sqrt{y^2+1}\big)$$ which is nothing more than $f^{-1}(y)$ .
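As a quick numerical sanity check of both compositions (a sketch; the function names are mine):

```python
import math

def f(x):
    return (math.exp(x) - math.exp(-x)) / 2    # f = sinh

def g(y):
    return math.log(y + math.sqrt(y * y + 1))  # the candidate inverse

for x in (-3.0, -0.5, 0.0, 1.0, 4.2):
    # both g(f(x)) and f(g(x)) should return x, up to floating-point error
    assert abs(g(f(x)) - x) < 1e-9
    assert abs(f(g(x)) - x) < 1e-9
print("both compositions recover x")
```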
Let $y=f(x)=\dfrac{e^x-e^{-x}}2$ so that $x=f^{-1}(y)$. Then $$(e^x)^2-2e^x y-1=0$$ so $e^x=\dfrac{2y\pm\sqrt{4y^2+4}}2$. As $e^x>0$, $e^x=y+\sqrt{y^2+1}$, so $x=?$
The $p$-th total variation is defined as $$|f|_{p,TV}=\sup_{\Pi_n}\lim_{\|\Pi_n\|\to 0}\sum^{n-1}_{i=0}|f(x_{i+1})-f(x_{i})|^p$$ I know how to calculate the first total variation of the standard Brownian motion, but when dealing with higher-order TV there are some problems. Assume first that $p$ is even. I define $$\xi_i=|B_{\frac{i+1}{n}}-B_{\frac{i}{n}}|^p$$ then we can get that $$E[\xi_i]=\left(\frac1n\right)^{\frac p2}(p-1)!!$$ and $$E[\xi_i^2]=\left(\frac1n\right)^{p}(2p-1)!!$$ Next, we define $V_n=\sum^{n-1}_{i=0}\xi_i$. Then we have $$E[V_n]=\sum^{n-1}_{i=0}\left(\frac 1n\right)^{\frac p2}(p-1)!!$$ But I get stuck at the following step, when calculating $E[V_n^2]$: $$\begin{align} E[V_n^2] &= E\left[\sum^{n-1}_{i=0}\xi_i\sum^{n-1}_{j=0}\xi_j\right]\\ &=E\left[\sum^{n-1}_{i=0}\sum^{n-1}_{j=0}\xi_i\xi_j\right]\\ &=\sum^{n-1}_{i=0}\sum^{n-1}_{j=0}E\left[\xi_i\xi_j\right]\\ &=\sum^{n-1}_{i=0}E[\xi_i^2]+\sum_{i\neq j}E[\xi_i]E[\xi_j]\\ &=\left(\frac1n\right)^{p-1}(2p-1)!!+n(n-1)\left(\frac1n\right)^{p}\left[(p-1)!!\right]^2 \end{align}$$ I have no idea how to deal with this expression. Do I need to brute-force it, or is there a more efficient method to prove the result?
It is widely known that for $p=2$ the quadratic variation $$S_{\Pi} := \sum_{t_i \in \Pi} |B_{t_{i+1}}-B_{t_i}|^2$$ converges to $T$ in $L^2$ as $|\Pi| \to 0$. Here $\Pi = \{0=t_0<\ldots< t_n \leq T\}$ denotes a partition of the interval $[0,T]$ with mesh size $$|\Pi| = \max_i |t_{i+1}-t_i|.$$ For a proof of this statement see any book on Brownian motion (e.g. René L. Schilling/Lothar Partzsch: Brownian Motion - An Introduction to Stochastic Processes , Chapter 9). Since $L^2$-convergence implies almost sure convergence of a subsequence, we may assume that $$S^{\Pi_n} \to T \qquad \text{almost surely} \tag{1}$$ for a (given) sequence of partitions $(\Pi_n)_n$ satisfying $|\Pi_n| \to 0$. Now, as $p>2$, $$\begin{align*} \sum_{t_i \in \Pi_n} |B_{t_{i+1}}-B_{t_i}|^p &= \sum_{t_i \in \Pi_n} |B_{t_{i+1}}-B_{t_i}|^{p-2} |B_{t_{i+1}}-B_{t_i}|^2 \\ &\leq \sup_{s,t \leq T, |s-t| \leq |\Pi_n|} |B_t-B_s|^{p-2} S_{\Pi_n}. \tag{2}\end{align*}$$ Since $(B_t)_{t \geq 0}$ is a Brownian motion, we know that $t \mapsto B_t(\omega)$ is continuous almost surely; hence, $[0,T] \ni t \mapsto B_t(\omega)$ is uniformly continuous. This implies in particular $$\sup_{s,t \leq T, |s-t| \leq |\Pi_n|} |B_t-B_s| \stackrel{n \to \infty}{\to} 0 \qquad \text{almost surely}$$ as the mesh size $|\Pi_n|$ tends to $0$. Combining $(1)$ and $(2)$ yields $$\lim_{n \to \infty} \sum_{t_i \in \Pi_n} |B_{t_{i+1}}-B_{t_i}|^p = 0.$$ This finishes the proof. Remark: For a more general statement see this question .
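The almost-sure statement above can be illustrated numerically. Here is a quick simulation sketch in plain Python (names mine): it sums $|B_{t_{i+1}}-B_{t_i}|^p$ along uniform partitions of $[0,1]$, showing the sums shrinking for $p=3$ while the $p=2$ sums stay near $T=1$.

```python
import math
import random

def p_var_sum(p, n, seed=0):
    # Sum of |increments|^p of a simulated Brownian path on [0, 1] along the
    # uniform partition with n pieces (one partition sequence only, not the
    # supremum over all partitions).
    rng = random.Random(seed)
    dt = 1.0 / n
    return sum(abs(rng.gauss(0.0, math.sqrt(dt))) ** p for _ in range(n))

for n in (100, 1000, 10000):
    print(n, p_var_sum(2, n), p_var_sum(3, n))
# the p = 2 sums stay near T = 1, while the p = 3 sums shrink toward 0
```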
Note that $2$ -variation is not the same as quadratic variation. For the quadratic variation you take the limit as the partition gets finer, whereas for $p$ -variation you take the supremum over all partitions. In particular, the Brownian motion has finite quadratic variation on any finite interval but infinite $2$ -variation ( link ). Now coming to your question and assuming the interval $[0,1]$ , we have that $$\|B\|_{p-\text{var}} \geq |B_1|,$$ because the supremum is larger or equal to taking the partition that only contains $0$ and $1$ . So, the $p$ -variation of Brownian motion for any $p\geq 1$ is clearly not converging to zero in any sense.
Let $\Phi_5$ be the 5th cyclotomic polynomial and $\Phi_7$ the 7th. These polynomials are defined like this: $$ \Phi_n(X) = \prod_{\zeta\in\mathbb{C}^\ast:\ \text{order}(\zeta)=n} (X-\zeta)\qquad\in\mathbb{Z}[X] $$ I want to calculate the splitting field of $\Phi_5$ and the splitting field of $\Phi_7$ over $\mathbb{F}_2$. In $\mathbb{F}_2[X]$ we have $$ \Phi_5(X) = X^4 + X^3 + X^2+X+1 $$ and $$ \Phi_7(X) = (X^3+X+1)(X^3+X^2+1) $$ My question is: what are the splitting fields of the polynomials? I already know it should be of the form $\mathbb{F}_{2^k}$ for some $k\in\mathbb N$. Also the degree of every irreducible factor of a cyclotomic polynomial in $\mathbb{F}_q[X]$ is equal to the order of $q\in(\mathbb{Z}/n\ \mathbb{Z})^\ast$, assuming $(q,n)=1$.
Since we want the degree of an irreducible factor to be equal to one, we want $$ \text{order} (2^k) =1 $$ in $(\mathbb{Z} / 5\mathbb{Z})^\ast$. The only element with this order is 1. Therefore we search for the smallest $k$ such that $2^k\equiv 1\mod 5$. A bit of puzzling gives us $$ 2^1=2\\ 2^2=4\\ 2^3=8\equiv 3\\ 2^4=16\equiv 1. $$ Therefore the splitting field of $\Phi_5$ should be $\mathbb{F}_{2^4}$. Is this correct?
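The computation can also be checked mechanically. A small sketch (function name mine) computes the multiplicative order of $2$ modulo $n$, which gives the degree $k$ of the splitting field $\mathbb{F}_{2^k}$ for both $\Phi_5$ and $\Phi_7$:

```python
def ord_mod(q, n):
    # multiplicative order of q modulo n (assumes gcd(q, n) == 1)
    k, x = 1, q % n
    while x != 1:
        x = (x * q) % n
        k += 1
    return k

print(ord_mod(2, 5))  # 4 -> splitting field of Phi_5 over F_2 is F_{2^4}
print(ord_mod(2, 7))  # 3 -> splitting field of Phi_7 over F_2 is F_{2^3}
```

The value $3$ for $\Phi_7$ matches the factorisation into two irreducible cubics given in the question.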
Hint: If $f(X)$ is irreducible in $F[X]$ then $F[X]/(f(X))$ is a field. Any polynomial $f(X)$ has a root in $F[X]/(f(X))$ (which need not be a field in general). What is the cardinality of $\mathbb{F}_2[X]/(X^4 + X^3 + X^2+X+1)$? How many finite fields of cardinality $n$ can you list for a given $n$? The splitting field of $f(X)g(X)$ contains the splitting field of $f(X)$, so the splitting field of $(X^3+X+1)(X^3+X^2+1)$ contains the splitting field of $(X^3+X^2+1)$. As $(X^3+X^2+1)$ is irreducible in $\mathbb{F}_2[X]$, its splitting field would be (???). What is the splitting field of $(X^3+X+1)$? Do you see some relation between the splitting field of $(X^3+X+1)$ and that of $(X^3+X^2+1)$? Can you now conclude?
Let $n \in \mathbb{N}_{>0}$. Then we have $$\begin{align*} &3 \equiv 3 \bmod 15\\&3^2 \equiv 9 \bmod 15\\&3^3 \equiv 12 \bmod 15\\&3^4 \equiv 6 \bmod 15\\&3^5 \equiv 3 \bmod 15\end{align*}$$ Hence all remainders modulo $15$ of $3^n$ should be $3,9,12$ and $6$. Why is this the case? Also, how could I calculate $$3^{1291}\equiv ? \bmod 15$$
As to why: there isn't a much better reason than "because you've just demonstrated that's what happened". But here's a little: $3^n$ is always a multiple of $3$. Since $15$ is also a multiple of $3$, the remainder of $3^n$ when divided by $15$ must always be a multiple of $3$ - so $3^n$ must be $0,3,6,9$, or $12$ mod $15$. But $15$ is also a multiple of $5$, and $3^n$ is not, so $3^n$ cannot be a multiple of $15$; so $3^n$ can't be $0$ mod $15$. As for $3^{1291}$: you've established more than just that $3^n$ is $3,9,12$, or $6$ mod $15$; you've established that it takes those values in order. So, in other words, you know that $3^1$, $3^5$, $3^9$, $3^{13}$, and so on all come to $3$ mod $15$. In general, you know that every fourth power of three after the first - so $3^{4n+1}$ for any $n$ - is $3$ modulo $15$. That will get you close to $3^{1291}$.
The reason here that values of $3^k$ are so restricted is because $15$ is a multiple of $3$. Taking values $\bmod 15$ will therefore not change the status of being a multiple of $3$. Also $3^k$ is never a multiple of $5$, so you won't get the value of $0\bmod 15$. This only leaves the four values you found. As you can see, the sequence $3^k \bmod 15$ runs in a cycle of four, so you can discard a multiple of $4$ from the exponent to get back to a low value. Here you can discard $4\times322=1288$ to find that $3^{1291}\equiv 3^3 \bmod 15$ and look up your result.
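In code, Python's built-in three-argument `pow` does modular exponentiation, so both the length-4 cycle and the final value are easy to check:

```python
# The cycle 3, 9, 12, 6 has length 4, so reduce the exponent:
# 1291 = 4 * 322 + 3, hence 3^1291 ≡ 3^3 ≡ 27 ≡ 12 (mod 15).
print([pow(3, k, 15) for k in range(1, 9)])  # [3, 9, 12, 6, 3, 9, 12, 6]
print(pow(3, 1291, 15))                      # 12
```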