subject | topic | question | answer
---|---|---|---|
Physics
|
|particle-physics|standard-model|leptons|
|
Would it be possible to generate muons using neutrinos and electrons?
|
<p>Yes, there are two possibilities for inverse muon decay: <span class="math-container">$\nu_\mu e^- \rightarrow \nu_e \mu^-$</span> and <span class="math-container">$\bar{\nu}_e e^- \rightarrow \bar{\nu}_\mu \mu^-$</span>. The first process has been observed, but not yet the second.</p> <p>The standard model <span class="math-container">$\nu_\mu e^- \rightarrow \nu_e \mu^-$</span> <a href="https://doi.org/10.1103/PhysRevD.104.092010" rel="nofollow noreferrer" title="Constraining the NuMI neutrino flux using inverse muon decay reactions in MINERvA, PHYSICAL REVIEW D 104, 092010 (2021)">lowest-order cross-section</a> for a muon neutrino - electron interaction with centre-of-mass energy squared <span class="math-container">$s$</span> is (for <span class="math-container">$M_W^2\gg s \gg m_\mu^2$</span>):</p> <p><span class="math-container">$$ \sigma(\nu_\mu e^- \rightarrow \nu_e \mu^-) \approx \frac{G^2_F}{\pi} s $$</span></p> <p>I believe this process was first observed by the <a href="https://doi.org/10.1016/0370-2693(80)90127-6" rel="nofollow noreferrer" title="Experimental Study of Inverse Muon Decay, Physics Letters B, Volume 93, Issues 1–2, 1980, Pages 203-209,">CHARM</a> experiment at CERN and later by the <a href="https://doi.org/10.1016/0370-2693(90)91099-W" rel="nofollow noreferrer" title="Inverse muon decay, νμ+e→μ−+νe, at the Fermilab Tevatron,Physics Letters B, Volume 252, Issue 1, 6 December 1990, Pages 170-176">CCFR</a> and other experiments at Fermilab. 
The subsequent <a href="https://doi.org/10.1016/0370-2693(95)01298-6" rel="nofollow noreferrer" title="A precise measurement of the cross section of the inverse muon decay νμ + e− → μ− + νe, Physics Letters B, Volume 364, Issue 2, 14 December 1995, Pages 121-126">CHARM II</a> experiment observed <span class="math-container">$15758 \pm 324$</span> <span class="math-container">$\nu_\mu e^- \rightarrow \nu_e \mu^-$</span> events, in agreement with the expected number.</p> <p>The cross-section for <span class="math-container">$\bar{\nu}_e e^- \rightarrow \bar{\nu}_\mu \mu^-$</span> is <a href="https://doi.org/10.1103/PhysRevD.107.093005" rel="nofollow noreferrer" title="Radiative corrections to inverse muon decay for accelerator neutrinos, PHYSICAL REVIEW D 107, 093005 (2023)">comparable</a> (for <span class="math-container">$M_W^2\gg s \gg m_\mu^2$</span>): <span class="math-container">$$ \sigma(\bar{\nu}_e e^- \rightarrow \bar{\nu}_\mu \mu^-) \approx \frac{G^2_F}{3\pi} s $$</span> but it has not yet been observed because it is almost impossible to make a pure enough high-energy anti-electron-neutrino beam. For electrons at rest, the minimum neutrino energy threshold for <span class="math-container">$\bar{\nu}_e e^- \rightarrow \bar{\nu}_\mu \mu^-$</span> is <span class="math-container">$E_\nu = (m_\mu^2 - m_e^2)/2m_e \approx 10.9$</span> GeV. Such <a href="https://doi.org/10.48550/arXiv.1805.01373" rel="nofollow noreferrer" title="History of accelerator neutrino beams, Eur. Phys. J. H 44, 271–305 (2019) https://doi.org/10.1140/epjh/e2019-90032-x">high energy neutrinos are produced</a> by smashing protons into targets which produce lots of charged pions and kaons whose decays mostly produce muon neutrinos and anti-neutrinos. 
For example, the NuTeV "anti-neutrino" beam was <span class="math-container">$98\%$</span> <span class="math-container">$\bar{\nu}_\mu$</span> and only <span class="math-container">$1.6\%$</span> <span class="math-container">$\bar{\nu}_e$</span>, and although they <a href="https://arxiv.org/abs/hep-ex/0104029" rel="nofollow noreferrer" title="Search for the Lepton Family Number Violating Process…, J. A. Formaggio et al. Phys. Rev. Lett. 87, 071803 – Published 27 July 2001, https://doi-org/10.1103/PhysRevLett.87.071803">observed 24 events</a> consistent with <span class="math-container">$\bar{\nu} e^- \rightarrow \bar{\nu} \mu^-$</span>, this number was consistent with backgrounds such as beam impurities and muon charge misidentification. These backgrounds were about <span class="math-container">$7$</span> times greater than the expected <span class="math-container">$\bar{\nu}_e e^- \rightarrow \bar{\nu}_\mu \mu^-$</span> signal.</p>
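As a rough numeric sketch (my own back-of-the-envelope check, not from the cited papers; constants are standard PDG values), the threshold condition <span class="math-container">$s = m_e^2 + 2m_eE_\nu \ge m_\mu^2$</span> and the lowest-order cross-section can be evaluated directly:

```python
from math import pi

# PDG values in GeV units (hbar = c = 1)
G_F  = 1.1664e-5   # Fermi constant, GeV^-2
m_e  = 0.000511    # electron mass, GeV
m_mu = 0.10566     # muon mass, GeV

# For a target electron at rest, s = m_e^2 + 2 m_e E_nu.
# Requiring s >= m_mu^2 gives the neutrino energy threshold:
E_thresh = (m_mu**2 - m_e**2) / (2 * m_e)
print(f"threshold E_nu ~ {E_thresh:.1f} GeV")   # ~10.9 GeV

# Lowest-order cross-section at a (hypothetical) E_nu = 500 GeV,
# where M_W^2 >> s >> m_mu^2 holds reasonably well:
E_nu  = 500.0
s     = m_e**2 + 2 * m_e * E_nu
sigma = G_F**2 * s / pi                          # GeV^-2
# Convert with (hbar c)^2 ~ 0.3894e-27 cm^2 GeV^2:
print(f"sigma(nu_mu e -> nu_e mu) ~ {sigma * 0.3894e-27:.1e} cm^2")
```

The tiny cm²-scale result illustrates why these measurements needed intense beams and massive targets.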
|
Physics
|
|fourier-transform|dirac-delta-distributions|discrete|volume|lattice-model|
|
How to understand this Dirac delta function?
|
<p>Consider Fourier series expansion of the periodic function <span class="math-container">$$\sum_{\ell,m,n}\delta(x-\ell L)\delta(y-m M)\delta(z-nN)=\sum_{\ell,m,n}c_{\ell mn}e^{i2\pi\ell x/L}e^{i2\pi m y/M}e^{i2\pi n z/N}$$</span> Multiply both sides by <span class="math-container">$e^{-i2\pi\ell' x/L}e^{-i2\pi m' y/M}e^{-i2\pi n' z/N}$</span>, integrate within a single "unit cell" of volume <span class="math-container">$LMN$</span>, and use the orthogonality of these functions to conclude <span class="math-container">$$c_{\ell'm'n'}=\frac{1}{LMN}$$</span> and thus <span class="math-container">$$\sum_{\ell,m,n}\delta(x-\ell L)\delta(y-m M)\delta(z-nN)=\frac{1}{LMN}\sum_{\ell,m,n}e^{i2\pi\ell x/L}e^{i2\pi m y/M}e^{i2\pi n z/N}$$</span> or in your language <span class="math-container">$$\sum_{\mathbf R}\delta(\mathbf r - \mathbf R) = \frac 1 V \sum_{\mathbf k}e^{i\mathbf k \cdot \mathbf r}$$</span> If we are only evaluating this function within a single "unit cell", e.g. the one containing <span class="math-container">$\mathbf r = 0$</span>, then only one term of the sum on the left-hand side is non-zero and we are left with <span class="math-container">$$\delta(\mathbf r) = \frac 1 V \sum_{\mathbf k}e^{i\mathbf k \cdot \mathbf r}.$$</span></p> <p>I've skipped some steps, and you can ask reasonable questions like "does a periodic impulse train even have a Fourier series", but this is one way to obtain this relation. Also see <a href="https://en.wikipedia.org/wiki/Dirac_delta_function#Fourier_kernels" rel="nofollow noreferrer">here</a>.</p> <hr /> <p>A more direct approach is to note that <span class="math-container">$$\sum_{\ell = -{\ell_0}}^{\ell_0} e^{i2\pi\ell x/L}=\frac{\sin[(l_0 + \frac 1 2)2\pi x/L]}{\sin(\pi x/L)}.$$</span> This follows from the <a href="https://en.wikipedia.org/wiki/Geometric_series#Sum" rel="nofollow noreferrer">partial sum of the geometric series</a> and the fact that <span class="math-container">$\sin z = (e^{iz}-e^{-iz})/(2i)$</span>. 
For large <span class="math-container">$\ell_0$</span>, this is an oscillatory function bounded by the envelope provided by the denominator, with a very sharp peak at <span class="math-container">$x = 0$</span>. It can be shown that its integral on any interval containing <span class="math-container">$x = 0$</span> converges to <span class="math-container">$L$</span> as <span class="math-container">$\ell_0\to\infty$</span>, consistent with <span class="math-container">$L\,\delta(x)$</span>.</p> <p>While the value of the sum does not strictly converge at <span class="math-container">$x\ne 0$</span>, the sum gets increasingly oscillatory for large <span class="math-container">$\ell_0$</span> such that its integral on any interval not containing <span class="math-container">$x = 0$</span> converges to zero. As such, in a distributional sense we are justified in writing <span class="math-container">$$\frac 1 L\sum_{\ell = -\infty}^{\infty} e^{i2\pi\ell x/L}=\delta(x).$$</span></p> <p>This can be extended in the straightforward way to 3D to arrive at the result obtained by the Fourier series approach.</p> <hr /> <p>In your last line of equations, you are evaluating <span class="math-container">$\int_V \delta(0) d^3\mathbf r'$</span>, so it is no surprise you get an infinity.</p>
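A quick numeric check of the distributional claim about the truncated sum (my own sketch; the period and interval endpoints are arbitrary sample values): the integral of the Dirichlet-type kernel over an interval containing <span class="math-container">$x=0$</span> approaches <span class="math-container">$L$</span>, while an interval avoiding <span class="math-container">$x=0$</span> gives a vanishing integral.

```python
import numpy as np

L = 2.0

def kernel_integral(ell0, a, b, n=40001):
    """Riemann integral over [a, b] of sum_{ell=-ell0}^{ell0} exp(i 2 pi ell x / L)."""
    x = np.linspace(a, b, n)
    kernel = np.ones_like(x)                            # the ell = 0 term
    for ell in range(1, ell0 + 1):
        kernel += 2 * np.cos(2 * np.pi * ell * x / L)   # ell and -ell combine
    return kernel.mean() * (b - a)

for ell0 in (10, 100, 1000):
    print(ell0,
          kernel_integral(ell0, -L/8, L/8),   # contains x = 0: tends to L = 2
          kernel_integral(ell0,  L/8, L/2))   # excludes x = 0: tends to 0
```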
|
Physics
|
|quantum-mechanics|quantum-spin|
|
Stern-Gerlach filter: many atoms vs single atom cases
|
<p>A picture would have been nice.</p> <p>The state after the first SG device is:</p> <p><span class="math-container">$$ \psi_1 = \delta(x)\delta(z-z_0)|\uparrow\rangle $$</span></p> <p>meaning it is spin up in the <span class="math-container">$z$</span> direction and forms an (idealized) beam moving in the <span class="math-container">$y$</span> direction (ignoring <span class="math-container">$y$</span> dependence--you can make it a plane wave, a single atom in a plane wave, or a wave packet. It's at <span class="math-container">$x=0$</span> and <span class="math-container">$z=z_0$</span>.)</p> <p>Now you pass it through the <span class="math-container">$SG_x$</span>. It does not measure the spin; rather, it entangles spin with position:</p> <p><span class="math-container">$$ \psi_2 = \frac 1 {\sqrt 2}\Big( \delta(x-x_0)\delta(z-z_0)|\leftarrow\rangle + \delta(x+x_0)\delta(z-z_0) |\rightarrow\rangle\Big)$$</span></p> <p>This means half the amplitude is spin left (right) in the left (right) beam.</p> <p>If you now place a screen, half the atoms will hit the left spot and half the atoms will hit the right spot.</p> <p>If you have one atom, its impact is 50/50. A coin-toss, as they say.</p> <p>If it hits the left (right) spot, you would think it was in the left (right) beam. 
If, on the other hand, you have an inverse <span class="math-container">$SG_x$</span> device--and this is the quantum magic of the Stern-Gerlach experiment--and then measure the spin, you find that it has recombined the beam/atom into:</p> <p><span class="math-container">$$ \psi_3 = \psi_1 $$</span></p> <p>which means your single atom was in <em>both</em> the left and right beams (a la Young's double-slit experiment with electrons).</p> <p>The basic quantum rule, which is a discrete version of Feynman's path-integral formulation, is that to get from an initial state to a final state (with no detection/decoherence in between), you take all allowed paths and add the amplitudes...and then square to get a probability.</p> <p>This is much safer than trying to intuit what a single atom is doing--because if you detect it in the left beam, it was always in the left beam, and if you recombine the beams so both paths are possible, it took both paths.</p> <p>(Note that this implies some kind of delayed choice--how did the atom know it was going to be detected, so that it had to be in one beam? To paraphrase Marsellus Wallace, "That's classical thinking messing with your mind. ^*&#@ Classical Thinking". Rather, just invoke the coherence length/time of the beam: if the detection happens outside the coherence length/time between the detector and the split beam, the state decoheres and there is no quantum woo; if it happens inside that window, then the atom was at both places at once, so of course it "knew" what to do.)</p> <p>Finally, due to the simplicity of your set-up (a pure state going into the 2nd device) I was able to use spin/position eigenstates.</p> <p>For a more general SG problem (say, if you had not blocked the spin-down beam and sent it to something else), you really need to use density matrices, especially when you start with an unpolarized beam, since it is a mixed state and can only be described by <span class="math-container">$\rho$</span>.</p>
|
Physics
|
|statistical-mechanics|condensed-matter|probability|statistics|
|
Interpretation of a probability that does not normalize to one in stat mech?
|
<p>A long talk with a mathematician friend has cleared things up.</p> <p>First things first, <span class="math-container">$P^{(n)}( \widetilde{\textbf{X}})$</span> is simply not a probability distribution. A probability must indeed be normalized to one.</p> <p>Then, what is it?</p> <p>Imagine I gathered all the <span class="math-container">$_NP_n$</span> permutations of the <span class="math-container">$N$</span> particles and I wanted to check how many of these <em>permutations</em> fall in the range <span class="math-container">$\textbf{X}_1 + d\textbf{X}_1,\textbf{X}_2 + d\textbf{X}_2, ... ,\textbf{X}_n + d\textbf{X}_n $</span>. Let <span class="math-container">$M$</span> be the number of permutations that "succeed" by falling within the desired range. Then we have</p> <p><span class="math-container">$$E[M] = \sum_{i=0}^{_NP_n}P(M=i)\cdot i $$</span> Here, <span class="math-container">$i$</span> is the number of permutations within the portion of phase space, and we sum over the possible number of permutations that can be in the desired portion of phase space, from 0 to <span class="math-container">$_NP_n$</span>. Now, whether or not each permutation is in the desired region can be thought of as a Bernoulli trial, so the probability of having <span class="math-container">$i$</span> successes can be modelled with a binomial distribution:</p> <p><span class="math-container">$$E[M] = \sum_{i=0}^{_NP_n} \frac{_NP_n !}{i!(_NP_n - i)!}(\widetilde{P_i})^i (1-\widetilde{P_i})^{_NP_n - i} \cdot i $$</span></p> <p>where <span class="math-container">$\widetilde{P_i}$</span> is the probability for any given permutation to succeed by being in the desired range, i.e. the marginal probability that <span class="math-container">$n$</span> out of the <span class="math-container">$N$</span> particles are at the desired location (this is the marginal probability discussed in the question above). Note that in general this is not the same for every permutation, but we have assumed that it is; this amounts to assuming the system is symmetric with respect to interchange of particles. 
Next we do some algebraic manipulation of this expression, noting that the <span class="math-container">$i=0$</span> term contributes 0 to the sum and that the factor of <span class="math-container">$i$</span> is absorbed into the factorials via <span class="math-container">$i\cdot\frac{_NP_n!}{i!(_NP_n-i)!}={_NP_n}\frac{(_NP_n-1)!}{(i-1)!(_NP_n-i)!}$</span>: <span class="math-container">$$E[M] = {_NP_n} \widetilde{P_i} \sum_{i=1}^{_NP_n} \frac{(_NP_n-1)!}{(i-1)!(_NP_n - i)!}(\widetilde{P_i})^{i-1} (1-\widetilde{P_i})^{_NP_n - i} $$</span> Let <span class="math-container">$j = i-1$</span>: <span class="math-container">$$E[M] = {_NP_n} \widetilde{P_i} \sum_{j=0}^{_NP_n-1} \frac{(_NP_n-1)!}{j!(_NP_n - 1-j)!}(\widetilde{P_i})^{j} (1-\widetilde{P_i})^{_NP_n -1-j } $$</span> Then we use the binomial theorem: <span class="math-container">$$E[M] = {_NP_n} \widetilde{P_i} \left(\widetilde{P_i} + (1-\widetilde{P_i})\right)^{_NP_n-1} = {_NP_n} \widetilde{P_i} $$</span> and finally we arrive at: <span class="math-container">$$E[M] = {_NP_n} \widetilde{P_i} = {_NP_n} \frac{\int e^{-\beta U(\widetilde{\textbf{X}})} d\textbf{X}_{n+1}d\textbf{X}_{n+2} ... d\textbf{X}_{N} } {Z_N} = P^{(n)}( \widetilde{\textbf{X}}) $$</span></p> <p>Thus <span class="math-container">$P^{(n)}( \widetilde{\textbf{X}})$</span> is the expected number of permutations that fall within the specified range. The first of these, <span class="math-container">$P^{(1)}( \widetilde{\textbf{X}})$</span>, is the <em>number density</em> of particles, so if you want the actual expected number of particles in a given region, you can integrate <span class="math-container">$P^{(1)}( \widetilde{\textbf{X}})$</span> over that region. <span class="math-container">$P^{(1)}( \widetilde{\textbf{X}})$</span> can be thought of as having units of <span class="math-container">$\frac{[Number][Probability]}{[ \widetilde{\textbf{X}}]}$</span>.</p> <p>The Wikipedia page <a href="https://en.wikipedia.org/wiki/Radial_distribution_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Radial_distribution_function</a> has the best derivation I have found so far, although it still somewhat conflates expected value and probability.</p>
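The algebra above is just the familiar mean of a binomial distribution; a brute-force check (my own sketch, with hypothetical stand-ins for the number of permutations and the success probability):

```python
from math import comb

# Hypothetical stand-ins for NPn (number of permutations) and P-tilde:
n, p = 40, 0.03

# E[M] computed directly from the binomial sum manipulated in the answer:
E_M = sum(i * comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1))
print(E_M, n * p)   # both 1.2 up to floating-point error
```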
|
Physics
|
|atomic-physics|physical-chemistry|spectroscopy|moment-of-inertia|vibrations|
|
Why does the rotational level spacing increase when moment of inertia decrease?
|
<p>For a rigid rotor, the energy levels are <span class="math-container">$E_J=\hbar^2 J(J+1)/2I$</span>, so the spacing between adjacent levels, <span class="math-container">$E_{J+1}-E_J=\hbar^2(J+1)/I$</span>, is inversely proportional to the moment of inertia. In quantum mechanics, when considering a particular quantum state (fixed <span class="math-container">$J$</span>), increasing <span class="math-container">$I$</span> means that the same quantized angular momentum is distributed over a larger moment of inertia, thus lowering the energy of the state. In contrast, in the classical perspective, increasing <span class="math-container">$I$</span> while keeping the angular velocity constant implies adding kinetic energy (<span class="math-container">$E=\frac{1}{2}I\omega^2$</span>), because of the effort needed to keep the larger mass distribution rotating at that speed.</p>
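A minimal numeric illustration (the moment of inertia below is a made-up value of roughly the right order for a light diatomic molecule): with <span class="math-container">$E_J=\hbar^2J(J+1)/2I$</span>, halving <span class="math-container">$I$</span> doubles every level spacing.

```python
hbar = 1.0545718e-34   # J s

def E_rot(J, I):
    """Rigid-rotor level E_J = hbar^2 J (J + 1) / (2 I)."""
    return hbar**2 * J * (J + 1) / (2 * I)

I = 1.4e-46            # kg m^2 (hypothetical, roughly CO-like)
for J in range(3):
    gap      = E_rot(J + 1, I)     - E_rot(J, I)
    gap_half = E_rot(J + 1, I / 2) - E_rot(J, I / 2)
    print(J, gap_half / gap)       # 2.0 for every J: smaller I, wider spacing
```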
|
Physics
|
|special-relativity|estimation|rocket-science|
|
How far can we go in space?
|
<p>Since your objective is to go as far as possible, rather than rendezvous with a particular destination, I will assume you do not need the spaceship to decelerate. The best way to utilise the limited energy budget is to accelerate as hard as you can until all the energy is used up, then coast for the remainder of the mission.</p> <p>Accelerating at a constant rate of <span class="math-container">$1$</span> g brings you close to light speed in less than one year. To simplify calculations I am going to assume that the acceleration period is negligible compared to the rest of the <span class="math-container">$50$</span> year mission. So let's assume that you coast at a constant top speed for <span class="math-container">$50$</span> years in the reference frame of the spaceship. What is this top speed?</p> <p>The relativistic formula for kinetic energy is</p> <p><span class="math-container">$E_k = (\gamma - 1)mc^2$</span></p> <p>Assuming <span class="math-container">$100\%$</span> efficiency converting the <span class="math-container">$10^{21}$</span> J energy budget to kinetic energy (and not making any allowance for maintaining life support systems), we find that the <span class="math-container">$\gamma$</span> value corresponding to the maximum speed for a <span class="math-container">$100$</span> kg spaceship (did you mean <span class="math-container">$100$</span> kg? That seems very small) is</p> <p><span class="math-container">$\displaystyle \gamma_{max} = \frac {10^{21}} {9 \times 10^{18}} +1 \approx 112$</span></p> <p>This is equivalent to a top speed of about <span class="math-container">$99.996 \%$</span> of the speed of light. With a <span class="math-container">$\gamma$</span> value of <span class="math-container">$112$</span>, <span class="math-container">$50$</span> years in the reference frame of the spaceship will be equivalent to <span class="math-container">$112 \times 50 = 5600$</span> years in the reference frame of Earth. 
So, to a good approximation, the spaceship will have travelled about <span class="math-container">$5600$</span> light years by the end of its <span class="math-container">$50$</span> year mission. This is about a fifth of the distance from the Earth to the centre of our galaxy.</p>
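The numbers above can be reproduced in a few lines (same assumptions as the answer: a 100 kg ship, the full <span class="math-container">$10^{21}$</span> J converted to kinetic energy, and a negligible acceleration phase):

```python
c = 2.998e8             # m/s
m, E_k = 100.0, 1e21    # kg, J

# E_k = (gamma - 1) m c^2, solved for gamma:
gamma = E_k / (m * c**2) + 1
beta  = (1 - 1 / gamma**2) ** 0.5
print(f"gamma ~ {gamma:.0f}, v ~ {beta:.5f} c")

tau = 50.0                           # years of coasting, ship frame
distance_ly = beta * gamma * tau     # light years covered in the Earth frame
print(f"distance ~ {distance_ly:.0f} light years")   # ~5600
```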
|
Physics
|
|electromagnetism|potential|vector-fields|dipole|
|
Vector Magnetic Potential and pointwise current density source
|
<blockquote> <p>The notes I'm following proceed to define the current density as: <span class="math-container">$$\mathbf{J}=\delta(x)\delta(y)\delta(z)\hat{\mathbf{a}}$$</span> which I totally agree it only exists at the origin, but my question is: what is the unit measure of that particular current density? To me it looks to be [<span class="math-container">$m^{−3}$</span>], which is not a current density. Is this legitimate in any case?</p> </blockquote> <p>You are right. Since <span class="math-container">$\mathbf{J}$</span> has unit <span class="math-container">$\text{Am}^{-2}$</span> and <span class="math-container">$\hat{\mathbf{a}}$</span> (being a unit-vector) is supposed to be unit-less, a factor <span class="math-container">$C$</span> with unit <span class="math-container">$\text{Am}$</span> is missing. So let us insert this factor. <span class="math-container">$$\mathbf{J}=\delta(x)\delta(y)\delta(z)C\hat{\mathbf{a}}$$</span></p> <p>Then a solution of <span class="math-container">$$\nabla^2\mathbf{A}+k^2\mathbf{A}=-\mu \mathbf{J}$$</span> would be <span class="math-container">$$\mathbf{A}=\mu C\hat{\mathbf{a}}\frac{e^{ikr}}{4\pi r}$$</span></p>
|
Physics
|
|cosmology|temperature|space-expansion|universe|cosmic-microwave-background|
|
Can we measure temperature in an isothermal Universe?
|
<p>Recording an experimental result generates entropy. In a perfectly isothermal universe at maximum entropy, this is impossible. Thus no experiment of <em>any</em> kind is theoretically possible in such a hypothetical situation. Nothing to see here.</p>
|
Physics
|
|quantum-mechanics|commutator|semiclassical|poisson-brackets|
|
The question about commutator $[\hat{x},\hat{p}]=i\hbar$ at $\hbar\rightarrow 0$ seemingly can't match with Poisson bracket $\{x,\,p\}=1$
|
<p>There is a systematic invertible change of language (<a href="https://en.wikipedia.org/wiki/Wigner%E2%80%93Weyl_transform" rel="nofollow noreferrer">Weyl correspondence</a>) between Hilbert space operators and phase-space q-number variables, <span class="math-container">$$ \hat A \leftrightarrow A, \qquad \hat B \leftrightarrow B,\\ {1\over i\hbar} [\hat A, \hat B] \leftrightarrow \{\{A,B\}\}=\{ A,B\}+ O(\hbar^2) $$</span> where {{•,•}} is the <a href="https://en.wikipedia.org/wiki/Moyal_bracket" rel="nofollow noreferrer">Moyal bracket</a>.</p> <p>In your case, <span class="math-container">$$ {1\over i\hbar} [\hat x, \hat p] \leftrightarrow \{\{x,p\}\}=\{ x,p\}=1, $$</span> where no limit has been taken. The subleading terms in ℏ happen to vanish identically.</p>
|
Physics
|
|newtonian-mechanics|rotational-dynamics|projectile|aerodynamics|
|
Spin velocity in table tennis
|
<p>Top/bottom spin rotates as if the ball were rolling toward/away from you, and sidespin rotates about a vertical axis as if the ball were a top. Rotation about the third axis (which extends in a line directly away from you) can be called cork or corkscrew spin. It won't affect the trajectory of the ball as much as other types of spin, but can cause an unexpected bounce.</p>
|
Physics
|
|quantum-field-theory|resource-recommendations|yang-mills|effective-field-theory|1pi-effective-action|
|
Quantum effective action for Yang-Mills theories
|
<p>Your action seems identical to the effective action in section 2.4.2 of <a href="http://www.damtp.cam.ac.uk/user/tong/gaugetheory.html" rel="nofollow noreferrer">David Tong's notes on gauge theory</a> (modulo the sign in front of <span class="math-container">$[F_{\mu \nu}, \cdotp]$</span>).</p> <p>It can be obtained using the background field method to integrate out quantum fluctuations. The first determinant corresponds to the gauge field, and the second to the ghost field. The covariant derivatives <span class="math-container">$D^\mu_A$</span> are with respect to the (classical) background field.</p> <p>His notes are excellent, I suggest you take a look. I believe the action, and ultimately also the Yang-Mills beta function, are also derived in (the slightly more convoluted) section 16.6 of Peskin & Schroeder, but your notation more closely resembles that of Tong.</p> <hr /> <p>I may be wrong, and am unable to post comments, but I hope this is of some help. Would be useful if you provided the source of your equation.</p>
|
Physics
|
|cosmology|temperature|entropy|cosmic-microwave-background|
|
Do we have to consider decoupled particles in the computation of $g_{*}(T)$?
|
<p><span class="math-container">$g_*$</span> is the effective number of relativistic degrees of freedom for the energy density. The idea of <span class="math-container">$g_*$</span> is that given the temperature <span class="math-container">$T$</span> of the photons, the total energy density in radiation is <span class="math-container">$$\rho=\frac{\pi^2}{30}g_* T^4.$$</span> Decoupled particles must be included in <span class="math-container">$g_*$</span> because they still contribute to the energy density. The same is true of <span class="math-container">$g_{*s}$</span>, the effective number of relativistic degrees of freedom for the entropy density. The idea of <span class="math-container">$g_{*s}$</span> is that if the photon temperature is <span class="math-container">$T$</span>, then the total entropy density in radiation is <span class="math-container">$$s=\frac{2\pi^2}{45}g_{*s}T^3.$$</span> That also implies all relativistic species should be accounted for.</p> <p>Thus, it is nonstandard to define a <span class="math-container">$g_{*s}$</span> that only includes some of the species, as you do when you write</p> <blockquote> <p><span class="math-container">$g_{*s}(t_{e}-\epsilon)=2+\frac{7}{8}\cdot2\cdot2$</span></p> </blockquote> <p>Nevertheless this approach works, because since the neutrinos are decoupled, the entropy <span class="math-container">$\propto sa^3\propto g_{*s} a^3 T^3$</span> of the photon-electron bath and that of the neutrinos are separately conserved.</p>
|
Physics
|
|operators|conformal-field-theory|correlation-functions|wick-theorem|
|
Radial ordering in CFT
|
<ol> <li><p>The main point is that the sum in OP's eq. (1) is only <a href="https://en.wikipedia.org/wiki/Absolute_convergence" rel="nofollow noreferrer">(absolutely) convergent</a> if <span class="math-container">$|y|>|z|$</span>, i.e. if OP's LHS <span class="math-container">$$\hat{T}(y)\hat{T}(z)~=~{\cal R}[\hat{T}(y)\hat{T}(z)]$$</span> is radially ordered.</p> </li> <li><p>(Absolute) convergence is needed in the last equality of OP's eq. (1) if we want to sum (a third derivative of) a geometric series and obtain OP's RHS.</p> </li> <li><p>Moreover, note the following consistency check: If we include a radial ordering <span class="math-container">${\cal R}$</span> on the LHS of OP's eq. (1), then both the LHS and the RHS are manifestly symmetric under <span class="math-container">$y\leftrightarrow z$</span>.</p> </li> </ol>
|
Physics
|
|quantum-field-theory|boundary-conditions|second-quantization|casimir-effect|
|
On the boundary conditions of the Casimir effect and quantization of the wave vector
|
<p>The Fourier series of a function <span class="math-container">$f(x)$</span> with periodic boundary conditions <span class="math-container">$f(x+L)=f(x)$</span> has the form <span class="math-container">$$f(x)=\sum\limits_{n \in \mathbb{Z}} a_n \, e^{2\pi i n x/L}.$$</span> On the other hand, a function <span class="math-container">$g(z)$</span> with boundary conditions <span class="math-container">$g(0)=g(d)=0$</span> has the series expansion <span class="math-container">$$g(z)=\sum\limits_{\ell \in \mathbb{N}} c_\ell \, \sin(\ell \pi z/d).$$</span> In the first case, <span class="math-container">$\exp (2\pi i n x/L)$</span> and <span class="math-container">$\exp(-2\pi i n x/L)$</span> are linear independent (thus the summation over all integers <span class="math-container">$n \in \mathbb{Z}$</span>), in the second case, you have <span class="math-container">$\sin(-\ell \pi z/d)= -\sin(\ell \pi z/d)$</span> and the summation goes only over <span class="math-container">$\ell \in \mathbb{N}=\{1,2,3,\ldots\}$</span>.</p> <p>The explicit form of the electric field of your problem is most easily obtained from the associated vector potential <span class="math-container">$\vec{A}$</span> in the Coulomb gauge, fulfilling <span class="math-container">$$ \vec{\nabla} \cdot \vec{A} =0, \qquad \square \vec{A}=0, \qquad \vec{A}=\vec{A}^\ast \tag{1} \label{1} $$</span> together with the boundary conditions <span class="math-container">$$\begin{align} \vec{A}(t,x,y,z)&=\vec{A}(t,x+L,y,z)=\vec{A}(t,x,y+L,z) \tag{2} \label{2} \\[3pt] A_{x,y}(t,x,y,0)&=A_{x,y}(t,x,y,d)=0, \tag{3} \label{3} \\[3pt] \nabla_z A_z(t,x,y,0)&=\nabla_z A_z(t,x,y,d)=0. \tag{4} \label{4} \end{align} $$</span> Eq. 
\eqref{2} corresponds to periodic boundary conditions in <span class="math-container">$x$</span>- and <span class="math-container">$y$</span>-direction, whereas \eqref{3} and \eqref{4} guarantee <span class="math-container">$\vec{E}_{||}=0$</span>, <span class="math-container">$\dot{\vec{B}}_\perp=0$</span> and <span class="math-container">$\vec{\nabla} \cdot \vec{A} =0$</span> on the surface of the plates at <span class="math-container">$z=0$</span> and <span class="math-container">$z=d$</span>.</p> <p>The general solution of this system of equations is given by <span class="math-container">$$\begin{align} A_{x,y}(t,\vec{r})&= \! \sum\limits_{\vec{n}} \sum\limits_s \mathcal{N}(\vec{n}) \, \varepsilon_{x,y}(\vec{n},s) \sin\frac{n_z \pi z}{d}\left[i e^{\frac{2\pi i}{L}(n_x x +n_y y)} e^{-i \omega(\vec{n})t}a(\vec{n},s) + \text{c.c.} \right], \tag{5} \label{5}\\[5pt] A_z(t, \vec{r}) &= \! \sum\limits_{\vec{n}} \sum\limits_s \mathcal{N}(\vec{n}) \, \varepsilon_z(\vec{n},s) \cos \frac{n_z \pi z}{d} \left[e^{\frac{2\pi i}{L}(n_x x+n_y y)} e^{-i \omega(\vec{n})t} a(\vec{n},s) +\text{c.c.} \right], \tag{6} \label{6} \end{align}$$</span> where <span class="math-container">$\vec{r}=(x,y,z)$</span>, <span class="math-container">$\vec{n}=(n_x,n_y,n_z) \in \mathbb{Z}\times \mathbb{Z} \times \mathbb{N}$</span>. Defining <span class="math-container">$$\vec{k}(\vec{n})= \left(\frac{2\pi n_x}{L}, \frac{2\pi n_y}{L},\frac{\pi n_z}{d} \right),\qquad n_{x,y} \in \mathbb{Z}, \; n_z \in \mathbb{N}, \tag{7} \label{7}$$</span> the angular frequency <span class="math-container">$\omega(\vec{n})$</span> is given by <span class="math-container">$$\omega(\vec{n})= |\vec k(\vec{n})| = \sqrt{ \left(\frac{2\pi}{L}\right)^{\! 2}(n_x^2+n_y^2)+\left(\frac{\pi }{d}\right)^2 n_z^2}. 
\tag{8} \label{8} $$</span> The two (normalized) polarization vectors <span class="math-container">$\vec{\varepsilon}(\vec{n},1)$</span>, <span class="math-container">$\vec{\varepsilon}(\vec{n},2)$</span> (corresponding to the two possible linear polarizations for a given vector <span class="math-container">$\vec{k}(\vec{n})$</span>) satisfy <span class="math-container">$$\vec{k}(\vec{n}) \cdot \vec{\varepsilon}(\vec{n},s) =0, \qquad \vec\varepsilon(\vec{n},s)\cdot \vec\varepsilon(\vec{n},s^\prime) = \delta_{s s^\prime}. \tag{9} $$</span> Finally, <span class="math-container">$\mathcal{N}(\vec{n})$</span> is a conveniently chosen normalization factor to be discussed later.</p> <p>Taking advantage of \eqref{7}, the vector potential can be written in the slightly more compact form <span class="math-container">$$\begin{align}A_{x,y}(t, \vec{r})&= \sum\limits_{\vec{k}, s} \mathcal{N}(\vec{k}) \, \varepsilon_{x,y}(\vec{k}, s) \sin(k_z z) \left[i e^{i(k_x x+k_y y)} e^{-i \omega(\vec{k}) t} a(\vec{k},s) +\text{c.c.} \right], \tag{10} \label{10} \\[5pt] A_z(t, \vec{r}) &= \sum\limits_{\vec{k},s} \mathcal{N}(\vec{k}) \, \varepsilon_z(\vec{k}, s) \cos(k_z z) \left[e^{i(k_x x+k_y y)} e^{-i \omega(\vec{k}) t} a(\vec{k},s) + \text{c.c.} \right]. \tag{11} \label{11} \end{align}$$</span> The <em>correct</em> expression for the electric field <span class="math-container">$\vec{E}= -\dot{\vec{A}}$</span> is thus given by <span class="math-container">$$\begin{align} E_{x,y}(t, \vec{r}) &=-\sum\limits_{\vec{k},s} \mathcal{N}(\vec{k})\, \omega(\vec{k})\, \varepsilon_{x,y}(\vec{k},s) \sin(k_z z) \left[ e^{i(k_x x+k_y y)} e^{-i \omega(\vec{k}) t} a(\vec{k},s) + \text{c.c.} \right], \tag{12} \label{12} \\[5pt] E_z(t, \vec{r}) &= i \sum\limits_{\vec{k},s} \mathcal{N}(\vec{k}) \, \omega(\vec{k}) \, \varepsilon_z(\vec{k},s) \cos(k_z z) \left[e^{i(k_x x+k_y y)} e^{-i \omega(\vec{k}) t} a(\vec{k},s) + \text{c.c.} \tag{13} \label{13} \right]. 
\end{align} $$</span> Note that the expression for <span class="math-container">$\vec{E}$</span> displayed in your question is apparently in conflict with the boundary conditions discussed above (in particular, <span class="math-container">$\vec{E}_{||}=0$</span> at the plates is <em>not</em> fulfilled).</p> <p>The quantization of the system is now straightforward. Choosing <span class="math-container">$\mathcal{N}(\vec{k})$</span> such that the energy <span class="math-container">$$ H= \frac{1}{2}\int\limits_0^L\! \!dx \int\limits_0^L \!\!dy \int\limits_0^d \!\!dz \,(\vec{E}^2 + \vec{B}^2) \tag{14}$$</span> takes the form <span class="math-container">$$H = \sum\limits_{\vec{k},s} \omega(\vec{k}) a^\ast(\vec{k},s) a(\vec{k},s), \tag{15}$$</span> the Fourier coefficients are promoted to operators acting in Fock space, obeying the commutation relations <span class="math-container">$$[a(\vec{k},s), a^\dagger(\vec{k}^\prime, s^\prime)]= \delta_{\vec{k} \, \vec{k}^\prime} \, \delta_{s \, s^\prime}, \qquad [a(\vec{k},s), a(\vec{k}^\prime,s^\prime)]=0, \tag{16}$$</span> describing a system of infinitely many (uncoupled) quantum harmonic oscillators.</p>
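A small numeric sketch of the mode spectrum of eq. (8), in units with <span class="math-container">$c=1$</span> (the box size and plate separation below are arbitrary sample values of my own choosing):

```python
from math import pi, sqrt

L, d = 1.0, 0.01   # sample transverse period and plate separation, d << L

def omega(nx, ny, nz):
    """Angular frequency of the (nx, ny, nz) mode, eq. (8), with c = 1."""
    return sqrt((2 * pi / L)**2 * (nx**2 + ny**2) + (pi / d)**2 * nz**2)

# For d << L the spectrum is dominated by the plate-separation term pi/d:
print(omega(0, 0, 1))             # exactly pi / d
print(omega(1, 1, 1) / (pi / d))  # barely above 1: transverse modes are cheap
```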
|
Physics
|
|electromagnetism|solid-state-physics|electrical-resistance|tensor-calculus|mathematics|
|
Writing the most general form resistivity tensor
|
<p>You need to write the vector equation <span class="math-container">$$\vec{E}=\frac{1}{ne}(\vec{j}\times\vec{B})+\frac{m}{ne^2\tau}\vec{j}$$</span> in components. This gives <span class="math-container">$$\begin{pmatrix}E_x\\E_y\\E_z\end{pmatrix} =\frac{1}{ne}\begin{pmatrix}j_yB_z-j_zB_y\\j_zB_x-j_xB_z\\j_xB_y-j_yB_x\end{pmatrix} +\frac{m}{ne^2\tau}\begin{pmatrix}j_x\\j_y\\j_z\end{pmatrix}$$</span></p> <p>Then you need to further rewrite this until it has the form <span class="math-container">$$\begin{pmatrix}E_x\\E_y\\E_z\end{pmatrix} =\begin{pmatrix} \rho_{xx}&\rho_{xy}&\rho_{xz}\\ \rho_{yx}&\rho_{yy}&\rho_{yz}\\ \rho_{zx}&\rho_{zy}&\rho_{zz}\\ \end{pmatrix} \begin{pmatrix}j_x\\j_y\\j_z\end{pmatrix}$$</span> I leave the missing steps to find the resistivity matrix <span class="math-container">$\rho$</span> to you as an exercise.</p>
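If you want to check your candidate <span class="math-container">$\rho$</span> without spoiling the exercise, a short sympy sketch can verify whether it reproduces the component equations above (replace `rho_guess` with your own matrix):

```python
# Symbolic checker for the resistivity-tensor exercise (sympy assumed available).
import sympy as sp

n, e, m, tau = sp.symbols('n e m tau', positive=True)
jx, jy, jz, Bx, By, Bz = sp.symbols('j_x j_y j_z B_x B_y B_z')

j = sp.Matrix([jx, jy, jz])
B = sp.Matrix([Bx, By, Bz])

# Right-hand side of the vector equation, written out in components:
E = (1/(n*e)) * j.cross(B) + (m/(n*e**2*tau)) * j

def check(rho):
    """True iff E == rho * j holds identically in j and B."""
    return sp.simplify(rho * j - E) == sp.zeros(3, 1)

rho_guess = sp.eye(3)        # placeholder: substitute your own answer here
print(check(rho_guess))      # False until you find the right rho
```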
|
Physics
|
|special-relativity|spacetime|metric-tensor|
|
Help with the Minkowski space-time metric
|
<p>Perhaps you are unaware of the use of <a href="https://en.wikipedia.org/wiki/Einstein_notation" rel="noreferrer">Einstein notation</a> for brevity. In this case, we wish to perform a <a href="https://en.wikipedia.org/wiki/Tensor_contraction" rel="noreferrer">scalar contraction</a>: <span class="math-container">$$\sum_{\alpha, \beta}g_{\alpha \beta} \ dx^\alpha dx^\beta = \sum_{\alpha} \sum_{\beta}g_{\alpha \beta} \ dx^\alpha dx^\beta $$</span></p> <p>Based on the result you want, the metric signature is <span class="math-container">$(+, -, -, -)$</span> ; so the corresponding metric tensor is: <span class="math-container">$$g = \begin{pmatrix}1 & 0 & 0 & 0\\0 & -1 & 0 & 0\\0 & 0 & -1 & 0\\0 & 0 & 0 & -1\end{pmatrix}$$</span></p> <p>The off diagonal terms are all zero, so <span class="math-container">$g_{ij} = 0 \ \forall \ i \ne j$</span> <span class="math-container">$$\sum_{\alpha} \sum_{\beta}g_{\alpha \beta} \ dx^\alpha dx^\beta = g_{00} \ dx^0 dx^0 + g_{01}(...) + g_{02}(...) + g_{03}(...) \\+g_{10}(...) + g_{11} \ dx^1 dx^ 1 + g_{12}(...) + g_{13}(...) \ + \ [...]$$</span></p> <p>It is clear that this reduces to: <span class="math-container">$$\sum_{\alpha} \sum_{\beta}g_{\alpha \beta} \ dx^\alpha dx^\beta = g_{00} \ dx^0 dx^0 + g_{11} \ dx^1 dx^1 + g_{22} \ dx^2 dx^2 + g_{33} \ dx^3 dx^3$$</span></p> <p>Substitute <span class="math-container">$g_{00} = 1,\ g_{11} = -1,\ g_{22} = -1,\ g_{33} = -1$</span> and with <span class="math-container">$dx^0 = cdt,\ dx^1 = dx,\ dx^2 = dy,\ dx^3 = dz$</span>, you get your desired result: <span class="math-container">$$\boxed{\sum_{\alpha} \sum_{\beta}g_{\alpha \beta} \ dx^\alpha dx^\beta = c^2dt^2 - dr^2}$$</span></p> <p>Hope this helps.</p>
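The double sum is also easy to check numerically; here is a small numpy sketch with arbitrary displacements:

```python
# Numeric check of the contraction g_{ab} dx^a dx^b for the (+,-,-,-) metric.
import numpy as np

c = 3.0e8                                      # m/s
g = np.diag([1.0, -1.0, -1.0, -1.0])           # Minkowski metric tensor

dt, dx, dy, dz = 1.0e-6, 100.0, 200.0, 50.0    # arbitrary displacements
dX = np.array([c*dt, dx, dy, dz])              # dx^0 = c dt

ds2 = np.einsum('a,ab,b->', dX, g, dX)         # sum over both indices
print(ds2)                                     # equals c^2 dt^2 - (dx^2 + dy^2 + dz^2)
```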
|
Physics
|
|electrostatics|electricity|electric-current|power|textbook-erratum|
|
Why does the power loss in transmission cable increase when resistance is increased?
|
<p>The power you send in on the source side is <span class="math-container">$P_{in}=U \cdot I $</span>. The current that flows through the line causes a voltage drop <span class="math-container">$U_{line}=R_{line} \cdot I$</span>. Thus the power you get out is: <span class="math-container">$P_{out}=U_{out} \cdot I = (U-U_{line}) \cdot I = U \cdot I - U_{line} \cdot I = P_{in} - I^2 R_{line}$</span></p> <p>So your power budget:</p> <ul> <li>You send in <span class="math-container">$P_{in} = U \cdot I $</span></li> <li>Your loss is <span class="math-container">$P_{loss} = I^2 R $</span></li> <li>You get out: <span class="math-container">$P_{out}=P_{in}-P_{loss}$</span></li> </ul> <p>One easy way to make the loss smaller is to use a higher voltage. With a higher voltage you need a lower current for the same transmitted power. And as the loss is proportional to <span class="math-container">$I^2$</span>, it is reduced. But as the voltage gets higher you need to make sure the wires are spaced far apart, or else you get sparks over your line. This is why high-power transmission lines look the way they do.</p>
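A small numerical illustration of why higher voltage cuts the loss (illustrative numbers, not from the question):

```python
# Same input power, two line voltages: the loss scales as I^2 R_line.
def line_loss(P_in, U, R_line):
    I = P_in / U             # current needed to carry P_in at voltage U
    return I**2 * R_line     # P_loss = I^2 * R_line

P_in = 1.0e6                 # 1 MW sent in
R = 10.0                     # ohms of line resistance

print(line_loss(P_in, 10_000, R))    # 10 kV  -> I = 100 A -> 100 kW lost
print(line_loss(P_in, 100_000, R))   # 100 kV -> I = 10 A  ->   1 kW lost
```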
|
Physics
|
|general-relativity|potential|centripetal-force|angular-velocity|kerr-metric|
|
The Kerr metric applied to a solid rotating body
|
<p>The Kerr metric strictly only applies to the spacetime of a black hole with spin.</p> <p>There are alternatives for the spacetime around axisymmetric rotating solid bodies. These are <em>approximate</em> solutions to the field equations based on a multipole expansion, parameterised with the mass and angular momentum (as per the Kerr metric) but then including higher order terms attributable to the mass quadrupole, spin octupole and mass hexadecapole (e.g., <a href="https://academic.oup.com/mnras/article/358/3/923/1028039" rel="nofollow noreferrer">Berti et al. 2005</a>; <a href="https://academic.oup.com/mnras/article/466/4/4381/2869876" rel="nofollow noreferrer">Pappas 2017</a>).</p> <p>Such metric approximations apply <em>outside</em> the rotating bodies. The rotating bodies are not spherically symmetric - hence the mass quadrupole term etc.</p> <p>There is also an approximate approach that provides a metric both inside and outside a rotating body that was developed by <a href="https://ui.adsabs.harvard.edu/abs/1968ApJ...153..807H/abstract" rel="nofollow noreferrer">Hartle & Thorne (1968)</a>. This however does still assume that the body is spherically symmetric.</p> <p>The metrics are often expressed in a more complex coordinate system than the simple Droste coordinates applied to Schwarzschild spacetime - an example is given in the Pappas (2017) paper.</p> <p>On the last part of your question - the concept of an effective potential can be used in any spacetime, including the non-rotating Schwarzschild spacetime.</p>
|
Physics
|
|newtonian-mechanics|
|
Forces involved when an elevator decelerates
|
<p>You appear to be assuming that velocity should be in the same direction as acceleration. But consider the case of an object tossed up from the surface of the earth. The only force acting on it (neglecting air resistance) is the downward force of gravity. Yet the object continues to move upward with gradually decreasing speed until it stops. In this case, as with the elevator, there is an initial velocity which must be overcome before the object stops.</p> <p>If instead an object is released from a height with zero initial velocity, then both the force of gravity and the velocity of the object would be in the same direction.</p>
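A tiny numerical sketch of the tossed-object example (taking <span class="math-container">$g = 9.8\ \mathrm{m/s^2}$</span> and an arbitrary launch speed):

```python
# The acceleration is -g the whole time, yet the velocity stays positive
# (upward) until t = v0/g, when the object momentarily stops.
g, v0 = 9.8, 19.6

for t in [0.0, 1.0, 2.0, 3.0]:
    v = v0 - g*t          # velocity: +19.6, +9.8, 0.0, -9.8
    a = -g                # acceleration: always -9.8, regardless of v's sign
    print(t, v, a)
```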
|
Physics
|
|newtonian-mechanics|forces|pressure|
|
Do you need less force to pick up the same mass if it has a greater surface area?
|
<p>You will need the same force to pick up the object regardless of its surface area, because you are lifting a mass in a force field, and mass does not depend on any geometrical properties the object may have. Pressure, however, is defined as <span class="math-container">$P=\frac{F}{S}$</span>, and therefore you would need twice as much pressure to lift an object with half the surface area of the other. If you want to expand on this subject, I recommend Pearson's University Physics Volume I, which covers classical (Newtonian) mechanics with some mention of pressure, if I remember correctly.</p>
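A small numeric illustration (mass and areas are arbitrary):

```python
# Same weight, two contact areas: the lifting force is unchanged,
# only the pressure P = F/S differs (g taken as 9.8 m/s^2).
m, g = 10.0, 9.8
F = m*g                            # ~98 N either way

S_small, S_large = 0.01, 0.02      # contact areas in m^2
P_small, P_large = F/S_small, F/S_large
print(F, P_small, P_large)         # same F; half the area -> twice the pressure
```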
|
Physics
|
|quantum-mechanics|angular-momentum|representation-theory|
|
$6j$-symbol example in Quantum Mechanics
|
<p>It is a shame that most articles about <span class="math-container">$6j$</span>-symbols (including Wikipedia) fail to mention for what they are actually used. They are used to transform between different coupling schemes in systems with <span class="math-container">$3$</span> coupled angular momenta.</p> <p>Quoted from <a href="https://beckassets.blob.core.windows.net/product/readingsample/9874588/9783642260605_excerpt_001.pdf" rel="noreferrer">Introduction to the Graphical Theory of Angular Momentum</a> (chapter 2.2):</p> <blockquote> <p>If the theory of angular momentum is extended to consider more than two angular momenta one encounters several possible coupling schemes. For three angular momenta <span class="math-container">$j_1$</span>, <span class="math-container">$j_2$</span> and <span class="math-container">$j_3$</span> the coupling into a state of total angular momentum <span class="math-container">$J$</span> could proceed via two routes. As a first step one could couple <span class="math-container">$j_1$</span> and <span class="math-container">$j_2$</span> to <span class="math-container">$j_{12}$</span> and then <span class="math-container">$j_{12}$</span> with <span class="math-container">$j_3$</span> to obtain the total <span class="math-container">$J$</span>. Alternatively, one could first couple <span class="math-container">$j_2$</span> and <span class="math-container">$j_3$</span> to <span class="math-container">$j_{23}$</span> and then <span class="math-container">$j_1$</span> with <span class="math-container">$j_{23}$</span> for <span class="math-container">$J$</span>. The states belonging to the two coupling schemes can be transformed into each other and the elements of the transformation matrix are proportional to a 6j-symbol. 
A symbolic relation for these couplings is <span class="math-container">$$\left< [(j_1,j_2)j_{12},j_3]J \Big| [j_1,(j_2,j_3)j_{23}]J \right> \propto \begin{Bmatrix} j_3 & J & j_{12} \\ j_1 & j_2 & j_{23} \end{Bmatrix} $$</span></p> </blockquote>
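If you want to evaluate such <span class="math-container">$6j$</span>-symbols in practice, sympy provides them directly (`sympy.physics.wigner.wigner_6j`, taking the six arguments row by row); as a sanity check, the special case with one zero argument has a simple closed form:

```python
# Evaluating 6j-symbols with sympy (a sketch; sympy assumed available).
from sympy import Rational
from sympy.physics.wigner import wigner_6j

# Special case with a zero argument, closed form:
#   {j1 j2 j3; 0 j3 j2} = (-1)^(j1+j2+j3) / sqrt((2*j2+1)*(2*j3+1))
val = wigner_6j(1, 1, 1, 0, 1, 1)
print(val)                        # -1/3

# Half-integer angular momenta are passed as exact rationals:
half = Rational(1, 2)
print(wigner_6j(half, half, 1, half, half, 1))
```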
|
Physics
|
|homework-and-exercises|thermodynamics|thermal-radiation|thermal-conduction|
|
Thermal radiation in hollow sphere
|
<p>I think the easiest way to look at this is to imagine a vacuum in the cavity. Heat radiated from one side of the inner surface is almost immediately reabsorbed on the opposite side of the cavity, so there is an equilibrium inside the cavity and no net loss internally.</p> <p>Another way of looking at it is that the question is about the time for the whole sphere to cool down. If any heat is lost to the interior, that heat is still inside the sphere, and so there is no net loss from the sphere as a whole. That heat can only escape by conduction through the shell and radiation to the environment outside the shell.</p> <p>The importance of the thickness of the shell is that the heat capacity of an object is proportional to the volume of the material, and you need the shell thickness to calculate the total volume of the shell material and hence the total amount of heat energy that has to be radiated away.</p> <blockquote> <p>I am confused since you mention vacuum, the temperature in a vacuum must be zero by definition. – QuantumQuipster</p> </blockquote> <p>Re: vacuum temperature. The vacuum of space is considered to be non-zero. It is a few degrees above absolute zero due to the Cosmic Microwave Background Radiation (CMBR). Radiation inside the cavity (from the interior surface) is energy and is counted as part of the temperature of the vacuum. As I mentioned before, any radiation emitted internally is quickly reabsorbed, but there is a finite period between being radiated and reabsorbed, and while the radiation is in transit across the cavity it contributes to the temperature of the cavity vacuum.</p> <p>Better still: let's say <span class="math-container">$X$</span> joules of energy is radiated into the cavity every second. In equilibrium the same amount of radiation is reabsorbed by the shell every second. Disregard the radiation entering the cavity for now. Every second, <span class="math-container">$X$</span> joules exit the boundary or effective surface of the cavity.
Now, the temperature of a material is related to the energy radiated from its surface per unit time per unit surface area (<span class="math-container">$M$</span>). This can be seen from the <a href="https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law" rel="nofollow noreferrer">Stefan-Boltzmann law</a>: <span class="math-container">$$M = \sigma T^4 $$</span></p> <p><span class="math-container">$$ T = \left( \frac{M}{\sigma} \right) ^{1/4}$$</span></p> <p>In our case the vacuum is radiating <span class="math-container">$X$</span> joules per second, so it has a non-zero temperature.</p> <blockquote> <p>I have been asked to calculate the time taken for a <strong>highly conducting</strong> hollow sphere to cool down from a certain temperature say <span class="math-container">$\theta_1$</span> to a temperature <span class="math-container">$\theta_2$</span> (<span class="math-container">$\theta_1, \theta_2 \gg \theta_s$</span>) where <span class="math-container">$\theta_s$</span> is the surrounding temperature.</p> </blockquote> <p>Be aware that if <span class="math-container">$\theta_2 > \theta_s$</span> the situation will still be dynamic, and the internal temperature will differ from the external surface temperature when <span class="math-container">$\theta_2$</span> is reached. This means the heat lost by the sphere will not simply be the heat capacity of the sphere multiplied by the change in its external temperature.</p> <blockquote> <p>Could you please explain how filling the inside with a specific material affects thermal conductivity in a case? Thank you. – QuantumQuipster</p> </blockquote> <p>This can get complicated and you will see why below. The good news is that thermal properties like <a href="https://en.wikipedia.org/wiki/Thermal_conductivity_and_resistivity" rel="nofollow noreferrer">conductivity</a>, resistivity, and resistance are analogous to their electrical counterparts and have similar mathematical properties.
For example (treating voltage as temperature), thermal resistance in series can simply be added like electrical resistance in series.</p> <p>The bad news is that resistance is a function of cross-sectional area, and in the case of heat conducting from the centre of a solid sphere to the exterior, the cross-sectional area is continually expanding, which will involve some probably complicated integration. This sub-case of the thermal conductivity of a sphere composed of shells of different material deserves its own post, because it's not that trivial.</p> <p>For a vacuum-filled cavity, the cavity temperature is pretty much the same as that of the interior surface of the shell at any instant, and you can pretty much ignore the cavity in your calculations ;-)</p>
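As a rough numerical sketch of the cooling-time calculation itself (radiation from the outer surface only, surroundings neglected since <span class="math-container">$\theta_1,\theta_2 \gg \theta_s$</span>; all material numbers below are made up for illustration):

```python
# Cooling time of a highly conducting thin shell radiating to cold surroundings.
import math

sigma = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
eps   = 1.0            # emissivity
R, d  = 0.1, 0.005     # outer radius and shell thickness, m
rho_m = 8000.0         # shell material density, kg/m^3
c_p   = 450.0          # specific heat, J/(kg K)

A = 4*math.pi*R**2                       # radiating surface
V = 4/3*math.pi*(R**3 - (R - d)**3)      # shell volume (this is where d enters)
C = rho_m * V * c_p                      # total heat capacity

T1, T2 = 800.0, 400.0                    # cool from T1 to T2 (both >> surroundings)

# Closed form of  C dT/dt = -eps*sigma*A*T^4 :
t_exact = C/(3*eps*sigma*A) * (1/T2**3 - 1/T1**3)

# Cross-check with a crude Euler integration:
T, t, dt = T1, 0.0, 1.0
while T > T2:
    T += -eps*sigma*A*T**4 / C * dt
    t += dt

print(t_exact, t)   # the two estimates agree to within a few percent
```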
|
Physics
|
|newtonian-mechanics|forces|friction|
|
Friction behaviour for a mass in contact with a surface
|
<p>In the Coulomb model of <a href="https://en.wikipedia.org/wiki/Friction" rel="nofollow noreferrer">friction</a>, the maximum static friction force is proportional to the applied load <span class="math-container">$N$</span> and independent of the surface area in contact. Thus</p> <p><span class="math-container">$F_{friction} \le \mu N$</span></p> <p>where <span class="math-container">$\mu$</span> is the coefficient of friction.</p> <p>The Coulomb model is an empirical model that holds in many circumstances but not all. One circumstance where it does not hold is tyres (especially tyres on drag cars) where adhesion between the tyre and the road surface means that the maximum static friction force becomes dependent on contact area.</p> <p>The study of friction between surfaces and ways to reduce or increase it as required is called <a href="https://en.wikipedia.org/wiki/Tribology" rel="nofollow noreferrer">tribology</a>.</p>
|
Physics
|
|newtonian-mechanics|forces|waves|string|stress-strain|
|
How to add a transverse force to a point on a simple wave equation?
|
<p>Well, the way to do it is to work with the differential equation, rather than a finite difference approximation. The thing that happens with a point loading like this is that the slope changes discontinuously at the point of loading: <span class="math-container">$$(y_x^+-y_x^-)T=F$$</span> This change in slope occurs at the point where the force <span class="math-container">$F$</span> is applied. If you have a nodal point at the location of force application, then only the finite difference equation at the point of application is affected.</p>
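To see the slope discontinuity numerically, here is a small sketch that solves the static point-loaded string on a grid (illustrative numbers of my own choosing) and checks that the slope jump at the load node has magnitude <span class="math-container">$F/T$</span>:

```python
# Static string  T y'' = -F delta(x - x0)  with fixed ends, solved on a grid.
import numpy as np

T_tension = 5.0          # string tension, N
F = 2.0                  # transverse point force, N
L, N = 1.0, 401          # string length and grid points
h = L/(N - 1)
j0 = N//2                # load applied at the middle node

# Tridiagonal system for T*(y[i+1]-2y[i]+y[i-1])/h^2 = -f_i, fixed ends y=0:
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = A[-1, -1] = 1.0
for i in range(1, N-1):
    A[i, i-1], A[i, i], A[i, i+1] = 1.0, -2.0, 1.0
b[j0] = -F*h/T_tension   # point load spread over one cell: f_j0 = F/h

y = np.linalg.solve(A, b)

slope_left  = (y[j0] - y[j0-1]) / h
slope_right = (y[j0+1] - y[j0]) / h
print(slope_right - slope_left)   # magnitude F/T; kink only at the loaded node
```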
|
Physics
|
|electromagnetic-radiation|visible-light|refraction|frequency|wavelength|
|
Why color depends on frequency and not on wavelength?
|
<p>TLDR: The perceived color of monochromatic light depends on the frequency of the light at the retina. The frequency of the light at the retina is equal to the frequency of light in a vacuum or in any other medium. The wavelength of light in the retina is not equal to its wavelength in other media. So it's more convenient to discuss in terms of frequency.</p> <p>Very Important Fact: The <strong>frequency</strong> of light is preserved as it travels through different (linear) media. The wavelength is not. This means that, in situations where light is travelling through different media (air, water, lenses, corneas, fluid in the vitreous body in the eye), the frequency is the same throughout, but the wavelength is changing.</p> <blockquote> <ol> <li>Why the colour of light depends on the frequency only and not on the wavelength?</li> </ol> </blockquote> <p>See my answer to 2.</p> <blockquote> <ol start="2"> <li>Are we biologically designed to perceive only frequency and not wavelength? If yes, then why is that?</li> </ol> </blockquote> <p>Yes, our eyes detect color based on the light that hits the photoreceptors (rods and cones) in the retina of our eyes. The light can be characterized by either its frequency or its wavelength in the eye. <strong>BUT</strong> because of the important fact above, it is more convenient to specify the light in terms of its frequency rather than its wavelength. This way, if we measure the frequency of light outside of the eye we know what frequency it will be inside the eye and thus what color will be perceived. If we measured wavelength we would have to do conversions to figure out what the wavelength would be in the eye.</p> <blockquote> <ol start="3"> <li>Why are then colours not defined only on the basis of frequencies? Why are they defined on the basis of wavelengths when wavelength doesn't even matter on the change of media?</li> </ol> </blockquote> <p>Color can be defined either in terms of wavelengths or frequencies.
In vacuum the relationship between wavelength and frequency is unambiguous, it is the speed of light <span class="math-container">$c$</span>. Here is a chart showing different colors and their corresponding frequency, wavelength (in vacuum) and energy: <a href="https://i.stack.imgur.com/4xQR3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4xQR3.png" alt="enter image description here" /></a> If you see a color specified in terms of wavelength then you should understand that that is probably the wavelength <strong>in vacuum</strong> and can be converted to a frequency using <span class="math-container">$c$</span>.</p>
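A small numeric illustration of the conversion (the refractive index value below is my own rough assumption):

```python
# Converting a vacuum wavelength to the medium-independent frequency.
c = 2.998e8               # speed of light in vacuum, m/s

lam_vac = 650e-9          # "red light", quoted as a wavelength in vacuum
f = c / lam_vac
print(f)                  # ~4.61e14 Hz; this is what stays fixed in any medium

# Inside a medium of refractive index n the wavelength shrinks to lam_vac/n
# while f is unchanged (n ~ 1.34 for the vitreous humour, an assumed value):
n = 1.34
lam_medium = lam_vac / n
print(lam_medium)         # ~4.85e-7 m, yet the perceived color is still red
```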
|
Physics
|
|lagrangian-formalism|gauge-theory|dimensional-analysis|brst|ghosts|
|
Mass dimension of ghost Lagrangian in BRST quantization
|
<p>In <span class="math-container">$d$</span> spacetime dimensions, the <a href="https://www.google.com/search?as_epq=mass+dimension" rel="nofollow noreferrer">mass dimensions</a> of the gauge field <span class="math-container">$A_{\mu}$</span>, the <a href="https://en.wikipedia.org/wiki/Faddeev%E2%80%93Popov_ghost" rel="nofollow noreferrer">Faddeev-Popov (FP) ghost</a> <span class="math-container">$c$</span>, the Faddeev-Popov (FP) antighost <span class="math-container">$\bar{c}$</span>, and the Lautrup-Nakanishi (LN) auxiliary field <span class="math-container">$B$</span>, are <span class="math-container">$$ [A_{\mu}]~=~\frac{d}{2}-1, \quad [c]~=~\frac{d}{2}-2, \quad \quad [\bar{c}]~=~\frac{d}{2}~=~[B],$$</span> respectively. The Lagrangian density has mass dimension <span class="math-container">$[{\cal L}]=d$</span>.</p> <p>Note that (despite the notation) the <span class="math-container">$c$</span> and the <span class="math-container">$\bar{c}$</span> are <em>independent real</em> Grassmann-odd fields. (The bar on <span class="math-container">$\bar{c}$</span> indicates negative ghost number; not complex conjugation.)</p>
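For reference, these values follow from requiring each kinetic term to have dimension <span class="math-container">$d$</span>; a sketch of the counting, assuming a Lorenz-type gauge-fixing term <span class="math-container">$B\,\partial^\mu A_\mu$</span> and the standard ghost kinetic term <span class="math-container">$\partial^\mu\bar{c}\,\partial_\mu c$</span>:

```latex
% Dimension counting in natural units, with [S]=0 so that [\mathcal{L}]=d:
\begin{align}
[F_{\mu\nu}F^{\mu\nu}] = d
  &\;\Rightarrow\; 2\bigl([A_\mu]+1\bigr) = d
  \;\Rightarrow\; [A_\mu] = \tfrac{d}{2}-1, \\
[B\,\partial^\mu A_\mu] = d
  &\;\Rightarrow\; [B] = d-1-[A_\mu] = \tfrac{d}{2}, \\
[\partial^\mu\bar{c}\,\partial_\mu c] = d
  &\;\Rightarrow\; [\bar{c}]+[c] = d-2
  \;\Rightarrow\; [c] = \tfrac{d}{2}-2 \quad\text{for the choice } [\bar{c}] = \tfrac{d}{2}.
\end{align}
```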
|
Physics
|
|electromagnetic-radiation|atomic-physics|material-science|spectroscopy|
|
Why is red the brightest in the emission spectra of Hydrogen Gas?
|
<p>Brightness does not correspond to the energy; it corresponds to the transition that happens most often (producing the most photons), which will often be the one between the lowest energy levels. In fact, as your graph shows, the brightness (a better word is intensity) of each peak is typically in inverse proportion to the energy of the transition. Higher-energy transitions occur less often.</p> <p>The energy gap of the transition corresponds to the color (frequency) of the photon emitted.</p>
|
Physics
|
|black-holes|cosmological-inflation|tidal-effect|
|
Black Hole Ripped Apart
|
<p>First, a black hole between two black holes has two places where two black holes are near each other. This is much like two black holes merging. The two do not rip each other apart.</p> <p>You have probably seen videos showing two black holes circling each other until - blip - there is one black hole. If not, here is one from CalTech. <a href="https://www.youtube.com/watch?v=I_88S8DWbcU" rel="nofollow noreferrer">Two Black Holes Merge into One</a></p> <p>Here is a numerical simulation from CalTech that shows the final moment of the first merger detected in 2019. You can see the shape is distorted, but not ripped apart. <a href="https://www.ligo.caltech.edu/video/ligo20200420v1" rel="nofollow noreferrer">GW190412: Binary Black Hole Merger </a></p> <p><a href="https://i.stack.imgur.com/qSKXv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qSKXv.png" alt="enter image description here" /></a></p>
|
Physics
|
|quantum-mechanics|hilbert-space|wavefunction|
|
Expansion of a wavefunction of a two-particle system in one dimension in an arbitrary basis without operations associated with the tensor product
|
<p>If we fix <span class="math-container">$x_1 = \bar{x}_1$</span>, the function <span class="math-container">$\psi(\bar{x}_1, x_2)$</span> describes a valid state of the second particle, so can be decomposed with respect to the eigenbasis <span class="math-container">$\psi_{\omega_2}(x_2)$</span>. The coefficients in this decomposition depend on the value <span class="math-container">$\bar{x}_1$</span>, and denoting these coefficients by <span class="math-container">$C_{\omega_2}(\bar{x}_1)$</span> we get (1).</p>
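If it helps to see this concretely, here is a small numpy sketch on a discrete grid, with a discrete Fourier basis standing in for the eigenbasis <span class="math-container">$\psi_{\omega_2}(x_2)$</span> (toy numbers of my own choosing):

```python
# For each fixed x1, expand psi(x1, .) in an orthonormal basis and reconstruct.
import numpy as np

N = 64
x2 = np.arange(N)
# Orthonormal basis functions phi_k(x2) (rows), here plane waves:
phi = np.exp(2j*np.pi*np.outer(np.arange(N), x2)/N) / np.sqrt(N)

# Some (unnormalized) two-particle amplitude psi(x1, x2) on a grid:
x1 = np.arange(N)
psi = np.exp(-0.01*(x1[:, None] - 30)**2 - 0.02*(x2[None, :] - 20)**2)

# Coefficients C_k(x1) = <phi_k, psi(x1, .)> for every x1 at once:
C = psi @ phi.conj().T            # shape (N_x1, N_k)

# Reconstruction psi(x1, x2) = sum_k C_k(x1) phi_k(x2):
psi_rec = C @ phi
print(np.max(np.abs(psi_rec - psi)))   # ~0 (machine precision)
```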
|
Physics
|
|quantum-mechanics|operators|harmonic-oscillator|hamiltonian|perturbation-theory|
|
The time-derivative of the Hamiltonian for a 1D harmonic potential
|
<p>In the Schrödinger picture (which seems like what you're working with), <span class="math-container">$\hat{x}$</span> and <span class="math-container">$\hat{p}$</span> don't have any time dependence. This means that the only time dependence of <span class="math-container">$\hat{H}$</span> comes from the explicit <span class="math-container">$t$</span>-dependence in the form that you've written, so <span class="math-container">$$ \frac{d\hat{H}}{dt} = \frac{\partial\hat{H}}{\partial t} = - m \dot{a}(t) \omega^2 (\hat{x} - a(t)). $$</span></p>
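A quick sympy cross-check of that partial derivative (treating <span class="math-container">$\hat{x}$</span> and <span class="math-container">$\hat{p}$</span> as time-independent symbols, as in the Schrödinger picture):

```python
# dH/dt for H = p^2/2m + (1/2) m w^2 (x - a(t))^2, with x and p t-independent.
import sympy as sp

t = sp.symbols('t')
m, omega, x, p = sp.symbols('m omega x p')
a = sp.Function('a')(t)

H = p**2/(2*m) + sp.Rational(1, 2)*m*omega**2*(x - a)**2
dH = sp.diff(H, t)
print(sp.simplify(dH + m*omega**2*(x - a)*sp.diff(a, t)))   # 0
```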
|
Physics
|
|statistical-mechanics|partition-function|
|
Is the average total energy of two objects the sum of their individual average energies?
|
<p>You choose the parameters of each system <em>separately</em>. By writing <span class="math-container">$E_1(s) + E_2(s)$</span>, you're neglecting the states where the <span class="math-container">$s$</span>'s aren't the same. That is, there is a state specified by <span class="math-container">$(s_1,s_2)$</span> whose energy is <span class="math-container">$E_1(s_1) + E_2(s_2)$</span>. Once you've realized that, then you have a double sum <span class="math-container">$\sum_{s_1}\sum_{s_2}$</span>, and from there it's straightforward to show that the result is the sum of the averages.</p> <p>All of this assumes that the two systems are <em>independent</em>, which allows you to specify the states of each one separately and write the combined state as <span class="math-container">$(s_1,s_2)$</span>.</p> <hr /> <p>In addition, <span class="math-container">$\beta_1 = \beta_2$</span> by assumption: they are assumed to be in thermal equilibrium so that the temperatures are the same. Otherwise, it's unclear that you can even <em>define</em> an average energy for the entire system.</p>
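Here is a small numpy sketch of that double sum for two toy level sets (arbitrary energies, same <span class="math-container">$\beta$</span> for both systems):

```python
# Check that <E1+E2> over the product ensemble equals <E1> + <E2>.
import numpy as np

rng = np.random.default_rng(0)
E1 = rng.uniform(0, 5, size=7)     # levels of system 1
E2 = rng.uniform(0, 5, size=9)     # levels of system 2
beta = 0.7

def avg(E):
    w = np.exp(-beta*E)
    return (E*w).sum() / w.sum()

# Double sum over combined states (s1, s2) with energy E1[s1] + E2[s2]:
Etot = E1[:, None] + E2[None, :]
w = np.exp(-beta*Etot)
avg_tot = (Etot*w).sum() / w.sum()

print(avg_tot, avg(E1) + avg(E2))  # equal (to rounding)
```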
|
Physics
|
|classical-mechanics|lagrangian-formalism|coordinate-systems|differentiation|calculus|
|
Derivation of the Lagrange equation in classical mechanics
|
<p>Applying the derivative (don't forget the inner derivative): <span class="math-container">$$\frac{\partial}{\partial\dot{q}_{j}}\left(\frac{1}{2}m_{i}v_{i}^{2}\right)=\frac{1}{2}m_{i}\frac{\partial}{\partial\dot{q}_{j}}v_{i}^{2}=m_{i}\boldsymbol{v}_{i}\cdot\frac{\partial\boldsymbol{v}_{i}}{\partial\dot{q}_{j}}$$</span></p>
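As a quick cross-check of the inner-derivative step, here is a small sympy sketch for the one-dimensional special case <span class="math-container">$v = \dot{q}\, g(q)$</span> (my own illustrative choice, not from the derivation):

```python
# Verify d/d(qdot) of (1/2) m v^2 equals m v * dv/d(qdot) for v = qdot*g(q).
import sympy as sp

m, q, qdot = sp.symbols('m q qdot')
g = sp.Function('g')(q)
v = qdot * g

lhs = sp.diff(sp.Rational(1, 2)*m*v**2, qdot)
rhs = m*v*sp.diff(v, qdot)
print(sp.simplify(lhs - rhs))   # 0
```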
|
Physics
|
|newtonian-mechanics|
|
Why applying the second law of motion gives me the wrong value of tangential acceleration in vertical circular motion?
|
<p>The problem is in your sketch.</p> <p>There are additional internal shear forces in this object (that is why, when you try to bend it, it remains straight). Ropes too will have some shear, though they behave differently. Sometimes you get away with ignoring it, other times you don't.</p> <p>This is what you might expect (note that for simplicity I omitted gravity). <span class="math-container">$V$</span> is the shear. <span class="math-container">$d(\cdot)$</span> is some kind of differential change.</p> <p><a href="https://i.stack.imgur.com/RHlpO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RHlpO.png" alt="enter image description here" /></a></p>
|
Physics
|
|thermodynamics|temperature|thermal-radiation|thermal-conduction|
|
Can people feel the low heat radiation from very cold surfaces?
|
<p>Regarding feeling the coldness: Sure, humans feel when their skin emits more IR radiation than it receives. You can test this easily by opening a freezer and standing some way away from it (avoiding the cold air itself).</p> <p>For the follow-up question, we can calculate it:</p> <p>Assuming the person is naked, their outer surface is about 33°C = 306 K. If they are standing up, the surface area is roughly 1.8 m². The total emitted <a href="https://en.wikipedia.org/wiki/Black-body_radiation#Human-body_emission" rel="nofollow noreferrer">black body radiation</a> can be calculated as:</p> <p><span class="math-container">$$P_\text{net} = A \sigma \varepsilon \left( T^4 - T_0^4 \right)$$</span></p> <p>Where <span class="math-container">$\sigma$</span> is the Stefan–Boltzmann constant and <span class="math-container">$\varepsilon$</span> is the emissivity, which is close to 1. With <span class="math-container">$T$</span> = 306 K and <span class="math-container">$T_0$</span> = 0 K, this gives 895 W. The human body internally produces about 200 W of heat in mild exercise or when shivering. This gives a net transfer of 700 W.</p> <p>The cooling effect of the emitted radiation is balanced by the heating effect of the air once skin temperature drops below 25°C. Based on a <a href="https://www.engineeringtoolbox.com/convective-heat-transfer-d_430.html" rel="nofollow noreferrer">rough value of convection in otherwise still air</a> of 20 W/m²K, the balance point is a skin temperature of 12°C. This is too low for long-term survival, so we can be sure that eventually the person will die. In the initial state the conduction to air would further cool the skin at 288 W, but skin temperature will quickly drop to near the air temperature.</p> <p><a href="https://i.stack.imgur.com/lja4P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lja4P.png" alt="Heat transfer by skin temperature graph" /></a></p> <p>How fast will this happen?
Based on the thermal capacity of water and a body weight of 80 kg, with 700 W of net heat emission, body temperature would drop by about 8°C every hour. Hypothermia would begin in about 15 minutes and loss of consciousness in an hour. The human body's mechanism of reducing blood flow to the extremities will give a little more time by reducing internal heat conduction.</p> <p>Compared to vacuum, cooling down would be slower because of less evaporation, but otherwise the situation is similar. Of course in vacuum you would suffocate long before freezing. Even a thin layer of clothing would greatly prolong survival, and thick enough clothing would prevent freezing altogether.</p> <p>Complicating factors:</p> <ul> <li>Crouching down to a ball shape would reduce the surface area by about half.</li> <li>It is not realistic to keep air at 25°C close to the 0 K walls without significant air flow. Air flow would increase initial cooling, but bring in heat once skin temperature drops below 25°C.</li> <li>Heavy exercise would temporarily increase body heat production and also air flow.</li> <li>The emissivity of the walls matters too. If the walls were made of uncoated metal, they would reflect back some of the IR radiation.</li> </ul>
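The numbers quoted above can be reproduced in a few lines (same inputs as in the text):

```python
# Radiated power and cooling rate for the naked person facing 0 K walls.
sigma = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
A, eps = 1.8, 1.0               # skin area and emissivity
T, T0 = 306.0, 0.0              # skin vs. wall temperature, K

P_net = A * sigma * eps * (T**4 - T0**4)
print(P_net)                    # ~895 W

P_loss = P_net - 200.0          # subtract ~200 W of metabolic heat -> ~700 W

# Cooling rate, treating the body as 80 kg of water (c = 4186 J/(kg K)):
rate = P_loss / (80.0 * 4186.0) * 3600.0
print(rate)                     # ~7.5 K per hour, roughly the 8 degC/h quoted
```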
|
Physics
|
|lagrangian-formalism|coordinate-systems|inertial-frames|noethers-theorem|galilean-relativity|
|
Equation of Motion Invariance in Galilean Mechanics
|
<p>You have very nicely proven Noether's Theorem here. If a given infinitesimal transformation, <span class="math-container">$q_i \rightarrow q_i^\prime = q_i + \varepsilon \, Q_i\left(\vec{q},t\right)$</span>, is a symmetry of the Lagrangian, i.e., <span class="math-container">$$ L^\prime(\vec{q}^\prime, \dot{\vec{q}}^\prime, t) = L(\vec{q}^\prime, \dot{\vec{q}}^\prime, t) = L(\vec{q}, \dot{\vec{q}}, t) + O\left(\varepsilon^2\right)\,, $$</span> then there exists a constant of motion <span class="math-container">$$ \sum_i Q_i \frac{\partial L}{\partial \dot{q}_i}\,. $$</span> In other words, that time derivative must be zero if the transformation is to be a symmetry.</p> <hr /> <p>To address your main point: yes, it is true that if we put the primed Lagrangian back into an action (ignoring second-order terms in <span class="math-container">$\varepsilon$</span>): <span class="math-container">$$ S^\prime = \int_{t_A}^{t_B} L^\prime(\vec{q}^\prime, \dot{\vec{q}}^\prime, t) \, dt \approx \int_{t_A}^{t_B} L(\vec{q}, \dot{\vec{q}}, t) + \varepsilon \frac{d}{dt} \left[ \sum_i Q_i \frac{\partial L}{\partial \dot{q}_i} \right] \, dt $$</span> then we find that: <span class="math-container">$$ S^\prime = \int_{t_A}^{t_B} L(\vec{q}, \dot{\vec{q}}, t) \, dt + \varepsilon \left[ \sum_i Q_i \frac{\partial L}{\partial \dot{q}_i} \right]_{t_A}^{t_B}\,. $$</span> Since the second term depends only on the end points of the path, variation there will be zero and therefore the Euler-Lagrange equations will be exactly what you would find from: <span class="math-container">$$ S = \int_{t_A}^{t_B} L(\vec{q}, \dot{\vec{q}}, t) \, dt $$</span> But note that we have, by making the Taylor expansion, written <span class="math-container">$L^{\prime}$</span> in the unprimed coordinates. The Euler-Lagrange equations we find will be in the <em>unprimed</em> coordinates: <span class="math-container">$$ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i} = \frac{\partial L}{\partial q_i}\, . 
$$</span> To find the equations in terms of <span class="math-container">$\vec{q}^\prime$</span>, you would need to transform the Euler-Lagrange equations using your originally specified transformation (or, find the Euler-Lagrange equation associated to <span class="math-container">$L^\prime$</span> in its primed coordinates). Only when <span class="math-container">$$ \sum_i Q_i \frac{\partial L}{\partial \dot{q}_i} = \text{ const.} $$</span> will the equations of motion have the same form in both the primed and unprimed coordinates. That only occurs when your transformation of coordinates is a <em>symmetry of the Lagrangian</em>.</p> <hr /> <p>For example, let's take: <span class="math-container">$$ L(x,y, v_x, v_y) = \frac{1}{2} m \left( v_x^2 + v_y^2 \right) + U(x)\, . $$</span> Then the transformation of coordinates: <span class="math-container">$$ \begin{align} x^\prime &= x \\ y^\prime &= y + \varepsilon a \end{align} $$</span> yields: <span class="math-container">$$ L^\prime = L(x^\prime, y^\prime, v_x^\prime, v_y^\prime) = \frac{1}{2} m \left({v_x^\prime}^2 + {v_y^\prime}^2 \right) + U(x^{\prime}) = L(x, y, v_x, v_y) $$</span> and <span class="math-container">$m v_y$</span> is the conserved quantity.</p> <p>A less trivial example is the two-body problem: <span class="math-container">$$ L(\vec{r}, \dot{\vec{r}}) = \frac{1}{2} \mu | \dot{\vec{r}} |^2 + U(|\vec{r}|) $$</span> for which you can show that the transformation given by a small rotation <span class="math-container">$\Delta \theta$</span> about an arbitrary axis in the <span class="math-container">$\hat{n}$</span> direction: <span class="math-container">$$ \vec{r} \rightarrow \vec{r}^\prime = \vec{r} + \Delta \theta \,\hat{n} \times \vec{r} $$</span> leaves the Lagrangian unchanged to second order in <span class="math-container">$\Delta \theta$</span>, and the angular momentum component along <span class="math-container">$\hat{n}$</span> is the conserved quantity.</p>
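The first example can also be checked mechanically with sympy's Euler-Lagrange helper:

```python
# For L = m(vx^2 + vy^2)/2 + U(x), the y equation reduces to d/dt(m*vy) = 0,
# so m*vy is the conserved quantity of the y-translation symmetry.
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m = sp.symbols('t m')
x, y = sp.Function('x')(t), sp.Function('y')(t)
U = sp.Function('U')

L = sp.Rational(1, 2)*m*(x.diff(t)**2 + y.diff(t)**2) + U(x)

eq_x, eq_y = euler_equations(L, [x, y], t)
print(eq_y)   # the y equation contains only m * y''(t), i.e. m*vy is constant
```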
|
Physics
|
|thermodynamics|energy|statistical-mechanics|temperature|entropy|
|
Applicabilty of the definition of thermodynamic temperature
|
<p>I think you are mixing Thermodynamics and Statistical Mechanics concepts.</p> <p>From the point of view of Thermodynamics, there is a set of definitions and relations between state functions that hold for all systems at equilibrium, independently of the boundary conditions. Entropy and other state functions are well-defined both for an isolated system (fixed energy <span class="math-container">$E$</span>) and for a system in contact with a thermostat (fixed temperature <span class="math-container">$T$</span>). Also, a relation like <span class="math-container">$\frac{1}{T}=\left.\frac{\partial S}{\partial E}\right|_{V,N}$</span> is universally valid, although its meaning will be slightly different in the two cases. For an isolated system, the temperature is obtained from knowledge of the fundamental equation <span class="math-container">$S(E,V,N)$</span>. For a thermostatted system, it gives the value of the derivative of entropy with respect to energy in terms of an externally imposed parameter (the temperature).</p> <p>Things are slightly different in statistical mechanics, especially when applied to finite systems. In such a case, relative fluctuations do not vanish, and some quantities may not be well-defined for small systems. However, thermodynamics in its standard form is not intended to be applied to small systems. And it is precisely for a finite, small system that the statistical mechanics formulas in different ensembles fail to be equivalent; but such ensemble equivalence is not a condition for the validity of thermodynamics.</p>
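To make the isolated-system reading concrete, here is a small sympy sketch (my own illustration, using the standard Sackur-Tetrode fundamental relation of a monatomic ideal gas): applying $1/T = \partial S/\partial E$ at fixed $V, N$ recovers the familiar $E = \frac{3}{2} N k T$.

```python
import sympy as sp

E, V, N, k, m, h = sp.symbols('E V N k m h', positive=True)

# Sackur-Tetrode fundamental relation S(E, V, N) of a monatomic ideal gas
S = N*k*(sp.log((V/N) * (4*sp.pi*m*E/(3*N*h**2))**sp.Rational(3, 2))
         + sp.Rational(5, 2))

# Thermodynamic definition: 1/T = dS/dE at fixed V, N
T = 1 / sp.diff(S, E)

assert sp.simplify(T - 2*E/(3*N*k)) == 0   # i.e. E = (3/2) N k T
```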
|
Physics
|
|spacetime|metric-tensor|time|spacetime-dimensions|
|
What is the problem with two time dimensions?
|
<p>This is one of the many death-by-paper-cuts reasons why I really do not like Susskind.</p> <p>There is a far better experimental argument as to why we do not consider more than one time dimension until some other experiment comes along to force us to consider it.</p> <p>We have experimentally observed that there are certain purifiable materials that undergo particularly simple types of radioactive decay. Those that sit at the start of a chain of radioactive decays fit the random exponential decay curve extremely well, with one fixed mean lifetime <span class="math-container">$\tau$</span>, or half-life <span class="math-container">$T_{\frac12}=\tau\ln2$</span>, whichever you prefer to use to characterise this. We also know mathematically that exponential decay in time is linked to a Poisson distribution.</p> <p>There is, as of yet, no known experimental variable we can tune to change <span class="math-container">$\tau$</span>. It is a constant for a particular material, even though we might have good theoretical reasons (in QFT) to believe that it should be alterable by some means under really extreme conditions. We just have yet to be able to create these extreme conditions in the lab or see them through telescopes elsewhere in the universe.</p> <p>The only experimental variable that consistently changes these lifetimes is Einsteinian time dilation.</p> <p>The issue with 2 time directions is that if you do have them, then these decays should depend either upon the "radial direction" combination of both, or upon only one of them. In either case we should observe a difference compared to our own time-keeping devices; it would make no sense if we had 2 time directions and always measured only one certain direction everywhere in the universe. If the universe conspires to only ever allow one certain time direction to be measured, then there is no point in calling that a 2-time-direction theory. 
And if both time directions are active, then we ought always to see a deviation from the strict exponential radioactive-decay fit. But we have never been able to find anything beyond Einsteinian time dilation.</p> <p>So this is an experimental argument as to why we should ignore 2 times until future experiments force us to consider them.</p>
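The exponential-decay bookkeeping used above is easy to check numerically. A minimal sketch (my own illustration; the value of $\tau$ is just an example): draw many individual decay times from an exponential distribution and confirm that half the sample survives past one half-life $T_{1/2} = \tau \ln 2$.

```python
import math
import random

random.seed(0)
tau = 2.2e-6                      # mean lifetime; this value is just an example
half_life = tau * math.log(2)     # T_1/2 = tau * ln 2

# Each decay time is an independent draw from the exponential distribution
lifetimes = [random.expovariate(1.0 / tau) for _ in range(200_000)]

# Fraction of the sample surviving past one half-life: should be close to 1/2
surviving = sum(1 for t in lifetimes if t > half_life) / len(lifetimes)
```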
|
Physics
|
|statistical-mechanics|temperature|fermions|chemical-potential|
|
Average number of particles in a Fermi gas
|
<p>To see how to take the high temperature limit at fixed number of particles, you just need to use the formula for the expected number of particles: <span class="math-container">$$ N = \sum_n\frac{1}{e^{\beta E_n}/z+1} $$</span> The RHS is an increasing function of <span class="math-container">$T$</span> and an increasing function of <span class="math-container">$z$</span>. Therefore, at fixed <span class="math-container">$N$</span>, <span class="math-container">$z$</span> is a decreasing function of <span class="math-container">$T$</span>. In the high temperature limit, the fugacity decreases and converges to a limit <span class="math-container">$z\to z_\infty\geq 0$</span>. The question is therefore why <span class="math-container">$z_\infty=0$</span>.</p> <p>The short answer is that this is not always the case. Take for example a two-orbital system, <span class="math-container">$n\in\{1,2\}$</span>, with a single fermion, <span class="math-container">$N=1$</span>: there <span class="math-container">$z_\infty$</span> is nonzero.</p> <p>In most applications, however, the number of orbitals is large compared to the number of fermions. Take for example the gas in a box (as in your case), the harmonic oscillator, etc., where the number of orbitals is rigorously infinite. Note that this still holds even in the thermodynamic limit where you let the number of fermions <span class="math-container">$N\to\infty$</span>. In this case, for nonzero <span class="math-container">$z_\infty$</span>, the RHS would go to infinity, so this imposes <span class="math-container">$z_\infty=0$</span>.</p> <p>The fact that <span class="math-container">$\mu$</span> can be either positive or negative does not affect the high temperature limit. Indeed, since <span class="math-container">$z\to0$</span>, <span class="math-container">$\mu\to-\infty$</span>. In fact, you get the dilute limit in which all the statistics (Maxwell-Boltzmann, Fermi-Dirac, Bose-Einstein) agree.</p> <p>Hope this helps.</p>
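Here is a hedged numerical sketch of this (my own illustration, in arbitrary units with $k_B = 1$ and a particle-in-a-box-like spectrum $E_n = n^2$, truncated at a large but finite number of orbitals): solving the fixed-$N$ condition for $z$ by bisection shows the fugacity falling monotonically as $T$ grows.

```python
import math

def fugacity(T, N=5, orbitals=2000):
    """Solve N = sum_n 1/(exp(E_n/T)/z + 1) for z, with E_n = n**2."""
    beta = 1.0 / T

    def total_occupation(z):
        # exponent capped to avoid math.exp overflow; large terms contribute ~0
        return sum(1.0 / (math.exp(min(beta * n * n, 700.0)) / z + 1.0)
                   for n in range(1, orbitals + 1))

    lo, hi = 1e-12, 1e30          # total_occupation is increasing in z
    for _ in range(200):          # bisection in log z
        mid = math.sqrt(lo * hi)
        if total_occupation(mid) < N:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

z_cold, z_warm, z_hot = fugacity(1.0), fugacity(100.0), fugacity(1000.0)
# The fugacity decreases monotonically with temperature, heading toward 0.
```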
|
Physics
|
|homework-and-exercises|statistical-mechanics|plasma-physics|kinetic-theory|
|
Definition of injection spectrum of an electron beam
|
<p>You are just showing the zeroth velocity moment (e.g., see answer at <a href="https://physics.stackexchange.com/a/218643/59023">https://physics.stackexchange.com/a/218643/59023</a> for examples) of a velocity distribution function. The <a href="https://en.wikipedia.org/wiki/Differential_(mathematics)" rel="nofollow noreferrer">differential</a> comes about because someone assumed azimuthal symmetry, i.e., you have: <span class="math-container">$$ d^{3}v \rightarrow v^{2} \ dv \ 2 \pi \ \sin{\theta} \ d\theta $$</span></p> <p>The units of <span class="math-container">$f\left( z, v, \theta \right)$</span> are usually in number per unit volume per velocity cubed. So if you integrate over <span class="math-container">$d^{3}v$</span> you get units of number per unit volume, also known as number density.</p>
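As a sanity check of these units and of the azimuthally symmetric volume element, here is a small sympy sketch (my own illustration, using an isotropic Maxwellian with the shorthand $a = m/2kT$): integrating $f$ over $v^2\,dv\,2\pi\sin\theta\,d\theta$ returns the number density, as stated.

```python
import sympy as sp

v, theta = sp.symbols('v theta', positive=True)
n0, a = sp.symbols('n0 a', positive=True)   # shorthand: a = m/(2 k T)

# Isotropic (azimuthally symmetric) Maxwellian distribution function,
# with units of number per unit volume per velocity cubed
f = n0 * (a/sp.pi)**sp.Rational(3, 2) * sp.exp(-a*v**2)

# Zeroth moment with d^3v -> v^2 dv * 2*pi*sin(theta) dtheta
density = sp.integrate(f * v**2 * 2*sp.pi*sp.sin(theta),
                       (theta, 0, sp.pi), (v, 0, sp.oo))

assert sp.simplify(density - n0) == 0   # number per unit volume
```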
|
Physics
|
|quantum-mechanics|wavefunction|potential|schroedinger-equation|
|
Boundaries of finite potential well
|
<p>You are correct that this definition of the finite potential well is not well-defined. In the best traditions of quantum mechanics, the boundary points should be omitted, because the particle's state at these points is undefined. So it should be:</p> <p><span class="math-container">$$ V(x)={\begin{cases} 0 & {\text{if }}x\lt -a/2 ~\lor~ x \gt a/2 \\ V_{0} & {\text{if }}x \gt -a/2 ~\land~ x\lt a/2 \\ \text{undefined} & {\text{if }}x=-a/2 \lor x=a/2 \end{cases}} $$</span></p> <p>The "undefined" part is usually omitted in classic quantum mechanics texts, or absorbed into one of the other conditions, because it is a singular point of the potential. But surely we can't include it in both cases, because our well would then have an ambiguous definition.</p>
|
Physics
|
|newtonian-mechanics|classical-mechanics|
|
Extension of Orbit by Reflection about Apsidal Vectors
|
<p>The orbit (<span class="math-container">$u$</span> as a function of <span class="math-container">$\theta$</span>) apparently has the property of oscillating between two values <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span>. Apsidal vectors by definition point to the positions of minimal and maximal <span class="math-container">$u$</span>. From the drawing we see that those positions are close to, but not exactly, 90 degrees apart, so the drawing shows a non-closed orbit.</p> <p>It depends on the potential <span class="math-container">$V(u)$</span> whether this behavior will occur, of course. The solution <span class="math-container">$u(\theta)$</span> is either a periodic function, or it goes to a limit, most likely <span class="math-container">$0$</span> or <span class="math-container">$\infty$</span> in that case, but it could also asymptotically approach some unstable equilibrium value of <span class="math-container">$u$</span>, which would then of course lie precisely at a potential maximum (provided you create a total potential by bringing the <span class="math-container">$r$</span> term to the rhs).</p> <p>And if it is a periodic solution, then the period can be anything (there is nothing in the equation that makes the value <span class="math-container">$2\pi$</span> special; you could also view <span class="math-container">$u$</span> as some quantity which is a function of time and use <span class="math-container">$t$</span> instead of <span class="math-container">$\theta$</span>). The periodic solution does not have to be an undistorted sine wave, but we can see that it is time-reversal invariant (or <span class="math-container">$\theta$</span>-reversal invariant in the original), because there is only an even dependence on <span class="math-container">$\theta$</span> through the second derivative. So you only need to solve one quarter period between one of the minima and a neighboring maximum.</p>
|
Physics
|
|homework-and-exercises|electromagnetism|field-theory|dirac-equation|dirac-matrices|
|
Covariant derivative property
|
<p><span class="math-container">\begin{align} \not{\!\!D}^2 &= \gamma^\mu \gamma^\nu D_\mu D_\nu \\ &= \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \}D_\mu D_\nu + \frac{1}{2} [ \gamma^\mu , \gamma^\nu ] D_\mu D_\nu \\ &= \eta^{\mu\nu} D_\mu D_\nu + \frac{1}{4} [ \gamma^\mu , \gamma^\nu ] [ D_\mu , D_\nu ] \end{align}</span> We next have <span class="math-container">\begin{align} [ D_\mu , D_\nu ] &= ( \partial_\mu - i A_\mu ) ( \partial_\nu - i A_\nu ) - \mu \leftrightarrow\nu \\ &= \partial_\mu \partial_\nu - i [ \partial_\mu A_\nu + A_\nu \partial_\mu ] - i A_\mu \partial_\nu - A_\mu A_\nu - \mu \leftrightarrow\nu \\ &= - i F_{\mu\nu}. \end{align}</span> Thus, <span class="math-container">\begin{align} \not{\!\!D}^2 &= D^2 - \frac{i}{4} [ \gamma^\mu , \gamma^\nu ] F_{\mu\nu} \end{align}</span></p>
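The first step, $\gamma^\mu\gamma^\nu = \eta^{\mu\nu} + \tfrac12[\gamma^\mu,\gamma^\nu]$, is easy to verify numerically. A small numpy sketch (my own illustration, using the Dirac representation and the mostly-minus metric; any representation satisfying the Clifford algebra would do):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Dirac matrices in the Dirac representation
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus metric
I4 = np.eye(4)

for mu in range(4):
    for nu in range(4):
        g_mn = gamma[mu] @ gamma[nu]
        comm = g_mn - gamma[nu] @ gamma[mu]
        # Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu}
        assert np.allclose(g_mn + gamma[nu] @ gamma[mu], 2*eta[mu, nu]*I4)
        # Decomposition used above: product = eta^{mu nu} + (1/2) commutator
        assert np.allclose(g_mn, eta[mu, nu]*I4 + 0.5*comm)
```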
|
Physics
|
|quantum-mechanics|hilbert-space|representation-theory|rotation|unitarity|
|
Unitary Representation of $\text{SO}(3)$ in Position Representation
|
<p>The eigenvector <span class="math-container">$|x\rangle$</span> of the position operator <span class="math-container">$\hat X$</span> associated to the eigenvalue <span class="math-container">$x$</span> being defined by <span class="math-container">$$\hat X|x\rangle=x|x\rangle$$</span> the defining property of <span class="math-container">$U_R$</span> gives <span class="math-container">$$U_R^{-1}\hat XU_R|x\rangle=R\hat X|x\rangle=Rx|x\rangle$$</span> which leads to <span class="math-container">$$\hat XU_R|x\rangle=RxU_R|x\rangle$$</span> since <span class="math-container">$Rx$</span> is a number and not an operator. As a consequence, <span class="math-container">$U_R|x\rangle$</span> is the eigenvector of the position operator <span class="math-container">$\hat X$</span> associated to the eigenvalue <span class="math-container">$Rx$</span>. Since <span class="math-container">$U_R$</span> is unitary, <span class="math-container">$U_R|x\rangle$</span> has the same normalization as <span class="math-container">$|x\rangle$</span>, so <span class="math-container">$$U_R|x\rangle=|Rx\rangle$$</span> Acting on the left of the defining property of <span class="math-container">$U_R$</span> with <span class="math-container">$\langle x|$</span>, or taking the adjoint of the latter relation, gives <span class="math-container">$$\langle x|U_R^\dagger=\langle x|U_R^{-1}=\langle Rx|$$</span> so that <span class="math-container">$$\langle x|U_R=\langle R^{-1}x|$$</span> Finally, <span class="math-container">$$\langle x|U_R|\psi\rangle=\langle R^{-1}x|\psi\rangle$$</span></p>
|
Physics
|
|optics|
|
View distance as a function of elevation of observer
|
<p>It's the basic <a href="https://en.wikipedia.org/wiki/Horizon" rel="nofollow noreferrer">Pythagorean theorem</a>:</p> <p><a href="https://i.stack.imgur.com/Fk76i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fk76i.png" alt="enter image description here" /></a></p> <p>From a mountain of height <span class="math-container">$h$</span>, the maximum viewing distance (which is the distance <span class="math-container">$d$</span> to the horizon) is:</p> <p><span class="math-container">$$ d=\sqrt {\left(R+h\right)^2-R^2} = \sqrt {2Rh+h^2} $$</span></p> <p>Substituting the data given in the book and the Earth's radius:</p> <p><span class="math-container">$$ \sqrt{2\times 6~371~000~\text{m} \times 762~\text{m} + (762~\text{m})^2} $$</span></p> <p>gives a viewing distance of <span class="math-container">$\approx 60~\text{miles}$</span>, which differs from the author's answer by <span class="math-container">$10~\text{miles}$</span>. But bearing in mind that we don't know what Earth radius the author (or his references) put into the formulas, and that the exact calculation techniques of that time are also unknown, this error margin should be acceptable.</p> <p><strong>EDIT</strong></p> <p>Given that the book's action takes place on an island, it is very likely that the author's distance should be read in nautical miles. Converting, we get <span class="math-container">$\approx 53~\text{Nautical Miles}$</span>, which is consistent with the data given in the book.</p>
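For the record, here is the same arithmetic as a short Python sketch (my own, using the international definitions of the statute and nautical mile; the exact result rounds to about 61 statute or 53 nautical miles):

```python
import math

R = 6_371_000.0    # mean Earth radius, m
h = 762.0          # observer height from the book, m

d = math.sqrt((R + h)**2 - R**2)   # equivalently sqrt(2*R*h + h**2)
statute = d / 1609.344             # international statute mile in metres
nautical = d / 1852.0              # international nautical mile in metres
# d is roughly 98.5 km: about 61 statute miles, about 53 nautical miles
```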
|
Physics
|
|electromagnetism|
|
Derivation of Magnetic Energy and Ohm's Law
|
<p>The Panofsky&Phillips (P&P) derivation is limited to a specific set of circumstances (a battery behaving as an Ohmic conductor), but the result (the formula for magnetic energy) is valid more generally. So their derivation is not the most general one, but I suspect such generality and axiomatic clarity was not their aim there. This is the usual kind of limited-validity derivation often encountered in older textbooks, and it shows several useful concepts at play: electromotive force <span class="math-container">$\mathbf E'$</span>, Joule losses <span class="math-container">$j^2/\sigma$</span>, quasi-stationary current systems. One reason to do it this way is that the role of (changes of) the electric field energy can be ignored, and the description is realistic (non-zero resistance).</p> <p>You can take the P&P derivation and modify it by taking the limit <span class="math-container">$\sigma \to \infty$</span> to make it valid for a perfect conductor (perfect inductor) powered by a perfect EMF source with zero resistance. But this is the more idealized scenario (in real circuits with changing current there is always some non-zero resistance).</p> <p>A unique EM energy formula can't be derived; it has to be <em>defined</em>, and the general definition a la Poynting is based on the Poynting theorem, which is derived from Maxwell's equations with no approximations.</p>
|
Physics
|
|vectors|velocity|speed|
|
Instantaneous speed x instantaneous velocity
|
<p>If A and B are two points with a <em>finite</em> separation on the path of a moving particle, the segment of path between these points need not be straight, so, for this segment:</p> <p><span class="math-container">$$\text{distance gone}\geq \text {|displacement|}.$$</span></p> <p>Dividing by the time taken to go from A to B we have,</p> <p><span class="math-container">$$\text{speed}\geq \text {|velocity|}.$$</span></p> <p>However as we take B closer and closer to A the segment of path approaches a straight line, so the distance is the same as the magnitude of the displacement, and the <span class="math-container">$\geq$</span> becomes simply = in both relationships.</p>
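A quick numerical illustration (my own sketch): for uniform motion on a unit circle at unit speed, the distance gone over an interval $\Delta t$ is the arc length $\Delta t$, while the displacement magnitude is the chord $2\sin(\Delta t/2)$; their ratio exceeds 1 for any finite interval but tends to 1 as $\Delta t \to 0$.

```python
import math

def ratio(dt):
    """Distance gone / |displacement| for unit-speed motion on a unit circle."""
    arc = dt                       # distance travelled along the path
    chord = 2 * math.sin(dt / 2)   # straight-line displacement magnitude
    return arc / chord

# ratio(1.0) is noticeably above 1; ratio(1e-3) is already 1 to ~8 digits
```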
|
Physics
|
|quantum-field-theory|particle-physics|standard-model|probability|klein-gordon-equation|
|
What particles are described by the Klein-Gordon Equation?
|
<p>A real (hermitean) Klein-Gordon field describes chargeless spin <span class="math-container">$0$</span> bosons. Examples: <span class="math-container">$\pi^0$</span>, Higgs.</p> <p>A complex (non-hermitean) Klein-Gordon field describes charged spin <span class="math-container">$0$</span> bosons, where the "charge" is not necessarily of electromagnetic nature. Examples: <span class="math-container">$\pi^\pm$</span>, <span class="math-container">$K^\pm$</span> (both with electromagnetic charge) or <span class="math-container">$K^0, \bar{K}^0$</span> (electromagnetically neutral with strangeness "charge").</p> <p>In quantum field theory, <span class="math-container">$J^\mu =i[\phi^\ast \partial^\mu \phi- (\partial^\mu \phi^\ast) \phi]$</span> represents the 4-current-density operator with the associated charge operator given by <span class="math-container">$Q=\int \! d^3x \, J^0(x)$</span>.</p>
|
Physics
|
|mathematics|singularities|
|
Use of infinity in physics
|
<blockquote> <p>You sweep the infinitely big error under an infinitely distant rug.</p> </blockquote> <p>You lose me a bit here - there is no error to sweep. Two sets have the same cardinality if and only if the elements of each can be put into one-to-one correspondence with the elements of the other. That is true of <span class="math-container">$\mathbb N$</span> and <span class="math-container">$2\mathbb N$</span>, as you say. No problem.</p> <p>Now, if two sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span> both have a finite number of elements, then they have the same cardinality if and only if they have the same number of elements. In this way, cardinality reduces to "number of elements" <strong>for finite sets</strong>. For non-finite sets, "number of elements" is not a meaningful notion.</p> <blockquote> <p>This makes me squirm a little. You can use this idea to prove unphysical results like the Banach-Tarski theorem. [...] But he says that some physicists think this is physical. Some paper have been written using this idea. For example to explain things about quark confinement.</p> </blockquote> <p>I found <a href="https://www.semanticscholar.org/paper/Hadron-physics-and-transfinite-set-theory-Augenstein/cd3317ef0c13c243ab2f01c76d19a5aed6a3be36" rel="nofollow noreferrer">only one paper</a> linking Banach-Tarski to hadron physics. Its central thesis is that there is a connection between the two, because it can be shown that the so-called <em>minimal decomposition</em> required to implement Banach-Tarski is a single sphere being split into 5 pieces and then reassembled into two spheres, one consisting of 2 pieces and the other consisting of the remaining 3. We are to imagine that the "pieces" are quarks, and that a 2-piece sphere and a 3-piece sphere are a meson and baryon, respectively. 
The author observes that if this connection exists, then quark confinement is the statement that there does not exist a decomposition in which one of the resulting spheres consists of only one piece.</p> <p>Personally, this seems ... unlikely to bear much fruit. I suspect the reason for this apparent correspondence can be attributed to the <a href="https://en.wikipedia.org/wiki/Strong_law_of_small_numbers" rel="nofollow noreferrer">strong law of small numbers</a>.</p>
|
Physics
|
|quantum-mechanics|wavefunction|potential|schroedinger-equation|curvature|
|
Why does the curvature change with respect to the sign of the wave function?
|
<p>Note that the sign of the curvature is simply the sign of the second derivative:</p> <p><span class="math-container">$\kappa=\frac{y''}{(1+y'^2)^{3/2}}$</span>, since the denominator is always positive.</p> <p>So if you have <span class="math-container">$\frac{-\hbar^2}{2m}\frac{d^2\psi}{dx^2}+V\psi=E\psi$</span></p> <p>then <span class="math-container">$\frac{d^2 \psi}{dx^2}=\frac{2m}{\hbar^2}(V-E)\psi$</span></p> <p>So if <span class="math-container">$V>E$</span>, <span class="math-container">$\psi''$</span> has the same sign as <span class="math-container">$\psi$</span>: the wave function curves away from the axis, the solutions are roughly exponential decay/growth, and bound states are possible. If <span class="math-container">$V<E$</span>, then <span class="math-container">$\psi''$</span> has the opposite sign to <span class="math-container">$\psi$</span>: it curves toward the axis, giving sinusoids and/or scattering states.</p>
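A hedged numerical sketch of the two behaviours (my own illustration, with $2m(V-E)/\hbar^2$ set to $\pm 1$ in arbitrary units): integrating $\psi'' = \pm\psi$ from the same initial data gives exponential growth away from the axis in one case and a bounded oscillation in the other.

```python
def integrate(sign, x_max=10.0, dx=1e-3):
    """Integrate psi'' = sign * psi with psi(0)=1, psi'(0)=0
    (sign=+1 mimics V > E, sign=-1 mimics V < E)."""
    psi, dpsi, x = 1.0, 0.0, 0.0
    values = [psi]
    while x < x_max:
        dpsi += sign * psi * dx    # semi-implicit Euler step
        psi += dpsi * dx
        x += dx
        values.append(psi)
    return values

grow = integrate(+1)   # curves away from the axis: exponential growth
wave = integrate(-1)   # curves toward the axis: bounded oscillation
```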
|
Physics
|
|cosmology|big-bang|cosmic-microwave-background|
|
Why is the horizon problem a problem?
|
<p>The other answers are good, but I want to address this idea specifically:</p> <blockquote> <p>This singularity, unless there is a reason to the contrary, should have given rise to a homogeneous universe, even before particles would exist themselves.</p> </blockquote> <p>Indeed, it seems pretty natural that the Universe should be initially homogeneous. The Universe must ultimately have some initial conditions. Why not let them be the simplest ones?</p> <p>However, the Universe is not initially homogeneous! The primordial curvature power spectrum is measured to be <span class="math-container">$$\mathcal{P}_\zeta(k)\simeq 2.2\times 10^{-9} \, \left(\frac{k}{0.05 \text{ cMpc}^{-1}}\right)^{-0.04}$$</span> (where cMpc is "comoving megaparsec"). Very roughly speaking, this describes the squared fractional amplitudes of initial density variations as a function of the reciprocal length scale <span class="math-container">$k$</span>. The interpretation is that on scales that are too large to be in causal contact, there are density variations with fractional amplitudes of around <span class="math-container">$\sqrt{2.2\times 10^{-9}}\simeq 5\times 10^{-5}$</span> (one part in 20 thousand), and moreover, these variations have slightly scale-dependent amplitudes, such that on a 10-times-larger length scale, the initial density variations have amplitudes <span class="math-container">$\sqrt{10^{0.04}}\simeq 1.05$</span> times larger (5% larger).</p> <p>Those are perhaps not so natural initial conditions after all! (In fact, the <span class="math-container">$10^{-9}$</span> in there is <a href="https://en.wikipedia.org/wiki/Naturalness_(physics)" rel="nofollow noreferrer">technically unnatural</a>.) Why would the Universe pick them?</p> <hr /> <p>(Technical note: when I refer to superhorizon density variations, I'm speaking in Newtonian gauge.)</p>
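The two quoted numbers follow directly from the power spectrum; a one-line check of each (my own arithmetic sketch):

```python
import math

amplitude = math.sqrt(2.2e-9)          # fractional amplitude of the variations
scale_ratio = math.sqrt(10 ** 0.04)    # amplitude ratio per 10x larger scale

# amplitude ~ 4.7e-5 (about one part in twenty thousand)
# scale_ratio ~ 1.05 (about 5% larger on a 10-times-larger length scale)
```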
|
Physics
|
|quantum-mechanics|wavefunction|schroedinger-equation|dispersion|
|
Wave packet at $t<0$
|
<p>In the <em>Heisenberg picture</em>, the time evolutions of the position operator <span class="math-container">$X(t)$</span> and the momentum operator <span class="math-container">$P(t)$</span> of a free particle with mass <span class="math-container">$m$</span> are given by <span class="math-container">$$X(t)=X(0)+\frac{P(0) }{m}t, \quad P(t)=P(0). \tag{1} \label{1}$$</span> Defining <span class="math-container">$\langle A\rangle_\psi:=\langle \psi |A|\psi \rangle$</span> and <span class="math-container">$\left(\Delta_\psi A \right)^2:= \langle \left(A-\langle A \rangle_\psi \right)^2\rangle_\psi =\langle A^2 \rangle_\psi-\langle A\rangle_{\!\psi}^{\,2}$</span> for an arbitrary (hermitean) operator <span class="math-container">$A$</span> and an arbitrary state vector <span class="math-container">$|\psi\rangle$</span> (with normalization <span class="math-container">$\langle \psi |\psi \rangle=1$</span>), we find <span class="math-container">$$\begin{align}\left(\Delta_\psi X(t)\right)^2&= \langle X(0)^2 \rangle_\psi+\frac{t}{m}\langle \{X(0),P(0)\} \rangle_\psi +\frac{t^2}{m^2}\langle P(0)^2 \rangle_\psi \\[5pt] & -\langle X(0) \rangle_{\! \psi}^{\,2}- \frac{2t}{m} \langle X(0)\rangle_\psi \langle P(0) \rangle_\psi -\frac{t^2}{m^2}\langle P(0) \rangle_{\! \psi}^{\,2} \\[5pt] &=\left( \Delta_\psi X(0) \right)^2 +\frac{t}{m} \left(\langle \{X(0),P(0)\}\rangle_\psi-2\langle X(0)\rangle_\psi\langle P(0)\rangle_\psi \right) +\frac{t^2}{m^2} \left( \Delta_\psi P(0)\right)^2,\end{align} \tag{2}$$</span> where <span class="math-container">$\{A,B\}:=AB+BA$</span> denotes the anticommutator of two operators <span class="math-container">$A$</span>, <span class="math-container">$B$</span>. 
Assuming (without loss of generality) <span class="math-container">$\langle X(0) \rangle_\psi=0$</span>, the time dependence of the square of <span class="math-container">$\Delta_\psi X(t)$</span> assumes the simple form <span class="math-container">$$\left( \Delta_\psi X(t) \right)^2 = \left( \Delta_\psi X(0) \right)^2+\frac{t}{m} \langle \{ X(0),P(0) \}\rangle_\psi+\frac{t^2}{m^2}\left( \Delta_\psi P(0) \right)^2 \tag{3} \label{3},$$</span> a parabola with its minimum at <span class="math-container">$$t_{\rm min}=-\frac{m \langle \{ X(0), P(0) \} \rangle_\psi}{2 (\Delta_\psi P(0) )^2} \tag{4} \label{4}$$</span> with minimum value <span class="math-container">$$ (\Delta_\psi X(t_{\rm min}))^2 = (\Delta_\psi X(0))^2-\frac{\langle \{X(0),P(0)\}\rangle_{\! \psi}^{\, 2}}{4 (\Delta_\psi P(0))^2} \ge 0 \tag{5} \label{5}$$</span> and <span class="math-container">$\Delta_\psi X(t) \to +\infty$</span> for <span class="math-container">$t\to \pm \infty$</span>.</p> <p>Note that for the <em>special case</em> of the Gaussian wave packet <span class="math-container">$$\langle x |\psi \rangle = \frac{1}{(2\pi)^{1/4}\sigma^{1/2}} e^{-x^2/4\sigma^2} \tag{6} \label{6}$$</span> with <span class="math-container">$\Delta_\psi X(0) = \sigma$</span> and <span class="math-container">$\Delta_\psi P(0)= \hbar/2 \sigma$</span>, the expectation value of the anticommutator of <span class="math-container">$X(0)$</span> and <span class="math-container">$P(0)$</span> vanishes and \eqref{3} simplifies to <span class="math-container">$$(\Delta_\psi X(t) )^2= \sigma^2+\frac{t^2 \hbar^2}{4 m^2 \sigma^2}.\tag{7}$$</span></p>
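The Gaussian special case can be verified symbolically. A sympy sketch (my own illustration) checking $\Delta_\psi X(0)=\sigma$, $\Delta_\psi P(0)=\hbar/2\sigma$, and the vanishing anticommutator expectation:

```python
import sympy as sp

x = sp.symbols('x', real=True)
sigma, hbar = sp.symbols('sigma hbar', positive=True)

# Gaussian wave packet <x|psi> (real, centred at the origin)
psi = (2*sp.pi)**sp.Rational(-1, 4) / sp.sqrt(sigma) * sp.exp(-x**2/(4*sigma**2))

P = lambda f: -sp.I*hbar*sp.diff(f, x)   # momentum operator in position space

def expect(expr):
    return sp.integrate(psi * expr, (x, -sp.oo, sp.oo))   # psi is real

dX2 = sp.simplify(expect(x**2 * psi))                 # <X^2>, with <X> = 0
dP2 = sp.simplify(expect(P(P(psi))))                  # <P^2>, with <P> = 0
anticom = sp.simplify(expect(x*P(psi) + P(x*psi)))    # <{X, P}>
# dX2 = sigma**2, dP2 = hbar**2/(4*sigma**2), anticom = 0
```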
|
Physics
|
|quantum-mechanics|electrons|nuclear-physics|physical-chemistry|atoms|
|
Electrons and atoms
|
<p>In quantum mechanics, an electron being in a bound state in an atom does not just mean that we don't know exactly where it is and have a probability distribution describing where it might actually be. That would be the case if we knew the wave function was a delta peak in position space and merely had incomplete information about where that delta peak is centered. In a bound state the electron is in fact delocalized around the center of the binding potential, with a wave function decaying exponentially at large distances from the center. Collapse to a specific position only happens when the position is measured. Since the wave function of an electron in an atomic bound state is clearly centered around the nucleus, there is a definite association of that electron with the nucleus. In fact, the occupation of this specific bound state (including a definite spin state) is the only way you can distinguish it from other electrons, as they are otherwise indistinguishable.</p> <p>Now, if you do some measurement, you might cause the electron's wave function to collapse into a small spatial area far away from the nucleus. If, as mentioned in your question, the system also contains a lot of other stuff apart from the electron and the nucleus, which interacts more strongly at the new position of the electron, the association with the previous nucleus would indeed be lost. In fact, that is how I imagine a tunneling transition to work. But the key point is that you changed the state of the electron by the measurement, and only after the measurement is the binding lost.</p> <p>For completeness, I would like to mention that this whole picture of first looking at the electron-nucleus interaction, then a measurement, and then the interaction of the electron with new particles closer to its new position is of course just a model. For the exact time development you would have to solve the entire many-particle system without neglecting any pair interactions at any time. From my current understanding, this whole hand-wavy fuss about "state → measurement → new state" is probably an artifact of our inability to solve a many-particle system exactly, though I am somewhat unsure about this take. I mainly write this clarification to avoid getting asked what exactly I mean by measurement in this context.</p> <p>Also, if you are interested in a more statistical take on something somewhat related to your question, you might be interested in reading this <a href="https://physics.stackexchange.com/a/774294/325089">https://physics.stackexchange.com/a/774294/325089</a>.</p>
|
Physics
|
|quantum-mechanics|
|
Aggregate Photon behaviour at spooky distances
|
<p>There is a very simple explanation for this scenario: Neither the Mars scientist nor the Earth scientist will ever see an interference pattern!</p> <p>The reason is also simple: Entangled photons do not produce such a pattern because they do not have the necessary coherence properties. The reasons are somewhat complicated (having to do with orthogonal paths), but you will be able to see more detail from this reference from a recent Nobel winner:</p> <p><a href="https://courses.washington.edu/ega/more_papers/zeilinger.pdf" rel="nofollow noreferrer">Experiment and the foundations of quantum physics</a> (1999)</p> <p>See page S290, figure 2. This is exactly the scenario you propose. From the associated text: "<em>Will we now observe an interference pattern for particle 1 [labeled a] behind its double slit? ... Formally speaking, the states |a> and |a'> again cannot be coherently superposed because they are entangled with the two orthogonal states |b> and |b'>.</em>"</p> <p>To make these photons produce interference, you must place a single slit in front of the apparatus to create the necessary coherence. That ends the entanglement, so now there is no longer the connection you intended.</p>
|
Physics
|
|classical-mechanics|rotational-dynamics|moment-of-inertia|moment|structural-beam|
|
Moment of inertia of a cantilever beam
|
<p>Note that the <span class="math-container">$I(x)$</span> term in the beam deflection formula is the area moment of inertia of a <em>cross-section</em> of the beam about an axis perpendicular to the plane of the cross-section. <span class="math-container">$I(x)$</span> may be a function of the distance <span class="math-container">$x$</span> along the beam, although in your example it is not, as the cross-section of the beam is the same at all points along it.</p> <p>The <a href="https://en.wikipedia.org/wiki/Perpendicular_axis_theorem" rel="nofollow noreferrer">perpendicular axis theorem</a> tells us that the area moment of inertia of a two-dimensional shape about an axis perpendicular to its plane is the sum of its moments of inertia about two perpendicular axes within its plane. For a rectangle with height <span class="math-container">$h$</span> and breadth <span class="math-container">$b$</span> this gives</p> <p><span class="math-container">$\displaystyle I_z = I_x + I_y = \frac 1 {12} bh^3 + \frac 1 {12} b^3h = \frac 1 {12} bh(h^2+b^2)$</span></p>
|
Physics
|
|quantum-mechanics|measurements|quantum-measurements|
|
Why are quantum effects of the apparatus ignored in quantum experiments?
|
<blockquote> <p>And in all the experiments all these in-between steps are considered to be "magically perfect". Nobody pays them any more attention except to mention that they're there. All mirrors reflect all photons perfectly. Polarization filters just "know" which photons to pass through and which not, etc.</p> </blockquote> <p>This is true for simple textbook examples to teach undergraduates the fundamentals, but a realistic model of an experiment takes into account sources of noise from the imperfections of the apparatus.</p> <blockquote> <p>But that's not how it works in real life, is it? All these parts are big, macroscopic chunks of matter with many, many atoms in them. A particle doesn't just seamlessly pass through/reflect - it bounces around in there, gets absorbed and re-emitted, and entangled with god only knows how many other particles on the way.</p> </blockquote> <p>Yes, this is true. This is handled with the theory of "open quantum systems". In the case of modelling discrete elements such as mirrors, etc., usually it suffices to go beyond unitary transformations acting on pure states, and generalise to <a href="https://en.wikipedia.org/wiki/Quantum_operation#Kraus_operators" rel="noreferrer">Kraus operators</a> acting on density matrices.</p> <p>In general, if we're dealing with continual sources of noise, interactions with the heat bath causing the noise usually happen over such a short time scale that you can model it as instantaneous and memoryless, in which case you get <a href="https://en.wikipedia.org/wiki/Lindbladian" rel="noreferrer">Lindblad master equations</a> and related/equivalent methods such as <a href="https://en.wikipedia.org/wiki/Quantum_Trajectory_Theory" rel="noreferrer">quantum trajectory theory</a>. 
In the case when these approximations don't hold, this requires more explicit modelling of the particle/heat bath interaction and is usually a matter of active research.</p> <p>A good general reference is <a href="https://link.springer.com/book/9783540223016" rel="noreferrer">Quantum Noise by Gardiner and Zoller</a>, as well as <a href="https://journals.aps.org/pra/abstract/10.1103/PhysRevA.31.3761" rel="noreferrer">Gardiner and Collet's original paper</a> on quantum input/output relations.</p> <blockquote> <p>The final particle that arrives at the detector at the end is almost certainly not the same particle that was emitted. And even if it by some miracle is, its quantum state is now hopelessly altered by all the obstacles it met on the way.</p> </blockquote> <p>Fundamental quantum particles are indistinguishable. The effect on particles with distinguishable internal states such as atoms can simply be modelled by the previously mentioned Lindblad equations or Kraus operators causing transitions between internal states. The possibility of particle loss is handled by using a second quantised description.</p> <blockquote> <p>Yet nobody seems to care about this and just assumes that it's the same particle and tries to measure it and draw conclusions from that.</p> </blockquote> <p>I wouldn't say nobody cares. Pretty much every experimentalist these days takes the effects of noise into account.</p>
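<p>As a concrete illustration of the Lindblad approach mentioned above, here is a minimal numerical sketch (my own toy example, not from any particular experiment): a two-level atom with decay rate <span class="math-container">$\gamma$</span> evolving under the amplitude-damping master equation <span class="math-container">$\dot\rho = -i[H,\rho] + \gamma\left(\sigma^-\rho\sigma^+ - \tfrac{1}{2}\{\sigma^+\sigma^-,\rho\}\right)$</span>. The excited-state population decays as <span class="math-container">$e^{-\gamma t}$</span>, exactly the kind of continual noise that a unitary treatment cannot capture.</p>

```python
import numpy as np

# Operators for a two-level system (basis: |e> = [1,0], |g> = [0,1])
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_minus = |g><e|
sp = sm.conj().T                                 # sigma_plus
H = np.zeros((2, 2), dtype=complex)              # rotating frame: H = 0
gamma = 1.0                                      # decay rate

def lindblad_rhs(rho):
    """Lindblad master equation with a single jump operator sigma_minus."""
    damp = sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm)
    return -1j * (H @ rho - rho @ H) + gamma * damp

# Start in the excited state and integrate with small Euler steps
rho = np.array([[1, 0], [0, 0]], dtype=complex)
dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):
    rho = rho + dt * lindblad_rhs(rho)

p_excited = rho[0, 0].real   # should be close to exp(-gamma * T)
```

The same structure generalises to any set of jump operators; dedicated packages exist for this, but the equation itself is just the matrix arithmetic above.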
|
Physics
|
|quantum-mechanics|condensed-matter|
|
Delocalized electrons in the Hubbard model for Mott insulators
|
<p>'The Hubbard model is an approximate model used to describe the transition between conducting and insulating systems.' (Wikipedia)</p> <p>It does so by considering two parameters, the hopping term t and the onsite repulsion U. Consider half filling, that is, one electron per site. If t is small / U is large then U prevents the electrons from moving. Electrons will reside in localised orbitals and the system is an insulator. t then does not contribute to the energy. If t is large / U is small then electrons delocalise and the system is a conductor. Now U does not contribute to the energy.</p>
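<p>A minimal way to see both limits is to diagonalize the two-site Hubbard model at half filling. In the <span class="math-container">$S_z=0$</span> sector, with basis <span class="math-container">$\{|\!\uparrow,\downarrow\rangle, |\!\downarrow,\uparrow\rangle, |\!\uparrow\downarrow,0\rangle, |0,\uparrow\downarrow\rangle\}$</span>, the textbook Hamiltonian matrix is the one sketched below; its ground-state energy is <span class="math-container">$(U-\sqrt{U^2+16t^2})/2$</span>, which tends to <span class="math-container">$-2t$</span> for <span class="math-container">$U\to 0$</span> (delocalized, t dominates) and to the superexchange scale <span class="math-container">$-4t^2/U$</span> for <span class="math-container">$U\gg t$</span> (localized, U dominates).</p>

```python
import numpy as np

def two_site_hubbard(t, U):
    """Two-site Hubbard Hamiltonian at half filling, S_z = 0 sector.
    Basis: |up,dn>, |dn,up>, |updn,0>, |0,updn>."""
    return np.array([
        [0.0, 0.0,  -t,  -t],
        [0.0, 0.0,   t,   t],
        [ -t,   t,   U, 0.0],
        [ -t,   t, 0.0,   U],
    ])

def ground_energy(t, U):
    return np.linalg.eigvalsh(two_site_hubbard(t, U)).min()

# Large U / small t: electrons localize, E0 -> -4 t^2 / U (superexchange)
E_insulating = ground_energy(t=0.1, U=10.0)
# Small U / large t: electrons delocalize, E0 -> -2 t (kinetic energy wins)
E_metallic = ground_energy(t=10.0, U=0.1)
```

Sweeping <span class="math-container">$t/U$</span> in this toy model shows the smooth crossover between the two regimes described above.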
|
Physics
|
|special-relativity|doppler-effect|specific-reference|gravitational-redshift|
|
Acceleration in flat space-time and gravitational redshift
|
<blockquote> <p>Question: Does the gravitational Doppler effect also somehow come into play due to the acceleration (by invoking the equivalence principle)? Would the Doppler shift that Stella observes be a combination of the kinematic and gravitational Doppler shifts in this scenario?</p> </blockquote> <p>The idea that (with respect to an inertial frame) there is no additional time dilation due to acceleration is called the clock hypothesis. In the 1970's Bailey did some experiments where they took muons that were going in a circular loop at relativistic speeds and measured their decay rates to determine their proper time. This experiment confirmed the clock hypothesis up to about <span class="math-container">$10^{18}\ g$</span>.</p> <p>Bailey et al., "Measurements of relativistic time dilation for positive and negative muons in a circular orbit," Nature 268 (July 28, 1977) pg 301.</p> <p>Bailey et al., Nuclear Physics B 150 pg 1–79 (1979).</p> <p>So in Terrence's inertial frame, Stella's time dilation is due only to her velocity and there is no additional dilation that must be accounted for due to her acceleration. Now, of course, Stella's frame is more complicated and cannot use the standard time dilation formula. However, due to the manifest covariance of the laws of physics, we are guaranteed that with the calculation of the correct time dilation formula, Stella's frame will obtain the same result as Terrence's frame for all measurable outcomes.</p> <p>Note, the above analysis is focused on the time dilation while the question asked about the Doppler shift. Time dilation is the transverse Doppler, so they are closely related. As you go around a closed path, the non-transverse parts of Doppler cancel out and all you are left with is the transverse Doppler, or time dilation.</p>
|
Physics
|
|quantum-mechanics|operators|hilbert-space|mathematics|
|
Hermiticity of a projection operator
|
<p>Indeed, you missed that <span class="math-container">$\sigma_{00}^0=I$</span>.</p> <p><span class="math-container">\begin{equation} \hat{U}=I+\sigma_{00}(e^{i\omega}-1), \hspace{6 mm} \hat{U}^{\dagger}=I+\sigma_{00}(e^{-i\omega}-1), \end{equation}</span></p> <p>and it is now easy to see that <span class="math-container">$\hat{U}^{\dagger}\hat{U}=I$</span>.</p>
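<p>The identity is easy to check numerically: for any projector <span class="math-container">$P$</span> (so <span class="math-container">$P^2=P$</span>), the exponential series collapses to <span class="math-container">$e^{i\omega P} = I + P(e^{i\omega}-1)$</span>, and the result is unitary. A quick sketch, using a rank-1 projector of my own choosing:</p>

```python
import numpy as np
from scipy.linalg import expm

omega = 0.7
# A rank-1 projector P = |0><0| in a 2-dimensional space (P @ P == P)
P = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)

U_series = np.eye(2) + P * (np.exp(1j * omega) - 1)   # I + P(e^{i w} - 1)
U_exact = expm(1j * omega * P)                        # full matrix exponential

# The two expressions agree, and U^dagger U = I
```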
|
Physics
|
|general-relativity|
|
Question in transpositioning with Einstein summation notation
|
<p>No, it does not work like that.</p> <p>Since there are 3 free indices r, m, n (none of them contracted, each running from 0 to 3, i.e. over 4 different values), one has <span class="math-container">$4\times 4\times 4$</span> equations (some of these might be identical due to symmetry relations). One can combine these equations by multiplying with the metric tensor <span class="math-container">$g^{rs}$</span> --- i.e. the multiplication with the metric tensor is not just a multiplication but also implies a summation:</p> <p><span class="math-container">$$g^{rs}(\partial_n g_{rm} + \partial_m g_{rn} -\partial_r g_{nm})= 2g^{rs}g_{rt}\Gamma^t_{mn}$$</span></p> <p>which actually means: <span class="math-container">$$\sum\limits_{r=0,\ldots, 3} g^{rs}(\partial_n g_{rm} + \partial_m g_{rn} -\partial_r g_{nm})=\sum\limits_{r=0,\ldots, 3} 2 g^{rs}g_{rt} \Gamma^t_{mn}$$</span></p> <p>This means that one takes, from the <span class="math-container">$4\times 4\times 4$</span> equations, the set of <span class="math-container">$4\times 4$</span> equations (in m and n) with fixed <span class="math-container">$r=0$</span> and multiplies it with <span class="math-container">$g^{0s}$</span>, adds the next set of equations for <span class="math-container">$r=1$</span> multiplied with <span class="math-container">$g^{1s}$</span>, and so forth, i.e. also adds the set of equations for <span class="math-container">$r=2$</span> weighted by <span class="math-container">$g^{2s}$</span> and for <span class="math-container">$r=3$</span> weighted by <span class="math-container">$g^{3s}$</span>. 
And of course this is done on both the LHS and the RHS.</p> <p>Then one gets: <span class="math-container">$$ \sum\limits_{r=0,\ldots, 3} g^{rs}\frac{1}{2}(\partial_n g_{rm} + \partial_m g_{rn} -\partial_r g_{nm})= \delta^s_t\Gamma^t_{mn} = \Gamma^s_{mn}$$</span></p> <p>because</p> <p><span class="math-container">$$ \sum\limits_{r=0,\ldots, 3} g^{rs}g_{rt} = \delta^s_t$$</span></p> <p>where <span class="math-container">$\delta^s_t$</span> is the Kronecker symbol. Or, applying Einstein's summation convention, the summation symbol can be omitted:</p> <p><span class="math-container">$$g^{rs}\frac{1}{2}(\partial_n g_{rm} + \partial_m g_{rn} -\partial_r g_{nm})= \delta^s_t\Gamma^t_{mn} = \Gamma^s_{mn}$$</span></p> <p>As the very last step one can replace the index <span class="math-container">$s$</span> by <span class="math-container">$t$</span> again. The name of an index can be chosen (almost) arbitrarily --- it is just better not to use indices which are already used in the equation.</p>
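<p>The key point, that multiplying by <span class="math-container">$g^{rs}$</span> implies a sum over <span class="math-container">$r$</span> and that <span class="math-container">$g^{rs}g_{rt}=\delta^s_t$</span>, can be made concrete with <code>numpy.einsum</code>. A sketch with a random symmetric "metric" (not any particular spacetime):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric, invertible 4x4 "metric" g_{rt} and its inverse g^{rs}
A = rng.normal(size=(4, 4))
g_lower = A + A.T + 8.0 * np.eye(4)   # shift eigenvalues to ensure invertibility
g_upper = np.linalg.inv(g_lower)      # symmetric, since g_lower is

# einsum sums over the repeated index r, exactly as the convention demands,
# and the contraction gives the Kronecker delta:
delta = np.einsum('rs,rt->st', g_upper, g_lower)
```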
|
Physics
|
|optics|everyday-life|reflection|
|
Does the fingernail test for detecting two-way mirrors really work?
|
<p>A standard mirror consists of a glass pane with a reflective backing. The reflection that you see comes from the reflective backing, not the glass. The principle of this test is that if you see a gap between your fingertip and its reflection then this gap is due to the thickness of the glass, and you have a standard mirror. However, if you do not see a gap then the reflection <em>may</em> be coming from the front face of the glass itself and you <em>may</em> have a two-way mirror. Or you could have a standard mirror with a very thin piece of glass.</p> <p>Checking mirrors in my house, I can see a gap in a large wall-mounted mirror, but no gap with two smaller, lighter mirrors. So clearly this test is not conclusive evidence of a two-way mirror. I would have thought that turning the room lights off (to remove any reflection) and looking carefully for anything through the "mirror" would be a more reliable test.</p>
|
Physics
|
|condensed-matter|elasticity|phonons|
|
What is the relation between linear elastic theory and phonon transport?
|
<blockquote> <p>Clearly there should be a limit where both are one and the same, isn't it?</p> </blockquote> <p>I think the underlying contrast you want to make is the continuum model of a solid vs the atomistic model of a solid. The elastodynamic equations come from the continuum model, and the phonon equations come from the atomistic model. That also hints at where the two agree: in the limit of wavelengths much longer than the spacing of the atoms, the material can often be treated as a continuum, and the elastodynamic and phonon equations will converge --- at least for acoustic phonons.</p> <p>However, optical phonons have no continuum equivalent; they exist due to having more than one atom in a unit cell, and continuum models cannot reproduce the effect.</p> <blockquote> <p>How is the thermal conductivity or the time-relaxation rate related to the elastic coefficients of wave propagation in linear elasticity?</p> </blockquote> <p>There's not a direct connection for a few reasons:</p> <ol> <li><p>A continuum model allows for an arbitrarily small wavelength whereas the atomistic model does not. If you try to calculate thermal conductivity with the continuum model, those arbitrarily small wavelength waves will result in problems --- especially at high temperatures. This is a little like the <a href="https://en.wikipedia.org/wiki/Ultraviolet_catastrophe" rel="nofollow noreferrer">ultraviolet catastrophe</a> for light.</p> </li> <li><p>The continuum model leads to linear dispersion relations (<span class="math-container">$\omega \propto k$</span>), which is correct in the limit of small <span class="math-container">$k$</span> but is otherwise wrong.</p> </li> <li><p>The continuum model cannot explain the relaxation time because a non-linear dispersion relation is required for phonon-phonon scattering. 
Linear dispersion relations allow waves to "pass through" each other without interacting, but an interaction is required for phonon-phonon scattering to exist.</p> </li> <li><p>The continuum model cannot explain some other sources of scattering (and the relaxation times they result in) such as mass-difference impurity scattering, which arises from some atoms having different masses than the rest of the crystal (e.g. most of the sample is Si 28, but other isotopes of Si are there in small quantities). The continuum model cannot account for this because it does not allow for atoms.</p> </li> </ol> <p>EDIT: For how to get elastodynamic coefficients (the elastic modulus tensor) from the phonon models, I'm going to cite two references:</p> <ol> <li>Landau and Lifshitz "Theory of Elasticity" (volume 7 of their Course of Theoretical Physics)</li> <li>M. J. P. Musgrave "Crystal Acoustics"</li> </ol> <p>For crystals, Crystal Acoustics sections 18.7 and 18.8 show how to get from the interatomic potential to the elastic modulus tensor. Doing this can be more or less complicated depending on the symmetry of the crystal (L&L section 10 for more information). Crystal Acoustics chapter 19 has examples for several flavors of cubic crystals (which are some of the simpler crystals).</p> <p>For polycrystals, it is in general not possible to derive the elastic modulus tensor from the interatomic potential. L&L section 10 says more about this, but the short version is that it depends on the details of the boundaries between the many crystals, and that generally makes the problem intractable.</p> <p>For amorphous materials, the situation is even more hopeless.</p>
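<p>The convergence at long wavelengths (and failure at short ones) described in point 2 can be illustrated directly. For a 1D monatomic chain with spring constant <span class="math-container">$K$</span>, mass <span class="math-container">$m$</span> and lattice spacing <span class="math-container">$a$</span>, the atomistic dispersion is <span class="math-container">$\omega(k) = 2\sqrt{K/m}\,|\sin(ka/2)|$</span>, while the continuum model gives <span class="math-container">$\omega = vk$</span> with sound speed <span class="math-container">$v = a\sqrt{K/m}$</span>. A sketch:</p>

```python
import numpy as np

K, m, a = 1.0, 1.0, 1.0          # spring constant, mass, lattice spacing
v = a * np.sqrt(K / m)           # continuum sound speed

def omega_chain(k):
    """Phonon dispersion of a 1D monatomic chain (atomistic model)."""
    return 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))

def omega_continuum(k):
    """Linear elastodynamic dispersion (continuum model)."""
    return v * np.abs(k)

k_small = 0.01 * np.pi / a       # long wavelength: the two models agree
k_edge = np.pi / a               # zone boundary: they disagree badly

rel_err_small = abs(omega_chain(k_small) - omega_continuum(k_small)) / omega_continuum(k_small)
rel_err_edge = abs(omega_chain(k_edge) - omega_continuum(k_edge)) / omega_continuum(k_edge)
```

At the zone boundary the chain's group velocity even goes to zero, something no linear continuum dispersion can reproduce.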
|
Physics
|
|newtonian-mechanics|kinematics|
|
Are there only 2 types of motion -- Translational & Rotational?
|
<p>There is only <strong>one</strong> kind of 3D motion at any instant. This is the so called <a href="https://phys.libretexts.org/Bookshelves/Classical_Mechanics/Classical_Mechanics_(Dourmashkin)/20%3A_Rigid_Body_Kinematics_About_a_Fixed_Axis/20.06%3A_Appendix_20A_Chasless_Theorem-_Rotation_and_Translation_of_a_Rigid_Body" rel="noreferrer">Chasles' theorem</a>.</p> <p>This motion is that of a rotation about some arbitrary axis, <em>and</em> a parallel translation along the axis. This motion is often called the screw motion, and it is the basis for <a href="https://publish.illinois.edu/ece470-intro-robotics/files/2023/02/07-lecture-slides.pdf" rel="noreferrer">Screw Theory</a> in the study of rigid body motions, kinematics, and robotics.</p> <p>Any instantaneous motion of a rigid body can be decomposed into a screw motion defined by the rotation axis direction, a point somewhere along the axis of rotation, the magnitude (speed) of the rotation and the pitch which represents the ratio of translation to rotation.</p> <p>The general screw motion has two special cases</p> <ul> <li>Pure rotation is when the pitch is zero and there is no translation</li> <li>Pure translation has its axis of rotation at infinity and zero pitch, or just a translational velocity (zero rotation) which means the pitch is infinite. Both interpretations are equally valid.</li> </ul> <p>When looking at 2D motion, then the axis of rotation <em>must</em> be out of plane if it exists. You might have heard of the term for <a href="https://en.wikipedia.org/wiki/Instant_centre_of_rotation" rel="noreferrer">"centre of rotation"</a>. This designates the point where the rotation axis intersects the plane of motion.</p>
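<p>The screw decomposition can be sketched numerically. Given the angular velocity <span class="math-container">$\vec\omega$</span> and the velocity <span class="math-container">$\vec v$</span> of the body-fixed point at the origin, the standard screw-theory recipe puts a point of the instantaneous axis at <span class="math-container">$\vec r_0 = (\vec\omega\times\vec v)/|\vec\omega|^2$</span> with pitch <span class="math-container">$h = (\vec\omega\cdot\vec v)/|\vec\omega|^2$</span>; material points on the axis then move purely along <span class="math-container">$\vec\omega$</span> (the numbers below are an arbitrary example of mine):</p>

```python
import numpy as np

def screw_axis(omega, v):
    """Decompose a spatial velocity (omega, v) into its screw components.
    omega: angular velocity; v: velocity of the body point at the origin.
    Returns a point on the instantaneous screw axis and the pitch
    (ratio of translation to rotation)."""
    w2 = np.dot(omega, omega)
    r0 = np.cross(omega, v) / w2
    pitch = np.dot(omega, v) / w2
    return r0, pitch

omega = np.array([0.0, 0.0, 2.0])   # spin about z at 2 rad/s
v = np.array([1.0, 0.0, 3.0])       # velocity of the origin point

r0, pitch = screw_axis(omega, v)
# Velocity of the material point located at r0: purely along the axis,
# with magnitude pitch * |omega| (the translation part of the screw)
v_at_r0 = v + np.cross(omega, r0)
```

Setting the pitch to zero recovers pure rotation; letting <span class="math-container">$\vec\omega\to 0$</span> recovers pure translation, matching the two special cases above.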
|
Physics
|
|astrophysics|astronomy|
|
(Astrophysics) How to calculate photons detected by a radiometer over a period of 10 seconds?
|
<blockquote> <p>Can I use it even though I dont have A?</p> </blockquote> <p>You don't need to have the area, because the question asks you for "how many photons would be picked up by the radiometer, <strong>per unit area</strong>," [emphasis added].</p> <p>You are explicitly asked for the photon count per unit area, not the total photon arrival rate over a specified area.</p> <p>Edit to add:</p> <p>Since this doesn't seem to be enough, I'll work through one of the cases. Let's do <span class="math-container">$5.5\times 10^{14}\ {\rm Hz}$</span>.</p> <p>I'll skip part 1a, since that result isn't needed to do the rest of the calculations.</p> <blockquote> <p>1b. Calculate the energy of a single photon at each of these frequencies. Express your answer in electron-volts (eV)</p> </blockquote> <p><span class="math-container">$$ E = h\nu$$</span> <span class="math-container">$$ E = (6.626\times 10^{-34}\ {\rm J\cdot s}) (5.5\times 10^{14}\ {\rm s^{-1}})$$</span> Keeping in mind the sig figs provided in the input data: <span class="math-container">$$ E = 3.6\times10^{-19}\ {\rm J}$$</span> I'll leave it to you to convert this to eV, since we'll just use the value in joules for the later steps.</p> <blockquote> <p>1c. Consider an astronomical source with a flux density on Earth of fν = 250 milliJansky (mJy), which is constant across all frequencies. ...</p> </blockquote> <p><span class="math-container">$$1\ {\rm Jy} = 10^{-26}\ {\rm W\cdot m^{-2} \cdot Hz^{-1}}$$</span> Therefore <span class="math-container">$$1\ {\rm mJy} = 10^{-29}\ {\rm W\cdot m^{-2} \cdot Hz^{-1}}$$</span></p> <blockquote> <p>... We observe it with a radiometer (a light-measuring instrument) that has a uniform response across all frequencies. It operates by integrating the flux density over a range of frequencies that is 4 Gigahertz (GHz) wide, centered on a given frequency of interest. 
What is the flux picked up by the radiometer (in units of W m−2) within a band centered on each of the three frequencies?</p> </blockquote> <p>I don't even know this topic area, but I know I have to multiply here from dimensional analysis: <span class="math-container">$$ \Phi = f_\nu \Delta \nu$$</span> <span class="math-container">$$ \Phi = (250\ {\rm mJy}) (4\ {\rm GHz})$$</span> <span class="math-container">$$ \Phi = (250\times 10^{-29}\ {\rm W\cdot m^{-2} \cdot Hz^{-1}} ) (4\times 10^9\ {\rm Hz})$$</span> <span class="math-container">$$ \Phi = 1 \times 10^{-17}{\rm W\cdot m^{-2}}$$</span> This is the energy flux received by the detector.</p> <blockquote> <p>1d. At each frequency, how many photons would be picked up by the radiometer, per unit area, in an exposure time of 10 seconds?</p> </blockquote> <p>This is back to regular radiometry, which I do know something about, but you should still be able to work out what to do from dimensional analysis.</p> <p>We divide the power flux by the photon energy to get the photon flux, and multiply by the observation time to get the total photon count density in that time window: <span class="math-container">$$ N = \frac{\Phi\ \Delta t}{E}$$</span> <span class="math-container">$$ N = \frac{(1 \times 10^{-17}{\rm W\cdot m^{-2}})(10\ {\rm s})} {3.6\times10^{-19}\ {\rm J}}$$</span> <span class="math-container">$$ N = 2.8\times 10^{2}\ {\rm m^{-2}}$$</span></p> <p>To get the actual photon count, you'd multiply this value by the actual detector area, but you weren't asked for that, so you don't need the detector area.</p>
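<p>The whole chain of unit conversions is easy to cross-check numerically (a sketch; note the photon energy is <span class="math-container">$E = h\nu$</span>):</p>

```python
# Photon count per unit area for the 5.5e14 Hz case, step by step.
h = 6.62607015e-34     # Planck constant, J s
nu = 5.5e14            # frequency, Hz
f_nu = 250e-29         # flux density: 250 mJy in W m^-2 Hz^-1
bandwidth = 4e9        # 4 GHz band, Hz
t_exp = 10.0           # exposure time, s

E_photon = h * nu                      # energy per photon, J
flux = f_nu * bandwidth                # energy flux, W m^-2
N_per_area = flux * t_exp / E_photon   # photons per m^2 in 10 s
```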
|
Physics
|
|definition|history|si-units|metrology|length|
|
Has there been a big change in 1983 when the definition of the metre changed?
|
<p>No, there was not.</p> <p>The meter has been redefined multiple times, and, each time, the intent was to keep the actual length as close as possible to what it was before. Not long after it was originally defined based on the length of the meridian, it was redefined to be the distance between two marks on the official platinum-iridium meter bar in Paris. That definition stuck until 1960, when it was defined to be a certain number of wavelengths of light produced by a certain electron transition in a certain isotope of Krypton. The specified number of wavelengths was 1,650,763.73. Note that, if they were willing to allow the length of the meter to change by even a fraction of a micrometre, they would have specified that number to fewer decimal places.</p> <p>Finally, it was redefined to be 1/299,792,458 of a light-second. Again, they specified that many digits because they didn't want to change the length of a meter by more than a tiny fraction. In fact, they didn't want to change it at all - they just wanted to make it more precise.</p> <p>At the end of the <a href="https://en.wikipedia.org/wiki/History_of_the_metre" rel="noreferrer">History of the meter</a> article in Wikipedia, there is a table of the different definitions. It shows the precision of each definition, but not an absolute difference. That is because each one was designed to be within the range of error of the definition before. It is as if you bought a new meter stick with thinner tick marks than the old one, so you can make more precise measurements. But each tick mark in the new ruler is basically in the middle of the corresponding tick mark in the old ruler.</p>
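<p>The scale of the "rounding slack" in the 1960 definition is easy to check with the number quoted above (a quick sketch):</p>

```python
# The 1960 definition: 1 m = 1,650,763.73 wavelengths of the Kr-86 line.
n_wavelengths = 1_650_763.73
wavelength = 1.0 / n_wavelengths     # metres; about 605.78 nm (orange light)

# Had the count been rounded to the nearest whole wavelength, the metre
# could have shifted by at most half a wavelength:
max_shift = 0.5 * wavelength         # about 0.3 micrometres
```

So keeping the two decimal places pins the metre down to well under a micrometre, which is why those digits were specified.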
|
Physics
|
|quantum-mechanics|electromagnetism|
|
Do the length of electrons appear to be shorter when they travel close to the speed of light?
|
<p>As far as we know experimentally, electrons are point particles, so they don't have a "length" per se.</p> <p>However, quantum mechanically, there is uncertainty in the position of a point particle. For a single particle, we can describe this uncertainty as a wavefunction. The wavefunction is like a wave mathematically, except instead of being "ripples in a medium", the wavefunction represents the probability to discover a particle at a given location. A simple kind of wavefunction is a sine wave, and in this case the wavelength of the wave <span class="math-container">$\lambda$</span> is related to the particle's momentum <span class="math-container">$p$</span> and Planck's constant via the famous equation <span class="math-container">$p = h / \lambda$</span>. The wavelength can be observed directly in interference experiments like the double slit experiment, for example.</p> <p>From the formula <span class="math-container">$p = h/\lambda$</span>, we see that as the momentum increases, the wavelength decreases. This is exactly what you would expect based on special relativity. As you boost into a frame where the particle has a larger momentum, the wavelength should shrink due to length contraction. Even though the formula we wrote down is non-relativistic in that it does not involve <span class="math-container">$c$</span>, it can be made relativistic in the way we relate momentum and energy (or wavelength and frequency). The fact that the formula is compatible with length contraction is not an accident and part of how it is possible to reconcile special relativity and quantum mechanics in the framework of relativistic quantum field theory.</p>
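<p>The shrinking wavelength can be made quantitative by combining <span class="math-container">$p = h/\lambda$</span> with the relativistic momentum <span class="math-container">$p = \gamma m v$</span> (a sketch, using standard constants):</p>

```python
import numpy as np

h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # speed of light, m/s

def de_broglie(v):
    """Relativistic de Broglie wavelength of an electron at speed v."""
    gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
    return h / (gamma * m_e * v)

speeds = np.array([0.1, 0.5, 0.9, 0.99]) * c
wavelengths = de_broglie(speeds)
# The wavelength shrinks monotonically as the speed approaches c,
# consistent with length contraction of the boosted wave pattern.
```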
|
Physics
|
|mathematics|complex-numbers|signal-processing|analyticity|
|
Kramers-Kronig relations for a Gaussian function
|
<p>I'm not sure that PSE is the best site to ask this question, but I'll answer it anyway. The main issue is that the KK relations do not apply to <span class="math-container">$f$</span>. Intuitively, the Fourier transform of <span class="math-container">$f$</span> is again a Gaussian, so it is not causal.</p> <p>From a purely spectral point of view, the issue is that you only looked at the properties of <span class="math-container">$f$</span> on the real line. You need to look at the upper half complex plane. You'll notice that it does not decay to zero as <span class="math-container">$\omega\to i\infty$</span>. In fact, you have an essential singularity there.</p> <p>Hope this helps.</p>
|
Physics
|
|homework-and-exercises|electric-circuits|power|
|
Bulb glows full brightness at which time of $I$-$t$ graph?
|
<p>Your convictions are correct: from the graph you can see that the current in the bulb reaches a steady state at 0.5 A. This steady-state current corresponds to the normal operating brightness of the bulb and thus to the maximum brightness level, so the power is given by <span class="math-container">$P=IV=(0.5\ {\rm A})(200\ {\rm V})=100\ {\rm W}$</span>.</p> <p>Just from the shape of the graph, it appears that the filament warms up to its operating temperature in such a way as to cause an exponential increase in resistance, which leads to a corresponding exponential decay of the filament current to its steady-state value.</p>
|
Physics
|
|fluid-dynamics|conservation-laws|flow|vector-fields|potential-flow|
|
Stokes stream function derivation
|
<ol> <li><p>Define functions<span class="math-container">$^1$</span> <span class="math-container">$$ f(r,\theta)~:=~r^2\sin\theta~u_r(r,\theta) \quad\text{and}\quad g(r,\theta)~:=~r\sin\theta~u_{\theta}(r,\theta). \tag{A}$$</span></p> </li> <li><p>Next consider the one-form <span class="math-container">$$ \eta ~:=~ f{\rm d}\theta -g{\rm d}r. \tag{B}$$</span></p> </li> <li><p>The <a href="https://en.wikipedia.org/wiki/Stokes_stream_function" rel="nofollow noreferrer">incompressible flow/zero divergence condition</a> <span class="math-container">$$\vec{\nabla}\cdot\vec{u}~=~0\tag{C}$$</span> is equivalent to <span class="math-container">$$ \frac{\partial f}{\partial r}+\frac{\partial g}{\partial\theta}~=~0, \tag{D}$$</span> which in turn is equivalent to the closedness condition <span class="math-container">$${\rm d}\eta~=~0.\tag{E}$$</span></p> </li> <li><p><a href="https://en.wikipedia.org/wiki/Closed_and_exact_differential_forms" rel="nofollow noreferrer">Poincare lemma</a> shows that the one-form <span class="math-container">$$ \eta~=~{\rm d}\psi \tag{F}$$</span> is locally an <a href="https://en.wikipedia.org/wiki/Exact_differential" rel="nofollow noreferrer">exact differential</a>, or equivalently, <span class="math-container">$$ f~=~\frac{\partial\psi}{\partial\theta} \quad\text{and}\quad g~=~-\frac{\partial\psi}{\partial r}. \tag{G}$$</span> Here <span class="math-container">$\psi$</span> is <a href="https://en.wikipedia.org/wiki/Stokes_stream_function" rel="nofollow noreferrer">Stokes stream function</a>. This proves OP's eq. (3).</p> </li> </ol> <p>--</p> <p><span class="math-container">$^1$</span> A similar trick works for 3D cylindrical coordinates.</p>
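<p>Eq. (G) can be verified symbolically: for an arbitrary stream function <span class="math-container">$\psi(r,\theta)$</span>, the velocity components <span class="math-container">$u_r = \psi_\theta/(r^2\sin\theta)$</span>, <span class="math-container">$u_\theta = -\psi_r/(r\sin\theta)$</span> satisfy the axisymmetric spherical divergence condition (C) identically. A sympy sketch:</p>

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
psi = sp.Function('psi')(r, theta)   # arbitrary Stokes stream function

# Velocity components from eq. (A) and (G): f = psi_theta, g = -psi_r
u_r = sp.diff(psi, theta) / (r**2 * sp.sin(theta))
u_theta = -sp.diff(psi, r) / (r * sp.sin(theta))

# Axisymmetric divergence in spherical coordinates
div = (sp.diff(r**2 * u_r, r) / r**2
       + sp.diff(sp.sin(theta) * u_theta, theta) / (r * sp.sin(theta)))

div_simplified = sp.simplify(div)    # vanishes identically
```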
|
Physics
|
|photons|quantum-optics|polarization|identical-particles|
|
If the state is VH+HV how one can prove experimentally that two photons are identical?
|
<p>After you split the photons with a <a href="https://www.rp-photonics.com/beam_splitters.html#:~:text=somewhat%20different%20amplitudes.-,polarizing%20beam%20splitter%20cubes,-Instead%20of%20glass" rel="nofollow noreferrer">PBS</a>, you can rotate the polarization of the photon in one arm (say the one with the <span class="math-container">$|H\rangle$</span> photon) by <span class="math-container">$90°$</span>, for example using a <a href="https://en.wikipedia.org/wiki/Waveplate#Half-wave_plate" rel="nofollow noreferrer">half-wave plate</a>. The state is now <span class="math-container">$|VV\rangle$</span>. Then, you combine the now identical photons on a non-polarizing beamsplitter. Because of the <a href="https://en.wikipedia.org/wiki/Hong%E2%80%93Ou%E2%80%93Mandel_effect" rel="nofollow noreferrer">Hong-Ou-Mandel effect</a>, the photons will leave the beamsplitter in the same direction if they are identical in all degrees of freedom (polarization, spectrum, timing, transverse position and momentum distribution).</p> <p>If you place a <a href="https://www.rp-photonics.com/photon_counting.html" rel="nofollow noreferrer">single-photon detector</a> in each exit arm of the beamsplitter, only one of them detects a pair of photons, but they don't simultaneously detect a photon each. However, if the photons are partially distinguishable by the properties listed before, not all of them exit the beamsplitter in the same direction. Therefore, the level of suppression of coincidences is often used to quantify the indistinguishability.</p>
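<p>The suppression of coincidences can be quantified: for two photons on a 50:50 beamsplitter, the coincidence probability is <span class="math-container">$P = \tfrac{1}{2}\left(1 - |\langle\phi_1|\phi_2\rangle|^2\right)$</span>, where the overlap is taken over all remaining degrees of freedom (0 coincidences for identical photons, 1/2 for fully distinguishable ones). A sketch of my own, using Gaussian spectra with a relative delay <span class="math-container">$\tau$</span> as the distinguishing property, which reproduces the familiar HOM dip:</p>

```python
import numpy as np

def hom_coincidence(tau, sigma):
    """Coincidence probability behind a 50:50 beamsplitter for two photons
    with Gaussian spectra (rms width sigma) and relative delay tau.
    Identical photons (tau = 0) never produce a coincidence; fully
    distinguishable photons (|tau| >> 1/sigma) coincide half the time."""
    overlap_sq = np.exp(-(sigma * tau) ** 2)   # |<phi1|phi2>|^2
    return 0.5 * (1.0 - overlap_sq)

sigma = 1.0
p_identical = hom_coincidence(0.0, sigma)          # bottom of the HOM dip
p_distinguishable = hom_coincidence(100.0, sigma)  # approaches 1/2
```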
|
Physics
|
|quantum-mechanics|energy|operators|differentiation|
|
Deduction of Kinetic energy operator in quantum mechanics
|
<p>In operatorial language, for any operator <span class="math-container">$A$</span>, <span class="math-container">$A^2$</span> means <span class="math-container">$A \circ A$</span>, that is, <span class="math-container">$A$</span> composed with itself. In the particular case of <span class="math-container">$A = \hat{p} = -i\hbar \frac{d}{dx}$</span>, indeed <span class="math-container">$$\hat{p}^2 = -\hbar^2 \frac{d^2}{dx^2}\,.$$</span></p> <p>As I read in your comment, you also wanted a reason why using the square of the <span class="math-container">$p$</span> operator is the correct way to apply the correspondence principle to the energy. I feel like any motivation will end up with some sort of "because it works" in the end, so I will just give an example showing that <span class="math-container">$p^2$</span> does what you would expect the square of momentum to do. For example, take an eigenfunction of the momentum operator <span class="math-container">$\phi_p(x)$</span> (I am avoiding Dirac's notation because I am not sure you have encountered it yet), that is <span class="math-container">$$\hat{p} \phi_p(x) = p \phi_p(x)\,.$$</span></p> <p>A particle described by the wavefunction <span class="math-container">$\phi_p(x)$</span> has definite momentum with the value <span class="math-container">$p$</span>. Clearly, then, the value of the square of the momentum is also well defined, and it is <span class="math-container">$p^2$</span>. The operator <span class="math-container">$\hat{p}^2$</span> is precisely the operator that satisfies the eigenvalue equation <span class="math-container">$$\hat{p}^2 \phi_p(x) = p^2 \phi_p(x)$$</span> as you would expect.</p>
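<p>The eigenvalue equations can be checked symbolically by applying <span class="math-container">$\hat{p}$</span> twice to the plane wave <span class="math-container">$\phi_p(x) = e^{ipx/\hbar}$</span> (a sympy sketch):</p>

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True, positive=True)
phi = sp.exp(sp.I * p * x / hbar)         # momentum eigenfunction phi_p(x)

p_hat = lambda f: -sp.I * hbar * sp.diff(f, x)   # momentum operator
p_hat_sq = lambda f: p_hat(p_hat(f))             # p-hat composed with itself

eig1 = sp.simplify(p_hat(phi) / phi)      # eigenvalue of p-hat: p
eig2 = sp.simplify(p_hat_sq(phi) / phi)   # eigenvalue of p-hat squared: p^2
```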
|
Physics
|
|cosmology|redshift|gravitational-redshift|
|
Cosmological redshift without approximations
|
<p>You cannot access <span class="math-container">$z$</span> at the beginning of time, as any observation of light from beyond the CMB is impossible. Except in very specific cases, and for all practical purposes, "professional scientists" always use</p> <p><span class="math-container">$z=\frac{1}{a}-1$</span></p> <p>(and <span class="math-container">$a(t_0)=1$</span> by construction)</p>
|
Physics
|
|electrostatics|electric-circuits|electricity|electric-current|electronics|
|
Force on charge carriers in a simple circuit
|
<blockquote> <p>Is it true that in a simple circuit where a simple conducting wire is connected to a battery, the force on each charge carrier is same in magnitude ?</p> </blockquote> <p>No, it is not.</p> <p>First, in a microscopic view, charge carriers frequently interact with the material lattice (with the nuclear charges or with the phonon field, depending on how deep a model you want to use) so that their velocities are randomized and their temperature is kept in equilibrium with the lattice. Each of these interactions applies an impulsive force on an individual carrier, making it experience a force that is, in that moment, not the same as the force on the other carriers.</p> <p>But what is it your source likely meant by their claim?</p> <blockquote> <p>in a conductor there are other charges too, one after another and this will result in each charges having a different force acting upon it due to repulsion from other charges present in the conductor.</p> </blockquote> <p>Macroscopically, the charge is distributed randomly but (when averaged over a reasonable amount of time and volume) uniformly through the conductor, so all the free charges see essentially the same force <em>due to the applied potential from the battery and from the other surrounding charges</em>.</p> <p>This may break down at the surface of the conductor where the other charges are all distributed to one side of a given position. And in real conductors we do see charge (either positive or negative) build up on the conductor surface, an effect that we model with <em>parasitic capacitance</em> between the wire and other objects.</p> <p>In comments you added</p> <blockquote> <p>It is saying steady current, which means each charge carries must have same force acting on it.</p> </blockquote> <p><em>Steady state</em> current does not mean that each charge carrier must have the same force acting on it. 
It only means that the conditions remain constant over time, and that a non-changing behavior can be achieved in the given system (for example, it does not oscillate).</p> <p>The concept of a steady state current is a simplifying approximation used to make circuit analysis easier, while still giving results that are "close enough" to reality. All of circuit theory is based on simplifying assumptions: That charge is infinitely divisible rather than quantized, that magnetic fields don't affect the wiring between elements in the circuit, that charge doesn't build up in the wiring between the elements, that the circuit isn't big enough to generate radio waves, etc. These approximations give quite accurate results when analyzing many every-day circuits. But they aren't meant to give you the ability to analyze the behavior of individual charge carriers in the wires, or the light-speed delay between a potential being applied at one end of the wire and a response being seen at the other end, etc.</p> <p>If you look more closely (considering very short time intervals, very small currents, or very narrow wires), you find that the current through the wire can not be truly uniform over time, because of the quantization of charge. When this becomes important in a circuit, we have to give up the steady state approximation and characterize this behavior as "shot noise".</p>
|
Physics
|
|homework-and-exercises|energy|kinematics|momentum|
|
Percentage change in K.E for a given change in momentum
|
<p>The equation <span class="math-container">$$\frac{dE}{E}=\frac{2dP}{P} \tag{1}$$</span> is only valid for infinitesimally small differentials <span class="math-container">$dE$</span> and <span class="math-container">$dP$</span>.</p> <p>Thus using differences (instead of differentials) in <span class="math-container">$$\frac{\Delta E}{E}\approx \frac{2\Delta P}{P} \tag{2}$$</span> is only valid if <span class="math-container">$\frac{\Delta E}{E}\ll 1$</span> and <span class="math-container">$\frac{\Delta P}{P}\ll 1$</span>.</p> <p>But in your case it is <span class="math-container">$\frac{\Delta P}{P}=0.5$</span>, which is clearly not <span class="math-container">$\ll 1$</span>, and therefore (2) is not valid.</p>
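<p>A quick numerical check (with <span class="math-container">$m$</span> set to 1 for illustration) shows how badly (2) fails at <span class="math-container">$\frac{\Delta P}{P}=0.5$</span>:</p>

```python
# Compare the exact fractional change in kinetic energy E = P^2 / (2m)
# with the differential estimate dE/E = 2 dP/P, for dP/P = 0.5 (m set to 1).
m, P = 1.0, 1.0
E = P**2 / (2 * m)

P_new = 1.5 * P                     # Delta P / P = 0.5
E_new = P_new**2 / (2 * m)

exact = (E_new - E) / E             # true fractional change in energy
approx = 2 * (P_new - P) / P        # the estimate from equation (2)
print(exact, approx)
```

The exact change is 125%, while (2) predicts only 100% — a 25-point error, as expected for a finite difference this large.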
|
Physics
|
|electrostatics|capacitance|
|
Why can an isolated spherical conductor act as a capacitor?
|
<p>The second "plate" is taken to be a concentric sphere of infinite radius, i.e. a sphere "at infinity". In that limit an isolated sphere of radius <span class="math-container">$R$</span> is just an ordinary spherical capacitor, with capacitance <span class="math-container">$C = 4\pi\varepsilon_0 R$</span>.</p>
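<p>As a sanity check, the capacitance of an isolated sphere is <span class="math-container">$C = 4\pi\varepsilon_0 R$</span>; a sketch for an Earth-sized sphere (the radius is just an illustrative choice) shows how small the value is:</p>

```python
# Capacitance of an isolated sphere, C = 4*pi*eps0*R,
# i.e. the outer-sphere-at-infinity limit of a spherical capacitor.
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
R = 6.371e6               # Earth's mean radius, m (illustrative)

C = 4 * math.pi * eps0 * R
print(f"C = {C*1e6:.0f} uF")   # only ~700 uF despite the planet-sized "plate"
```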
|
Physics
|
|quantum-mechanics|homework-and-exercises|electric-fields|wavefunction|schroedinger-equation|
|
How to solve for a particle in a triangular well?
|
<p>Here's how I would approach it. I'll use units where <span class="math-container">$m = q\epsilon = \hbar = 1$</span> for notational simplicity.</p> <p>First, use a temporary normalization where <span class="math-container">$a = 1$</span>. We'll fix the overall normalization later.</p> <p>The boundary conditions on your function are <span class="math-container">\begin{align*} x &= 0: && \text{Ai}(-E_n) + b \,\text{Bi}(-E_n) = 0 \\ x &= L: && \text{Ai}(L-E_n) + b \,\text{Bi}(L-E_n) = 0 \end{align*}</span> This is a set of two equations and two unknowns in <span class="math-container">$b$</span> & <span class="math-container">$E_n$</span>, and so should have a discrete solution set. It may be numerically hard to find, but it should exist. For example, here's what it looks like for <span class="math-container">$L = 4$</span>; the blue contours represent the solutions to the first condition, while the yellow contours represent the solutions to the second condition.</p> <p><a href="https://i.stack.imgur.com/W1X2c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W1X2c.png" alt="enter image description here" /></a></p> <p>We can see that these contours intersect at points corresponding to <span class="math-container">$E_n \approx 2.4, 4.5$</span>, and <span class="math-container">$7.6$</span>, with corresponding values of <span class="math-container">$b$</span>. The values read off from the graphs can then be refined with numerical root-finding techniques.</p> <p>Finally, once <span class="math-container">$b$</span> has been found, you would normalize the overall wavefunctions in the usual way. This will probably require numerical integration.</p> <p><strong>EDIT:</strong> A better parametrization scheme would be to set <span class="math-container">$a = c \cos \theta$</span> and <span class="math-container">$b = c \sin \theta,$</span> with <span class="math-container">$0 \leq \theta < \pi$</span>. 
If you set <span class="math-container">$a = 1$</span> by fiat, then my original method risks missing solutions where <span class="math-container">$b \gg a$</span>. The equations above then become a system in <span class="math-container">$\theta$</span> and <span class="math-container">$E_n$</span>, and <span class="math-container">$c$</span> can be fixed by the normalization constraint once these values are found.</p>
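<p>A sketch of how the root-finding might be done with SciPy (for <span class="math-container">$L = 4$</span>, in the same units): eliminating <span class="math-container">$b$</span> from the two boundary conditions leaves the single condition <span class="math-container">$\text{Ai}(-E)\,\text{Bi}(L-E) - \text{Ai}(L-E)\,\text{Bi}(-E) = 0$</span>, whose roots can be bracketed on a grid and refined numerically:</p>

```python
# Eigenvalues of the triangular well (units m = q*eps = hbar = 1, L = 4),
# found as roots of the boundary-condition determinant
#   f(E) = Ai(-E) Bi(L-E) - Ai(L-E) Bi(-E).
import numpy as np
from scipy.special import airy
from scipy.optimize import brentq

L = 4.0

def f(E):
    Ai0, _, Bi0, _ = airy(-E)          # airy returns (Ai, Ai', Bi, Bi')
    AiL, _, BiL, _ = airy(L - E)
    return Ai0 * BiL - AiL * Bi0

# Bracket sign changes on a coarse grid, then refine each with brentq.
grid = np.linspace(0.1, 10.0, 400)
vals = np.array([f(E) for E in grid])
roots = [brentq(f, a, b) for a, b, fa, fb in
         zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]
print(roots)   # lowest few eigenvalues E_n
```

The lowest roots agree with the values read off the contour plot above, and each refined root can then be plugged back into either boundary condition to recover <span class="math-container">$b$</span>.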
|
Physics
|
|electric-current|
|
What happens when (XL-XC) of a series LCR circuit equals its resistance during AC flow?
|
<p>I assume you are talking about a series LCR circuit since for parallel circuits we plot <span class="math-container">$1/X_L$</span> and <span class="math-container">$1/X_C$</span> on our phasor diagram instead of <span class="math-container">$X_L$</span> and <span class="math-container">$X_C$</span>.</p> <p>In that case there is nothing special about having <span class="math-container">$|X_L-X_C| = R$</span>. It just means the phase angle is 45° since on the phasor diagram the vertical and horizontal vectors have the same length. As far as I know there is nothing special about a phase angle of 45°.</p> <p>The phasor diagram looks like this. It is not very exciting!</p> <p><a href="https://i.stack.imgur.com/yzYnw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yzYnw.png" alt="Phasor diagram" /></a></p>
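<p>A quick numerical check of the 45° claim (the component reactances are illustrative values chosen so that <span class="math-container">$X_L - X_C = R$</span>):</p>

```python
# Phase angle of a series LCR circuit: tan(phi) = (XL - XC) / R.
# When |XL - XC| = R the phase angle is 45 degrees; values are illustrative.
import math

R = 50.0          # ohms
XL = 120.0        # ohms
XC = 70.0         # ohms, chosen so that XL - XC = R

phi = math.atan2(XL - XC, R)
Z = math.hypot(R, XL - XC)    # impedance magnitude
print(math.degrees(phi), Z)   # 45 degrees, |Z| = R * sqrt(2)
```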
|
Physics
|
|acceleration|vectors|velocity|calculus|
|
Magnitude of Acceleration Vector when Speed is Constant
|
<p>When the speed (the magnitude of the velocity) is constant, the acceleration vector is, at every instant of the motion, perpendicular to the instantaneous velocity vector. You can work this out using the concept of a "differential triangle".</p> <p>For example, draw a velocity vector <span class="math-container">$\vec v$</span> from some point, then draw the changed velocity vector, i.e. another vector of the same length stemming from the same point but rotated by some <em>small</em> angle from the original. Now, draw the vector <span class="math-container">$\Delta \vec v$</span> from the head of the original vector to the head of the rotated one. In the limit that the angle approaches an infinitesimal, the vector <span class="math-container">$\Delta\vec v$</span> makes a right angle with <span class="math-container">$\vec v$</span>, so that <span class="math-container">$\Delta \vec v$</span> is perpendicular to <span class="math-container">$\vec v$</span>. Now it only takes an infinitesimally small amount of time for the velocity vector to change by an infinitesimally small angle, so you have: <span class="math-container">$$\vec a=\lim\limits_{\Delta t\rightarrow 0}{\Delta\vec v\over \Delta t},$$</span> which is a vector that is equal to the acceleration and is clearly perpendicular to <span class="math-container">$\vec v$</span>.</p>
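<p>The limiting argument can also be checked numerically; here is a sketch using uniform circular motion (a convenient constant-speed trajectory) and finite differences:</p>

```python
# For constant-speed motion, a = dv/dt should be perpendicular to v.
# Check numerically on the unit circle r(t) = (cos t, sin t).
import numpy as np

t = np.linspace(0, 2 * np.pi, 100001)
r = np.stack([np.cos(t), np.sin(t)], axis=1)

v = np.gradient(r, t, axis=0)          # velocity by finite differences
a = np.gradient(v, t, axis=0)          # acceleration by finite differences

# cosine of the angle between v and a, away from the endpoints
dots = np.sum(v * a, axis=1)[2:-2]
norms = (np.linalg.norm(v, axis=1) * np.linalg.norm(a, axis=1))[2:-2]
worst = np.max(np.abs(dots / norms))
print(worst)    # ~0: v and a stay perpendicular everywhere
```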
|
Physics
|
|gauge-theory|group-theory|representation-theory|lie-algebra|linear-algebra|
|
$SU(3)$ adjoint representation and irreducibility
|
<p>The question fails to interpret its own definition correctly: it is correct that every vector <span class="math-container">$v$</span> in an irreducible representation is cyclic, which is the technical term for the span of the orbit being the full representation.</p> <p>However, this does not mean that for any two vectors <span class="math-container">$v,w$</span> in the representation you can find some <span class="math-container">$g$</span> such that <span class="math-container">$w = \rho(g)v$</span> - it only means that there exist finitely many elements <span class="math-container">$g_i$</span> in the group and constants <span class="math-container">$c_i$</span> such that <span class="math-container">$$w = \sum_{i=1}^N c_i \rho(g_i) v,$$</span> whereas the question makes the mistake of assuming <span class="math-container">$N=1$</span>.</p>
|
Physics
|
|quantum-mechanics|operators|hilbert-space|terminology|
|
Difference between real operators and Hermitian operators in quantum mechanics
|
<p>A real linear operator is one whose matrix elements are real. For example, given an orthonormal basis <span class="math-container">$\{|\psi_k\rangle\}$</span> and an operator <span class="math-container">$\hat O$</span>, suitably defined on a Hilbert space <span class="math-container">$\mathcal H$</span>, when we calculate the matrix elements of the operator, <span class="math-container">$\langle \psi_i|\hat O|\psi_j\rangle, \;\forall |\psi_i\rangle,|\psi_j\rangle\in \{|\psi_k\rangle\}$</span>, we always get real numbers. Because we have assumed a particular basis, the reality of an operator is not guaranteed to be a property that holds for any arbitrary basis used to span the Hilbert space. A Hermitian operator need not be real in a given basis (the Pauli matrix <span class="math-container">$\sigma_y$</span>, for instance, is Hermitian but has imaginary matrix elements in the standard basis), and a real operator need not be Hermitian (a real matrix is Hermitian only if it is also symmetric). While the reality of an operator is a basis-dependent concept, Hermiticity is intrinsic to the operator and thus holds in any basis, i.e. <span class="math-container">$\langle\hat O\psi |\psi\rangle=\langle\psi |\hat O\psi\rangle, \;\forall |\psi\rangle\in \mathcal H$</span>. So Hermiticity is the crucial requirement needed to do quantum mechanics; it does no good for an operator to be real in this sense without being Hermitian.</p>
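<p>A small numerical illustration of the basis dependence (the matrices below are arbitrary examples): a real symmetric matrix stays Hermitian under a complex unitary change of basis, but its matrix elements stop being real.</p>

```python
# Reality of matrix elements depends on the basis; Hermiticity does not.
# Start from a real symmetric (hence Hermitian) matrix and change basis
# with a complex unitary U: the result is still Hermitian but no longer real.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                            # real and Hermitian

theta = 0.3
U = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])   # a complex unitary

B = U @ A @ U.conj().T                                # same operator, new basis

print(np.allclose(B, B.conj().T))                     # True: still Hermitian
print(np.allclose(B.imag, 0))                         # False: no longer real
```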
|
Physics
|
|thermodynamics|electricity|electric-current|electrical-resistance|
|
Why should the heating coil of a heater have high resistance?
|
<blockquote> <p>In that case, the resistance should be minimised?</p> </blockquote> <p>Yes, but</p> <ol> <li><p>You don't want the heater element to produce so much heat that it melts itself.</p> </li> <li><p>You don't want the heater element to draw more current than the power source can support (i.e. you don't want to trip a breaker or start a fire in the building wiring).</p> </li> <li><p>Given the above constraints (or some other reason to choose a specific power consumption for your heater) you want to physically fit the heater in a reasonable amount of space. You can make, for example, a 10 ohm heating element with either 100 meters of "wire" that has 0.1 ohms per meter resistance, or with 1 meter of "wire" that has 10 ohms per meter resistance. The heater made with the second wire can fit under your desk to keep your feet warm; the other one might be as big as your refrigerator. Which one do you think customers are more likely to buy?</p> </li> </ol>
|
Physics
|
|special-relativity|acceleration|time-dilation|
|
Relativistic Time Dilation and Experienced G-forces
|
<p>The answer requires thinking about what you mean by "acceleration".</p> <p>In SR, there are multiple conceptions of time. When you define <span class="math-container">$\vec{a} = \frac{d\vec{v}}{dt}$</span>, you must specify whose <span class="math-container">$dt$</span>. There is a difference between <a href="https://en.wikipedia.org/wiki/Proper_acceleration" rel="nofollow noreferrer"><strong>proper acceleration</strong></a>, where the derivative is taken with respect to proper time <span class="math-container">$d\tau$</span> (measured on the astronaut's clock), and <strong>coordinate acceleration</strong> where it is taken with respect to coordinate time <span class="math-container">$dt$</span> (measured on a stationary observer's clock).</p> <p>For motion along a straight line, as here, proper acceleration can equivalently be written as the derivative of the proper velocity <span class="math-container">$d\vec{x}/d\tau$</span> with respect to coordinate time: <span class="math-container">$$ \vec{A} = \frac{d}{dt}\left[\frac{d\vec{x}}{d\tau}\right], $$</span> and coordinate acceleration is: <span class="math-container">$$ \vec{a} = \frac{d^2\vec{x}}{dt^2}.$$</span></p> <h2>constant coordinate acceleration?</h2> <p>One consequence of SR is that nothing can be accelerated to the speed of light. Let's say we remain stationary. The astronaut's rocket ship supplies a constant force, which causes the astronaut to accelerate away at <span class="math-container">$10g$</span> starting from rest. In SR this won't cause a constant coordinate acceleration. Newton's second law in SR (for a force parallel to the motion, like the rocket) is: <span class="math-container">$$ \vec{F} = \gamma^3 m \vec{a}, $$</span> where <span class="math-container">$\gamma = (1-v^2/c^2)^{-1/2}$</span> is the Lorentz factor. As the astronaut's speed increases, so does <span class="math-container">$\gamma$</span>, and the coordinate acceleration <span class="math-container">$a$</span> decreases. 
In fact, as <span class="math-container">$v\rightarrow c$</span>, <span class="math-container">$a\rightarrow 0$</span>.</p> <p>For the astronaut's coordinate acceleration to remain constant, the force of the rocket would need to increase as time passed. Eventually, it would require infinite force to maintain the <span class="math-container">$10g$</span> acceleration. This is why nothing with mass can be accelerated to the speed of light.</p> <p>In SR no coordinate acceleration can be uniformly maintained for an arbitrary amount of time.</p> <h2>constant proper acceleration</h2> <p>The astronaut experiences bodily stress according to their proper acceleration. The equivalence principle states that a uniform acceleration is indistinguishable from a uniform gravitational field. If the rocket has a proper acceleration of <span class="math-container">$10g$</span>, then the astronaut feels like they are in a <span class="math-container">$10g$</span> gravitational field. If the rocket had a <span class="math-container">$1g$</span> proper acceleration, the astronaut would feel quite at home in the <span class="math-container">$1g$</span> gravitational field.</p> <p>The constant force rocket will result in constant proper acceleration: <span class="math-container">$$\vec{F} = m\vec{A}$$</span></p> <p>Because proper acceleration takes a derivative using the astronaut's clock, time dilation is in some sense the cause of the difference. However, the astronaut does not experience any local time dilation or length contraction. By all experiments the astronaut could devise, they will conclude that they are at rest in a uniform <span class="math-container">$10g$</span> gravitational field. All the local lengths inside their spaceship will remain the same, and their clock will happily tick along normally. SR effects are noticed when multiple observers compare measurements in different reference frames. 
The astronaut would conclude that the Earth is accelerating away, and as it goes faster the clocks on Earth slow down more and more compared to their own perfectly normal clock.</p>
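<p>The difference between the two accelerations can be made concrete with the standard result for constant proper acceleration <span class="math-container">$A$</span> starting from rest (hyperbolic motion): <span class="math-container">$v(t) = At/\sqrt{1+(At/c)^2}$</span>, so the coordinate acceleration <span class="math-container">$a(t) = A/(1+(At/c)^2)^{3/2}$</span> dies away as <span class="math-container">$v \to c$</span>. A sketch for the <span class="math-container">$10g$</span> rocket:</p>

```python
# Hyperbolic motion: constant proper acceleration A = 10g from rest.
# Coordinate velocity v(t) = A t / sqrt(1 + (A t / c)^2) stays below c,
# and coordinate acceleration a(t) = A / (1 + (A t / c)^2)^(3/2) -> 0.
import math

c = 2.998e8          # m/s
A = 10 * 9.81        # proper acceleration, m/s^2
year = 3.156e7       # seconds

for t in (0.01 * year, 0.1 * year, 1.0 * year):
    x = A * t / c
    v = A * t / math.sqrt(1 + x**2)
    a = A / (1 + x**2) ** 1.5
    print(f"t = {t/year:4.2f} yr: v/c = {v/c:.4f}, a/A = {a/A:.4f}")
```

After a year of <span class="math-container">$10g$</span> proper acceleration the stationary observer sees the rocket creeping up on <span class="math-container">$c$</span> with a coordinate acceleration down to a fraction of a percent of <span class="math-container">$A$</span>, while the astronaut still feels a steady <span class="math-container">$10g$</span>.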
|
Physics
|
|electromagnetism|
|
What does the magnetic force between two current carrying wire segments depend on?
|
<p>The <span class="math-container">$\bf B$</span> field for a wire of length L in the x direction is, <span class="math-container">\begin{equation} {\bf B}(x,y)=\frac{{\bf I}\times{\hat j}}{cy}\left[\frac{(L/2-x)}{\sqrt{(L/2-x)^2+y^2}}+\frac{(L/2+x)}{\sqrt{(L/2+x)^2+y^2}}\right]. \end{equation}</span> You can derive from this that the field diminishes as x extends beyond L/2. This means there will be a weaker field acting on the second wire when they are displaced. Integrating I<span class="math-container">$\bf dl\times B$</span> over the second wire also shows that the force weakens with displacement.</p>
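<p>The falloff of the bracketed geometric factor beyond the end of the wire can be checked numerically (lengths below are in arbitrary units, chosen for illustration):</p>

```python
# Geometric factor of the finite-wire B field at perpendicular distance y:
#   g(x) = (L/2 - x)/sqrt((L/2 - x)^2 + y^2) + (L/2 + x)/sqrt((L/2 + x)^2 + y^2)
# It drops off rapidly once x extends beyond the end of the wire at L/2.
import math

L, y = 2.0, 0.5   # wire length and field-point distance (arbitrary units)

def g(x):
    return ((L/2 - x) / math.hypot(L/2 - x, y)
            + (L/2 + x) / math.hypot(L/2 + x, y))

print(g(0.0), g(L/2), g(L))   # midpoint, end of wire, half a length past the end
```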
|
Physics
|
|homework-and-exercises|electric-circuits|capacitance|
|
How does this circuit behave?
|
<p>In the original diagram, the voltage across capacitance <span class="math-container">$C_1$</span> is <span class="math-container">$\Delta V_1 = V_A - V_{\text{junction}}$</span>, where <span class="math-container">$V_{\text{junction}}$</span> is the potential at the junction just to the right of <span class="math-container">$C_1$</span>. Because there is no potential drop across a resistance-less wire, the potential at the junction just to the right of <span class="math-container">$C_2$</span> is also <span class="math-container">$V_{\text{junction}}$</span>, so the voltage across <span class="math-container">$C_2$</span> is <span class="math-container">$\Delta V_2 = V_A - V_{\text{junction}}$</span>.</p> <p>Similarly, the voltage across <span class="math-container">$C_3$</span> is <span class="math-container">$\Delta V_3 = V_{\text{junction}} - V_B$</span> and across <span class="math-container">$C_4$</span> is <span class="math-container">$\Delta V_4 = V_{\text{junction}} - V_B$</span>.</p> <p>If the voltage across two or more capacitances is the same, then they are connected in parallel, so here clearly the only combinations are <span class="math-container">$C_1/C_2$</span> and <span class="math-container">$C_3/C_4$</span>. 
Their equivalents (say <span class="math-container">$C_{eq, 12}$</span> and <span class="math-container">$C_{eq, 34})$</span> have different voltages across them (<span class="math-container">$V_A - V_{\text{junction}}$</span> and <span class="math-container">$V_{\text{junction}} - V_B$</span> respectively), so they are NOT connected in parallel.</p> <p>However, the total voltage across these two (equivalent) components is <span class="math-container">$V_A - V_B = (V_A - V_{\text{junction}}) + (V_{\text{junction}} - V_B)$</span>, and when two capacitances are connected such that <span class="math-container">$\Delta V = \Delta V_1 + \Delta V_2$</span> (and that they have the same charge <span class="math-container">$Q_1$</span> = <span class="math-container">$Q_2$</span>), they are connected in series.</p> <p><span class="math-container">$C_{eq, 12} \ (C_1 || C_2)$</span> and <span class="math-container">$C_{eq, 34} \ (C_3 || C_4)$</span> are in series.</p> <p>The variation you drew maintains all these descriptions of the circuit, so it is indeed equivalent.</p> <p>Hope this helps.</p>
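<p>The equivalent capacitance of the whole network is therefore <span class="math-container">$(C_1+C_2)$</span> in series with <span class="math-container">$(C_3+C_4)$</span>; a quick sketch with illustrative values:</p>

```python
# Equivalent capacitance: (C1 || C2) in series with (C3 || C4).
# Parallel capacitances add; series capacitances combine as product over sum.
C1, C2, C3, C4 = 1e-6, 2e-6, 3e-6, 6e-6   # farads, illustrative values

C12 = C1 + C2                  # parallel combination
C34 = C3 + C4                  # parallel combination
Ceq = C12 * C34 / (C12 + C34)  # series combination
print(Ceq)
```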
|
Physics
|
|optics|waves|polarization|
|
Why is the intensity of unpolarised light halved each time it passes through a polariser?
|
<p>The intensity of <em>unpolarized</em> light is halved when it passes through a polarizing filter (note that this assumes a perfect filter - in practice the exact proportion of light passed may be somewhat less than half, depending on the design of the filter).</p> <p>The intensity of <em>polarized</em> light passing through a polarizing filter depends on the angle between the polarization of the incident light and the direction of the filter, according to Malus's law (again, this assumes a perfect filter).</p> <p>We can't comment on your test question without seeing exactly how it was worded.</p> <p><strong>Update</strong></p> <p>Now that you have posted the original question, your problem is clearer. There is already a more detailed answer elsewhere, but in summary the maximum transmitted intensity (when the angle between filters P and Q is <span class="math-container">$\frac \pi 4$</span>) is <span class="math-container">$\frac {I_0} 8$</span> because:</p> <ul> <li>the intensity of the unpolarized light passing through filter P is reduced by a factor of <span class="math-container">$\frac 1 2$</span> because this is the <em>average</em> value of <span class="math-container">$\cos^2 \theta$</span></li> <li>the intensity of the polarized light passing through filter Q is reduced by another factor of <span class="math-container">$\frac 1 2$</span> because this is the value of <span class="math-container">$\cos^2 \theta$</span> when <span class="math-container">$\theta = \frac \pi 4$</span>, which is the angle between filters P and Q</li> <li>the intensity of the polarized light passing through filter R is reduced by a third factor of <span class="math-container">$\frac 1 2$</span> because this is the value of <span class="math-container">$\cos^2 \theta$</span> when <span class="math-container">$\theta = \frac \pi 4$</span>, which is the angle between filters Q and R</li> </ul>
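<p>Putting the three factors together numerically:</p>

```python
# Intensity after three polarizers P, Q, R, with Q at angle theta to P
# and R at angle theta to Q: I = I0 * (1/2) * cos^2(theta) * cos^2(theta).
# For crossed P and R, the maximum occurs at theta = pi/4, giving I0/8.
import math

I0 = 1.0
theta = math.pi / 4
I = I0 * 0.5 * math.cos(theta)**2 * math.cos(theta)**2
print(I)   # 0.125 = I0 / 8
```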
|
Physics
|
|electric-circuits|electricity|
|
Does the position of the fuse affect whether the bulb or the fuse would blow first?
|
<p>In either case the bulb will be protected by the blowing of the fuse; this holds whether the current is AC or DC.</p> <p>The reason is that fuses are calibrated to blow at a rated current, and in a series circuit the same current flows through every element. Where the fuse sits in the loop does not change the current flowing through it, so it protects the circuit in either location.</p>
|
Physics
|
|fluid-dynamics|flow|vortex|
|
Can vortices be formed in a non-viscous fluid?
|
<p>Short answer:</p> <ul> <li>viscosity, even in small regions of the domain, is usually needed to create vorticity;</li> <li>if thin regions of large vorticity exist in a domain of otherwise irrotational flow, it's possible to identify the rotational regions as vortices;</li> <li>once created, if the viscous effects outside them are negligible, the intensity of these vortices tends to be conserved.</li> </ul> <p>Anyway, I feel that some more details can be useful.</p> <h2>Model to describe nature</h2> <p>We usually explain nature with models. In fluid dynamics, in many cases of interest, it's possible to work with (incompressible) <strong>inviscid irrotational flows</strong> where the influence of viscosity and vorticity is zero <strong>almost everywhere</strong>. This usually is a good model for flows:</p> <ul> <li>at high Reynolds number, where effects of viscous stress are almost everywhere negligible w.r.t. inertia,</li> <li>around streamlined bodies, so that no flow separation occurs.</li> </ul> <p>In such conditions, vorticity and viscous effects are confined to very thin regions, namely thin boundary layers on the surface of solid bodies and free wakes in the domain.</p> <p>Mathematically, these regions can be represented as a finite jump of velocity across an infinitesimal thickness, and thus infinite vorticity: these mathematical models lump vorticity into these thin regions, while the rest of the domain is modelled as irrotational.</p> <h2>What is a vortex?</h2> <p>While there is no unambiguous answer in fluid dynamics, it's possible to identify a vortex with the infinitesimal infinite-vorticity regions when using the (quasi-)irrotational model of the fluid described above.</p> <p>Within this model some theorems exist, namely the Kelvin and Helmholtz theorems, stating some sort of <strong>conservation of vortex intensity</strong>:</p> <ul> <li><p><strong>Helmholtz</strong> theorems tell us that:</p> <ul> <li>the intensity of a vortex line is constant 
in space</li> <li>and thus, a vortex line is either closed (vortex ring) or originates/ends on the boundary of the fluid domain (as an example on solid walls), or vortex lines can merge (<a href="https://www.youtube.com/watch?v=t2kvEC852MI" rel="nofollow noreferrer">https://www.youtube.com/watch?v=t2kvEC852MI</a>)</li> </ul> </li> <li><p><strong>Kelvin</strong>'s theorem tells us that the circulation along a closed material curve is constant in time</p> </li> </ul> <p><strong>Vortex rings</strong> <img src="https://i.stack.imgur.com/3tiyt.png" alt="1" /></p> <p><strong>Vortex originating from solid bodies</strong> | | | | |------------|------------|------------| | <img src="https://i.stack.imgur.com/QncGh.png" alt="2" /> | <img src="https://i.stack.imgur.com/KsUps.jpg" alt="3" /> | <img src="https://i.stack.imgur.com/xxiz1.jpg" alt="4" /> |</p> <p><strong>Merging vortices</strong> | | | |------------|------------| | <img src="https://i.stack.imgur.com/OxsVg.png" alt="5" /> | <img src="https://i.stack.imgur.com/Fc0ri.png" alt="6" /> |</p>
|
Physics
|
|quantum-mechanics|quantum-information|quantum-interpretations|measurement-problem|quantum-measurements|
|
What happens if two people have different knowledge about a state in a quantum mechanical system?
|
<p>Suppose you have an ensemble of identical quantum systems prepared in a pure state <span class="math-container">$\psi$</span>. Next a measurement occurs of an observable <span class="math-container">$A$</span> with two possible outcomes <span class="math-container">$a$</span> and <span class="math-container">$a’$</span>.</p> <p>A measurement is an objective fact, independent of any observer, since it is a physical interaction with an instrument. As a consequence the state changes. Outcomes are irreversibly recorded at macroscopic level and produce physical events in spacetime (it does not matter if we know them or not).</p> <p>How it changes may depend on the knowledge about the outcome, in a probabilistic theory such as QM. We can assign different states to a single system according to the comparison ensemble we choose. This choice is made on the ground of our knowledge about the outcomes of the measurement.</p> <p>Suppose that the post measurement states associated with the above outcomes are respectively <span class="math-container">$\phi$</span> and <span class="math-container">$\phi’$</span> with respective probabilities <span class="math-container">$p$</span> and <span class="math-container">$p’$</span> with <span class="math-container">$p+p’=1$</span>.</p> <ul> <li>If I know that a specific element of the ensemble has produced <span class="math-container">$a$</span>, then I assign it the post measurement state <span class="math-container">$\phi$</span>.</li> <li>If I do not know the outcome, I assign it the post measurement state given by the mixture</li> </ul> <p><span class="math-container">$$p|\phi\rangle\langle \phi|+ p’|\phi’\rangle\langle \phi’|.$$</span></p> <p>Both possibilities are permitted, and the theory turns out to be consistent if I use it coherently with the choice made. That is because predictions in terms of probabilities to be compared with experimental frequencies refer to different post measurement ensembles. 
And the considered system simultaneously belongs to both of them.</p> <p>In the first case the system is viewed as part of the post measurement ensemble, a sub-ensemble of the initial one, which is associated with the outcome <span class="math-container">$a$</span>.</p> <p>In the second case, the same system is viewed as part of the post measurement ensemble which coincides with the initial one and embodies both outcomes.</p> <p>The objective fact is the measurement process which changes the state, in a nonunitary way, from a pure state to a mixture. (The nature of this process is not yet completely understood, and maybe it also includes a subjective part in terms of a partial trace on the instrument/environment quantum degrees of freedom.)</p> <p>If we focus on an outcome, and we view the system as part of the corresponding sub-ensemble, we introduce a subjective effect, a choice, which is very often called the “state collapse” <span class="math-container">$\psi \to \phi$</span>.</p> <p>(All the discussion above relies upon the ensemble interpretation of the QM formalism. A different possibility is to think of the state <span class="math-container">$\psi$</span> as an objective property of a single quantum system, independently of any ensemble which may contain it. Today I do not like this perspective, even if I considered it the correct one in the past.)</p>
|
Physics
|
|quantum-mechanics|operators|hilbert-space|definition|
|
Are Hermitian operators Hermitian in any basis?
|
<p>A linear operator on a Hilbert space <span class="math-container">$A:\mathcal H\to\mathcal H$</span> is Hermitian if for all <span class="math-container">$v,w\in\mathcal H$</span>, <span class="math-container">$$\langle Av, w\rangle = \langle v,Aw\rangle$$</span></p> <p>This definition makes no reference to any particular basis, since both the inner product and the linear operators on the Hilbert space are defined independently of any basis. Therefore Hermiticity is a property of the operator and not what basis it is represented in.</p>
|
Physics
|
|newtonian-mechanics|forces|free-body-diagram|torque|
|
Angle for ruler falling off the table
|
<p>You have to start with a good diagram indicating all applicable forces applied in a positive sense, relevant dimensions, and points of interest.</p> <p><a href="https://i.stack.imgur.com/aKv1G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKv1G.png" alt="fig1" /></a></p> <p>Here a force <span class="math-container">$P$</span> is applied on the end of the ruler downwards, in addition to the weight of the ruler <span class="math-container">$W$</span> applied in the center.</p> <p>The edge contact provides a normal force <span class="math-container">$N$</span> and a frictional reaction <span class="math-container">$F$</span> that prevents the ruler from sliding off.</p> <p>If you look at the forces applied in the vertical direction you must counteract the weight and applied force with the vertical components of the normal force and frictional force. On the other hand, the horizontal components must balance out</p> <p><span class="math-container">$$ \begin{aligned} -W - P + N \cos \varphi + F \sin \varphi & = 0 \\ N \sin \varphi - F \cos \varphi & = 0 \\ \end{aligned}$$</span></p> <p>The solution to the above is</p> <p><span class="math-container">$$ \begin{aligned} F & = (P+W) \sin \varphi \\ N & = (P+W) \cos \varphi \end{aligned}$$</span></p> <p>The stability limit is reached when <span class="math-container">$F \gt \mu_S N$</span> where <span class="math-container">$\mu_S$</span> is the coefficient of static friction and from the above you get</p> <p><span class="math-container">$$ \sin \varphi \gt \mu_S \cos \varphi $$</span></p> <p>with the solution for the tilt angle</p> <p><span class="math-container">$$ \varphi \gt \arctan( \mu_S) $$</span></p> <p>This angle is often called the <em>critical angle</em> for friction. It is the same angle something placed on a ramp is going to be sliding down when exceeded.</p>
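<p>Numerically, for an illustrative static friction coefficient:</p>

```python
# Critical tilt angle beyond which the ruler slides: phi_crit = arctan(mu_s).
import math

mu_s = 0.5                        # static friction coefficient (illustrative)
phi_crit = math.atan(mu_s)
print(math.degrees(phi_crit))     # ~26.6 degrees for mu_s = 0.5
```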
|
Physics
|
|quantum-field-theory|feynman-diagrams|path-integral|partition-function|graph-theory|
|
Interpreting generating functional as sum of all diagrams
|
<p>The generating functional <em>by definition</em> is an object that encodes the Green's functions in a series as <span class="math-container">$$Z[J]=\sum_{n=0}^\infty \dfrac{1}{n!}\int dx_1\cdots dx_n J(x_1)\cdots J(x_n) \langle \phi(x_1)\cdots \phi(x_n)\rangle\tag{1}.$$</span></p> <p>This is the case because it is defined so that the correlators can be extracted as <span class="math-container">$$\langle \phi(x_1)\cdots\phi(x_n)\rangle = \dfrac{\delta^n Z[J]}{\delta J(x_1)\cdots \delta J(x_n)}\bigg|_{J=0}\tag{2}.$$</span></p> <p>The fact that <span class="math-container">$Z[J]$</span> is given by the expression you wrote can then be seen as a consequence of the Dyson-Schwinger equations, for example. See this <a href="https://physics.stackexchange.com/questions/747031/what-does-it-mean-for-a-field-to-be-defined-by-a-measure/747034#747034">answer of mine</a> to a question you asked previously for more into this.</p> <p>So, answering your current question, since each correlator is a sum of Feynman diagrams it is indeed possible to see <span class="math-container">$Z[J]$</span> as a sum over all possible Feynman diagrams in view of its definition, equation (1).</p>
|
Physics
|
|electric-circuits|electric-fields|
|
Why doesn't the nonconservative nature of the electric field pushing along the wire to produce the drift velocity violate Kirchhoff's voltage law?
|
<p>Consider that your loop is a battery connected by wires to one resistor. The E field inside the battery points, as in a capacitor, opposite to the E field inside the wires, where the E field inside the wires is needed to set up the current as Ohm's Law <span class="math-container">$\vec J=\sigma \vec E$</span> says it has to be.</p> <p>The voltage difference between the ends of the battery is one and the same; going from inside the battery gives the opposite sign to going outside the battery, and so the loop integral cancels out. KVL happens to hold for this. Of course, you should always abandon KVL and go straight to Faraday's Law, which always holds. There are no time-varying B fields here, so Faraday's Law says that KVL will hold.</p>
|
Physics
|
|quantum-field-theory|feynman-diagrams|correlation-functions|greens-functions|propagator|
|
Amputated connected 2-point function is inverse to connected 2-point function
|
<p>TL;DR: Use the relations <span class="math-container">$$D_2~=~\int E_1A_2E_2\tag{1}$$</span> and <span class="math-container">$$E_1~=~D_2~=~E_2\tag{2}$$</span> to conclude OP's sought-for relation <span class="math-container">$$A_2~=~D_2^{-1}.\tag{3}$$</span></p> <p>Eq. (2) can be argued in at least 2 ways:</p> <ol> <li><p>Either we assume that the propagator <span class="math-container">$D_2$</span> is a matrix of <em>all</em> field species in the theory, and hence unique.</p> </li> <li><p>More commonly, we assume [or can prove e.g. via conservation laws] that the propagator <span class="math-container">$D_2$</span> is <em>diagonal</em> in the field species [so that there in principle are eqs. (1)-(3) for <em>each</em> field species].</p> </li> </ol>
|
Physics
|
|general-relativity|special-relativity|
|
Apparent "centrifugal" like force on a uniformly accelerating relativistic object
|
<blockquote> <p>This would mean that objects undergoing uniform acceleration would feel as though something was trying to stretch them apart, but I've never heard of something like this.</p> </blockquote> <p>This is basically correct. More precisely, if an object is to remain rigid as it accelerates, then different parts of the object require different proper accelerations. <strong>The rear of the object has to accelerate faster than the front.</strong></p> <p>There are several ways to understand this. One is that in the frame of an inertial observer, the object length-contracts as it accelerates. To accomplish this, the rear clearly has to accelerate faster than the front.</p> <p>We can also consider the perspective of observers on the accelerating object. Due to how surfaces of constant time rotate as you accelerate, these observers would say that the front of the object has been accelerating for a longer time than the back has. You can frame this more clearly in terms of gravitational time dilation: the rear is lower in the "gravitational potential", so its clock runs slower. Since the rear of the object has had less time to accelerate, it needs a greater acceleration to keep up.</p>
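A hedged numerical illustration of the conclusion, in units where $c = 1$: for Born-rigid acceleration (Rindler coordinates), a point sitting a distance $x$ from the Rindler horizon has constant proper acceleration $a = 1/x$. The rod endpoints below are arbitrary sample values.

```python
# Rindler-coordinate sketch (c = 1): the rear of a rigidly accelerating
# rod sits closer to the horizon and so needs a larger proper acceleration.

def proper_acceleration(x):
    """Proper acceleration of a Rindler observer at distance x from the
    horizon, in units with c = 1."""
    return 1.0 / x

rear = proper_acceleration(10.0)    # rear of the rod: closer to the horizon
front = proper_acceleration(11.0)   # front of the rod: farther away

print(rear > front)  # → True: the rear must accelerate harder than the front
```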
|
Physics
|
|spacetime|astronomy|curvature|estimation|error-analysis|
|
Measuring distances between the stars
|
<p>For nearby stars (i.e. anything in our own galaxy) we can ignore the curvature of space and the calculation is straightforward - if we know the lengths of two sides of a triangle and the angle between those sides then simple trigonometry gives us the length of the third side.</p> <p>For objects that are far enough away that the curvature of space becomes significant, then (a) the whole notion of distance between two objects becomes more complicated, since we could be observing these objects at very different epochs due to the finite speed of light and (b) any correction due to the curvature of space is marginal compared to the uncertainties in our estimates of cosmic distances at this range, which are only accurate to <span class="math-container">$\pm 5 \%$</span> at the very best (see <a href="https://en.wikipedia.org/wiki/Cosmic_distance_ladder" rel="nofollow noreferrer">this Wikipedia article</a>).</p> <blockquote> <p>From the calculation of the position of the stars can we infer the shape of space-time ?</p> </blockquote> <p>Not by calculating the distances between pairs of objects, but instead by estimating the distances to many, many objects (galaxies or galaxy clusters) and then comparing the number of objects at various distance ranges with the number we would expect to see if the universe were not expanding.</p>
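A hedged sketch of the flat-space calculation described above: given the distances to two stars and the angle between them as seen from Earth, the law of cosines gives the star-to-star distance. The sample values (distances in parsecs, 30 degrees of angular separation) are invented for illustration.

```python
import math

# Third side of the Earth-star-star triangle via the law of cosines:
# d^2 = d1^2 + d2^2 - 2 d1 d2 cos(theta).

def star_separation(d1, d2, angle_rad):
    """Distance between two stars at distances d1, d2 with angular
    separation angle_rad as seen from Earth."""
    return math.sqrt(d1**2 + d2**2 - 2.0 * d1 * d2 * math.cos(angle_rad))

print(star_separation(10.0, 12.0, math.radians(30.0)))  # ≈ 6.01 pc
```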
|
Physics
|
|special-relativity|photon-emission|
|
Relativistic case recoil of the target by emission of a photon
|
<p>You should simply NOT introduce relativistic mass. The only well-defined quantity is rest mass, and working in terms of it will be much easier than you think. You should learn the way I am organising this computation, using 4-vectors, because it will keep everything simple.</p> <p>Let us consider the simplest case, the centre-of-momentum frame's view of the collision between an atom and a photon. The atom has energy <span class="math-container">$E=+\sqrt{m^2c^4+p^2c^2}$</span> if it has momentum <span class="math-container">$p$</span>, i.e. it always satisfies <span class="math-container">$E^2-p^2c^2=m^2c^4$</span>, and the photon, using your notation, has energy <span class="math-container">$Q=h\nu$</span> and momentum <span class="math-container">$\frac Qc$</span>, and I have chosen for all of these quantities to be in the centre-of-momentum frame. In 4-vector form, this collision looks like this: <span class="math-container">$$\tag1 \begin{pmatrix}+\sqrt{m^2c^4+Q^2}\\+Q/c \end {pmatrix}+ \begin{pmatrix}+Q\\-Q/c \end {pmatrix}\Rightarrow \begin{pmatrix}+\sqrt{m^2c^4+Q^2}\\-Q/c \end {pmatrix}+ \begin{pmatrix}+Q\\+Q/c \end {pmatrix} $$</span> where initially the atom is moving rightwards with equal and opposite momentum to the photon, and afterwards they have simply exchanged their momenta. You can literally read off the 4-vectors and know that the left quantity is always the atom and the right quantity is the photon; each always obeys the Einstein energy-momentum-mass relation <span class="math-container">$E^2-p^2c^2=m^2c^4$</span>.</p> <p>Now we move to the laboratory frame, where initially the atom was at rest. 
In this case, we obtain <span class="math-container">$$\tag2 \begin{pmatrix}+mc^2\\0 \end {pmatrix}+ \begin{pmatrix}+Q_0\\-Q_0/c \end {pmatrix}\Rightarrow \begin{pmatrix}+\sqrt{m^2c^4+(Q_0+Q_1)^2}\\-(Q_0+Q_1)/c \end {pmatrix}+ \begin{pmatrix}+Q_1\\+Q_1/c \end {pmatrix} $$</span> You can check that, in this way of writing things, I have guaranteed that</p> <ol> <li>linear momentum is strictly conserved.</li> <li>Each particle still strictly obeys <span class="math-container">$E^2-p^2=m^2$</span> appropriate for itself.</li> </ol> <p>We just have to ensure that energy is conserved. i.e. just the top components <span class="math-container">$$ \begin{align} \tag3mc^2+Q_0&=+\sqrt{m^2c^4+(Q_0+Q_1)^2}+Q_1\\ \tag4(mc^2+Q_0-Q_1)^2&=m^2c^4+(Q_0+Q_1)^2\\ \tag5m^2c^4+2mc^2(Q_0-Q_1)+(Q_0-Q_1)^2&=m^2c^4+(Q_0+Q_1)^2\\ \tag62mc^2(Q_0-Q_1)&=(Q_0+Q_1)^2-(Q_0-Q_1)^2\\ \tag7&=2Q_0(2Q_1)=4Q_0Q_1\\ \tag8\therefore\qquad Q_1&=\frac{mc^2}{mc^2+2Q_0}Q_0\quad<Q_0 \end {align} $$</span> Now, my Equation (8) is very different from your Equations (7) and (21), and it should be clear that it is yours that is wrong. The 2 is in the numerator, not in the denominator.</p> <p>To get the recoil velocity, we just need to use <span class="math-container">$\frac vc=\frac{pc}E$</span>, and to do that, it helps if we first have <span class="math-container">$Q_0+Q_1$</span>, which, using the above, simplifies to <span class="math-container">$\frac{2mc^2+2Q_0}{mc^2+2Q_0}Q_0$</span> and so <span class="math-container">$$\tag9v=c\frac{\frac{2mc^2+2Q_0}{mc^2+2Q_0}Q_0}{\sqrt{m^2c^4+\left(\frac{2mc^2+2Q_0}{mc^2+2Q_0}Q_0\right)^2}}$$</span> This is actually a function of <span class="math-container">$\frac{Q_0}{mc^2}$</span>; expand it this way to get the usual limit. 
If you want the atom to recoil relativistically, it is obvious then that <span class="math-container">$Q_0\gg mc^2$</span> so that you should expand this in powers of <span class="math-container">$\frac{mc^2}{Q_0}$</span> and see what happens.</p> <p>The entire above analysis assumes that the recoiling atom is still in its initial state. As the photon increases in energy, it is more likely for the photon to excite the atom to a higher state. Then you cannot assume that the rest mass is the same afterwards. That is also an interesting thing to consider and compute, in roughly the same manner.</p> <hr /> <p>After many days of back and forth, it is now clear that what the OP wants is something else entirely. We now consider the problems as covered by AP French. For the situation of a photon being completely absorbed by an atom, we have <span class="math-container">$$\tag{10} \begin{pmatrix}+Q_0\\+Q_0/c \end {pmatrix}+ \begin{pmatrix}+mc^2\\0 \end {pmatrix}\Rightarrow \begin{pmatrix}+mc^2+Q_0\\+Q_0/c \end {pmatrix} $$</span> That is, the exact full SR velocity of the recoil of the atom is <span class="math-container">$v=c\frac{Q_0}{mc^2+Q_0}$</span>, and this formula is in the book (and seems to be in OP's question too). However, it is important and interesting to consider the invariant rest energy of the resulting atom, which is <span class="math-container">$\sqrt{(mc^2+Q_0)^2-Q_0^2}=\sqrt{m^2c^4+2mc^2Q_0}=mc^2+E_\text{excitation}$</span>. For a rough estimate, consider that the rest energy of the Hydrogen atom is <span class="math-container">$938.27208816\times10^6\,e$</span>V whereas the maximum energy that a Hydrogen atom can absorb and yet still stay an atom, the <em>binding energy,</em> is the famous <span class="math-container">$13.6\,e$</span>V, and you can immediately tell that <span class="math-container">$Q_0\ll mc^2$</span> for the above formula to be applicable, i.e. 
the recoil velocity of the atom is necessarily non-relativistic.</p> <p>Similarly, we can consider the emission of a photon. Now the excited atom is stationary and transitions to the ground state. The smart thing to do is to take the invariant rest energy level from earlier and deduce what the new photon energy is. Namely, <span class="math-container">$$\tag{11} \begin{pmatrix}+\sqrt{m^2c^4+2mc^2Q_0}\\0 \end {pmatrix}\Rightarrow \begin{pmatrix}+\sqrt{m^2c^4+Q_1^2}\\-Q_1/c \end {pmatrix}+ \begin{pmatrix}+Q_1\\+Q_1/c \end {pmatrix} $$</span> <span class="math-container">$$ \begin{align} \tag{12}m^2c^4+2mc^2Q_0-2Q_1\sqrt{m^2c^4+2mc^2Q_0}+Q_1^2&=m^2c^4+Q_1^2\\ \tag{13}Q_1&=\frac{Q_0}{\sqrt{1+\frac{2Q_0}{mc^2}}} \end {align} $$</span> Here, the recoil velocity is <span class="math-container">$v=c\frac{Q_1}{\sqrt{m^2c^4+Q_1^2}}=c\frac{Q_0}{mc^2+Q_0}$</span>, which is already an interesting result. Needless to say, there is yet again constraints on how energetic the photon can be, and so the recoiling atom must be in the non-relativistic regime.</p> <p>Anyway, between my Equation (8) and my Equation (13), this problem is very completely solved. There is not much more to say.</p>
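A hedged numerical check of Eq. (8) (photon backscattering off an atom at rest) and Eq. (13) (photon emission with recoil). Energies are in eV; the sample values (a Lyman-alpha-scale photon and the hydrogen rest energy) are rough illustrative inputs.

```python
# Checking Eq. (8) and Eq. (13): in both cases the outgoing photon
# energy Q1 is below Q0, the energy lost going to the atom's recoil.

def q1_scattering(q0, mc2):
    """Eq. (8): outgoing photon energy after backscattering."""
    return mc2 / (mc2 + 2.0 * q0) * q0

def q1_emission(q0, mc2):
    """Eq. (13): emitted photon energy from an excited atom at rest."""
    return q0 / (1.0 + 2.0 * q0 / mc2) ** 0.5

mc2 = 938.272e6   # hydrogen rest energy, ~938 MeV
q0 = 10.2         # photon energy, eV

print(q1_scattering(q0, mc2) < q0, q1_emission(q0, mc2) < q0)  # → True True
```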
|
Physics
|
|gyroscopes|precession|
|
When looking at gyroscopes, does precession only occur if a force is applied perpendicular to the axis of rotation? Or can it be a parallel force?
|
<p>So this is for a presentation for helicopter students. And yeah, many texts are written by authors who only have partial understanding, so you end up with different authors contradicting each other to some extent.</p> <p>My approach is to understand the underlying dynamics so that I don't need any memorization. Anytime I need it I reconstruct the reasoning. (With every repetition, reconstructing the reasoning goes faster.)</p> <p>(It may be that you have an expectation that the phenomenon is very complicated, and that for helicopter students rote learning is the only option. I assert: transparent understanding is accessible for everyone.)</p> <p>As pointed out in the answer by Jos Bergervoet, it is necessary to specify the applied force in terms of torque.</p> <p>About onset of gyroscopic precession:</p> <p>I need to establish some basics first:<br /> When a torque is applied to a spinning wheel, the initial/momentary response is to yield to that torque (a little). It is the <em>motion</em> of yielding to the initial torque (a little) that induces the subsequent precessing motion.</p> <p>Discussion of onset of gyroscopic precession, illustrated with diagrams, is in a 2012 answer by me to a question titled '<a href="https://physics.stackexchange.com/a/47645/17198">What determines the direction of precession of a gyroscope?</a>'</p> <p>In that discussion I capitalize on symmetry. 
The reasoning is simplified by a division into quadrants.</p> <p>In that 2012 answer I use the case of a gimbal-mounted gyro wheel with the spin axis parallel to the local horizontal.</p> <p>So, in order to visualize the response of the rotating helicopter blades, you remap the orientations of the explanation to that case.</p> <p>If you have any follow-up questions: contact information for me is available on my stackexchange profile page.</p> <p>About the assertion at the start of this answer, that the rotating object yields (a little):</p> <p>This property of the onset of gyroscopic precession is, for example, pointed out in the Feynman lectures (Vol 1, chapter 20, section 3, <a href="https://www.feynmanlectures.caltech.edu/I_20.html#Ch20-S3" rel="nofollow noreferrer">The gyroscope</a>). In addition, it has been experimentally verified with a tabletop experiment: Svilen Kostov and Daniel Hammer, 2010, '<a href="https://arxiv.org/abs/1007.5288" rel="nofollow noreferrer">It has to go down a little, in order to go around</a>'.</p>
|
Physics
|
|thermodynamics|energy|electric-circuits|electricity|
|
Electrical energy is $I^2Rt$, and heat dissipated is also $I^2Rt$?
|
<p>"If a source provides some energy, and all of it is dissipated as heat, is it of any use?"</p> <p>Do you not consider electric heaters (e.g. fan heaters, convectors) useful?</p> <p>They consist, essentially, of a length of wire across which we place a potential difference. Once the wire has reached its equilibrium temperature, all the electrical energy taken from the source is given out as heat from the wire.</p> <p>If, though, the wire is moving through a magnetic field, as in an electric motor, a 'back emf' will be induced in it, so that the effective potential difference across the wire is less than the source emf, and the heat given off is less than the energy taken from the source. The rest of the energy taken from the source is available as mechanical work done by the motor spindle.</p>
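A hedged sketch of the energy bookkeeping in the motor case above: with a back emf present, the power drawn from the source splits into resistive heat ($I^2R$) and mechanical power ($I \times$ back emf). The component values are invented for illustration.

```python
# Power balance for a motor winding with a back emf: source power
# splits exactly into resistive heat and mechanical work.

def power_balance(source_emf, back_emf, resistance):
    """Return (power from source, heat in wire, mechanical power)."""
    current = (source_emf - back_emf) / resistance  # effective p.d. drives I
    p_source = current * source_emf
    p_heat = current**2 * resistance
    p_mech = current * back_emf
    return p_source, p_heat, p_mech

p_source, p_heat, p_mech = power_balance(12.0, 9.0, 1.5)
print(p_source, p_heat, p_mech)  # → 24.0 6.0 18.0 (heat + work = source power)
```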
|
Physics
|
|quantum-mechanics|density-operator|
|
Square of the density operator for a mixed state
|
<p>Assume, for example, that the states <span class="math-container">$|\psi_k\rangle$</span> are mutually orthogonal. In an orthonormal basis including the <span class="math-container">$|\psi_k\rangle$</span>, <span class="math-container">$\rho$</span> is diagonal and its elements are the <span class="math-container">$p_k$</span>. To obtain <span class="math-container">$\rho^2$</span>, we simply replace matrix elements <span class="math-container">$p_k$</span> by <span class="math-container">$p^2_k$</span> . Relations <span class="math-container">$\rho^2\neq \rho$</span> and <span class="math-container">$tr(\rho^2) \le 1$</span> then follow from the fact that the <span class="math-container">$p_k$</span> are always less than 1 and <span class="math-container">$\Sigma p_k^2 \le (\Sigma p_k)^2$</span> (except in the particular case where only one of them is non-zero: the pure case).</p> <p>Source : QUANTUM MECHANICS, Volume I : Basic Concepts, Tools, and Applications, by Claude Cohen-Tannoudji, Bernard Diu, and Franck Laloë , Translated from the French by Susan Reid Hemley, Nicole Ostrowsky, and Dan Ostrowsky</p>
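A hedged numerical version of the argument above: for $\rho$ diagonal with entries $p_k$, squaring $\rho$ squares the $p_k$, so $tr(\rho^2)=\Sigma p_k^2 \le 1$, with equality only when a single $p_k$ is non-zero (the pure case). The probability lists below are arbitrary examples.

```python
import numpy as np

# tr(rho^2) for a diagonal density matrix: < 1 for a mixed state,
# exactly 1 in the pure case.

def purity(probs):
    """tr(rho^2) for rho = diag(probs)."""
    rho = np.diag(probs)
    return float(np.trace(rho @ rho))

mixed = purity([0.5, 0.3, 0.2])   # genuinely mixed state
pure = purity([1.0, 0.0, 0.0])    # pure case

print(mixed < 1.0, pure == 1.0)   # → True True
```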
|
Physics
|
|electrostatics|
|
Finding equilibrium of a system by differentiating the potential energy of the system by one of the charge
|
<p>This question and some similar ones are answered <a href="https://physics.stackexchange.com/questions/803882/equilibrium-constellations-of-classical-point-charges-in-hollow-conductors/803883#803883">here</a>. Note that your configuration can only attain <em>unstable</em> equilibrium. But that's still equilibrium of course.</p> <p>You'll see that for different regular polygons you need: <span class="math-container">$$ q_c=-q\ \sum_{n=1}^{N-1} \frac1{4\sin(n\pi/N)} $$</span> <span class="math-container">$$ N=2 \ \ \Rightarrow \ \ q_c=-q/4 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$</span> <span class="math-container">$$ N=3 \ \ \Rightarrow \ \ q_c=-q/\sqrt3 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$</span> <span class="math-container">$$ N=4 \ \ \Rightarrow \ \ q_c=-q\ (2\sqrt2 + 1)/4 $$</span> <span class="math-container">$$ N=5 \ \ \Rightarrow \ \ q_c=-q \ \sqrt{1+2/\sqrt5} $$</span></p> <p>What you do in your analysis is not testing for equilibrium against displacement of the charges, as that would require <span class="math-container">$d U/da$</span>. You calculate in fact what would happen if the central charge <span class="math-container">$q$</span> is fixed and then charge is allowed to flow (in equal amounts) to the corner points. That is another equilibrium, and gives another value for <span class="math-container">$Q$</span>, than the value for which the forces become <span class="math-container">$0$</span>.</p>
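A hedged check of the quoted formula (taking $q=1$): the central charge that balances $N$ unit charges on the corners of a regular $N$-gon is $q_c=-\sum_{n=1}^{N-1} 1/(4\sin(n\pi/N))$.

```python
import math

# Central charge balancing N unit corner charges of a regular N-gon,
# per the formula quoted in the linked answer (q = 1).

def central_charge(n_corners):
    return -sum(1.0 / (4.0 * math.sin(n * math.pi / n_corners))
                for n in range(1, n_corners))

for n in (2, 3, 4, 5):
    print(n, central_charge(n))
# N=2 gives -0.25 and N=3 gives -1/sqrt(3) ≈ -0.577, matching the table.
```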
|
Physics
|
|pressure|temperature|fusion|
|
Why does a Tokamak work on low pressure? In most other places high pressure seems to be required for fusion
|
<p>For fusion to "work", the rate at which energy is lost to the environment, mostly through radiation, has to be slower than the rate at which new energy is created by the fusion reactions. As long as that is true, the fuel will continue to stay hot.</p> <p>An added annoyance is that some of that energy being created is in a form that cannot easily be captured within the fuel, like neutrons. In the D-T reaction, the neutrons carry 75% of the energy, which makes things difficult.</p> <p>So, what is the rate of fusion? There are three important bits:</p> <ol> <li><p>the temperature - the fuel consists of ions which naturally repel each other, so the energy of the ions has to be at least this "Coulomb barrier" value. It also can't be so high that the ions just go whizzing past each other without a chance to react. The result is a curve of reaction probability vs. energy (or, more commonly, temperature), which looks roughly like a bell curve with a peak reaction rate at some characteristic value. The bulk temperature you need is not the same as that peak value, because in a fluid the particles sit at a mix of energies, so even in fuel whose bulk temperature is below the Coulomb barrier, some of the ions will have enough energy to fuse. In other words, even if your probability/energy curve peaks at the energy equivalent of 1 billion degrees, the fuel as a whole only needs to be at 50 million, which will give you enough high-energy particles in the long tail to keep things going.</p> </li> <li><p>time - if you have a mix of fuel and only some of the ions have enough energy, they are going to take some time before they randomly bump into someone going the other direction with enough energy. The fuel has to stay in that state long enough that these reactions occur, and this bit is important: if that energy radiates away before the reactions occur, you're sunk. So the real measure here is what they call the "confinement time", the time that the energy stays in the system, not the time any given particle does. 
In the past, the systems were so leaky that the particle confinement time was less than the energy time, so this was academic, but these days we have a bunch of concepts that can keep the plasma in there on the order of minutes and now the energy loss dominates.</p> </li> <li><p>density - if you pack the fuel together tighter, the ions don't have to go as far before they meet their partner. So (2) and (3) work in concert, if you increase the density you'll increase the rate of fusion events, so you can back off on the confinement time because it won't cool as much in that shorter period.</p> </li> </ol> <p>The product of these three numbers gives you the rate of fusion, and this is so important it is known as "the <strong>fusion triple product</strong>".</p> <p>So back to your question.</p> <p>The reason you can get fusion in a tokamak at very low densities/pressures is because they ramp up the temperature. The sun's core is around 15 million K, whereas devices like ITER aim for around 100 million or more. This puts it right on the peak of the D-T reaction curve. Then they use huge magnets to hold the plasma like that for long times. Now it's the <em>energy</em> confinement time that's important, not the plasma's lifetime, and to help with that they have all sorts of systems inside the reactor to remove any other atoms that bleed off energy. In contrast, the sun is filled with "ash" and all sorts of other crap that are releasing lots of x-rays and the energy is flowing out continually - that's why we can see it in the sky. So basically, tokamaks (etc) have much better (1), somewhat better (2), and thus back way off on (3).</p> <p>There is another sweet spot. Density has one other advantage... although most of the energy released in D-T is in the form of neutrons, about 25% is alpha particles. Those are charged, so in ITER the magnets catch them and they thermalize in the plasma. 
This is called "self-heating" and is vitally important.</p> <p>Thermalization is due to the alphas interacting with the other ions electromagnetically. If you increase the density, those reactions take place much faster. In ITER, that process takes a couple of meters (or more, my memory fades) but if you increase that density enough, it can happen in fractions of a millimetre. "Enough" turns out to be about 30 to 50 times the density of lead.</p> <p>This is how a hydrogen bomb works. The "secondary" is collapsed down on a thin rod of plutonium which gives off a huge burst of neutrons. Those travel outward into LiD fuel surrounding this "spark plug", where they cause a reaction that releases T. Now you have D-T at immense pressure (like 1000x lead or more) and so the alphas instantly heat the fuel around it so that fuses too, while the neutrons are making more T. The explosion travels from the spark plug core outward, burning the fuel as it goes. It's amazing that it works at all - if any of the reaction times were a little different it wouldn't.</p> <p>So the other major approach to fusion is the "inertial" approach. Here you rely on the fact that even as it's exploding, the expansion of the fuel is still slower than the fusion reaction. That's only true if your density is high enough so that the alphas thermalize really fast. So at NIF, they use lasers to crush a capsule with a tiny amount of fuel inside, and even though the reactions are blowing the resulting dust-sized fuel blob apart, (2) is still enough that (1) and (3) get you into the same ballpark.</p>
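A hedged order-of-magnitude sketch of the "fusion triple product" $n \cdot T \cdot \tau_E$ discussed above. The sample numbers are rough illustrative values, not design figures: a tokamak runs hot and dilute with seconds of energy confinement, while inertial fusion runs enormously dense for a fleeting instant.

```python
# Triple product n * T * tau_E for two very different fusion strategies.
# Inputs: density (m^-3), temperature (keV), energy confinement time (s).

def triple_product(density_m3, temperature_keV, tau_s):
    return density_m3 * temperature_keV * tau_s

tokamak = triple_product(1e20, 15.0, 3.0)     # dilute plasma, long confinement
inertial = triple_product(1e31, 10.0, 1e-10)  # crushed pellet, brief flash

# Despite wildly different strategies, both land in the same ballpark:
print(f"{tokamak:.1e} {inertial:.1e}")  # → 4.5e+21 1.0e+22
```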
|