| subject | topic | question | answer |
|---|---|---|---|
Physics
|
|quantum-field-theory|special-relativity|momentum|metric-tensor|coordinate-systems|
|
Value of $p^{2}$ for little groups
|
<p>Weinberg writes four-vectors in the order <span class="math-container">$1,2,3,0$</span> with <span class="math-container">$p^2:= \vec{p}^2-(p^0)^2$</span> (cf. p. xxv, "Notations").</p> <p>(c) <span class="math-container">$p=(0,0,\kappa,\kappa)$</span> with <span class="math-container">$\kappa \gt 0$</span> <span class="math-container">$\; \Rightarrow \;$</span> <span class="math-container">$p^0=\kappa \gt 0$</span> and <span class="math-container">$p^2=0$</span>, but <span class="math-container">$p^\prime=(\kappa,0,0,\kappa)$</span> would also be possible.</p> <p>(d) <span class="math-container">$p=(0,0,\kappa,-\kappa)$</span> with <span class="math-container">$\kappa \gt 0$</span> <span class="math-container">$\; \Rightarrow \;$</span> <span class="math-container">$p^0=-\kappa \lt 0$</span> and <span class="math-container">$p^2=0$</span>, but <span class="math-container">$p^\prime =(-\kappa,0,0,\kappa)$</span> has <span class="math-container">$p^0=\kappa \gt 0$</span>.</p> <p>(e) <span class="math-container">$p=(0,0,N,0)$</span> with <span class="math-container">$N\ne 0$</span> <span class="math-container">$\; \Rightarrow \;$</span> <span class="math-container">$p^2=N^2 \gt 0$</span></p>
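As a quick numerical sanity check of the sign convention, a minimal numpy sketch (the values κ = N = 1 are my own illustrative choices):

```python
import numpy as np

# Weinberg's ordering (p^1, p^2, p^3, p^0) with metric signature (+, +, +, -),
# so p^2 = vec(p)^2 - (p^0)^2.  kappa = N = 1 are arbitrary nonzero values.
eta = np.diag([1.0, 1.0, 1.0, -1.0])
kappa, N = 1.0, 1.0

def p_squared(p):
    return p @ eta @ p

print(p_squared(np.array([0.0, 0.0, kappa, kappa])))    # case (c): 0 (lightlike)
print(p_squared(np.array([0.0, 0.0, kappa, -kappa])))   # case (d): 0 (lightlike)
print(p_squared(np.array([0.0, 0.0, N, 0.0])))          # case (e): N^2 > 0
```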
|
Physics
|
|electromagnetic-radiation|interference|home-experiment|interferometry|non-linear-optics|
|
Mach-Zehnder Do-it-yourself
|
<p>Besides the beam splitters (BS) you need a few other components:</p> <ul> <li>Make sure the BS actually works at the wavelength of your laser.</li> <li>You need at least 2 mirrors and 2 BS in order to achieve interference:</li> </ul> <p><a href="https://i.stack.imgur.com/W0arH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W0arH.png" alt="From Edmund Optics" /></a></p> <ul> <li>The lens may or may not be necessary, depending on how big your beam is. I would definitely suggest having at least a few lenses at hand, or a beam expander, in case your laser beam is too narrow (to be able to observe the fringes "by eye" I recommend ~20-30 mm).</li> <li>The mirrors are used to make the optical paths equal. You need to make sure that the optical paths are exactly the same (this is done by setting the beams at exactly 90° to each other and measuring the propagation distance between the BS and the mirrors).</li> <li>The sample is not necessary; you can also get fringes from the laser itself.</li> <li>The first BS should preferably be 50:50 so the power is split evenly.</li> <li>The second BS is there to overlap the beams before they reach the screen. I forgot whether it has to have a specific ratio; I think I used a 10:90.</li> </ul> <blockquote> <p>in one they said they can not arrange it because the paths difference can not be tuned by hand.</p> </blockquote> <p>That's not true. With some elbow grease you can get the paths between the beam splitters fairly equal: just have a millimeter-range ruler at hand, and preferably use e.g. the bore holes of an optical table to get a good reference.</p> <p>If you have little experience aligning lasers it may take you longer, but it should be possible to make an interferometer "at home".</p> <p><a href="https://www.edmundoptics.de/knowledge-center/application-notes/optomechanics/building-a-mach-zehnder-interferometer/" rel="nofollow noreferrer">https://www.edmundoptics.de/knowledge-center/application-notes/optomechanics/building-a-mach-zehnder-interferometer/</a></p> <p>The solution Edmund Optics gives is actually rather fancy; you should be able to build the interferometer with fewer components, although I strongly recommend at least something optics-grade to make your life easier.</p>
|
Physics
|
|quantum-mechanics|operators|quantum-information|observables|quantum-measurements|
|
Questions regarding measurement of a qubit
|
<p>When you measure an observable <span class="math-container">$A$</span> in a state <span class="math-container">$|\psi\rangle$</span> the expectation value of the observable is <span class="math-container">$\langle\psi|A|\psi\rangle$</span>. The possible measurement outcomes are the eigenvalues of the observable <span class="math-container">$a_1,a_2\dots$</span> and the expectation value is <span class="math-container">$\sum_i a_ip_i$</span> where <span class="math-container">$p_i$</span> is the probability of the <span class="math-container">$i$</span>th outcome. When you do the measurement you will see the value <span class="math-container">$a_i$</span> with probability <span class="math-container">$p_i$</span>.</p> <p>You will need to calculate the eigenvalues of <span class="math-container">$\sigma_x$</span>. Then you could write down <span class="math-container">$\sigma_x$</span> in terms of <span class="math-container">$|i\rangle\langle j|$</span> operators, calculate the expectation value and work out the probabilities from that. Or you can write down <span class="math-container">$|\psi\rangle$</span> in terms of the eigenvectors of <span class="math-container">$\sigma_x$</span> and the square modulus of the amplitude of each state gives you the probability of each state and so the probability of each of the possible values.</p> <p>You can find more on how to do this in any decent textbook, such as "Quantum computation and quantum information" by Nielsen and Chuang.</p>
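As an illustration of the recipe above, a short numpy sketch (the state <span class="math-container">$|\psi\rangle=(|0\rangle+i|1\rangle)/\sqrt{2}$</span> is an arbitrary example, not taken from the question):

```python
import numpy as np

# Measurement of sigma_x in an example state |psi> = (|0> + i|1>)/sqrt(2)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([1, 1j]) / np.sqrt(2)

vals, vecs = np.linalg.eigh(sigma_x)           # eigenvalues: -1, +1
probs = np.abs(vecs.conj().T @ psi) ** 2       # p_i = |<a_i|psi>|^2
expval = (psi.conj() @ sigma_x @ psi).real     # <psi|sigma_x|psi>

print(vals)                                    # [-1.  1.]
print(probs)                                   # [0.5 0.5]
print(np.isclose(expval, np.dot(vals, probs))) # expectation = sum_i a_i p_i
```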
|
Physics
|
|fluid-dynamics|differentiation|notation|flow|vector-fields|
|
What does the notation $(k \cdot \nabla ) v$ mean?
|
<p>Let's work in 3 dimensions, in Cartesian coordinates.</p> <p><span class="math-container">$k \cdot\nabla = k_{x} \partial_{x} + k_{y} \partial_{y} + k_{z} \partial_{z}$</span>.</p> <p>Then <span class="math-container">$(k \cdot \nabla) v$</span> has three components: <span class="math-container">$k_{x} \partial_{x} v_{x} + k_{y} \partial_{y} v_{x} + k_{z} \partial_{z} v_{x}$</span>, <span class="math-container">$k_{x} \partial_{x} v_{y} + k_{y} \partial_{y} v_{y} + k_{z} \partial_{z} v_{y}$</span> and <span class="math-container">$k_{x} \partial_{x} v_{z} + k_{y} \partial_{y} v_{z} + k_{z} \partial_{z} v_{z}$</span>.</p>
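A quick finite-difference check of this componentwise formula (the field v = (xy, yz, zx) and the vector k = (1, 2, 3) are arbitrary illustrative choices):

```python
import numpy as np

# Example field v = (x*y, y*z, z*x); both v and k are illustrative choices.
def v(r):
    x, y, z = r
    return np.array([x * y, y * z, z * x])

def k_dot_grad_v(k, r, h=1e-6):
    # central difference of v along each axis, weighted by the components of k
    out = np.zeros(3)
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        out += k[j] * (v(r + e) - v(r - e)) / (2 * h)
    return out

k = np.array([1.0, 2.0, 3.0])
r = np.array([1.0, 1.0, 1.0])
print(k_dot_grad_v(k, r))   # analytic value at (1,1,1): (3, 5, 4)
```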
|
Physics
|
|homework-and-exercises|newtonian-mechanics|classical-mechanics|energy-conservation|
|
Solving a problem using energy conservation doesn't work
|
<p>Unless I've missed something major, I did get the same answer with both approaches.</p> <p>The energy method, however, is more complex than it seems, and does require finding at least one force: the tension in the string. This is because the limiting frictional force in classical mechanics is taken as directly proportional to the normal reaction between the surfaces considered.</p> <p>A component of tension changes this normal reaction, meaning you cannot find the frictional force acting on your block unless you have already found the tension. Just as a quick gut check, the angle the string makes with the horizontal is 30 degrees, a standard angle.</p>
|
Physics
|
|electromagnetism|magnetic-fields|magnetic-moment|spin-models|ferromagnetism|
|
Time for ferromagnet to align with magnetic field
|
<p>For large polycrystalline ferromagnets, the magnetization response to a high-strength external magnetic field is usually limited by eddy currents. For magnets that are very small or have very low electrical conductivity, the reversal depends on how fast the magnetic domains can change. Since the question is about reversing the magnetization of a ferromagnet, we'll only consider large applied fields greater than the <a href="https://en.wikipedia.org/wiki/Coercivity" rel="nofollow noreferrer">coercive field</a> of the material.</p> <p><strong>Eddy Current Time Constant</strong></p> <p>Most common ferromagnetic materials are electrically conductive, so eddy currents are produced within the ferromagnet when the applied external magnetic field changes. The <span class="math-container">$\tau_\mathrm{eddy} \sim L/R$</span> decay <a href="https://en.wikipedia.org/wiki/RL_circuit#Time_domain_considerations" rel="nofollow noreferrer">time constant</a> of these eddy currents limits the rate of change of the magnetization. The precise value of <span class="math-container">$\tau_\mathrm{eddy}$</span> depends on the geometry, but roughly speaking we expect:</p> <p><span class="math-container">$$L \sim \mu \frac{A}{\ell} \qquad\qquad R \sim \rho \frac{\ell}{A}$$</span></p> <p>so</p> <p><span class="math-container">$$\tau_\mathrm{eddy} \sim \frac{\mu}{\rho} \left(\frac {A}{\ell}\right)^2$$</span></p> <p>where <span class="math-container">$\mu$</span>, <span class="math-container">$\rho$</span>, <span class="math-container">$A$</span>, and <span class="math-container">$\ell$</span> are the ferromagnet's <a href="https://en.wikipedia.org/wiki/Permeability_(electromagnetism)" rel="nofollow noreferrer">magnetic permeability</a>, <a href="https://en.wikipedia.org/wiki/Electrical_resistivity_and_conductivity" rel="nofollow noreferrer">electrical resistivity</a>, cross-sectional area, and length.
Assuming the ferromagnet's size scale is <span class="math-container">$d$</span>, then <span class="math-container">$A\sim d^2$</span> and <span class="math-container">$\ell \sim d$</span>, so</p> <p><span class="math-container">$$\tau_\mathrm{eddy} \sim \frac{\mu}{\rho} d^2$$</span></p> <p>Essentially the same limit can be derived from the frequency dependence of the <a href="https://en.wikipedia.org/wiki/Skin_effect#Formula" rel="nofollow noreferrer">skin depth</a>, <span class="math-container">$\delta=\sqrt{2\rho/\omega \mu}$</span>, since the skin depth and eddy current time constant both depend on the same physics.</p> <p>The permeability of ferromagnetic materials <a href="https://en.wikipedia.org/wiki/Saturation_%28magnetic%29#Explanation" rel="nofollow noreferrer">depends on the magnetic field strength</a>, increasing up to some maximum value and then decreasing, but we can use <span class="math-container">$\mu_{\mathrm{max}}$</span> to make rough estimates of the time constant.
For example, for <span class="math-container">$d\sim 1\,\mathrm{cm}$</span>, the response times should be of order:</p> <ul> <li><span class="math-container">$\tau_\mathrm{eddy} \sim 1$</span> second for an <a href="https://en.wikipedia.org/wiki/Electrical_steel" rel="nofollow noreferrer">electrical steel</a> ferromagnet with <span class="math-container">$\mu\approx 4000 \mu_0$</span> and <span class="math-container">$\rho\approx 4.72\times 10^{-7}\,\Omega \,\mathrm{m}$</span></li> </ul> <p>Much faster response times can be achieved with other magnetic materials such as <a href="https://en.wikipedia.org/wiki/Ferrite_(magnet)" rel="nofollow noreferrer">ferrites</a> or <a href="https://en.wikipedia.org/wiki/Neodymium_magnet" rel="nofollow noreferrer">Neodymium magnets</a>:</p> <ul> <li><span class="math-container">$\tau_\mathrm{eddy} \sim 80$</span> nanoseconds for <a href="https://product.tdk.com/en/system/files?file=dam/doc/product/ferrite/ferrite/ferrite-core/catalog/ferrite_mn-zn_material_characteristics_en.pdf" rel="nofollow noreferrer">MnZn ferrite</a> with <span class="math-container">$\mu\sim 2500 \mu_0$</span> and <span class="math-container">$\rho\sim 4\,\Omega \,\mathrm{m}$</span></li> <li><span class="math-container">$\tau_\mathrm{eddy} \sim 80$</span> microseconds for a <span class="math-container">$\mathrm{Nd_2}\mathrm{Fe_{14}}\mathrm{B}$</span> <a href="https://en.wikipedia.org/wiki/Neodymium_magnet" rel="nofollow noreferrer">Neodymium magnet</a> with <span class="math-container">$\mu\sim \mu_0$</span> and <span class="math-container">$\rho\sim 1.5\times 10^{-6}\,\Omega \,\mathrm{m}$</span></li> </ul> <p><strong>Domain Reversals</strong></p> <p>Eddy currents limit the magnetization reversal speed of the macroscopic magnets I believe you are asking about, but domain realignment is more important at the nanoscale.</p> <p>A ferromagnetic material consists of many small <a href="https://en.wikipedia.org/wiki/Magnetic_domain" rel="nofollow 
noreferrer">magnetic domains</a>. Within a domain, quantum effects align all the molecular magnetic dipoles. When the material is placed in a magnetic field, the domains more aligned with the field tend to grow and anti-aligned domains shrink, and when the field is large enough the orientation of the domains rotates to align with the field.</p> <p>As Testina has noted in <a href="https://physics.stackexchange.com/a/801964/145491">their answer</a>, if a magnetic field <span class="math-container">$H$</span> is applied instantaneously, the time-scale for an ideal ferromagnet to flip its magnetization should follow the Arrhenius-Néel-Brown law (e.g. <a href="https://doi.org/10.12691/jmpc-6-2-1" rel="nofollow noreferrer" title="Activation Energy Depending on the Thickness of the Ferromagnetic Layer, A. Adanlété Adjanoh , R. Belhi, Journal of Materials Physics and Chemistry. 2018, 6(2), 36-38.">Eq. 2 of this paper</a>):</p> <p><span class="math-container">$$\tau_{ANB}=t_0\,\mathrm{exp}\left(\frac{E_a - MV\mu_0 H}{kT}\right)$$</span></p> <p>here <span class="math-container">$E_a$</span> is the activation energy aligning the spins to each other, <span class="math-container">$MV$</span> is the typical magnetic moment (Magnetization <span class="math-container">$\times$</span> Volume) of the domains, and <span class="math-container">$T$</span> is the temperature. The attempt time <span class="math-container">$t_0$</span> characteristic of a material can crudely be thought of as the average time between "attempts" for the domain to transition. 
It is bounded from below by the thermal time scale <span class="math-container">$h/kT$</span> (<span class="math-container">$\sim 2\times 10^{-13}\,\mathrm{s}$</span>, for <span class="math-container">$T\sim300\,\mathrm{K}$</span>), but is <a href="https://en.wikipedia.org/wiki/N%C3%A9el_relaxation_theory#Mean_transition_time" rel="nofollow noreferrer">more typically</a> <span class="math-container">$10^{-10}-10^{-9}\,\mathrm{s}$</span>.</p> <p>This equation tells us that if the magnetic field is strong enough, the transition time should become exponentially small (at least down to some limit). <a href="https://doi.org/10.1103/PhysRevB.102.020413" rel="nofollow noreferrer" title="Dynamical aspects of magnetization reversal in the neodymium permanent magnet by a stochastic Landau-Lifshitz-Gilbert simulation at finite temperature: Real-time dynamics and quantitative estimation of coercive force, Masamichi Nishino, Ismail Enes Uysal, Taichi Hinokihara, and Seiji Miyashita, Phys. Rev. B 102, 020413(R) – Published 30 July 2020">Simulations</a> of tiny <span class="math-container">$10\,\mathrm{nm}$</span> neodymium iron boron magnets show the relaxation time falling from <span class="math-container">$1$</span> to <span class="math-container">$10^{-10}$</span> seconds as the field increases past the <a href="https://en.wikipedia.org/wiki/Coercivity" rel="nofollow noreferrer">coercive field</a> from <span class="math-container">$3$</span> to <span class="math-container">$4$</span> Tesla, and then slowly decreasing to <span class="math-container">$\sim 10^{-12}\,\mathrm{s}$</span>. Experimentally, there is a very <a href="https://scholar.google.ca/scholar?hl=en&as_sdt=2005&sciodt=0%2C5&cites=16169542146917257643&scipsc=1&q=circularly+polarized+light&oq=%22circularly+p" rel="nofollow noreferrer">large amount of research</a> on flipping nanoscale domains, but instead of a solenoid, the magnetic fields are provided by fast laser pulses.</p>
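The order-of-magnitude estimates above can be reproduced in a few lines (a rough sketch using the material values quoted in the bullet lists, with d = 1 cm):

```python
import math

# Rough tau_eddy ~ (mu/rho) * d^2 estimates from the answer, d = 1 cm.
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
d = 1e-2                   # size scale, m

def tau_eddy(mu_r, rho):
    # mu_r: relative permeability, rho: resistivity in ohm*m
    return (mu_r * mu0 / rho) * d**2

print(tau_eddy(4000, 4.72e-7))  # electrical steel: ~1 s
print(tau_eddy(2500, 4.0))      # MnZn ferrite:     ~8e-8 s (~80 ns)
print(tau_eddy(1, 1.5e-6))      # Nd2Fe14B:         ~8e-5 s (~80 us)
```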
|
Physics
|
|dimensional-analysis|units|coulombs-law|physical-constants|
|
Relation between Coulomb's law and Fine-structure constant
|
<p>We can rearrange the definition of the fine structure constant <span class="math-container">$$\alpha = \frac{e^2}{4\pi \epsilon_0 \hbar c}$$</span> to get <span class="math-container">$$\alpha \frac{4\pi \epsilon_0 \hbar c}{e^2}=1$$</span> Then we can multiply Coulomb’s law by <span class="math-container">$1$</span> to get <span class="math-container">$$F=\frac{1}{4\pi\epsilon_0}\frac{q_1 q_2}{r^2}\alpha \frac{4\pi \epsilon_0 \hbar c}{e^2}$$</span> <span class="math-container">$$F=\alpha \hbar c \frac{n_1 n_2}{r^2}$$</span> where <span class="math-container">$n_i=q_i/e$</span> is the dimensionless number of fundamental charges in <span class="math-container">$q_i$</span>.</p> <p>So the fine structure constant can be used to write Coulomb’s law in a form where each charge appears as a dimensionless count of elementary charges; the force then depends only on those counts, the separation, and the constant combination <span class="math-container">$\alpha \hbar c$</span>.</p>
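A quick numerical check that the two forms of Coulomb's law agree (approximate CODATA constants; the separation r = 1 Å and the example charges are arbitrary choices):

```python
import math

# Approximate CODATA values; checks that both forms of Coulomb's law agree.
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J s
c = 2.99792458e8           # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

def F_coulomb(q1, q2, r):
    return q1 * q2 / (4 * math.pi * eps0 * r**2)

def F_alpha(n1, n2, r):
    return alpha * hbar * c * n1 * n2 / r**2

r = 1e-10  # example separation, ~1 Angstrom
print(alpha)                    # ~1/137.036
print(F_coulomb(e, 2 * e, r))   # standard form, N
print(F_alpha(1, 2, r))         # same value from the alpha form
```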
|
Physics
|
|newtonian-mechanics|calculus|
|
Is it true to say $F(x) = ma(x)$?
|
<p><strong>What you seek is not true.</strong> It can't be true if your object might ever revisit the same position twice with different forces on it.</p> <p>Part of the challenge is in notation. It's more obvious what's going on if we're careful with variable names. You start with <span class="math-container">$F(t)=ma(t)$</span>. This works perfectly. It's just a time-varying force. However, when you sought to switch to a function of position, you kept the variable names the same: <span class="math-container">$F(x(t))=ma(x(t))$</span>. <span class="math-container">$F$</span> was a function of time and <span class="math-container">$a$</span> was a function of time. Thus they <em>cannot</em> be a property of distance. To keep ourselves consistent and sane, we need to come up with new variables. It's common to use the "prime" symbol to capture the idea of a new something. So we can define a new function <span class="math-container">$F^\prime(x(t)) = ma^\prime(x(t))$</span>. Now that we're consistent, we can ask if this is true.</p> <p>We can show that it's false in general by trying to work our way backwards. If this were true, we should be able to create an inverse function <span class="math-container">$x^{-1}$</span> mapping from position to time such that <span class="math-container">$F(x^{-1}(t)) = ma(x^{-1}(t))$</span>. Note that this is really close to the equation you had earlier. The difference is that <span class="math-container">$x^{-1}$</span> is a function from positions to times, so <span class="math-container">$F$</span> and <span class="math-container">$a$</span> are still consistently treated as functions of time.</p> <p>And herein lies the problem - your function <span class="math-container">$x(t)$</span> needs to be invertible for the property you seek to hold. What if your object revisits the same <span class="math-container">$x$</span> position twice? 
In the time-based equation for <span class="math-container">$F(t)$</span>, the object can have different forces and accelerations at those two times. However, in the latter equation, <span class="math-container">$F^\prime(x)$</span>, we see the object must have the same force at both of those times.</p> <p>As a concrete example, consider a bowling ball swinging back and forth on a rope -- a pendulum. At some time when the bowling ball is near its maximum, you slide a concrete wall into its path at the lowest point. That ball is going to come down and be subject to sudden forces when it impacts. But it will have to experience those forces at a position it has swung through many times before.</p> <p>And note that while the equation you wrote will be true if <span class="math-container">$x(t)$</span> is invertible, the usual equations for position and velocity will be slightly different. You will not be able to simply write <span class="math-container">$v(x(t)) = a(x(t))t+v_0$</span> or <span class="math-container">$x=\frac 1 2 a(x(t))t^2 + v_0t + x_0$</span>. While <span class="math-container">$F=ma$</span> is simply the definition of what a force is, and can be treated simply as a look-up function of time or space, velocities and positions are calculated using calculus. Calculus has a <a href="https://en.wikipedia.org/wiki/Chain_rule" rel="nofollow noreferrer">chain rule</a> which you need to use when integrating or differentiating (the operations that go between positions, velocities, and accelerations). That rule affects how you go about changing variables (such as from time to space).</p>
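The invertibility point can be made concrete with a toy trajectory of my own choosing (not from the question): x(t) = sin t + ½ sin 2t revisits the same position with different accelerations, so no single-valued a(x) exists:

```python
import math

# Toy trajectory (an illustrative choice): x(t) = sin(t) + 0.5*sin(2t),
# so a(t) = x''(t) = -sin(t) - 2*sin(2t).
def x(t):
    return math.sin(t) + 0.5 * math.sin(2 * t)

def a(t):
    return -math.sin(t) - 2 * math.sin(2 * t)

def solve(target, lo, hi, n=200):
    # bisection for x(t) = target on an interval where x is monotonic
    for _ in range(n):
        mid = (lo + hi) / 2
        if (x(lo) - target) * (x(mid) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# x rises to a maximum at t = pi/3 and then falls, so x = 1 is hit twice:
t1 = solve(1.0, 0.0, math.pi / 3)
t2 = solve(1.0, math.pi / 3, math.pi)
print(x(t1), x(t2))   # both ~1.0: same position...
print(a(t1), a(t2))   # ...but clearly different accelerations
```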
|
Physics
|
|homework-and-exercises|classical-mechanics|fluid-dynamics|buoyancy|statics|
|
Adding mass to a bowl until it sinks
|
<blockquote> <p>The volume displaced by the "outer ring" of the bowl, i.e. the non-hollow part, would be <span class="math-container">$\frac{2}{3}πr^2tϕ$</span>. Therefore, the total volume of water displaced by the bowl would be the difference between these, i.e. <span class="math-container">$\frac{2}{3}π(r+t)^3ρ - \frac{2}{3}πr^2tϕ$</span>.</p> </blockquote> <p>You're over-complicating things here. When the bowl is floating, there's no fluid inside it, so it displaces the same amount of fluid whether it's solid or hollow.</p> <p>The critical point happens when the fluid just reaches the rim of the bowl. So the fluid displaced has a solid hemispherical shape. You just need to find how much this volume of fluid weighs, and then how much sand needs to be added to the bowl to give it (bowl + sand) the same weight.</p>
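If it helps, here is a minimal sketch of the final computation under my assumed reading of the geometry (a hemispherical shell of inner radius r, thickness t, bowl material density phi, fluid density rho; all numbers below are arbitrary examples, not from the problem statement):

```python
import math

# Sketch under assumed geometry: hemispherical shell, inner radius r,
# thickness t, bowl density phi, floating in fluid of density rho.
# At the critical point the displaced fluid is a solid hemisphere of the
# outer radius, so: m_sand = rho*(2/3)*pi*(r+t)^3 - m_bowl.
def critical_sand_mass(r, t, phi, rho):
    m_bowl = phi * (2 / 3) * math.pi * ((r + t) ** 3 - r ** 3)
    m_displaced = rho * (2 / 3) * math.pi * (r + t) ** 3
    return m_displaced - m_bowl

# Example numbers (arbitrary): r = 10 cm, t = 5 mm, plastic bowl in water
print(critical_sand_mass(0.10, 0.005, 1200, 1000))  # ~2 kg of sand at the brink
```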
|
Physics
|
|angular-momentum|quantum-spin|group-theory|representation-theory|
|
Projector onto Adjoint and Singlet Representations for $SU(N)$
|
<p>Indeed.</p> <p>Review for <em>su(2)</em>, in your notation, where <span class="math-container">$\vec S$</span> are the normalized 3-vector generators, so <span class="math-container">$\vec \sigma /2$</span> for the fundamental and antifundamental, so you have <span class="math-container">$$ \vec{S}_1\cdot \vec{S}_2=\tfrac{1}{2}((\vec S_1+\vec S_2)^2-\vec S_1^2- \vec S_2^2)= -3/4 + \tfrac{1}{2}(\vec S_1+\vec S_2)^2, $$</span> hence <span class="math-container">$=-3/4+0= -3/4$</span> for the singlet (spinless) and <span class="math-container">$= -3/4+1=1/4$</span> for the triplet (adjoint), as detailed in most QM texts.</p> <p>Hence, these eigenvalues plug into the projectors below to confirm they are that, <span class="math-container">$$ P_s = -\vec{S}_1\cdot \vec{S}_2 + \tfrac{1}{4}\mathbb{I},\\ P_a = \vec{S}_1\cdot \vec{S}_2 + \tfrac{3}{4}\mathbb{I}\qquad \leadsto \\ P_sP_a=0; \qquad P_s+P_a={\mathbb I}. $$</span> The two projector relations on the last line serve as checks of the above, but could determine the spin/quadratic Casimir of the adjoint, if you were oblivious of it!</p> <p>Now repeat this for <em>su(3)</em>, for normalized 8-vector generators <span class="math-container">$\vec F$</span> (<span class="math-container">$= \vec\lambda /2$</span> for the <strong>3</strong> and its conjugate), and quadratic Casimir 4/3 for the triplet and 2 for the adjoint. (Note this differs from the 3 of Wikipedia, because of the different normalization involved here, different by a factor of 3/2, the ratio of rep indices. 
As mentioned above, you don't really need it, as the Casimir for the fundamental will suffice.)</p> <p>Consequently, <span class="math-container">$$ \vec F_1\cdot \vec F_2= -4/3 ~~~~\hbox {for the singlet, and}\\ \vec F_1\cdot \vec F_2= -4/3 +2/2=-1/3 ~~~~\hbox {for the adjoint}, $$</span> hence <span class="math-container">$$ P_s = -\vec{F}_1\cdot \vec{F}_2 - \tfrac{1}{3}\mathbb{I},\\ P_a = \vec{F}_1\cdot \vec{F}_2 + \tfrac{4}{3}\mathbb{I}\qquad \leadsto \\ P_sP_a=0; \qquad P_s+P_a={\mathbb I}. $$</span> <span class="math-container">$P_a$</span> projects out the singlet, and <span class="math-container">$P_s$</span> projects out the adjoint. <span class="math-container">$P_sP_a$</span> is a quadratic polynomial in the <span class="math-container">$F\cdot F$</span> with roots at the right places, -1/3 and -4/3.</p> <p>You may extend this for the <em>su(N)</em> algebra, given the normalizations, in, e.g., <a href="http://scipp.ucsc.edu/%7Ehaber/ph218/sunid17.pdf" rel="nofollow noreferrer">this</a>, <span class="math-container">$$ P_s = -\vec{T}_1\cdot \vec{T}_2 - \tfrac{(N-1)^2-2}{2N}\mathbb{I},\\ P_a = \vec{T}_1\cdot \vec{T}_2 + \tfrac{N^2-1}{2N}\mathbb{I}\qquad \leadsto \\ P_sP_a=0; \qquad P_s+P_a={\mathbb I}. $$</span></p> <hr /> <p><em><strong>Edit in response to last comment (geeky):</strong></em> <sub> Bypassing the charge conjugation stunt, here is the <em>su(3)</em> case displaying the P&S conventions you use. Start with <span class="math-container">$3\otimes 3= 6\oplus \bar 3$</span>, which is more similar to the <em>su(2)</em> one! Using "=" signs to indicate obvious "up to a total-vs-uncoupled 9×9 matrix basis changes", you have <span class="math-container">$ 2\vec F_1\cdot \otimes \vec F_2 = \tfrac{4}{3} {\mathbb I}_{3} +\tfrac{10}{3} {\mathbb I}_{6} - 2\cdot \tfrac{4}{3} {\mathbb I}_{9} ~~\implies ~~ \vec F_1\cdot \otimes \vec F_2 =-\tfrac{2}{3} P_{\bar 3}+ \tfrac{1}{3} P_6. 
$</span> With the condition <span class="math-container">$P_{\bar 3}+P_6={\mathbb I}_9$</span>, it yields <span class="math-container">$P_{\bar 3}=(- \vec F_1\cdot \otimes \vec F_2+1/3); ~~~P_6=( \vec F_1\cdot \otimes \vec F_2 +2/3)$</span>. </sub></p> <p><sub>Now on to <span class="math-container">$3\otimes \bar 3= 8\oplus 1$</span>. As you wish, call the 9×9 matrix <span class="math-container">$ \vec F_1\cdot \otimes (-\vec F_2)^* \equiv G$</span>. As above, <span class="math-container">$2G= 0 {\mathbb I}_{1} + 3 {\mathbb I}_{8} - 2\cdot \tfrac{4}{3} {\mathbb I}_{9} ~~\implies ~~ G=-\tfrac{4}{3} P_1+ \tfrac{1}{6} P_8.$</span> As before, it implies <span class="math-container">$P_1=\tfrac{2}{3}(-G +1/6); ~~~P_8=\tfrac{2}{3}(G +4/3)$</span>. So your comment was trenchant and my glib scaling in the answer was off! See how <span class="math-container">$P_1 +P_8={\mathbb I}_9$</span> is satisfied now! </sub></p> <sub> This adjusts/morphs mutatis mutandis to the <span class="math-container">$su(N)$</span> case you confirmed, <span class="math-container">$P_1=\tfrac{2}{N}(-G +1/(2N)); ~~~~P_{N^2-1}=\tfrac{2}{N}(G +(N^2-1)/(2N))$</span>. </sub>
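The su(2) case is easy to verify numerically (a minimal numpy sketch on the 4-dimensional product space; the Kronecker-product construction is my own illustrative choice):

```python
import numpy as np

# su(2) check: S_i = sigma_i/2 acting on each factor of the 2x2 product space.
s = [np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]
S1 = [np.kron(si / 2, np.eye(2)) for si in s]
S2 = [np.kron(np.eye(2), si / 2) for si in s]

S1S2 = sum(a @ b for a, b in zip(S1, S2))   # S_1 . S_2 on the product space
I4 = np.eye(4)

P_s = -S1S2 + I4 / 4       # singlet projector
P_a = S1S2 + 3 * I4 / 4    # triplet (adjoint) projector

print(np.allclose(P_s @ P_a, 0))      # orthogonal
print(np.allclose(P_s + P_a, I4))     # complete
print(np.trace(P_s).real, np.trace(P_a).real)  # 1.0 3.0: dims of singlet/triplet
```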
|
Physics
|
|capacitance|
|
Charging Capacitor with one terminal grounded
|
<p>At the first instant when the capacitor is hooked up, there will be a brief transient during which the plates are charged, after which the system reaches equilibrium and there will be no flow of current; only a set of capacitor plates with a potential difference as per the indicated battery. Connecting first to ground makes no difference in this scenario. The bottom line is that the capacitor will store the amount of charge on its plates according to the potential difference it is subjected to, i.e. <span class="math-container">$C=Q/V$</span>. The only way to avoid any current flow whatsoever, even at that first instant, would be to connect a capacitor that has been <em>precisely</em> pre-charged to the amount corresponding to the battery voltage, and whose negative plate is at a potential identical to that of the negative terminal, whatever it may be. If this were <em>exactly</em> the case then there would never be any electric field in the wires, since the electric field depends on a potential difference as per <span class="math-container">$\vec E=-\nabla\phi$</span>. This is why the author specifies the connecting of an <em>uncharged</em> capacitor, so as to allow for a problem that is completely specified by the given quantities, viz. capacitance C and battery voltage V.</p>
|
Physics
|
|quantum-field-theory|energy|hamiltonian|antimatter|klein-gordon-equation|
|
How can the Klein-Gordon equation have negative-energy solution if its Hamiltonian is positive-definite?
|
<p>If you interpret the Klein-Gordon equation as describing a field theory (which is the modern way to look at it), then yes, the energy of the field is positive definite.</p> <p>The problem is if you try to interpret the Klein-Gordon equation as a relativistic generalization of the Schrodinger equation, describing the motion of a single quantum particle. Then the energy of the wavefunction is related to its frequency by <span class="math-container">$E=\hbar \omega$</span>. However, there are positive and negative frequency solutions to the Klein-Gordon equation, <span class="math-container">$\sim \exp\left[\pm i \left(\sqrt{\vec{k}^2 + m^2}\right) t\right]$</span>. The problem is resolved by understanding that the Klein-Gordon equation should not be interpreted as a "relativistic Schrodinger equation for one particle", but rather as the classical equations of motion for a field we are going to quantize (or, as the Heisenberg equations of motion for a quantum field).</p>
|
Physics
|
|atomic-physics|bose-einstein-condensate|cold-atoms|
|
Why are Alkali atoms used in many Cold Atom experiments?
|
<p>Alkali atoms have several benefits!</p> <ul> <li>The one outer electron makes them "hydrogen-like". Therefore, it is "easy" to calculate the energy levels, which makes predictions and calculations using these elements far easier!</li> <li>Since the energy structure is very simple, you can find a closed cooling cycle. E.g. cooling rubidium requires the use of only one repumping laser beam to close the cooling cycle.</li> <li>Why not just use hydrogen? Alkali atoms feature transition frequencies that are easily accessible (laser technology is very advanced for visible to near-infrared light). Hydrogen would require UV laser light, which is hard to produce, and air is not transparent to this light.</li> <li>Feshbach resonances are certainly nice to have, but without the points I mentioned above you could not even cool the atoms, rendering the study of Feshbach resonances impossible.</li> </ul>
|
Physics
|
|homework-and-exercises|newtonian-mechanics|newtonian-gravity|equilibrium|
|
Gravitational Null Points in a system of point masses
|
<p>Starting with the position vector to the mass</p> <p><span class="math-container">$$\mathbf P= \left[ \begin {array}{c} R\cos \left( \alpha \right) -r \\ R\sin \left( \alpha \right) \end {array} \right] $$</span></p> <p>with <span class="math-container">$~G\,M~=1$</span> you obtain</p> <p><span class="math-container">$$\mathbf g=\frac{1}{\mathbf P\cdot\mathbf P}\,\hat{\mathbf{P}}= \left[ \begin {array}{c} {\frac {R\cos \left( \alpha \right) -r}{ \left( \left( R\cos \left( \alpha \right) -r \right) ^{2}+{R}^{2} \left( \sin \left( \alpha \right) \right) ^{2} \right) ^{3/2}}} \\ {\frac {R\sin \left( \alpha \right) }{ \left( \left( R\cos \left( \alpha \right) -r \right) ^{2}+{R}^{2} \left( \sin \left( \alpha \right) \right) ^{2} \right) ^{3/2}}}\end {array} \right]= \left[ \begin {array}{c} {\frac {R\cos \left( \alpha \right) -r}{ \left( {R}^{2}-2\,R\cos \left( \alpha \right) r+{r}^{2} \right) ^{3/2 }}}\\ {\frac {R\sin \left( \alpha \right) }{ \left( {R}^{2}-2\,R\cos \left( \alpha \right) r+{r}^{2} \right) ^{3/2}}} \end {array} \right] $$</span></p> <p>The integral of <span class="math-container">$~ g_x~$</span> is <span class="math-container">$$\int g_x\,d\alpha=\rm Elliptic(R,r,\alpha)$$</span></p> <p>From here I plotted the result for <span class="math-container">$~ R=1~,r=0.7~$</span> and <span class="math-container">$~R=1~,r=0.3~$</span> over <span class="math-container">$~\alpha=0..2\pi$</span>.</p> <p><a href="https://i.stack.imgur.com/OYcD9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OYcD9.png" alt="enter image description here" /></a></p> <p>You can see that the zero crossing is independent of the geometry parameters <span class="math-container">$~R~,r~$</span>.</p> <hr /> <p>I don't think that your results are correct.</p> <p>To obtain the gravitational null point I use this equation:</p> <p><span class="math-container">$$\mathbf F=\sum_{i=1}^{n} \frac{\mathbf p_i-\mathbf s}{|\mathbf p_i-\mathbf s|^3}=0$$</span></p> <p>where <span 
class="math-container">$~n~$</span> is the number of points on the circle (radius a), <span class="math-container">$~\mathbf p_i~$</span> are the <span class="math-container">$~x~,y~$</span> coordinates of the points, and <span class="math-container">$~\mathbf s=[s_x~,s_y]^T~$</span> is the solution that we are looking for.</p> <p>Because of the symmetry, <span class="math-container">$~s_y=0~$</span>; from here <span class="math-container">$$F_x(s_y=0)=F_x(s_x)=0\tag 1$$</span></p> <p>The solution <span class="math-container">$~s_x~$</span> of equation (1) must be real and <span class="math-container">$-a\le s_x\le a$</span>.</p> <p><strong>Results</strong></p> <p><span class="math-container">$$n~=3~,\phi_0=\frac{\pi}{3}$$</span> <span class="math-container">$$s_x=0.2847 a$$</span></p> <p><span class="math-container">$$n~=4~,\phi_0=\frac{\pi}{4}$$</span></p> <p><span class="math-container">$$s_x=\pm 0.5469 a$$</span></p> <p><span class="math-container">$$n~=6~,\phi_0=\frac{\pi}{6}$$</span></p> <p><span class="math-container">$$s_x=\pm 0.77278 a$$</span></p> <p><span class="math-container">$$n~=12~,\phi_0=\frac{\pi}{12}$$</span></p> <p><span class="math-container">$$s_x=\pm 0.9330 a$$</span></p> <p>Obviously, as <span class="math-container">$~n\to \infty ~$</span>, the solutions approach <span class="math-container">$~s_x=\pm a~$</span>.</p> <hr /> <p><a href="https://i.stack.imgur.com/jcLSm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jcLSm.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/A5BYM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A5BYM.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/EPKg3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EPKg3.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/ET2cJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ET2cJ.png" alt="enter image 
description here" /></a></p>
|
Physics
|
|quantum-mechanics|operators|hilbert-space|quantum-information|mathematical-physics|
|
Outer product as an operator in an infinite dimensional Hilbert space
|
<p>For any <span class="math-container">$\psi\in H$</span>, you can define an operator <span class="math-container">$P_\psi$</span> by <span class="math-container">$P_\psi \phi:=\langle \psi, \phi\rangle_H \psi$</span> for all <span class="math-container">$\phi\in H$</span>. It is easily verified that <span class="math-container">$P_\psi$</span> is a linear bounded operator, which in quantum mechanics is often written as <span class="math-container">$P_\psi=|\psi\rangle\langle \psi|$</span>.</p> <p>You don't have to argue with tensor products and so on a priori.</p> <hr /> <p>However, if you insist, then you can consider the space of all Hilbert-Schmidt operators on <span class="math-container">$H$</span>, denoted by <span class="math-container">$\mathcal B_2(H)$</span>. A linear bounded operator <span class="math-container">$A$</span> is in this Hilbert-Schmidt space if <span class="math-container">$\sum\limits_{n\in \mathbb N} \|Ae_n\|_H^2 <\infty $</span>, where <span class="math-container">$(e_n)_{n\in \mathbb N}\subset H$</span> is an orthonormal basis of <span class="math-container">$H$</span>.</p> <p>It can also be shown that <span class="math-container">$\mathcal B_2(H)$</span> is a Hilbert space with inner product defined by <span class="math-container">$$\langle A_1,A_2\rangle_{\mathcal B_2(H)}:=\mathrm{Tr}_H\,A_1^*A_2 \quad . $$</span></p> <p>Of course, you first have to show that all of these expressions are well-defined and so on.</p> <p>Finally, it can be shown that <span class="math-container">$H\otimes H^\prime$</span> is naturally isomorphic to <span class="math-container">$\mathcal B_2(H)$</span>, and with that, it indeed makes sense to view <span class="math-container">$P_\psi$</span> as <span class="math-container">$|\psi\rangle\otimes\langle \psi|$</span>.</p>
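<p>A finite-dimensional sketch of this (the dimension and vectors below are arbitrary) confirms that the map <span class="math-container">$\phi \mapsto \langle\psi,\phi\rangle\psi$</span> coincides with the rank-one matrix <span class="math-container">$|\psi\rangle\langle\psi|$</span> built as an outer product:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5  # arbitrary finite truncation of H
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)

# P_psi phi := <psi, phi> psi (inner product antilinear in the first slot)
def P_psi(v):
    return np.vdot(psi, v) * psi

# The same operator as a matrix: |psi><psi|
P_matrix = np.outer(psi, psi.conj())

assert np.allclose(P_matrix @ phi, P_psi(phi))
assert np.linalg.matrix_rank(P_matrix) == 1  # rank one, as expected
```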
|
Physics
|
|kinematics|
|
Kinematics: Given the velocity as a function of the position, is it possible to derive the velocity as a function of time?
|
<p>The equation <span class="math-container">$v = \sqrt{2as}$</span> can be rewritten as <span class="math-container">$$ \frac{ds}{dt} = \sqrt{2as}. $$</span> This is a separable ODE, which means that we have <span class="math-container">$$ \int \frac{ds}{\sqrt{s}} = \sqrt{2a} \int dt \quad \Rightarrow \quad 2 \sqrt{s} = \sqrt{2 a} t + C $$</span> where <span class="math-container">$C$</span> is an arbitrary constant. Demanding that <span class="math-container">$s = 0$</span> when <span class="math-container">$t = 0$</span> implies that <span class="math-container">$C = 0$</span>; and solving this equation for <span class="math-container">$s$</span> then yields <span class="math-container">$s = \frac12 a t^2$</span>, as expected.</p>
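<p>A quick numerical sanity check (plain Python, the constant is illustrative): the solution <span class="math-container">$s = \frac12 a t^2$</span> indeed satisfies <span class="math-container">$ds/dt = \sqrt{2as}$</span> at every sampled time.</p>

```python
import math

a = 9.8  # illustrative constant acceleration

def s(t):
    return 0.5 * a * t * t

# central-difference derivative ds/dt compared with sqrt(2 a s)
h = 1e-6
for t in (0.5, 1.0, 2.0, 3.7):
    ds_dt = (s(t + h) - s(t - h)) / (2 * h)
    assert abs(ds_dt - math.sqrt(2 * a * s(t))) < 1e-5
```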
|
Physics
|
|homework-and-exercises|newtonian-mechanics|harmonic-oscillator|spring|linear-systems|
|
Why is time period same even if you give an impulse perpendicular to the spring?
|
<p>I am pretty sure you can treat this as a superposition of two SHMs, one in the x direction and one in the y direction. You can also derive the equation of the resulting ellipse from this. Calculating the time period is then trivial.</p>
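<p>A minimal numerical sketch, assuming the system behaves as an isotropic 2D harmonic oscillator (restoring force <span class="math-container">$-k\vec{r}$</span>; all values illustrative): a perpendicular impulse turns the straight-line oscillation into an ellipse but leaves the period unchanged.</p>

```python
import math

k, m = 4.0, 1.0                      # illustrative spring constant and mass
T_expected = 2 * math.pi / math.sqrt(k / m)

def period(vy0, dt=1e-4):
    """Leapfrog-integrate m r'' = -k r and time two successive
    upward zero-crossings of x to estimate the period."""
    x, y, vx, vy = 1.0, 0.0, 0.0, vy0   # start stretched along x
    crossings, t = [], 0.0
    while len(crossings) < 2 and t < 10 * T_expected:
        vx += 0.5 * dt * (-k * x / m); vy += 0.5 * dt * (-k * y / m)
        x_old = x
        x += dt * vx; y += dt * vy
        vx += 0.5 * dt * (-k * x / m); vy += 0.5 * dt * (-k * y / m)
        t += dt
        if x_old < 0 <= x and vx > 0:
            crossings.append(t)
    return crossings[1] - crossings[0]

T_plain = period(0.0)     # no impulse: 1D oscillation along x
T_kicked = period(0.7)    # perpendicular impulse: elliptical orbit
assert abs(T_plain - T_expected) < 1e-2
assert abs(T_kicked - T_expected) < 1e-2
```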
|
Physics
|
|homework-and-exercises|rotational-dynamics|reference-frames|moment-of-inertia|
|
Moment of Inertia of Cylinder through horizontal axis
|
<p>The perpendicular axis theorem is only applicable to laminar (planar) bodies, where the moment of inertia about an axis perpendicular to the plane equals the sum of the moments of inertia about two mutually perpendicular axes in the plane, all three axes being concurrent.</p> <p>To find the moment of inertia about the horizontal axis, you could proceed by integrating over disc elements, or by using the perpendicular axis theorem, the parallel axis theorem, and Routh's rule together to obtain the value.</p>
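<p>A numerical version of the disc-element integration (dimensions are illustrative): each disc of mass <span class="math-container">$dm$</span> at distance <span class="math-container">$z$</span> from the centre contributes <span class="math-container">$dm\,(r^2/4 + z^2)$</span> (its own transverse moment plus the parallel-axis shift), which sums to <span class="math-container">$I = mr^2/4 + mL^2/12$</span>.</p>

```python
# Moment of inertia of a solid cylinder about a transverse axis
# through its centre of mass, by summing disc elements.
m, r, L = 3.0, 0.5, 2.0      # illustrative mass, radius, length
N = 100_000
dz, dm = L / N, m / N

I = 0.0
for i in range(N):
    z = -L / 2 + (i + 0.5) * dz        # midpoint of the i-th disc
    I += dm * (r * r / 4 + z * z)      # disc term + parallel-axis term

I_exact = m * r * r / 4 + m * L * L / 12
assert abs(I - I_exact) < 1e-6
```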
|
Physics
|
|capacitance|
|
Charging Capacitor with grounded terminal
|
<p>It appears you have a couple of misconceptions.</p> <ol> <li><p>You say the negative battery terminal and plate are at the same potential, yet you've drawn resistance in the soil between them which means they are not, in fact, at the same potential.</p> </li> <li><p>You seem to think there can't be current flow if two points <em><strong>are</strong></em> at the same potential, yet you seem to accept that current will flow between the positive terminal and plate which are at the same potential (if we assume a zero wire resistance).</p> </li> </ol> <p>In circuits we generally assume zero resistance wires (wires where there is no potential difference along them) with current flowing through them (though all real conductors, with the exception of superconductors, have some resistance).</p> <p>That said, in the circuit you've drawn it is certainly possible to charge the capacitor using a return path through the Earth. The rate of charging will depend on the total resistance between the negative battery terminal and capacitor plate. That, in turn, will depend on soil resistivity, the type of electrodes (the conductors inserted into the soil), the distance between electrodes, and the length of the electrode in contact with the soil.</p> <p>Hope this helps.</p>
|
Physics
|
|electromagnetism|special-relativity|charge|si-units|
|
Reason Why 1 Coulomb was redefined in SI unit system?
|
<blockquote> <p>Why definition of SI unit of charge was moved from that one related to current strength ?</p> </blockquote> <p>Because the new standard is more accurate and stable than the previous standard.</p> <blockquote> <p>Can it be because time is not Lorentz invariant under inertial reference frame, … Hence (1) definition is faulty because charge is absolute quantity and cannot depend on a reference frame.</p> </blockquote> <p>No. There was nothing faulty about the previous standard. This one is just more accurate and stable.</p>
|
Physics
|
|quantum-field-theory|special-relativity|commutator|causality|spacetime-dimensions|
|
How to show causality for a Klein-Gordon field in 1+1 dimensions using field commutators?
|
<p>Consider the commutator in <span class="math-container">$n+1$</span> space-time dimensions, <span class="math-container">$$[\phi(x),\phi(0)]= \int d\mu(p) \left( e^{-ip\cdot x}-e^{+ip\cdot x}\right) \tag{1} \label{1}$$</span> with <span class="math-container">$$d\mu(p)= \frac{d^n p}{(2\pi)^n \, 2 \, \omega(\vec{p})}, \qquad \omega(\vec{p})=\sqrt{\vec{p}^2+m^2}, \qquad p\cdot x= \omega(\vec{p}) \, x^0-\vec{p} \cdot \vec{x}.\tag{2} \label{2} $$</span> As <span class="math-container">$$\int d\mu(p)\, e^{\pm ip\cdot x}= \int \frac{d^{n+1} p}{(2\pi)^n}\, \theta(p^0)\, \delta(p^2-m^2)\, e^{\pm ip\cdot x} \tag{3} \label{3},$$</span> we see that \eqref{1} is invariant under proper orthochronous Lorentz transformations. Note that in \eqref{3}, <span class="math-container">$p^0$</span> has now become an integration variable with <span class="math-container">$p^2=(p^0)^2-\vec{p}^2$</span> and <span class="math-container">$p\cdot x = p^0 x^0-\vec{p}\cdot \vec{x}$</span>!</p> <p>For spacelike <span class="math-container">$x$</span> (i.e. <span class="math-container">$x^2 = (x^0)^2-\vec{x}^2\lt 0$</span>) one can always find a proper orthochronous Lorentz transformation <span class="math-container">$x^\prime = L x$</span> with <span class="math-container">$x^{\prime \, 0}=0$</span>. Computing \eqref{1} in this reference frame, we obtain <span class="math-container">$$\int \frac{d^n p}{(2\pi)^n \, 2 \, \omega(\vec{p})}\left(e^{i \vec{p} \cdot \vec{x}} -e^{-i \vec{p} \cdot \vec{x}}\right) =0, \tag{4} \label{4}$$</span> where the transformation of variables <span class="math-container">$\vec{p} \to -\vec{p}$</span> was performed on the second term in the last step.
This shows that indeed <span class="math-container">$$[\phi(x), \phi(0)]= 0 \quad \text{for} \quad x^2\lt 0 \tag{5} \label{5}, $$</span> <em>independent</em> of the space dimension <span class="math-container">$n$</span>.</p> <p>Remark: The general relation <span class="math-container">$$[\phi(x), \phi(y)] = 0 \quad \text{for} \quad (x-y)^2 \lt 0 \tag{6} \label{6}$$</span> is an immediate consequence of translation invariance.</p>
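<p>The cancellation in equation (4) is easy to see numerically in <span class="math-container">$n=1$</span> spatial dimension: the integrand is odd under <span class="math-container">$\vec{p}\to-\vec{p}$</span>, so a symmetric discretization sums to (numerically) zero. Mass, separation, and grid below are illustrative.</p>

```python
import numpy as np

m, x = 1.0, 0.7                        # illustrative mass and spatial separation
p = np.linspace(-50.0, 50.0, 200_001)  # momentum grid, symmetric about p = 0
omega = np.sqrt(p**2 + m**2)

# integrand of Eq. (4) for n = 1: (e^{ipx} - e^{-ipx}) / (2*pi * 2*omega)
integrand = (np.exp(1j * p * x) - np.exp(-1j * p * x)) / (2 * np.pi * 2 * omega)
commutator = np.sum(integrand) * (p[1] - p[0])   # simple Riemann sum

assert abs(commutator) < 1e-10   # vanishes by the p -> -p antisymmetry
```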
|
Physics
|
|thermodynamics|
|
Various ways of evaluating the polytropic index $n$
|
<p>One is limited to <span class="math-container">$n=\ln(p_1/p_2)/\ln(V_2/V_1)$</span> unless an equation of state is available. For an ideal gas, <span class="math-container">$V\propto T/P$</span>, so (1) is obtained through algebra. For that class of matter, <span class="math-container">$U\propto T$</span> (assuming that the reference zero for <span class="math-container">$U$</span> corresponds to <span class="math-container">$T=0$</span>), so one could replace each <span class="math-container">$T$</span> with <span class="math-container">$U$</span>.</p>
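<p>As a sanity check of the two-state formula (state values are illustrative): generating two states on an adiabat of an ideal diatomic gas, <span class="math-container">$pV^\gamma=\text{const}$</span> with <span class="math-container">$\gamma=1.4$</span>, and applying <span class="math-container">$n=\ln(p_1/p_2)/\ln(V_2/V_1)$</span> recovers <span class="math-container">$n=\gamma$</span>:</p>

```python
import math

gamma = 1.4                      # adiabatic index of an ideal diatomic gas
V1, p1 = 1.0, 101_325.0          # illustrative initial state (m^3, Pa)
V2 = 2.0
p2 = p1 * (V1 / V2) ** gamma     # second state on the same adiabat

n = math.log(p1 / p2) / math.log(V2 / V1)
assert abs(n - gamma) < 1e-12
```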
|
Physics
|
|statistical-mechanics|hamiltonian|many-body|ferromagnetism|spin-chains|
|
Is there a name for a Heisenberg-like model, but instead of the ZZ operator, we have one that favor only spin-up-spin-up configurations?
|
<p>As pointed out by @LPZ, the term you mention is <span class="math-container">\begin{equation} J_{\mathrm{new}}(4\sigma^z_i\sigma^z_{j}+2\sigma^z_i+2\sigma^z_j+1)/4, \end{equation}</span> and therefore, if you plug this in to where you have <span class="math-container">$\sigma^z_i\sigma^z_j$</span> now and calculate the result, it ends up being just another XXZ model with <span class="math-container">$J_z^{\prime}=J_{\mathrm{new}}$</span> and <span class="math-container">$h^{\prime}=h+J_{\mathrm{new}}$</span>, with a constant <span class="math-container">$NJ_{\mathrm{new}}/8$</span> shift in the baseline energy. So, that slightly different version ultimately is just the same as the usual XXZ Hamiltonian.</p> <p>XXZ chains are known to have Bethe Ansatz solutions (if <span class="math-container">$h=0$</span>), which is a beast on its own. There are many papers on it so I'd recommend searching for them.</p>
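<p>Assuming the operators here are spin-<span class="math-container">$\tfrac12$</span> <span class="math-container">$S^z$</span> operators with eigenvalues <span class="math-container">$\pm\tfrac12$</span> (a convention, not spelled out above), the quoted combination is exactly the projector onto the up-up configuration, <span class="math-container">$(S^z_i+\tfrac12)(S^z_j+\tfrac12)$</span>. A two-site numpy check:</p>

```python
import numpy as np

Sz = np.diag([0.5, -0.5])     # spin-1/2 S^z, eigenvalues +/- 1/2
I2 = np.eye(2)

SzSz = np.kron(Sz, Sz)
Sz_i = np.kron(Sz, I2)
Sz_j = np.kron(I2, Sz)

combo = (4 * SzSz + 2 * Sz_i + 2 * Sz_j + np.eye(4)) / 4

# projector onto |up, up>, the first vector of the product basis
P_upup = np.diag([1.0, 0.0, 0.0, 0.0])
assert np.allclose(combo, P_upup)
```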
|
Physics
|
|newtonian-mechanics|torque|earth|tidal-effect|moon|
|
If the tidal bulge on the earth speeds the moon up, how does the moon move to a higher orbit?
|
<p>To understand this, let’s start with a simpler example of orbital mechanics. Suppose we have a rocket in a circular orbit that wishes to transfer to a higher circular orbit. This proceeds in the following steps</p> <ol> <li><p>Burn the engines to accelerate forward. This increases the velocity to be greater than the circular orbital velocity. Thus the rocket is now in an elliptical orbit.</p> </li> <li><p>Follow the elliptical orbit halfway around, to its highest point. Kinetic energy has converted to potential energy and the rocket is higher than the previous orbit and traveling slower than the circular orbit at this higher altitude.</p> </li> <li><p>Burn the engines to accelerate forward again. This increases the velocity to be equal to the circular orbital at the new altitude. This new circular orbital velocity is smaller than the velocity for the lower circular orbit.</p> </li> </ol> <p>Note, the rocket accelerates forward both times, and yet ends up traveling slower at the higher altitude. The KE gained by the burns plus some of the original KE is changed to potential energy by gravity.</p> <p>Now, with the moon, the tidal bulge leads the moon. So the moon is gravitationally attracted to a point slightly ahead of the center of the earth. This attraction can be decomposed into a component toward the center and a component forward.</p> <p>This forward component acts like the rocket burn. It increases the KE, and as the moon moves up the KE is converted to potential energy. The net result being, as before, a propulsive force acting only forward, but a transition to a higher and slower orbit.</p>
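<p>The "thrust forward, end up slower" bookkeeping can be sketched for circular orbits, where <span class="math-container">$v=\sqrt{GM/r}$</span>: the higher orbit is slower even though both burns are prograde, while the total orbital energy is higher. (Numbers below use the Earth&ndash;Moon system for illustration.)</p>

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg

def v_circ(r):
    return math.sqrt(G * M_earth / r)

def specific_energy(r):            # total orbital energy per unit mass
    return -G * M_earth / (2 * r)

r1 = 3.844e8           # roughly the current Earth-Moon distance, m
r2 = 1.05 * r1         # a slightly higher orbit

assert v_circ(r2) < v_circ(r1)                    # higher orbit is slower...
assert specific_energy(r2) > specific_energy(r1)  # ...but more energetic
```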
|
Physics
|
|optics|electromagnetic-radiation|visible-light|superposition|vision|
|
Why does white light appear white?
|
<p>The retina of the eye contains rods and cones, which are the actual light-sensitive components. Cones see colour and rods don't so I'll only talk about cones here. There are three types of cone: L, M, and S. Each can detect light over a range of wavelengths/frequencies/colours, with peak sensitivity at one colour and lower sensitivity at other colours. There are graphs in the wikipedia article at <a href="https://en.wikipedia.org/wiki/Cone_cell" rel="noreferrer">https://en.wikipedia.org/wiki/Cone_cell</a>.</p> <p>Every colour we see triggers a particular combination of levels for the three types of cone. If only L is triggered you will see a deep red. If S is triggered most, with only a little L and M you will see a deep blue. When we look at a "white" object in good light (e.g. daylight), a particular combination of L, M, and S is triggered, and we call that sensation "white".</p> <p>The cones are closely packed in the part of the retina we use to see most clearly. Our brain does not separate the signal from adjacent cones in a way that lets us interpret it as coloured spots close together. Instead it interprets the signals as the images we are used to seeing when we look at things, in order to help us live our lives.</p>
|
Physics
|
|homework-and-exercises|fluid-dynamics|flow|
|
Query regarding approach to solve a fluid kinematics problem
|
<p>The key word here is "estimate". It is not hard to see that the acceleration of a water molecule along the center line is not constant as it passes through the nozzle, so there is not one particular preferred value of the acceleration.</p> <p>However, it is reasonable to assume (and can be justified more rigorously if needed) that the acceleration magnitude will not change by more than one order of magnitude over the passage through the nozzle. This means that we can calculate several different numbers to use as an "estimate" of the acceleration, any one of which could be reasonable:</p> <ol> <li>The acceleration just after a particle enters the nozzle</li> <li>The acceleration when a particle is halfway through the nozzle</li> <li>The acceleration just before a particle exits the nozzle</li> <li>The acceleration averaged over the distance of the path through the nozzle</li> <li>The acceleration averaged over the time that the particle is in the nozzle</li> </ol> <p>I <em>think</em> your method corresponds to method #3 (I will confess that I don't quite understand it, particularly how you found a number for <span class="math-container">$dA/dt$</span>.) Meanwhile the "official" solution corresponds to method #4.</p> <p>Any one of these numbers is justifiable as an "estimate". That said, on an intuitive level I would probably prefer #2 or #5. Assuming that the acceleration varies monotonically through the nozzle, #1 & #3 will end up being the highest or lowest value of the acceleration, rather than somewhere in the middle of the range. I would also prefer #5 over #4 because it would have a nice interpretation: if <span class="math-container">$\bar{a}$</span> is the time-averaged acceleration, rather than the distance-averaged, then force being exerted on the water to keep the nozzle in place would just be <span class="math-container">$F = m \bar{a}$</span>, where <span class="math-container">$m$</span> is the mass of the volume of the water in the nozzle.</p>
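<p>Since the original problem's numbers aren't reproduced here, the sketch below uses an assumed, gently tapering nozzle and a fixed volumetric flow rate to evaluate estimates #1&ndash;#4; the point is that they differ, yet stay within roughly an order of magnitude of each other for this geometry.</p>

```python
import math

Q = 1e-3                       # assumed volumetric flow rate, m^3/s
L = 0.1                        # assumed nozzle length, m
r_in, r_out = 0.012, 0.008     # assumed inlet/outlet radii, m

def velocity(x):               # radius tapers linearly, v = Q / A(x)
    r = r_in + (r_out - r_in) * x / L
    return Q / (math.pi * r * r)

def accel(x, h=1e-7):          # steady flow along the axis: a = v dv/dx
    dvdx = (velocity(x + h) - velocity(x - h)) / (2 * h)
    return velocity(x) * dvdx

a_entry = accel(1e-6)          # estimate 1: just after entering
a_mid = accel(L / 2)           # estimate 2: halfway through
a_exit = accel(L - 1e-6)       # estimate 3: just before exiting
# estimate 4: distance-averaged acceleration, (v_out^2 - v_in^2) / (2 L)
a_dist = (velocity(L) ** 2 - velocity(0) ** 2) / (2 * L)

assert a_entry < a_mid < a_exit       # acceleration grows along the nozzle
assert a_entry < a_dist < a_exit      # the average lies between the extremes
assert a_exit < 10 * a_entry          # all within ~one order of magnitude here
```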
|
Physics
|
|general-relativity|differential-geometry|metric-tensor|tensor-calculus|notation|
|
Clarification about Wald's notation on his General Relativity Book
|
<p>Greek indices are used to label elements of an ordered set. For example an arbitrary element from a set of basis vectors <span class="math-container">$\{e_0, e_1, e_2, e_3\}$</span> would be labeled <span class="math-container">$e_\mu$</span> so that <span class="math-container">$\mu \in \{0,1,2,3\}$</span> denotes its order in the basis. Similarly an arbitrary expansion coefficient of a vector <span class="math-container">$v$</span> in this basis belongs to the set <span class="math-container">$\{v^0,v^1,v^2,v^3\}$</span> and would be labeled <span class="math-container">$v^\mu$</span> to label which basis vector it is associated with, so that <span class="math-container">$v = \sum_\mu v^\mu e_\mu$</span>.</p> <p>When working in components you can tell the rank of the tensor that components are associated with by the location of the Greek indices (<span class="math-container">$v^\mu$</span> is the <span class="math-container">$\mu$</span> component of a vector, <span class="math-container">$\omega_\mu$</span> is the <span class="math-container">$\mu$</span> component of a covector, etc), however if you want to work with the tensors themselves (<span class="math-container">$v$</span>, <span class="math-container">$\omega$</span>, etc) you have no way of knowing its rank from its notation. Abstract index notation solves this by labeling tensors with Latin indices according to their rank, e.g. 
<span class="math-container">$v^a$</span> is a rank <span class="math-container">$(1,0)$</span> tensor, <span class="math-container">$\omega_a$</span> is a rank <span class="math-container">$(0,1)$</span> tensor.</p> <p>For your example tensor,</p> <ul> <li><span class="math-container">$h_{ab}$</span> is a rank <span class="math-container">$(0,2)$</span> tensor</li> <li><span class="math-container">$h_{\mu\nu}$</span> is the <span class="math-container">$\mu,\nu$</span> element of a set of rank <span class="math-container">$(0,0)$</span> tensors</li> <li><span class="math-container">$(\mathrm dx^\mu)_a$</span> is the <span class="math-container">$\mu$</span> element of a set of rank <span class="math-container">$(0,1)$</span> tensors</li> <li><span class="math-container">$(\mathrm dx^\nu)_b$</span> is the <span class="math-container">$\nu$</span> element of that same set of rank <span class="math-container">$(0,1)$</span> tensors</li> </ul> <p>So to answer your question, for <span class="math-container">$(\mathrm dx^\mu)_a$</span>, the Latin index <span class="math-container">$a$</span> means that this is a rank <span class="math-container">$(0,1)$</span> tensor, and the Greek index <span class="math-container">$\mu$</span> is the label associated with the ordered basis it belongs to.</p>
|
Physics
|
|electric-circuits|electrical-resistance|power|batteries|
|
Derivative of formula for battery's output power is not right
|
<p>Take a look at the equation for power: <span class="math-container">$$P=IV.$$</span> It may seem at first that one can take the derivative of this expression with respect to the current and obtain: <span class="math-container">$${dP(I)\over dI}=V,$$</span> however, this overlooks the fact that <span class="math-container">$V$</span> is a function <span class="math-container">$V=V(I)$</span>, e.g. as you note in your own post: <span class="math-container">$$V=\epsilon-Ir.$$</span> So in reality you have no reason to expect that the derivative of power with respect to current is <span class="math-container">$V$</span>; rather: <span class="math-container">$${dP(I)\over dI}={dI\over dI}V+I{dV\over dI}=(\epsilon-Ir)+I(-r)=\epsilon-2Ir.$$</span> Here I have used the "product rule", <span class="math-container">$${dfg\over dx}={df\over dx}g+f{dg\over dx},$$</span> for taking the derivative of a product of functions. You don't have to use the product rule; you can indeed take the derivative by writing out the function for <span class="math-container">$P$</span> explicitly and then differentiating, like you did in your example; however, the product rule can be handy for more complicated examples. Thus, your work was correct except for your expectation that the derivative of <span class="math-container">$P$</span> should be <span class="math-container">$V$</span>.</p>
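<p>A quick numeric confirmation (EMF and internal resistance chosen for illustration): the derivative of <span class="math-container">$P(I)=(\epsilon-Ir)I$</span> matches <span class="math-container">$\epsilon-2Ir$</span>, and the power peaks at <span class="math-container">$I=\epsilon/2r$</span> with <span class="math-container">$P_{\max}=\epsilon^2/4r$</span>.</p>

```python
eps, r = 12.0, 3.0            # illustrative EMF (V) and internal resistance (ohm)

def P(I):
    return (eps - I * r) * I

# central-difference derivative vs. the analytic result eps - 2 I r
h, I0 = 1e-6, 1.5
dPdI = (P(I0 + h) - P(I0 - h)) / (2 * h)
assert abs(dPdI - (eps - 2 * I0 * r)) < 1e-6

I_max = eps / (2 * r)                       # where dP/dI = 0
assert abs(P(I_max) - eps**2 / (4 * r)) < 1e-12
```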
|
Physics
|
|thermodynamics|entropy|
|
Clausius Inequality and Thermodynamic Potentials
|
<p>Your interpretation of the Clausius inequality is correct, and T is the temperature of the surroundings. However, your interpretation of how the Helmholtz free energy is applied is incorrect. In the case of applying the Helmholtz free energy, we assume that, for both reversible and irreversible processes on a closed system, the surroundings are constantly maintained at the same temperature as the initial temperature of the system T throughout the process. So from the first law of thermodynamics, we have <span class="math-container">$$\Delta U=Q-W$$</span> and, from the 2nd law of thermodynamics we have <span class="math-container">$$\Delta S=\frac{Q}{T}+\sigma$$</span>, where <span class="math-container">$\sigma$</span> is the generated entropy. If we combine these two equations, we obtain: <span class="math-container">$$\Delta U=T\Delta S-T\sigma-W$$</span> or, under these constant external temperature conditions, <span class="math-container">$$\Delta A=-W-T\sigma$$</span> or <span class="math-container">$$W=-\Delta A-T\sigma$$</span> So, for a given pair of end states, the maximum work that the system can do is for a reversible path, and is equal to <span class="math-container">$-\Delta A$</span>. For irreversible paths between the same two end states, the irreversible work is less than for the reversible path. Again, all this applies only to cases where the surroundings are maintained at the same temperature as the initial temperature of the system.</p>
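<p>A worked instance with assumed numbers, for an isothermal ideal-gas expansion: the reversible work equals <span class="math-container">$-\Delta A$</span>, while an irreversible expansion against a constant external pressure does strictly less work, consistent with <span class="math-container">$W=-\Delta A-T\sigma$</span>.</p>

```python
import math

n_mol, R, T = 1.0, 8.314, 300.0   # assumed amount (mol), gas constant, bath T (K)
V1, V2 = 1e-3, 2e-3               # expansion from V1 to V2, m^3

# Isothermal ideal gas: Delta U = 0, so Delta A = -T Delta S = -nRT ln(V2/V1)
delta_A = -n_mol * R * T * math.log(V2 / V1)

W_rev = n_mol * R * T * math.log(V2 / V1)   # reversible (quasi-static) work
p_ext = n_mol * R * T / V2                  # constant external pressure = final p
W_irr = p_ext * (V2 - V1)                   # irreversible work against p_ext

assert abs(W_rev - (-delta_A)) < 1e-9       # maximum work = -Delta A
assert W_irr < W_rev                        # irreversible path does less work
```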
|
Physics
|
|quantum-field-theory|vacuum|greens-functions|many-body|
|
Green function and probability amplitude
|
<p>There are two things wrong here. Firstly, you shouldn't use <span class="math-container">$x$</span> as both a label for <span class="math-container">$a^*_x$</span> and the position variable that's the argument of <span class="math-container">$\delta(x)$</span>—it'd be better to do something like asking about <span class="math-container">$a^*_x$</span> when <span class="math-container">$y$</span> is our position operator/coordinate, and then you could write <span class="math-container">$a^*_x \Omega_0 = \delta(y-x)$</span>. This confusion is usually avoided in calculations like this by staying in Fock space and not writing out single-particle wavefunctions—for instance, you also wouldn't be able to write out a wavefunction for <span class="math-container">$\Omega_0$</span> because it's not a single-particle state. My notation of choice would probably be just sticking with <span class="math-container">$a^\dagger_x|{\Omega_0}\rangle$</span> and leaving it at that, knowing that what that means is a single-particle state in which the single particle is perfectly localized at position <span class="math-container">$x$</span>.</p> <p>The last expression is wrong—you should have an integral over position because of the inner product, which was hard to see because of your previous notational choice with delta functions, so it should be something like <span class="math-container">\begin{align*} G_2(x,t,x^\prime,t^\prime) &= \langle e^{i t H_0} a^*_x \Omega_0, e^{i t^\prime H_0} a^*_{x^\prime} \Omega_0 \rangle\\ &= \int dy (e^{i t H_0} \delta(y-x)) (e^{i t^\prime H_0} \delta(y-x^\prime)). \end{align*}</span> Another reason this looks odd is that the Hamiltonians are operators, so they're going to smear out the delta functions, making this something like the convolution of two Gaussians, which is much more reasonable to ask about than the integral of two delta functions against each other.</p>
|
Physics
|
|special-relativity|speed-of-light|
|
Equivalence of speed and time flow
|
<p>In Minkowski space, there are three kinds of intervals, or separations between points, namely "time-like", "space-like" and "null". Light travels along null lines, i.e. lines which have 0 distance between any two points on the line. Such null paths are not perpendicular to the time-like paths. For a stationary object (one moving in a purely temporal direction), its path or "world line" is always at a 45 degree angle to null paths.</p> <p>There are no normal physical objects which possess a purely space-like path, paths which are perpendicular to purely temporal paths, because such paths require infinite speed. However, not all space-like paths require infinite speed. All such paths do, however, require speeds faster than light, and are thus not possible for normal physical masses (hypothetical particles called <em>tachyons</em> reside in this domain).</p> <p>Normal physical masses must stay in the realm of stationary or less-than-light speeds, a hyper-dimensional set of space-time points known as the "light-cone". All paths in this region are time-like. Navigation through space-time is thus along time-like paths which have a velocity vector that may have varying magnitude (from 0 to approaching c) and direction (purely temporal to approaching 45 degrees from temporal).</p>
|
Physics
|
|newtonian-mechanics|classical-mechanics|rotational-dynamics|equilibrium|statics|
|
On beam suspended by wires
|
<p>You can explain this to yourself by gripping a string or very thin wire in your left hand—so it's completely clamped, unable to translate or rotate—and then pinching the other end with two fingers on your right hand and moving that hand in various directions.</p> <p>Unless you're pulling directly away from your left hand—<strong>if the pulling force has any <a href="https://en.wikipedia.org/wiki/Euclidean_vector#Decomposition_or_resolution" rel="nofollow noreferrer">component</a> in any lateral direction (meaning any direction perpendicular to the wire length)</strong>—the wire will unstably bend and/or buckle <strong>without limit</strong>. Try it. (Compare with a stiff rod or beam, which you can stably push laterally if you're gripping one end, i.e., if the rod or beam is <a href="https://en.wikipedia.org/wiki/Cantilever" rel="nofollow noreferrer">cantilevered</a>.)</p> <p>In other words, when a structure has no bending stiffness, unless the end loads point directly away from each other, the structure will bend and/or buckle. Put another way, a string/cable/wire can stably accommodate only axial tension. That's the meaning of that statement from the book.</p>
|
Physics
|
|angular-momentum|representation-theory|
|
Addition of angular momenta with relative coefficients
|
<p>Well, assuming these represent rotation of a tensor product space, so operators of different subscripts commute, and they satisfy the rotation group Lie algebra, <span class="math-container">$$ [J^a_i,J^b_j]=i\epsilon^{abc} J^c_i ~\delta_{ij}, $$</span> you see that your <span class="math-container">$\vec J$</span> fails to satisfy the same algebra, <span class="math-container">$$ [J^a,J^b]=i\epsilon^{abc}( \alpha^2 J^c_1+ \beta^2 J^c_2) \neq i\epsilon^{abc} J^c, $$</span> so it does not represent a rotation!</p> <hr /> <p><em><strong>Edit pursuant to comment by OP</strong></em></p> <p>It's grim karma to use your confusing definition of <em>J</em>. Let's, instead, call it <span class="math-container">$\vec \mu= \alpha \vec J_1+ \beta \vec J_2$</span>, and the <em><strong>true</strong></em> total angular momentum <span class="math-container">$\vec {\cal J}= \vec J_1+ \vec J_2$</span> which <em>does</em> satisfy the angular momentum Lie algebra. Nevertheless, your <span class="math-container">$\vec \mu$</span> is a fine <em>vector operator</em>, <span class="math-container">$$ [\mu^a, {\cal J}^b]=i\epsilon^{abc} \mu^c, $$</span> so its matrix elements for a given rotation representation <em>j</em> transform simply by the <a href="https://en.wikipedia.org/wiki/Land%C3%A9_g-factor#A_derivation" rel="nofollow noreferrer">Wigner-Eckart theorem's</a> projection theorem: <span class="math-container">$$ \langle jm'|\mu^q|jm\rangle = \frac{\langle jm|\vec {\cal J}\cdot \vec \mu |jm\rangle}{j(j+1)} \quad\langle jm'|{\cal J}^q|jm\rangle, $$</span> where the essential orientation-dependence is carried by <span class="math-container">${\cal J}$</span> of the second factor on the r.h.s., only, as the first factor does not depend on it (<em>m</em>) and is <em>the same</em> for the entire (2<em>j</em>+1)-dimensional multiplet; that's what makes it "reduced".</p> <p>I have skipped any and all "other" quantum numbers, to keep things simple, but you may slip them into the reduced
matrix element. The text by Sakurai & Napolitano, (3.483), has a fine discussion of that. So there is <em>some</em> predictability in the rotation of your vector operator.</p>
|
Physics
|
|quantum-mechanics|homework-and-exercises|perturbation-theory|
|
time-dependent perturbation theory
|
<p>The answer to your question lies in the fact that <span class="math-container">$|m\rangle$</span> and <span class="math-container">$|0\rangle$</span> are states with definite energy <span class="math-container">$\omega_m$</span> and <span class="math-container">$\omega_0$</span> respectively. Hence <span class="math-container">$$\hat{H}_0|m\rangle=\omega_m|m\rangle\Rightarrow e^{i\hat{H}_0 t'}|m\rangle=e^{i\omega_m t'}|m\rangle$$</span> <span class="math-container">$$\hat{H}_0|0\rangle=\omega_0|0\rangle\Rightarrow e^{i\hat{H}_0 t'}|0\rangle=e^{i\omega_0 t'}|0\rangle$$</span> where the exponentials on the right-hand side of the equations above do not contain any operators in their exponents.</p>
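<p>This is easy to verify numerically: for any Hermitian <span class="math-container">$\hat{H}_0$</span> and any of its eigenvectors, <span class="math-container">$e^{i\hat{H}_0 t'}$</span> acts on that eigenvector as a pure c-number phase. A small sketch using an arbitrary Hermitian matrix:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H0 = (A + A.conj().T) / 2          # an arbitrary Hermitian "Hamiltonian"

w, V = np.linalg.eigh(H0)
omega_m, m_state = w[2], V[:, 2]   # one eigenvalue/eigenvector pair

t = 0.37
# exp(i H0 t) built from the eigendecomposition H0 = V diag(w) V^dagger
U = V @ np.diag(np.exp(1j * w * t)) @ V.conj().T

# on the eigenstate, the operator exponential reduces to a phase factor
assert np.allclose(U @ m_state, np.exp(1j * omega_m * t) * m_state)
```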
|
Physics
|
|newtonian-mechanics|newtonian-gravity|orbital-motion|celestial-mechanics|binding-energy|
|
Do orbits with positive energy tend to infinity?
|
<p>Consider an attractive central force field with a magnitude that varies as the inverse fourth power of radial distance i.e. <span class="math-container">$F(r) = \frac {km} {r^4}$</span>. Then we have <span class="math-container">$V(r) = - \frac {km} {3r^3}$</span>. For a circular orbit with constant speed <span class="math-container">$v$</span> and radius <span class="math-container">$r$</span> we have</p> <p><span class="math-container">$\displaystyle \frac {mv^2} r = \frac {km} {r^4} \\ \displaystyle \Rightarrow \frac 1 2 mv^2 = \frac {km} {2r^3} \\ \displaystyle \Rightarrow \frac 1 2 mv^2 + V(r) = \frac {km} {2r^3} - \frac {km} {3r^3} = \frac {km} {6r^3}$</span></p> <p>So we have found a family of bounded orbits with positive total energy.</p> <p>(Note that <strike><a href="https://en.wikipedia.org/wiki/Bertrand%27s_theorem" rel="nofollow noreferrer">Bertrand's Theorem</a> tells us that these orbits are not stable</strike> these circular orbits are not necessarily stable).</p>
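<p>A numeric confirmation of the algebra above (constants illustrative): balancing <span class="math-container">$mv^2/r = km/r^4$</span> and evaluating the total energy reproduces <span class="math-container">$km/6r^3 \gt 0$</span>.</p>

```python
import math

k, m, r = 2.0, 1.5, 0.8        # illustrative force constant, mass, orbit radius

v = math.sqrt(k / r**3)        # circular-orbit speed from m v^2 / r = k m / r^4

E = 0.5 * m * v * v - k * m / (3 * r**3)   # kinetic + potential energy
assert abs(E - k * m / (6 * r**3)) < 1e-12
assert E > 0                   # bounded circular orbit with positive total energy
```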
|
Physics
|
|electromagnetism|electromagnetic-induction|rocket-science|
|
Is it possible to build electrical space engine based on Electromagnetic induction
|
<p>Assuming radiation losses are negligible this will not provide any net propulsion. The momentum gained by pushing the magnet will be lost by stopping the magnet. And the distance moved will be reversed by recovering the magnet.</p>
|
Physics
|
|electromagnetism|electromagnetic-radiation|electric-current|antennas|
|
Understanding radiation mechanism of inset fed microstrip antenna
|
<p>There are two apertures (you called them slots) radiating, these are fed approximately <span class="math-container">$\lambda_g/2$</span> apart therefore the charges on the metal that induce the electric fields parallel with the aperture and perpendicular to the metal are in the opposite direction. These do not radiate perpendicular to the patch. But there is also a pair of fringing fields whose components parallel with the patch are actually pointing in the <em>same</em> direction along the "z" axis (in your diagram the "x" axis), and these are the fields shown in the plots taken from Orfanidis: Electromagnetic Waves and Antennas. On the contrary, fringing fields of the apertures parallel with the feed line do not radiate in the "z" direction because the pairs are in opposite direction to each other. <a href="https://i.stack.imgur.com/KDEvp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KDEvp.jpg" alt="enter image description here" /></a></p>
|
Physics
|
|reference-frames|coordinate-systems|astronomy|stars|software|
|
How is the trajectory of a star found relative to the Sun?
|
<p>What you're calling "space/true velocity" <em>is</em> velocity relative to the Sun. You're using observations in the solar reference frame without adjustment to another frame.</p> <p>Velocity is always relative to some reference frame. There is no more objective "true" velocity.</p>
|
Physics
|
|quantum-field-theory|particle-physics|spacetime|wave-particle-duality|matter|
|
Do particles, quarks, and atoms really move in space, or is it a field disturbance (wave) that moves in spacetime with speed $c$? How do particles move in spacetime in QFT?
|
<p>The quantum fields of the Standard Model are Lorentz invariant, so moving is in the eye of the beholder. Any particle with mass has a rest frame, and the minimum energy of one quantum is <span class="math-container">$E_0=mc^2$</span>. Most relativistic QFT treatments are in the momentum domain, but if we consider a quantum of, say, an electron in its rest frame, so <span class="math-container">$E_0 = m_ec^2$</span>, its propagation in time is an unobservable rotating phase:</p> <p><span class="math-container">$$ e^{-im_ec^2t/\hbar} $$</span></p> <p>Of course, that's really a quantum mechanics view.</p> <p>The fields are not like water waves, or anything in a medium. There is no medium. I would first master classical EM waves in SR, and then move up. For instance, suppose I take a plane wave from a helium-neon laser with 633 nm wavelength (internally) and shoot it into space. A few light years later: is the wavelength still 633 nm? (No expanding universe; strictly flat spacetime.)</p> <p>When your answer is "No, that really doesn't mean anything", then consider rQFTs.</p>
|
Physics
|
|quantum-field-theory|renormalization|perturbation-theory|quantum-chromodynamics|self-energy|
|
How does the on-shell (OS) scheme work if we assume mass to be zero?
|
<p>Well, if the quark is massless then the self-energy of the form</p> <p><a href="https://i.stack.imgur.com/gdHFb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gdHFb.png" alt="one-loop quark self-energy" /></a></p> <p>is scaleless (and thus vanishing in dimensional regularization), since the gluon (or the photon) is massless, the quark is massless, and the on-shell condition is given by <span class="math-container">$p^2 = 0$</span>. Thus the field renormalization constant and the mass renormalization constant vanish.</p> <p>The problem you are facing is that the expansion in <span class="math-container">$\epsilon_\text{UV}$</span> (for <span class="math-container">$\epsilon_\text{UV} \approx 0$</span>) does not commute with the limit <span class="math-container">$p^2 \to 0$</span>. Actually this is a quite common problem when computing loop corrections. Whenever you go into a critical kinematical limit, you have to start again from the full integral (or you compute the integral with full <span class="math-container">$\epsilon_\text{UV}$</span> dependence, take the kinematic limit, and then expand in <span class="math-container">$\epsilon_\text{UV}$</span>). And when you go back to your computation of the loop integral for <span class="math-container">$p^2 \neq 0$</span>, you will find that at some point you actually needed this assumption in order to arrive at your expression.</p>
|
Physics
|
|newtonian-mechanics|newtonian-gravity|orbital-motion|moon|satellites|
|
Why doesn't the Moon disrupt the orbits of geostationary satellites?
|
<p>The highest-flying satellites orbit the earth at a distance of <span class="math-container">$42000$</span> km (the geostationary orbit) from the center of the earth.</p> <p>Remember the gravitational force is <span class="math-container">$F\propto \frac{M}{r^2}$</span>.</p> <p>Thus the force of the moon (mass <span class="math-container">$M=7.3\cdot 10^{22}$</span> kg, distance <span class="math-container">$r=380000$</span> km) on the satellite is very much smaller than the force of the earth (mass <span class="math-container">$M=6.0\cdot 10^{24}$</span> kg, distance <span class="math-container">$r=42000$</span> km) on the satellite. Using these numbers you find the force from the moon is only around <span class="math-container">$10^{-4}$</span> times the force from the earth. So the effect of the moon is too small to significantly disrupt the satellite orbit around the earth.</p> <p>The situation is different when a star moves through the Oort cloud. The star and the sun have similar masses, and they are at similar distances from the comets. Hence the forces exerted by the star and the sun on a comet would be of similar size.</p>
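<p>The estimate can be reproduced in a few lines (a sketch using the same round numbers as above; G and the satellite mass cancel in the ratio):</p>

```python
# Ratio of the moon's to the earth's pull on a geostationary satellite,
# using the round numbers quoted above.
M_moon, r_moon = 7.3e22, 380000.0    # kg, km (moon-satellite distance, approx.)
M_earth, r_earth = 6.0e24, 42000.0   # kg, km (earth-satellite distance)

# F ~ M/r^2; G and the satellite mass cancel in the ratio
ratio = (M_moon / r_moon**2) / (M_earth / r_earth**2)
print(ratio)  # ~1.5e-4, i.e. around 10^-4
```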
|
Physics
|
|optics|reflection|dielectric|metals|
|
Fresnel Equations, Refraction, and Metals
|
<blockquote> <p>This seems to make sense, however I have also read other sources stating that metals have no refracted beam at all. Is this meant as short-hand for saying that the refracted light is immediately absorbed by the metal? Or does it actually mean that the fresnel equations do not apply?</p> </blockquote> <p>This isn't true, the Fresnel equations do apply to metals. In particular, light can pass through metal films that are sufficiently thin. It's hard to comment further on the claims you speak of without context.</p> <blockquote> <p>Further, why is it that metals tint the colors of their reflections while dielectrics do not? Is it that a given metal (such as gold) has a structure which will absorb specific wavelengths and not reflect them at all? (This also feels like a deviation of fresnel's equations, as I did not think them to be wavelength dependent)</p> </blockquote> <p>Note that the Fresnel equations include the refractive index of the medium (or equivalently, the wave impedance in the medium). The normal reflectance from a smooth metal surface can be written as <span class="math-container">$$ R_\text{normal} = \left|\frac{\bar n - 1}{\bar n + 1}\right|^2 = \frac{(n-1)^2+\kappa^2}{(n+1)^2+\kappa^2}$$</span> where <span class="math-container">$\bar n$</span> is the complex refractive index, and <span class="math-container">$n$</span> and <span class="math-container">$\kappa$</span> are its real and imaginary components. These parameters are frequency dependent for metals. 
Their variation in real metals can be quite complicated, but a commonly observed feature is that below a certain plasma frequency, the conductivity of the metal is high, leading to <span class="math-container">$n < 1$</span> and large <span class="math-container">$\kappa$</span>, and hence high reflectance according to the equation above.</p> <p>Above the plasma frequency, <span class="math-container">$n$</span> approaches <span class="math-container">$1$</span> and <span class="math-container">$\kappa$</span> is small (sufficiently thin metal films are therefore approximately transparent), leading to a small reflectance. A consequence of all this is that gold has high reflectance at low frequencies (reddish colors) and lower reflectance at high frequencies (bluish colors), resulting in its characteristic color.</p> <p>Dielectrics can also have a similar tint in color if they have a resonance frequency in or close to the visible spectrum, but it may not be as easy to discern especially if they are not opaque.</p>
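<p>To make the formula above concrete, a minimal sketch (the values of n and κ below are purely illustrative, not measured optical constants of any particular metal):</p>

```python
def normal_reflectance(n, kappa):
    """Normal-incidence reflectance of a smooth surface with complex
    refractive index n + i*kappa, light incident from air or vacuum."""
    return ((n - 1)**2 + kappa**2) / ((n + 1)**2 + kappa**2)

# Below the plasma frequency (n < 1, large kappa): high reflectance.
print(normal_reflectance(0.2, 3.0))   # ~0.92

# Above the plasma frequency (n ~ 1, small kappa): low reflectance.
print(normal_reflectance(1.0, 0.1))   # ~0.0025
```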
|
Physics
|
|quantum-field-theory|operators|wick-theorem|matrix-elements|non-perturbative|
|
Non-perturbative matrix element calculation
|
<p>First, some remarks:</p> <ul> <li><p>Wick's theorem only applies to free (and interaction picture) fields, hence it is fundamentally perturbative.</p> </li> <li><p>the normal ordered product <span class="math-container">$:\phi(x)\phi(y):$</span> contains a term with two creation operators, whose matrix elements between <span class="math-container">$\langle \lambda_p|$</span> and the vacuum need not vanish.</p> </li> <li><p>Lastly, if you find a divergent integral, it is probably best to regularize and renormalize the theory before making any kind of conclusion.</p> </li> </ul> <p>Now, if we assume that we have a fully renormalized non-perturbative theory, what can we say? We have a Hilbert space with a representation of the Poincaré group and an invariant vacuum state <span class="math-container">$|\Omega\rangle$</span>, as well as an operator-valued distribution <span class="math-container">$\phi(x)$</span>.</p> <p>On one hand, because <span class="math-container">$\phi(x)$</span> (even fully renormalized) is an operator-valued distribution (and not a function), the product <span class="math-container">$\phi(x)\phi(y)$</span> is only well-defined as a distribution (it needs to be smeared with a Schwartz function <span class="math-container">$f(x,y)$</span> to yield an operator).
We can try to get a spectral representation for the matrix element <span class="math-container">$\langle \lambda_p|\phi(x) \phi(y) |\Omega\rangle$</span> following the same kind of calculations as in P&S section 7.1 : <span class="math-container">\begin{align} \langle \lambda_p|\phi(x) \phi(y) |\Omega\rangle &= \langle \lambda_p|\phi(x) |\Omega\rangle\langle \Omega|\phi(y) |\Omega\rangle + \sum_{\lambda'}\int\frac{\text d^3 p'}{(2\pi)^3 2E(p')}\langle \lambda_p|\phi(x) |\lambda'_{p'}\rangle\langle \lambda'_{p'}|\phi(y) |\Omega\rangle \\ &= e^{-ip\cdot x}\langle \lambda_0|\phi(0) |\Omega\rangle\langle \Omega|\phi(0) |\Omega\rangle + \sum_{\lambda'}\int\frac{\text d^3 p'}{(2\pi)^3 2E(p')}e^{-i(p-p')x}\langle \lambda_{p}|\phi(0) |\lambda'_{p'}\rangle e^{-iy\cdot p'}\langle \lambda'_{p'}|\phi(0) |\Omega\rangle \end{align}</span> Because we can replace <span class="math-container">$\phi(x) \to \phi(x) - \langle \Omega|\phi(0)|\Omega\rangle$</span>, we can assume that <span class="math-container">$\langle \Omega|\phi(0)|\Omega\rangle$</span> vanishes, so that : <span class="math-container">\begin{align} \langle \lambda_p|\phi(x) \phi(y) |\Omega\rangle &= e^{-ip\cdot x} \sum_{\lambda'}\int\frac{\text d^3 p'}{(2\pi)^3 2E(p')}e^{-i p'\cdot (y-x)}\langle \lambda_{p}|\phi(0) |\lambda'_{p'}\rangle\langle \lambda'_{0}|\phi(0) |\Omega\rangle \end{align}</span> I don't think that there is much more we can do. (One idea would be to smear <span class="math-container">$\phi$</span> with a function whose Fourier transform is supported near the 1-particle mass shell, so that the operator <span class="math-container">$\phi_f = \int f(x)\phi(x) \text dx$</span> would only produce 1 particle states when acting on the vacuum). 
The limit as <span class="math-container">$y\to x$</span> has no reason to be well behaved.</p> <p>On the other hand, we can regularize and renormalize the theory in such a way that there is an operator <span class="math-container">$\mathcal O_2(x)$</span> which we can interpret as a renormalized version of the ill-defined <span class="math-container">$\phi^2(x)$</span>. In a free theory, for example, normal ordering would be enough so <span class="math-container">$\mathcal O_2(x) = :\phi(x)^2:$</span> (and we can write a well-defined Hamiltonian operator). Assuming the renormalization process was done properly, this operator should be a Lorentz scalar, and therefore the second calculation from OP works perfectly well: <span class="math-container">$$\langle \lambda_p|\mathcal O_2(x) |\Omega \rangle = e^{-ip\cdot x}\langle \lambda_0|\mathcal O_2(0) |\Omega \rangle$$</span></p> <p>The fact that this does not match with the singular behavior of the matrix element <span class="math-container">$\langle \lambda_p|\phi(x)\phi(y)|\Omega\rangle$</span> is just a trace of the fact that the renormalized "<span class="math-container">$\phi^2$</span>" operator is not defined as the limit of <span class="math-container">$\phi(x) \phi(y)$</span> as <span class="math-container">$y\to x$</span>.</p>
|
Physics
|
|optics|reflection|refraction|geometric-optics|lenses|
|
Lenses and missing reflection
|
<p>The ratio of power reflected from an interface to the power incident is called <a href="https://en.wikipedia.org/wiki/Reflectance" rel="nofollow noreferrer">reflectance</a>. For a smooth (as in not rough) interface, it can be calculated using the <a href="https://en.wikipedia.org/wiki/Fresnel_equations#Power_(intensity)_reflection_and_transmission_coefficients" rel="nofollow noreferrer">Fresnel equations</a>. For a ray normally incident on a lens surface from air (or vacuum), it is given by <span class="math-container">$$ R_\text{normal} = \left(\frac{n - 1}{n + 1}\right)^2$$</span> where <span class="math-container">$n$</span> is the refractive index of the material the lens is made of. For a glass lens with <span class="math-container">$n = 1.5$</span>, <span class="math-container">$R_\text{normal} = 4\%$</span>.</p> <p>These reflections result in an image that is dimmer than it would be without reflections, and also produce dim "stray" images at various locations in an optical system. Why aren't they considered? Probably because in an introductory class, the teacher already has their hands full trying to teach students how an image is formed, without getting into reflections from lens surfaces. Perhaps this is no great crime, because a relatively small fraction of power is reflected at each interface, and stray reflections are often dim enough not to matter for many applications.</p> <p>Reflections are often considered in practice. Transmission losses can be significant, especially in optical systems consisting of many lenses. If the image brightness is important, attempts are made to reduce transmission losses using <a href="https://en.wikipedia.org/wiki/Anti-reflective_coating" rel="nofollow noreferrer">anti-reflective coatings</a> for instance. Undesired internal reflections can also cause problems like <a href="https://en.wikipedia.org/wiki/Lens_flare" rel="nofollow noreferrer">lens flare</a>.</p>
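<p>A quick check of the normal-incidence formula above (a minimal sketch):</p>

```python
def normal_reflectance(n):
    """Fresnel normal-incidence reflectance for light entering a
    dielectric of refractive index n from air or vacuum."""
    return ((n - 1) / (n + 1))**2

R = normal_reflectance(1.5)  # typical glass
print(R)  # ~0.04, i.e. about 4% of the power reflected per surface
```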
|
Physics
|
|homework-and-exercises|general-relativity|metric-tensor|tensor-calculus|stress-energy-momentum-tensor|
|
Showing that derivative of energy-momentum tensor is equal to 0
|
<p>Using the product rule, Maxwell's equations (with <span class="math-container">$j^\mu =0$</span>), the antisymmetry of the field strength tensor and some index gymnastics yields <span class="math-container">$$ \begin{align} \partial_\mu (F^{\mu \lambda}F^\nu_{\; \lambda}) &=\underbrace{(\partial_\mu F^{\mu \lambda})}_{=0}\,F^\nu_{\; \lambda}+F^{\mu \lambda}\,\partial_\mu F^\nu_{\;\lambda}\\[2pt] &=F^{\mu \lambda} \, \partial_\mu F^\nu_{\; \lambda}\\[3pt] &=F_{\mu \lambda}\, \partial^\mu F^{\nu \lambda}\\[3pt] &= F_{\sigma \lambda}\,\partial^\sigma F^{\nu \lambda} \\&=-F_{\lambda \sigma} \, \partial^\sigma F^{\nu \lambda} \end{align}\tag{1} \label{1}$$</span> and <span class="math-container">$$\begin{align} \partial_\mu (\eta^{\mu \nu}F^{\lambda \sigma}F_{\lambda \sigma})&=\partial^\nu (F^{\lambda \sigma}F_{\lambda \sigma})\\[3pt] &=(\partial^\nu F^{\lambda \sigma})F_{\lambda \sigma}+F^{\lambda \sigma}\partial^\nu F_{\lambda \sigma}\\[3pt] &=2 F^{\lambda \sigma}\partial^\nu F_{\lambda \sigma}\\[3pt] &=2F_{\lambda \sigma} \, \partial^\nu F^{\lambda \sigma}\\[3pt] &= -2F_{\lambda \sigma}(\partial^\lambda F^{\sigma \nu}+\partial^\sigma F^{\nu \lambda})\\[3pt] &=-4F_{\lambda \sigma}\,\partial^\sigma F^{\nu \lambda}. \end{align}\tag{2} \label{2} $$</span> Combining \eqref{1} and \eqref{2}, one finds the desired result: <span class="math-container">$$\begin{align} \partial_\mu T^{\mu \nu}&= \partial_\mu \left(F^{\mu \lambda}F^\nu_{\;\lambda}- \frac{1}{4}\eta^{\mu \nu}F^{ \lambda \sigma} F_{ \lambda \sigma}\right) \\[3pt]&=0 . \tag{3}\end{align}$$</span></p>
|
Physics
|
|simulations|boundary-conditions|interactions|molecular-dynamics|
|
Periodic boundary conditions: torus or infinite images?
|
<p>PBC means that the system moves on a torus. This means that the particle positions <span class="math-container">$x$</span> are thought of as being defined up to a lattice <span class="math-container">$\Lambda$</span>, i.e. living in <span class="math-container">$\mathbb R^D/\Lambda$</span>. The issue is that your interaction term <span class="math-container">$U$</span> is therefore multivalued in this context, or in other words it is not <span class="math-container">$\Lambda$</span> periodic. You therefore need to periodise it to <span class="math-container">$U_\Lambda$</span>, i.e. you want it to satisfy for all <span class="math-container">$\lambda_1,...,\lambda_N\in\Lambda$</span> and all <span class="math-container">$x_1,...,x_N\in\mathbb R^D$</span>: <span class="math-container">$$ U_\Lambda(x_1,...,x_N) = U_\Lambda(x_1+\lambda_1,...,x_N+\lambda_N) $$</span> As a consistency condition, as the spatial periodicity goes to infinity, you should recover the original potential, so: <span class="math-container">$$ U_{L\Lambda} \xrightarrow{L\to\infty} U $$</span></p> <p>There are many ways to do this. One way is to use your infinite images: <span class="math-container">$$ U_\Lambda(x_1,...,x_N) = \sum_{\lambda\in\Lambda^N}U(x_1+\lambda_1,...,x_N+\lambda_N) $$</span> It's a nice method if <span class="math-container">$U$</span> decays fast enough. In Fourier space, it is equivalent to sampling the Fourier transform on the dual lattice <span class="math-container">$\Lambda^*$</span>. It also preserves smoothness etc. However when <span class="math-container">$U$</span> does not decay fast enough, you need to define the summation carefully.</p> <p>The MIC is a separate method where you use the same periodising technique, but first you set <span class="math-container">$U$</span> to zero outside the Voronoi tile of the origin for <span class="math-container">$\Lambda^N$</span>. 
In Fourier space, it is equivalent to sampling the Fourier transform on the dual lattice <span class="math-container">$\Lambda^*$</span> after having smoothed it out by convolving with a sinc filter. It's exactly like in sampling where you first filter the signal to avoid aliasing (except here you just use the brick-wall filter). This has the consequence of typically creating "kinks" at the boundary of the Voronoi tiles but is easier to compute and does not require fancy resummation.</p> <p>When the potential is long range, both methods will give rather different results. Depending on what you are interested in, one method may be more appropriate than another. An extreme example of this would be the 1D Coulomb potential for the interaction of two particles. In the first method, this will give periodic inverted parabolas, while in the second method, it will give a triangle wave. Notice that as expected, both agree at the minima, but the general structure is rather different.</p> <p>The two methods are different, but since they both obey the same consistency condition in the infinite spatial period limit, they will give similar results in this limit. Since this is the limit you are usually interested in (you typically want to model particles in space), for most practical purposes they coincide.</p> <p>Hope this helps.</p>
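<p>To make the minimum image convention concrete, here is a minimal one-dimensional sketch (a periodic box of length L; the function name is mine, not from any MD package):</p>

```python
def mic_displacement(x1, x2, L):
    """Minimum-image displacement from x1 to x2 in a periodic box of
    length L: among all images x2 + n*L, pick the one closest to x1."""
    dx = x2 - x1
    return dx - L * round(dx / L)

# Two particles near opposite walls of a box of length 10 are actually
# close neighbours through the boundary:
print(mic_displacement(1.0, 9.0, 10.0))  # -2.0 (nearest image at x = -1)
print(mic_displacement(1.0, 4.0, 10.0))  # 3.0 (no wrapping needed)
```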
|
Physics
|
|water|physical-chemistry|earth|molecules|geophysics|
|
Conservation of water?
|
<p>The number of water <em>molecules</em> is certainly not constant, because chemical reactions can create or destroy water. A simple example is the metabolism of glucose, which creates water.</p> <p><span class="math-container">$C_6H_{12}O_6 + 6O_2 \rightarrow 6 CO_2 + 6 H_2O + \mathrm{Energy}$</span></p> <p>The reverse reaction (destroying water) happens during photosynthesis.</p>
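<p>A small bookkeeping check of the reaction above, showing that the atoms balance even though six water molecules are created (a sketch):</p>

```python
from collections import Counter

# Atom counts per molecule
glucose = Counter(C=6, H=12, O=6)
O2 = Counter(O=2)
CO2 = Counter(C=1, O=2)
H2O = Counter(H=2, O=1)

def total(*terms):
    """Sum atom counts over (stoichiometric coefficient, molecule) pairs."""
    out = Counter()
    for coeff, mol in terms:
        for atom, count in mol.items():
            out[atom] += coeff * count
    return out

left = total((1, glucose), (6, O2))
right = total((6, CO2), (6, H2O))
print(left == right)  # True: atoms are conserved, water molecules are not
```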
|
Physics
|
|quantum-field-theory|lagrangian-formalism|path-integral|gauge-invariance|constrained-dynamics|
|
Why impose constraints in (Path Integral) Quantization of Proca action?
|
<p>The path integral quantization of a (free) massive vector field does not pose any problems, in particular it does not require imposing any additional constraint in the functional integral. A (real) spin <span class="math-container">$1$</span> field <span class="math-container">$V^\mu$</span> is described by the Lagrangian <span class="math-container">$$\mathcal{L}= -\frac{1}{4} V_{\mu \nu}V^{\mu \nu}+\frac{M^2}{2}V_\mu V^\mu, \qquad V_{\mu \nu}=\partial_\mu V_\nu-\partial_\nu V_\mu. \tag{1} \label{1} $$</span> The associated generating functional is defined by <span class="math-container">$$Z[J] = \langle 0| e^{-i\int\! d^4x \, J^\mu(x)\, V_\mu(x)} |0 \rangle = \int [dV] \, e^{iS[V,J]} \tag{2} \label{2} $$</span> with <span class="math-container">$$ \begin{align} S[V,J]&= \int \! d^4x \,\left(-\frac{1}{4} V_{\mu \nu}V^{\mu \nu} +\frac{M^2-i\epsilon}{2} V_\mu V^\mu - J_\mu V^\mu \right) \\[5pt] &= \int \! d^4 x \left\{\frac{1}{2}V^\mu \left[\eta_{\mu \nu} \left( \square+ M^2-i \epsilon\right) -\partial_\mu \partial_\nu \right] V^\nu -J_\mu V^\mu \right\} \end{align}\tag{3} \label{3} $$</span> and the normalization condition <span class="math-container">$Z[0]=1$</span>. The functional integral is evaluated by employing the usual shift of the integration variable <span class="math-container">$V_\mu = V_\mu^\prime+W_\mu$</span>, using translation invariance of the functional measure, <span class="math-container">$[dV_\mu]=[dV_\mu^\prime]$</span>, and choosing the field <span class="math-container">$W_\mu$</span> in such a way that the terms linear in the new variable of integration <span class="math-container">$V_\mu^\prime$</span> vanish. As a consequence, <span class="math-container">$W_\mu$</span> is determined by the differential equation <span class="math-container">$$\left[\eta_{\mu \nu}\left( \square+M^2-i \epsilon \right) - \partial_\mu\partial_\nu \right]W^\nu=J_\mu. 
\tag{4} \label{4} $$</span> The determination of the Green function <span class="math-container">$\Delta^{\nu \rho}(x)$</span> of the differential operator occurring in \eqref{4}, defined by <span class="math-container">$$ \left[\eta_{\mu \nu} \left(\square +M^2-i\epsilon \right) -\partial_\mu \partial_\nu \right]\Delta^{\nu\rho}(x)=\delta_\mu^{\;\rho} \, \delta^{(4)}(x), \tag{5} \label{5}$$</span> is straightforward (in contrast to the case of a massless gauge field, the inversion of the differential operator poses no problem), with its Fourier representation given by <span class="math-container">$$ \Delta^{\nu \rho}(x)=\int \!\frac{d^4k}{(2\pi)^4 }\, e^{-ik\cdot x} \,\frac{\eta^{\nu \rho}-k^\nu k^\rho/M^2}{M^2-k^2-i \epsilon}. \tag{6} \label{6} $$</span> The (unique) solution of \eqref{4} is thus given by <span class="math-container">$$W^\mu(x)=\int \! d^4 y \, \Delta^{\mu \nu}(x-y)\, J_\nu(y), \tag{7} \label{7}$$</span> yielding <span class="math-container">$$Z[J]=e^{-\frac{i}{2}\int\! d^4x \, d^4 y \, J_\mu(x) \, \Delta^{\mu \nu}(x-y) \, J^\nu(y)} \tag{8} \label{8}$$</span> for the generating functional.</p>
|
Physics
|
|homework-and-exercises|electrostatics|charge|conductors|
|
How does charge move in a conductor which is between two connected conductors?
|
<p>You can treat this as two parallel capacitors. One between the top plate and the top of the middle plate, and one between the bottom plate and the bottom of the middle plate.</p> <p>Since the dielectric thickness of the lower capacitor is twice that of the upper capacitor, it will have half the capacitance.</p> <p>So the charge "stored" in the lower capacitor will be <span class="math-container">$q/3$</span> and the charge "stored" in the upper capacitor will be <span class="math-container">$2q/3$</span>.</p> <p>Put another way, there will be <span class="math-container">$-2q/3$</span> on the upper plate, <span class="math-container">$2q/3$</span> on the upper surface of the middle plate, <span class="math-container">$q/3$</span> on the lower surface of the middle plate, and <span class="math-container">$-q/3$</span> on the lower plate.</p> <p>Edit to add:</p> <p>In comments you asked,</p> <blockquote> <p>Is there a way to solve it with fields/potential?</p> </blockquote> <p>You know the potentials of the top and bottom plates are equal, as are the potentials of the top and bottom surfaces of the middle plate.</p> <p>And the field between two "broad" planes of charge is uniform (you can show this with Gauss's law).</p> <p>Then since the potential is the integral of the field along a path, and the path from the top plate to the middle plate is half the distance of the path from the bottom plate to the middle plate, the field strength must be twice as high in the upper dielectric as in the lower dielectric.</p> <p>Then since <span class="math-container">$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0 \varepsilon_r}$$</span> (the differential form of Gauss's law), the charge on the top surface of the middle plate must be double the charge on the bottom surface. (Put in more hand-wavy terms, there must be more charge on the top surface to terminate the more closely spaced field lines.)</p>
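<p>The charge split can be reproduced with the two-parallel-capacitor picture (a sketch; the plate area, gap, and total charge below are arbitrary illustrative values, since only the capacitance ratio matters):</p>

```python
eps0 = 8.854e-12        # F/m, vacuum permittivity
A, d = 1.0e-2, 1.0e-3   # plate area (m^2) and upper gap (m); arbitrary

C_upper = eps0 * A / d        # upper gap d
C_lower = eps0 * A / (2 * d)  # lower gap 2d -> half the capacitance

q = 3.0e-6  # total charge (C); arbitrary
# Both capacitors see the same voltage, so charge divides in ratio of C
q_upper = q * C_upper / (C_upper + C_lower)
q_lower = q * C_lower / (C_upper + C_lower)
print(q_upper, q_lower)  # 2q/3 and q/3
```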
|
Physics
|
|electrostatics|electric-fields|potential|conventions|integration|
|
Problem with understanding the definition of electric potential
|
<p>The electrostatic field is defined as a gradient of a scalar potential: <span class="math-container">$$\vec E(\vec r)=-\nabla V(\vec r).$$</span> Thus there is considerable freedom in how one might define a scalar potential for any given electric field. One might add any constant <span class="math-container">$V_0$</span> to it without changing the physics one bit: <span class="math-container">$$V^\prime(\vec r)=V(\vec r)+V_0\implies\vec E^\prime(\vec r)=-\nabla(V(\vec r)+V_0)=-\nabla V(\vec r)=\vec E(\vec r).$$</span> This freedom of choice can be used to one's advantage, i.e. one can use it to get rid of terms like <span class="math-container">$V(\vec r_0)$</span>. For example, as you note in your post, <span class="math-container">$$V(\vec r)=V(\vec r_0)+\int_{\vec r_0}^{\vec r}-\vec E(\vec r)\cdot d\vec r;$$</span> however, one may write: <span class="math-container">$$V^\prime (\vec r)=V(\vec r)+V_0=V(\vec r_0)+V_0+\int_{\vec r_0}^{\vec r}-\vec E(\vec r)\cdot d\vec r.$$</span> Note that with the modified potential and setting <span class="math-container">$V_0=-V(\vec r_0)$</span>, the pesky <span class="math-container">$V(\vec r_0)$</span> is cancelled out without making any assumption that <span class="math-container">$\vec r_0\rightarrow +\infty$</span>. Physically this is tantamount to the statement that only potential <em>differences</em> are relevant to physics, not the absolute value of the potential at any given point; one is free to choose any point to be the "zero point" and measure the potential at all other points relative to it. For practical purposes it is often convenient to choose this point at infinity; however, it is not at all necessary.</p> <p>The quantity in the integrand is the electric field <span class="math-container">$\vec E(\vec r)$</span>, and it is difficult to imagine how any finite charge distribution can give rise to an electric field that does not diminish at infinity.
Thus, it is difficult to comment on your specific problem in this regard without an explicit example of a supposed divergent field <span class="math-container">$\vec E$</span>.</p>
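<p>As an illustration: for a point-charge field E = 1/r² (in units where kq = 1), the potential difference between two finite points follows from the line integral alone, with no reference point at infinity needed (a sketch with illustrative numbers):</p>

```python
# Potential difference V(r) - V(r0) = -\int_{r0}^{r} E dr for E = 1/r^2
# (point charge, units with k*q = 1), by simple midpoint-rule integration.
def potential_difference(r0, r, steps=100_000):
    h = (r - r0) / steps
    total = 0.0
    for i in range(steps):
        x = r0 + (i + 0.5) * h
        total += (1.0 / x**2) * h
    return -total

r0, r = 1.0, 2.0
numeric = potential_difference(r0, r)
exact = 1.0 / r - 1.0 / r0  # analytic result: kq(1/r - 1/r0)
print(numeric, exact)  # both ~ -0.5; no limit r0 -> infinity was taken
```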
|
Physics
|
|visible-light|scattering|
|
Calculation of illumination from the Sun and sky - explanation needed
|
<p>Please transcribe or paraphrase enough of the surrounding text so that this question can be intelligible without reference to a third-party link, and add it to your question.</p> <p>"IO" is just <span class="math-container">$10$</span> on the author's typewriter.</p> <p><span class="math-container">$M$</span> will be the physical observable corresponding to the coefficient <span class="math-container">$\epsilon$</span>, giving a unitless number when the two are multiplied together. From the context, it's the mass of a meter-square column of atmosphere in the direction of the light, corresponding to <span class="math-container">$\epsilon$</span> being the mass absorption coefficient. If <span class="math-container">$M$</span> were the moles of air in the column and <span class="math-container">$\epsilon$</span> the molar absorption coefficient, you'd get the same number, so it doesn't really matter which pair you choose.</p>
|
Physics
|
|thermodynamics|energy|potential-energy|
|
When is the internal energy of a system not considered potential energy?
|
<p><strong>This is a serious edit to rectify my mistake of not explaining things correctly or precisely enough.</strong></p> <p>I must rewrite this answer, as I misdirected the OP toward only one side of the story. Thanks to @BobD, whose answer is to the point and precise; I will try again to be as technically precise as possible.</p> <p>I must be clear, as already mentioned in BobD's answer, that any non-kinetic energy can generally be called potential energy, except in some cases; see <a href="https://physics.stackexchange.com/a/245189/283030">https://physics.stackexchange.com/a/245189/283030</a></p> <p>In classical mechanics, when you say internal energy, you really should ask: internal to what?</p> <p>Let us consider an example: you are observing a ball falling translationally (not rotating) from a height above the earth.</p> <p>When you only observe the ball, the internal energy of the ball is</p> <blockquote> <p><span class="math-container">$I_{ball}=P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ ball}}+K_{\mathbb{microscopic \ w.r.t \ Centre \ of \ mass \ of \ ball}}\tag 1$</span></p> </blockquote> <blockquote> <p>When you only observe the earth, the internal energy of the earth is <span class="math-container">$I_{earth}=P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ earth}}+K_{\mathbb{microscopic \ of \ earth \ w.r.t \ Centre \ of \ mass \ of \ earth}}\tag 2$</span></p> </blockquote> <p>where <span class="math-container">$P$</span> and <span class="math-container">$K$</span> are potential and kinetic energies at the microscopic level. The microscopic interactions include electrostatic, gravitational, and all other interactions.</p> <p>When you observe the complete "ball-earth" system from outside, the internal energy is</p> <blockquote> <p><span class="math-container">$$I= P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ ball}}+P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ earth}}+P_{\mathbb{microscopic\ interaction\ between\
atoms \ of \ ball \ and \ earth}}+K_{\mathbb{microscopic \ of \ earth \ w.r.t \ Centre \ of \ mass \ of \ earth}}+K_{\mathbb{microscopic \ of \ ball \ w.r.t \ Centre \ of \ mass \ of \ ball}} \tag 3$$</span></p> </blockquote> <p>Now, keeping this in mind,</p> <blockquote> <p>you can write <span class="math-container">$$E=K_{\ Centre \ of \ mass \ of \ ball }+K_{\ Centre \ of \ mass \ of \ Earth }+I$$</span></p> </blockquote> <p>which can be re-written using (3) as</p> <blockquote> <p><span class="math-container">$$E=K_{\ Centre \ of \ mass \ of \ ball }+K_{\ Centre \ of \ mass \ of \ Earth }+K_{\mathbb{microscopic \ of \ earth \ w.r.t \ Centre \ of \ mass \ of \ earth}}+K_{\mathbb{microscopic \ of \ ball \ w.r.t \ Centre \ of \ mass \ of \ ball}}+P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ ball}}+P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ earth}}+P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ ball \ and \ earth}}$$</span></p> </blockquote> <p>This can be reduced further to</p> <blockquote> <p><span class="math-container">$$E=K_{\ Centre \ of \ mass \ of \ ball }+K_{\ Centre \ of \ mass \ of \ Earth }+I_{ball}+I_{earth}+P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ ball \ and \ earth}}$$</span></p> </blockquote> <p>Now let <span class="math-container">$P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ ball \ and \ earth}}=P_{EB}$</span>, which includes both gravitational and non-gravitational interactions:</p> <p><span class="math-container">$P_{EB}=P_{EB \ gravitational}+P_{EB \ non-gravitational}$</span></p> <p>This <span class="math-container">$P_{EB \ gravitational}$</span> is what is called the gravitational potential energy <span class="math-container">$U$</span> in this case, so the final equation becomes</p> <blockquote> <p><span class="math-container">$$E=K_{\ Centre \ of \ mass \ of \ ball }+U+I_{ball}+I_{earth}+P_{EB \ non-gravitational}+K_{\ Centre \ of \ mass \ of \ Earth }$$</span></p> </blockquote> <p>For
an analysis of changes, this becomes</p> <blockquote> <p><span class="math-container">$$\Delta E=\Delta K_{\ Centre \ of \ mass \ of \ ball }+\Delta U+\Delta I_{ball}+\Delta I_{earth}+\Delta P_{EB \ non-gravitational}+\Delta K_{\ Centre \ of \ mass \ of \ Earth }\tag 4$$</span></p> </blockquote> <p>Now, in most mechanics problems, the term</p> <p><span class="math-container">$$I_{earth}+P_{EB \ non-gravitational}+K_{\ Centre \ of \ mass \ of \ Earth }$$</span> is assumed to be constant,</p> <p>hence <span class="math-container">$$\Delta I_{earth}+\Delta P_{EB \ non-gravitational}+\Delta K_{\ Centre \ of \ mass \ of \ Earth }=0,$$</span></p> <p>which gives the final result</p> <blockquote> <p><span class="math-container">$$\Delta E=\Delta K_{\ Centre \ of \ mass \ of \ ball }+\Delta U+\Delta I_{ball}\tag 5$$</span></p> </blockquote> <p>Note here that <span class="math-container">$\Delta I_{ball}$</span> cannot be placed solely in the potential-energy category, because it includes both <span class="math-container">$\Delta P_{\mathbb{microscopic\ interaction\ between\ atoms \ of \ ball}}$</span> and <span class="math-container">$\Delta K_{\mathbb{microscopic \ w.r.t \ Centre \ of \ mass \ of \ ball}}$</span>.</p> <p>The internal energy of the ball can be changed by deforming it, which changes the potential part of the internal energy, or by heating it, which changes both the kinetic and the potential parts of the internal energy.</p> <p>This answers your question.</p> <p>Note: In the complete analysis of kinetic energy, I have used <span class="math-container">$K_{total}=K_{C.O.M}+K_{w.r.t \ C.O.M}$</span>.</p>
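The identity in the note, $K_{total}=K_{C.O.M}+K_{w.r.t \ C.O.M}$, is easy to verify numerically for a toy system of point particles (the random masses and velocities below are my own illustration, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1.0, 5.0, size=8)          # particle masses (arbitrary units)
v = rng.normal(size=(8, 3))                # particle velocities

K_total = 0.5 * np.sum(m * np.sum(v**2, axis=1))

v_com = np.sum(m[:, None] * v, axis=0) / m.sum()    # centre-of-mass velocity
K_com = 0.5 * m.sum() * v_com @ v_com               # K of the centre of mass
v_rel = v - v_com
K_rel = 0.5 * np.sum(m * np.sum(v_rel**2, axis=1))  # K w.r.t. the centre of mass

# The cross term vanishes because sum(m * v_rel) = 0 by construction
assert np.isclose(K_total, K_com + K_rel)
```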
|
Physics
|
|special-relativity|spring|statics|
|
Proper treatment of springs in special relativity
|
<p>I think your analysis is essentially correct, but is only valid for springs and forces parallel to the relative motion. Let's generalise it a bit more but remain with the static case where the proper force is not varying in the rest frame:</p> <p>Assumptions:</p> <ol> <li>Force transforms as per the Lorentz transformation of <em><strong>static</strong></em> force.</li> <li>Length transforms as per the Lorentz transformation of length.</li> <li>The springs obey Hooke's law in the rest frame of the spring.</li> </ol> <p>In the rest frame of the spring, we have:</p> <p><span class="math-container">$$ k = F / r $$</span></p> <p>where <span class="math-container">$r$</span> is the difference between the length of the stretched spring and the length of the spring when it is not under stress. To an observer moving relative to the rest frame of the spring:</p> <p><span class="math-container">$$ k'_{\parallel} = F'_{\parallel} / r'_{\parallel} = (F_{\parallel} )/ (r_{\parallel} \gamma^{-1}) = k \gamma$$</span></p> <p><span class="math-container">$$ k'_{\perp} = F'_{\perp} / r'_{\perp} = (F_{\perp} \gamma^{-1}) / (r_{\perp}) = k \gamma^{-1}$$</span></p> <p>so I agree with your conclusion that the spring constant is not Lorentz invariant.</p> <p>The relativistic form of Hooke's law becomes:</p> <p><span class="math-container">$$ F'_{\parallel} = k'_{\parallel} \ r'_{\parallel} = (k \gamma) \ (r_{\parallel} \gamma^{-1} ) = k \ r_{\parallel} = F_{\parallel}$$</span></p> <p><span class="math-container">$$ F'_{\perp} = k'_{\perp} \ r'_{\perp} = (k \gamma^{-1}) \ (r_{\perp}) = k \ r_{\perp} \ \gamma^{-1} = F_{\perp} \gamma^{-1}$$</span></p>
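A quick numerical check of these transformation rules (the values of $v$, $k$ and the extensions are arbitrary choices of mine):

```python
import math

c = 1.0
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

k, r_par, r_perp = 2.0, 0.5, 0.5   # rest-frame spring constant and extensions

# Rest-frame Hooke's law
F_par, F_perp = k * r_par, k * r_perp

# For an observer moving parallel to r_par:
F_par_p, F_perp_p = F_par, F_perp / gamma   # static-force transformation
r_par_p, r_perp_p = r_par / gamma, r_perp   # length contraction

k_par_p = F_par_p / r_par_p
k_perp_p = F_perp_p / r_perp_p

assert math.isclose(k_par_p, k * gamma)      # k' parallel = k * gamma
assert math.isclose(k_perp_p, k / gamma)     # k' perpendicular = k / gamma
```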
|
Physics
|
|electromagnetism|special-relativity|charge|mass|inertial-frames|
|
Relativistic mass and relativistic charge
|
<p>The way we deal with electric charge in relativity is to introduce it as part of a complete set of ideas about electromagnetic fields and charged bodies. You are quite right that in this formulation the amount of electric charge on a given body is independent of the inertial frame in which the body may be being observed. We say it is 'invariant' or (to be clear) 'Lorentz invariant'. The word 'invariant' means the same, at any given event, no matter what inertial reference frame may be being adopted to define spatial and temporal coordinates.</p> <p>Another important Lorentz invariant property of any particle is the rest mass <span class="math-container">$m$</span>. In terms of this quantity, the momentum of a body moving at speed <span class="math-container">$v$</span> is <span class="math-container">$$ p = \gamma m v. \tag{1} $$</span> Our thinking is clearer if we regard <span class="math-container">$m$</span> as the important mass-related property here, not <span class="math-container">$(\gamma m)$</span>. The equation setting out how the momentum relates to a vector force is <span class="math-container">$$ {\bf f} = \frac{d {\bf p}}{dt} \tag{2} $$</span> Hence <span class="math-container">$$ {\bf f} = \frac{d\gamma}{dt}m {\bf v} + \gamma \frac{dm}{dt}{\bf v} + \gamma m \frac{d{\bf v}}{dt}. $$</span> For forces from electromagnetic fields the rest mass doesn't change as the particle accelerates, so this simplifies to <span class="math-container">$$ {\bf f} = \frac{d\gamma}{dt}m {\bf v} + \gamma m \frac{d{\bf v}}{dt}. $$</span> Notice that this is <em>not</em> <span class="math-container">$(\gamma m)$</span> times the acceleration. I have mentioned this to give you some idea of why it is that it doesn't help much to gather the Lorentz factor and the rest mass together and call the combination by a name such as 'relativistic mass'. It's better to think of the situation as a given rest mass and then a momentum related to that by the formula (1).
The important point is that the rest mass is the same in all inertial frames. That means that in order to find the momentum in any given frame you have to use the combination <span class="math-container">$\gamma v$</span> (which can depend on the frame) and multiply by the same <span class="math-container">$m$</span> no matter which frame it is.</p> <p>Once you have settled that idea, you will also see that the ratio of charge to rest mass, for any given body, is the same in all inertial frames. When a body of given charge and rest mass is accelerated by an electric field, the charge and rest mass stay fixed while the momentum increases. The degree to which the momentum is sensitive to force is also fixed (see equation (2)). But as <span class="math-container">$\gamma$</span> gets larger the acceleration <span class="math-container">$d{\bf v}/dt$</span> gets smaller, for any given force.</p>
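The claim that the force is not $(\gamma m)$ times the acceleration can be checked numerically for straight-line motion, where differentiating (1) gives $f = \gamma^3 m \, \dot{v}$ (the value $v = 0.8c$ is my own choice, with $c = 1$):

```python
import math

m, c = 1.0, 1.0

def p(v):
    """Relativistic momentum p = gamma * m * v (formula (1), with c = 1)."""
    return m * v / math.sqrt(1.0 - v**2 / c**2)

v, dv = 0.8, 1e-6
# For straight-line motion f = dp/dt = (dp/dv)(dv/dt), so dp/dv is the
# ratio of force to acceleration.
f_per_a = (p(v + dv) - p(v - dv)) / (2 * dv)

gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
assert math.isclose(f_per_a, gamma**3 * m, rel_tol=1e-6)   # f = gamma^3 m a
assert not math.isclose(f_per_a, gamma * m, rel_tol=1e-3)  # not (gamma m) a
```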
|
Physics
|
|newtonian-mechanics|forces|friction|
|
Friction force variable or non-variable force
|
<p>Whenever two bodies undergo relative motion, they pull on each other with kinetic friction, i.e. F = (kinetic friction coefficient) × (normal force). When two bodies merely have the <em><strong>tendency</strong></em> to undergo relative motion, they subject each other to static friction.</p> <p><strong>To answer your question:</strong> If you displace the block, then the block and the ground experience relative motion, hence they pull on each other with kinetic friction. This means that throughout the displacement (since there would be relative motion during the complete displacement) the frictional force between them would not change.</p>
|
Physics
|
|quantum-field-theory|symmetry|group-theory|lie-algebra|
|
Lie group symmetry in Weinberg's QFT book
|
<p>You are correct, (2.B.10) is actually not true. But it's only used through (2.B.11), and (2.B.11) does hold, because the derivative <span class="math-container">$\frac{\partial^2 f^a(\theta, \theta_1)}{\partial\theta^b \partial\theta^c}$</span> is symmetric in <span class="math-container">$b$</span> and <span class="math-container">$c$</span>.</p>
|
Physics
|
|differential-geometry|curvature|mathematics|geometry|space|
|
How can a triangle have a sum exceeding 180 degrees in a curved space?
|
<p>Here are three diagrams to illustrate that the angle sum of a triangle can differ from <span class="math-container">$180^\circ$</span>.</p> <p><a href="https://i.stack.imgur.com/nT1iA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nT1iA.jpg" alt="enter image description here" /></a></p> <p>The Wikipedia article <a href="https://en.wikipedia.org/wiki/Spherical_geometry" rel="nofollow noreferrer">Spherical Geometry</a> might be of interest?<br /> It has a nice illustration relating to measuring angles of a triangle on the Earth.<br /> <a href="https://i.stack.imgur.com/tKfoz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tKfoz.jpg" alt="enter image description here" /></a></p>
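The sphere diagrams can be made quantitative with Girard's theorem, which ties the angle sum to the triangle's area: area $= R^2 \times$ (angle sum $- \pi$). A minimal sketch (the three-right-angle octant triangle is the classic example):

```python
import math

def spherical_triangle_area(angles_deg, R=1.0):
    """Girard's theorem: area = R^2 * (angle sum - pi), the spherical excess."""
    return R**2 * (math.radians(sum(angles_deg)) - math.pi)

# An octant of a sphere (e.g. the North pole plus two equator points 90
# degrees apart) has three right angles: the angle sum is 270, not 180.
area = spherical_triangle_area([90, 90, 90])
assert math.isclose(area, (4 * math.pi) / 8)   # exactly 1/8 of the sphere
```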
|
Physics
|
|newtonian-mechanics|energy|rigid-body-dynamics|
|
Total kinetic energy confusion
|
<blockquote> <p>lets say we have this object which we can make roll really fast (above ground) and so it has a lot of rotational kinetic energy but no translational kinetic energy.</p> </blockquote> <p>Great example.</p> <blockquote> <p>We then put it on the ground</p> </blockquote> <p>Let's first imagine that we put it on a super-slippery ice rink. The object continues to spin, but without friction, it can't turn that rotation into motion. It just sits there spinning. So the energy hasn't changed when it reached the ground.</p> <p>But now let's imagine a normal ground with friction. The friction between the two creates a force on the object. This force has two effects:</p> <ul> <li>it accelerates the object (linearly) in one direction</li> <li>it creates a torque on the rotation of the object which slows down the spin</li> </ul> <p>These two effects oppose each other. The translational energy goes up as the rotational energy goes down. (In addition, some energy will also be lost as the object skids on the ground).</p> <blockquote> <p>maybe at every moment rotational energy will be lost to translational kinetic energy (and heat)? and that's how it works?</p> </blockquote> <p>Correct.</p>
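For a uniform solid sphere, the skidding phase can be followed quantitatively: angular momentum about the contact line is conserved while kinetic energy is not. A sketch (unit mass and radius are my own choices; the rolling condition $v_f = \omega_f R$ fixes the final state):

```python
m, R, I = 1.0, 1.0, 2.0 / 5.0      # uniform solid sphere, I = (2/5) m R^2
omega0 = 10.0                       # initial spin, zero initial velocity

# Angular momentum about the contact line is conserved while skidding:
#   I*omega0 = I*omega_f + m*v_f*R,  with rolling condition v_f = omega_f*R
omega_f = I * omega0 / (I + m * R**2)
v_f = omega_f * R

E0 = 0.5 * I * omega0**2                       # all rotational at the start
Ef = 0.5 * I * omega_f**2 + 0.5 * m * v_f**2   # rotational + translational

assert v_f > 0                       # some spin has become translation...
assert Ef < E0                       # ...and some energy was lost to skidding
assert abs(Ef / E0 - 2 / 7) < 1e-12  # 5/7 of the energy goes to heat
```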
|
Physics
|
|quantum-mechanics|operators|hilbert-space|quantum-information|density-operator|
|
Schmidt decomposition of density operators
|
<p>The density matrix can be written as <span class="math-container">$$ \rho = \sum_{ijkl} \rho_{ijkl} |i\rangle_A|j\rangle_B \langle k|_A\langle l |_B\;, $$</span> where <span class="math-container">$|i\rangle_A,|j\rangle_B, \langle k|_A$</span> and <span class="math-container">$\langle l |_B$</span> are respectively bases for <span class="math-container">$\mathcal{H}_A$</span>, <span class="math-container">$\mathcal{H}_B$</span>, <span class="math-container">$\mathcal{H}_A^\dagger$</span> and <span class="math-container">$\mathcal{H}_B^\dagger$</span>.</p> <p>Your Schmidt decomposition groups terms like this <span class="math-container">$$ \rho = \sum_{ijkl} \rho_{ijkl} ( |i\rangle_A\langle k|_A)(|j\rangle_B \langle l |_B) $$</span> before diagonalizing. As you can see, the left factor, which will form your <span class="math-container">$|\sigma_k\rangle\rangle$</span> basis, is indeed from <span class="math-container">$\mathcal{H}_A\otimes\mathcal{H}_A^\dagger$</span> and similarly <span class="math-container">$|\Gamma_k\rangle\rangle \in \mathcal{H}_B\otimes\mathcal{H}_B^\dagger$</span>.</p>
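This grouping can be carried out explicitly with an SVD: reshape $\rho_{ijkl}$ so that the row index is $(i,k)$ and the column index is $(j,l)$; the singular vectors then play the role of the $|\sigma_k\rangle\rangle$ and $|\Gamma_k\rangle\rangle$ operator bases. A sketch (the dimensions and the random state are my own choices):

```python
import numpy as np

dA, dB = 2, 3
rng = np.random.default_rng(1)

# A random density matrix on H_A (x) H_B (Hermitian, positive, trace 1)
M = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# rho_{ijkl} |i>_A |j>_B <k|_A <l|_B  ->  group (i,k) and (j,l), then SVD
rho_t = rho.reshape(dA, dB, dA, dB).transpose(0, 2, 1, 3).reshape(dA * dA, dB * dB)
U, s, Vh = np.linalg.svd(rho_t)

# Reassemble: rho = sum_k s_k * sigma_k (x) Gamma_k (operator Schmidt form)
rebuilt = sum(
    s[k] * np.kron(U[:, k].reshape(dA, dA), Vh[k].reshape(dB, dB))
    for k in range(len(s))
)
assert np.allclose(rebuilt, rho)
```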
|
Physics
|
|quantum-mechanics|heisenberg-uncertainty-principle|commutator|error-analysis|observables|
|
Is there a physical cause of uncertainty?
|
<p>I like to see the quantumness of small-scale systems as not so weird as it first seems, because most of it can be understood from a wave perspective.</p> <p>Regarding the Uncertainty Principle and the impossibility of measuring conjugate variables, like position and velocity, something similar also happens in the classical world; let me explain:</p> <p>Imagine a harmonic wave with a definite <span class="math-container">$k$</span> and <span class="math-container">$\omega$</span>; then add a second, coherent wave with a slightly different <span class="math-container">$k$</span> (and correspondingly slightly different frequency). The result is a superposed wave in which certain maxima and minima cancel, with a less well-defined wavelength. If one adds infinitely many such waves to the first one, we get a wave packet: an entity that is localised in space but not in the frequency domain. This is the content of the Fourier transform in mathematics, and it shows how localisation in space implies a spreading in frequency and vice versa, already in the classical domain. Nevertheless, this doesn't apply to classical particles, for which one can in principle measure everything exactly.</p> <p>Therefore, in quantum mechanics, where particle and wave properties are mixed and combined, you will have the same uncertainty principle. The real quantumness of the Heisenberg Uncertainty Principle is that there exists a fundamental limit in the relation between the uncertainties: while in classical mechanics <span class="math-container">$\Delta x \Delta p = 0$</span> is possible, in quantum mechanics the product must always be greater than a quantum scale, i.e.
<span class="math-container">$\Delta x \Delta p \geq \frac{\hbar}{2}$</span>, meaning that it is impossible to measure both variables with 100% accuracy.</p> <p>The physical solutions of the Schrödinger equation are those that can be normalized in space; thus the free plane wave is not a physical solution, since it has <span class="math-container">$\Delta x = \infty$</span> and <span class="math-container">$\Delta p = 0$</span>. In that case, physicists use the Fourier transform to build a wave packet that is normalizable in the Hilbert space, making the solution inevitably fulfil the uncertainty relations.</p> <p>This video can be useful to you to understand what is purely classical in the uncertainty principle: <a href="https://youtu.be/MBnnXbOM5S4?si=SZIABAtDy3ZLj6Mt" rel="nofollow noreferrer">https://youtu.be/MBnnXbOM5S4?si=SZIABAtDy3ZLj6Mt</a></p> <p>T.</p>
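The classical Fourier trade-off can be checked numerically: a Gaussian packet of width $\sigma$ saturates $\Delta x \, \Delta k = 1/2$. A minimal sketch (the grid size and $\sigma$ are arbitrary choices of mine):

```python
import numpy as np

N = 2**14
x = np.linspace(-50, 50, N)
h = x[1] - x[0]
sigma = 2.0

psi = np.exp(-x**2 / (4 * sigma**2))           # Gaussian packet, Delta_x = sigma
psi /= np.sqrt(np.sum(np.abs(psi)**2) * h)     # normalize |psi|^2

dx = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * h)   # <x> = 0 by symmetry

# Fourier transform to k-space (angular wavenumber)
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=h))
phi = np.fft.fftshift(np.fft.fft(psi))
phi /= np.sqrt(np.sum(np.abs(phi)**2) * (k[1] - k[0]))

dk = np.sqrt(np.sum(k**2 * np.abs(phi)**2) * (k[1] - k[0]))

assert np.isclose(dx * dk, 0.5, rtol=1e-3)     # Gaussian saturates the bound
```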
|
Physics
|
|electromagnetism|vector-fields|
|
Doubt about the derivation of Liénard-Wiechert Potentials?
|
<p>You're right that some problems could arise from a kind of circular dependence of the variables involved. Let's get some partial results first and then put everything together:</p> <ul> <li><p>gradient of the retarded time <span class="math-container">$\nabla t_r(\mathbf{r}) = \nabla \left( t - \frac{|\mathbf{r} - \mathbf{r}_s(t_r)|}{c} \right)$</span>, which from direct computation reads <span class="math-container">$$\nabla t_r = -\frac{1}{c} \nabla|\mathbf{r} - \mathbf{r}_s(t_r)| \ ,$$</span> we don't need to manipulate this result further, so far, since the unknown has appeared;</p> </li> <li><p>gradient of the absolute value <span class="math-container">$|\mathbf{r} - \mathbf{r}_s(t_r)|$</span>, which by direct computation reads <span class="math-container">$$\begin{aligned} \nabla |\mathbf{r} - \mathbf{r}_s(t_r)| & = \frac{1}{2 |\mathbf{r} - \mathbf{r}_s(t_r)|} \nabla |\mathbf{r} - \mathbf{r}_s(t_r)|^2 = \\ & = \frac{1}{2|\mathbf{r} - \mathbf{r}_s(t_r)|} \, 2 \left[ \mathbf{r} - \mathbf{r}_s(t_r) - ( \mathbf{r} - \mathbf{r}_s(t_r) ) \cdot \frac{d \mathbf{r}_s(t_r)}{d t_r} \nabla t_r \right] \end{aligned} \ .$$</span></p> </li> <li><p>Using the expression of the first point for <span class="math-container">$\nabla t_r$</span> and defining <span class="math-container">$\mathbf{\hat{n}} = \frac{\mathbf{r} - \mathbf{r}_s(t_r)}{|\mathbf{r} - \mathbf{r}_s(t_r)|} $</span>, and <span class="math-container">$\boldsymbol{\beta} = \frac{1}{c}\frac{d \mathbf{r}_s(t_r)}{d t_r}$</span>, it should not be too hard to recast the last equation as <span class="math-container">$$\nabla |\mathbf{r} - \mathbf{r}_s(t_r)| = \mathbf{\hat{n}} + \mathbf{\hat{n}} \cdot \boldsymbol{\beta} \, \nabla |\mathbf{r} - \mathbf{r}_s(t_r)|$$</span></p> </li> <li><p>Eventually, solving for <span class="math-container">$\nabla |\mathbf{r} - \mathbf{r}_s(t_r)|$</span>, we get the desired expression <span class="math-container">$$\nabla |\mathbf{r} - \mathbf{r}_s(t_r)| = \frac{\mathbf{\hat{n}}}{1
- \mathbf{\hat{n}} \cdot \boldsymbol{\beta}} \ .$$</span></p> </li> </ul> <p>That should provide a way to get the desired result.</p> <hr /> <p><strong>Edit 1. Required details about <span class="math-container">$\nabla |\mathbf{r} - \mathbf{r}_s(t_r)|^2.$</span></strong> Direct calculation using Cartesian coordinates (for simplicity) gives</p> <p><span class="math-container">$$\begin{aligned} \left\{ \nabla |\mathbf{r} - \mathbf{r}_s(t_r)|^2 \right\}_i & = \partial_i \sum_k \left( r_k - r_k^s(t_r(r_j)) \right)^2 = \\ & = 2 \sum_k \left( \partial_i r_k - \partial_i r_k^s \right) \left( r_k - r_k^s(t_r(r_j)) \right) = \\ & = 2 \sum_k \left( \delta_{ik} - \partial_{t_r} r_k^s \partial_i t_r(r_j) \right) \left( r_k - r_k^s(t_r(r_j)) \right) = \\ & = 2 \left( r_i - r_i^s(t_r(r_j)) \right) - 2 \partial_i t_r(r_j) \sum_k \partial_{t_r} r_k^s \left( r_k - r_k^s(t_r(r_j)) \right) = \\ & = 2 \left\{ \mathbf{r} - \mathbf{r}^s - \nabla t_r \, \dfrac{d \mathbf{r}^s}{dt_r} \cdot ( \mathbf{r} - \mathbf{r}^s ) \right\}_i \end{aligned}$$</span></p>
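The final formula can be sanity-checked numerically for a charge in uniform motion: solve the implicit retarded-time equation by fixed-point iteration and compare a finite-difference gradient of $|\mathbf{r}-\mathbf{r}_s(t_r)|$ with $\mathbf{\hat{n}}/(1-\mathbf{\hat{n}}\cdot\boldsymbol{\beta})$. A sketch (the trajectory, field point and tolerances are my own choices):

```python
import numpy as np

c = 1.0
beta_vec = np.array([0.6, 0.0, 0.0])          # source velocity / c (constant)

def r_s(t):                                    # source trajectory
    return c * beta_vec * t

def R_of(r, t=0.0, iters=200):
    """|r - r_s(t_r)| with t_r from the implicit equation t_r = t - R/c."""
    t_r = t
    for _ in range(iters):                     # contraction, since |beta| < 1
        t_r = t - np.linalg.norm(r - r_s(t_r)) / c
    return np.linalg.norm(r - r_s(t_r)), t_r

r = np.array([2.0, 1.0, 0.5])
R, t_r = R_of(r)

n_hat = (r - r_s(t_r)) / R
predicted = n_hat / (1.0 - n_hat @ beta_vec)   # the formula derived above

# Finite-difference gradient of |r - r_s(t_r(r))| at fixed observation time
eps = 1e-6
grad = np.array([
    (R_of(r + eps * e)[0] - R_of(r - eps * e)[0]) / (2 * eps)
    for e in np.eye(3)
])
assert np.allclose(grad, predicted, atol=1e-5)
```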
|
Physics
|
|special-relativity|speed-of-light|relative-motion|
|
Determining the time elapsed between two events in
|
<p>You are overlooking the fact that the observer in the rocket uses two different synchronised clocks to measure how long the photon takes to traverse the rocket and these clocks will not be synchronised according to an observer with motion relative to the rocket. This is known as the "<a href="https://en.wikipedia.org/wiki/Relativity_of_simultaneity" rel="nofollow noreferrer">relativity of simultaneity</a>" and is the thing most likely to catch people out in relativity calculations.</p> <p>It is best in general to use the <a href="https://en.wikipedia.org/wiki/Lorentz_transformation" rel="nofollow noreferrer">Lorentz transformations</a> that take care of all these issues.</p> <p>Unfortunately, you have defined the primed and unprimed frames differently to how Wikipedia does, so you will have to swap those values when using the Wikipedia Lorentz transformation equations.</p> <p>For your example, start with <span class="math-container">$t = \gamma (t' -v l_0 /c^2)$</span> where <span class="math-container">$v$</span> is the velocity of the observer relative to the initial reference frame.</p> <p>As you have stated, <span class="math-container">$t' = l_0/c$</span> and we can substitute this into the transformation and get: <span class="math-container">$$t = \gamma (l_0/c -v \ l_0 /c^2) = \gamma \ \frac{l_0} c (1 -v/c)$$</span> which differs from your result by a factor of <span class="math-container">$(1-v/c)$</span>.</p> <p>You correctly stated that the speed of light is the same in all reference frames, so let's check how that works out here. The Lorentz transformation for the spatial coordinate is <span class="math-container">$l = \gamma (l_0 -v t')$</span> and since <span class="math-container">$t' = l_0/c$</span> we get: <span class="math-container">$$l = \gamma (l_0 -v l_0 /c) = \gamma \ l_0(1 -v/c) $$</span></p> <p><span class="math-container">$$ c = \frac l t = \frac{\gamma \ l_0(1 -v/c)} {\gamma \ \frac{l_0} c (1 -v/c)} =c $$</span> confirming that <span class="math-container">$c$</span> is the same in both reference frames.</p>
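A quick numerical check of these transformations (with $c=1$, $l_0=1$ and $v=0.5c$ as arbitrary choices of mine):

```python
import math

c, l0 = 1.0, 1.0
v = 0.5 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)

t_prime = l0 / c                        # photon crossing time in the rocket frame
# Lorentz-transform the crossing event into the other frame
t = gamma * (t_prime - v * l0 / c**2)
l = gamma * (l0 - v * t_prime)

assert math.isclose(t, gamma * (l0 / c) * (1 - v / c))
assert math.isclose(l / t, c)           # light speed is frame independent
```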
|
Physics
|
|general-relativity|differential-geometry|metric-tensor|tensor-calculus|curvature|
|
Independent Components of the Riemann Curvature Tensor
|
<p>Your symmetry properties do not make sense. The correct ones are <span class="math-container">$$ R_{\rho\sigma\mu\nu} = - R_{\sigma\rho\mu\nu} $$</span> <span class="math-container">$$ R^\rho{}_{\sigma\mu\nu} = - R^\rho{}_{\sigma\nu\mu} $$</span> <span class="math-container">$$ R^\rho{}_{\sigma\mu\nu} = - R^\rho{}_{\nu\sigma\mu} - R^\rho{}_{\mu\nu\sigma} $$</span></p>
|
Physics
|
|homework-and-exercises|astrophysics|stars|luminosity|
|
Luminosity of stars
|
<p>The luminosity of a main-sequence star is roughly <span class="math-container">$$ L \propto \mu^4 M^3\ , $$</span> where <span class="math-container">$\mu$</span> is the mean molecular weight.</p> <p>Your formula for <span class="math-container">$\mu$</span> is incorrect; it should be <span class="math-container">$$\mu \simeq (2X + 0.75Y + 0.5Z)^{-1}\ .$$</span></p> <p>For a pure hydrogen star <span class="math-container">$\mu = 0.5 m_H$</span>. For a pure iron star <span class="math-container">$\mu \simeq 2m_H$</span> (actually more like <span class="math-container">$56m_H/27$</span>).</p> <p>As for the ratio of luminosities, the iron star would be more luminous by a factor of <span class="math-container">$4^4$</span>.</p> <p>However, the approximate proportionality you are using for the luminosity applies to main-sequence, hydrogen-burning stars, so it is not really applicable to your scenario. A rough understanding of why metal-rich stars might be more luminous comes from the core temperature being proportional to <span class="math-container">$\mu$</span> (from the virial theorem and perfect gas law) and that the opacity of the star drops steeply with increasing temperature, allowing efficient radiative transport of the flux.</p>
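Plugging numbers into these formulas (treating $\mu$ in units of $m_H$, and taking "pure metal" as $Z=1$ in the formula above):

```python
def mu(X, Y, Z):
    """Mean molecular weight (fully ionized), in units of m_H."""
    return 1.0 / (2 * X + 0.75 * Y + 0.5 * Z)

mu_hydrogen = mu(1.0, 0.0, 0.0)   # pure hydrogen -> 0.5
mu_metal = mu(0.0, 0.0, 1.0)      # pure metal    -> 2.0 (rough value)

ratio = (mu_metal / mu_hydrogen)**4   # L ∝ mu^4 M^3 at fixed mass
assert mu_hydrogen == 0.5
assert ratio == 4**4                  # = 256
```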
|
Physics
|
|electromagnetism|electrostatics|differential-geometry|maxwell-equations|mathematics|
|
Why does $\oint_C \vec{E}\cdot d\vec{\ell}=0$ imply $\nabla\times \vec{E}=\vec{0}$?
|
<p>The first equation is only for all <strong>closed</strong> loops, not for all contours. That’s why you can’t conclude <span class="math-container">$\vec{E}=0$</span>. The only thing you can conclude is that <span class="math-container">$\vec{E}=\text{grad}(V)$</span> for some function <span class="math-container">$V$</span>.</p> <p>If instead you had a vector function <span class="math-container">$F:U\subset \Bbb{R}^n\to\Bbb{R}^n$</span> (where <span class="math-container">$U\subset\Bbb{R}^n$</span> is open) such that for <strong>all</strong> (smooth enough) contours <span class="math-container">$\Gamma$</span> lying in <span class="math-container">$U$</span>, <span class="math-container">$\int_{\Gamma}F\cdot dl=0$</span>, then you <em>can</em> conclude that <span class="math-container">$F=0$</span> identically in <span class="math-container">$U$</span>.</p> <p>Even if not directly obvious, all such statements are going to be minor modifications of the <em>fundamental lemma of calculus of variations</em>.</p> <hr /> <p>To understand this geometrically, imagine the following scenarios:</p> <ul> <li>First, consider 1 dimension. So, consider a smooth function <span class="math-container">$f:\Bbb{R}\to\Bbb{R}$</span> such that <span class="math-container">$\int_x^xf(t)\,dt=0$</span> for all <span class="math-container">$x$</span>. Well, this is kind of silly because every function <span class="math-container">$f$</span> satisfies this, no matter how crazy, so of course it doesn’t imply <span class="math-container">$f=0$</span>.</li> <li>Perhaps a slightly less fringe case might be more illuminating. In two dimensions, consider <span class="math-container">$F(x,y)=(1,0)=e_1$</span>, i.e. the vector field which constantly points to the right (it is the gradient of <span class="math-container">$f(x,y)=x$</span>).
Then, integrating over any closed loop (for example a closed rectangle) gives <span class="math-container">$0$</span>, but <span class="math-container">$F$</span> itself is not <span class="math-container">$0$</span>. The reason is that when you allow the loop to close in on itself, you allow for there to be <strong>cancellations</strong> in the integral. For example, take a rectangular loop. Then, the integrals over the top and bottom edges are non-zero but they will cancel out, since when you traverse the loop, you’ll traverse them in opposite directions. The integrals over the right and left edges cancel out for the same reason (actually, for this specific example, the integrals over the right and left edges vanish, since the vector field is <em>normal</em> to the edges, as it points to the right).</li> </ul>
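The second bullet can be checked numerically: the circulation of the constant field $F = e_1$ around a closed loop vanishes even though $F \neq 0$. A sketch (the unit circle and the quadrature scheme are my own choices):

```python
import numpy as np

def loop_integral(F, n=20_000):
    """Circulation of F around the unit circle, by midpoint quadrature."""
    theta = (np.arange(n) + 0.5) * 2 * np.pi / n
    points = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    tangents = np.stack([-np.sin(theta), np.cos(theta)], axis=1)  # unit tangent
    dl = 2 * np.pi / n
    return np.sum(np.einsum('ij,ij->i', F(points), tangents)) * dl

F_const = lambda p: np.broadcast_to([1.0, 0.0], p.shape)   # F = e_1 everywhere

assert abs(loop_integral(F_const)) < 1e-9       # closed-loop integral vanishes
assert np.any(F_const(np.zeros((1, 2))) != 0)   # ...but F itself is not zero
```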
|
Physics
|
|quantum-mechanics|homework-and-exercises|operators|hilbert-space|
|
How does an operator in the denominator act on a state?
|
<p>The comment by Filippo tells you everything you need to know, but I will explain a little.</p> <p>When we write a quantity such as <span class="math-container">$\exp(i \hat{\phi})$</span> we are giving ourselves the 'right' to evaluate a function such as <span class="math-container">$\exp$</span> on an operator-valued quantity. How can we do that? Is it even well-defined? The answer is that prior to writing <span class="math-container">$\exp(i \hat{\phi})$</span> we first <em>define</em> what we mean by it. In this case a suitable definition is <span class="math-container">$$ \exp(i \hat{\phi}) := 1 + i \hat{\phi} + (i\hat{\phi})^2/2 + (i\hat{\phi})^3 / 3! + \cdots $$</span> using the Taylor series expansion and this is the standard definition.</p> <p>Another way to proceed is to define a function of an operator by asserting how it behaves when acting on a complete set of states, and for a Hermitian operator a suitable complete set is the set of eigenstates. One typically asserts <span class="math-container">$$ f(\hat{\phi}) | u_\lambda \rangle = f(\lambda) | u_\lambda \rangle $$</span> where <span class="math-container">$| u_\lambda \rangle$</span> is an eigenstate of <span class="math-container">$\hat\phi$</span>: <span class="math-container">$$ \hat{\phi} | u_\lambda \rangle = \lambda| u_\lambda \rangle. $$</span></p> <p>In your book the author is clearly doing one or both of the above for general functions of operators.</p>
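The two definitions agree, which is easy to check numerically for a small Hermitian matrix (the random $4\times 4$ example is my own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
phi = (A + A.T) / 2                     # a Hermitian "operator" phi

# Definition 1: Taylor series  exp(i*phi) = sum_n (i*phi)^n / n!
series = np.zeros((4, 4), dtype=complex)
term = np.eye(4, dtype=complex)
for n in range(1, 60):
    series += term                      # add (i*phi)^(n-1) / (n-1)!
    term = term @ (1j * phi) / n

# Definition 2: act on eigenstates, f(phi)|u> = f(lambda)|u>
lam, U = np.linalg.eigh(phi)
spectral = U @ np.diag(np.exp(1j * lam)) @ U.conj().T

assert np.allclose(series, spectral)
```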
|
Physics
|
|newtonian-mechanics|forces|inertial-frames|galilean-relativity|
|
Why is force independent of frame of reference (inertial)
|
<blockquote> <p>I am looking for a nice simple explanation which can help me understand why <span class="math-container">$F$</span> is invariant</p> </blockquote> <p>How about a simple example:</p> <p>If I apply a force <span class="math-container">$F$</span> to your arm while we are in a car, do you think it will feel different if we are accelerating together in the car (a non-inertial reference frame) than if the car were moving at constant velocity (an inertial frame)?</p> <p>If the answer is no, <span class="math-container">$F$</span> is reference frame independent.</p> <p>Hope this helps.</p>
|
Physics
|
|statistical-mechanics|entropy|boltzmann-equation|
|
Expressions for Entropy in the Canonical Ensemble
|
<p>The premise is wrong: they are not always equivalent.</p> <p>The Gibbs entropy formula gives <em>information entropy</em> for any discrete probability function <span class="math-container">$p_i$</span>, even for such function that has no use in physics. It is a mathematical concept, a characterization of the discrete probability function, a real positive measure of uncertainty about the microstate. For a probability distribution localized at a single state it gives zero information entropy (zero uncertainty), and the broader the set of states over which the distribution is non-zero, the larger the information entropy is (larger uncertainty about the microstate).</p> <p>The Boltzmann entropy formula gives statistical-physics-based estimate/determination of thermodynamic (Clausius) entropy of an isolated system with definite volume <span class="math-container">$V$</span>, number of particles <span class="math-container">$N$</span> and energy <span class="math-container">$U$</span> (in case of ideal gas; for more complicated systems, entropy may depend also on additional macroscopic state variables, such as magnetic field or surface area). 
It gives a completely different thing.</p> <p>The relation between these two different entropy concepts is that the Gibbs formula (giving information entropy of a probability distribution) gives morally the same value as the Boltzmann formula, <em>if</em> we have a very large system with a very large number of available microstates, and the probability distribution <span class="math-container">$p_i$</span> is such that it maximizes the information entropy under the constraints defined by values of the macroscopic state variables <span class="math-container">$U,V,N,...$</span> as used in the Boltzmann entropy.</p> <p>The microstates <span class="math-container">$i$</span> describe the system of <span class="math-container">$N$</span> particles in volume <span class="math-container">$V$</span>, and the probabilities satisfy the constraints</p> <p><span class="math-container">$$ \sum_i p_i E_i = U, $$</span> <span class="math-container">$$ \sum_i p_i = 1. $$</span> The first constraint is that the probability distribution implies average expected energy <span class="math-container">$U$</span>; the second is necessary if the <span class="math-container">$p_i$</span> are to be probabilities: they have to sum up to one.</p> <p>With these constraints, the distribution that maximizes the Gibbs entropy is the Boltzmann distribution <span class="math-container">$Ce^{-E_i/kT}$</span>, and the value of the Gibbs entropy is the same as that of the Boltzmann entropy for the same macroscopic state:</p> <p><span class="math-container">$$ S_{Gibbs}(\{p_i\}_{constraints~U,V,N,...}) \approx S_{Boltzmann}(U,V,N,...).
$$</span></p> <p>The approximation sign is there because it takes the limit <span class="math-container">$N\to\infty$</span> for the two formulae to behave the same as functions of <span class="math-container">$U,V,N$</span>, and also because the number of allowed states <span class="math-container">$\Omega$</span> can be defined in different ways, giving somewhat different results. These however cause the two entropies to differ only by an additive constant, thus this is immaterial in thermodynamics - the only thing we require of formula for thermodynamic entropy is that it gives change of entropy between two macroscopic states correctly.</p>
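A small numerical illustration of the maximization claim (the five toy energy levels and $\beta$ are my own choices, with $k=1$): any other distribution with the same normalization and mean energy has a strictly lower Gibbs entropy than the Boltzmann distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
E = np.array([0.0, 1.0, 2.0, 3.0, 5.0])      # toy energy levels
beta = 0.7                                    # 1/kT with k = 1

p = np.exp(-beta * E)
p /= p.sum()                                  # Boltzmann distribution
U = p @ E                                     # mean energy it implies

def gibbs_entropy(q):
    return -np.sum(q * np.log(q))

S_boltz = gibbs_entropy(p)

# Orthonormal basis of the constraint directions (sum q = 1, sum q E = U)
c1 = np.ones_like(E) / np.sqrt(E.size)
c2 = E - (E @ c1) * c1
c2 /= np.linalg.norm(c2)

for _ in range(100):
    d = rng.normal(size=E.size)
    d -= (d @ c1) * c1                        # keep normalization fixed
    d -= (d @ c2) * c2                        # keep mean energy fixed
    q = p + 1e-3 * d
    if np.all(q > 0):
        assert gibbs_entropy(q) < S_boltz     # Boltzmann is the maximum
```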
|
Physics
|
|homework-and-exercises|continuum-mechanics|
|
Surface integral Vs Volume Integral
|
<p>Starting from the continuity equation <span class="math-container">$$\frac{\partial \rho(t, \mathbf{x})}{\partial t} +\mathbf{\nabla} \cdot \mathbf{j}(t, \mathbf{x})=0 \tag{1} \label{1}$$</span> with <span class="math-container">$$\mathbf{j}(t, \mathbf{x})=\rho(t, \mathbf{x})\, \mathbf{v}(t, \mathbf{x}), \tag{2} \label{2}$$</span> you find <span class="math-container">$$\frac{\partial \rho(t,\mathbf{x})}{\partial t}+\rho(t,\mathbf{x})\,\mathbf{\nabla}\cdot \mathbf{v}(t,\mathbf{x})+\mathbf{v}(t, \mathbf{x})\cdot\mathbf{\nabla} \rho(t, \mathbf{x})=0. \tag{3} \label{3}$$</span> Integrating \eqref{3} over an arbitrary three-dimensional domain <span class="math-container">$V \subset \mathbb{R}^3$</span> obviously yields <span class="math-container">$$\int\limits_V \! d^3x \left( \frac{\partial \rho(t,\mathbf{x})}{\partial t}+ \rho(t,\mathbf{x}) \,\mathbf{\nabla}\cdot \mathbf{v}(t, \mathbf{x})+ \mathbf{v} \cdot \mathbf{\nabla} \rho(t,\mathbf{x}) \right)=0. \tag{4} \label{4}$$</span> Conversely, if \eqref{4} holds for arbitrary <span class="math-container">$V \subset \mathbb{R}^3$</span> (subject to some mathematical qualification), the local version \eqref{3} is implied.</p>
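The product-rule expansion leading from (1)–(2) to (3) can be spot-checked numerically with finite differences (the particular ρ, v and evaluation point are arbitrary choices of mine):

```python
import numpy as np

def rho(p):                                   # any smooth density field
    x, y, z = p
    return np.exp(-(x**2 + y**2 + z**2))

def v(p):                                     # any smooth velocity field
    x, y, z = p
    return np.array([np.sin(y) * z, x * z, np.cos(x * y)])

def div(F, p, h=1e-5):                        # central-difference divergence
    return sum(
        (F(p + h * e)[i] - F(p - h * e)[i]) / (2 * h)
        for i, e in enumerate(np.eye(3))
    )

def grad(f, p, h=1e-5):                       # central-difference gradient
    return np.array([(f(p + h * e) - f(p - h * e)) / (2 * h) for e in np.eye(3)])

p0 = np.array([0.3, -0.2, 0.7])
lhs = div(lambda q: rho(q) * v(q), p0)                 # div(rho v)
rhs = rho(p0) * div(v, p0) + v(p0) @ grad(rho, p0)     # rho div v + v . grad rho

assert np.isclose(lhs, rhs, atol=1e-8)
```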
|
Physics
|
|thermodynamics|temperature|capacitance|vacuum|low-temperature-physics|
|
What happens to Capacitors at extreme temperatures and pressures?
|
<p>If the conductors are surrounded by vacuum, with no dielectric material in between, the only temperature we are talking about is those of the conductors. As temperature increases, the conductors will normally <a href="https://en.wikipedia.org/wiki/Thermal_expansion" rel="nofollow noreferrer">expand</a>, altering the capacitor geometry, and hence the capacitance. If the conductors are somehow prevented from expanding, you are then necessarily imparting pressure on the conductors, although I don't know why that would affect capacitance.</p> <p>As naturallyInconsistent has mentioned, <a href="https://en.wikipedia.org/wiki/Thermionic_emission" rel="nofollow noreferrer">thermionic emission</a> of electrons from the conductors can take place at elevated temperatures, resulting in a "leaky" capacitor, effectively a capacitance in parallel with a resistance.</p>
|
Physics
|
|electromagnetism|potential-energy|differential-equations|
|
Charge Distribution and Stability in a Conductive Solid Sphere
|
<p>In a static state, there can be no charge inside a solid conductor. This means the charge at the center will be in unstable equilibrium. An infinitesimal displacement of the charge will produce a reorientation of the surface charges to produce a force on the charge, which will quickly migrate to the surface.</p>
|
Physics
|
|energy-conservation|everyday-life|electronics|microwaves|efficient-energy-use|
|
Microwave oven efficiency and conservation of energy
|
<p>Efficiency losses in a microwave oven occur in the power supply that feeds the magnetron, in the magnetron itself that feeds the oven cavity, and in the cavity walls.</p> <p>The useful radiation output of the oven is absorbed by the food inside the cavity. Whatever is not absorbed by the food passes through it and bounces off the cavity walls and passes through the food again at a different angle.</p> <p>If there is no food in the cavity, the microwave energy inside it builds up and can reach levels which may cause the magnetron to malfunction.</p> <p>In the no-food case, when you turn off the oven the buildup of radiation is dissipated in the walls, so there's no burst of radiation to escape when you open the oven door.</p>
|
Physics
|
|thermodynamics|statistical-mechanics|ideal-gas|gas|
|
The mean kinetic energy of a gas particle
|
<p>In the kinetic theory of gases, the average velocity of a particle is derived by considering the molecules of gas to be point particles with velocities <span class="math-container">$v_{i}$</span>, where <span class="math-container">$i\, =\, 1,\,2, \, \ldots,\,N$</span> is the particle index and <span class="math-container">$N$</span> is the total number of particles. In this context, <span class="math-container">$ \langle v \rangle$</span> is the average velocity of the collection:</p> <p><span class="math-container">$$ \langle v \rangle^2 = \left( \frac{1}{N} \sum_{i=1}^{N} v_i \right)^2 $$</span></p> <p>and</p> <p><span class="math-container">$ \langle v^{2} \rangle$</span> is:</p> <p><span class="math-container">$$ \langle v^2 \rangle = \frac{1}{N} \sum_{i=1}^{N} v_i^2. $$</span></p> <p>Expanding a few terms reveals the problem:</p> <p><span class="math-container">$$ \langle v \rangle^2 = \frac{1}{N^2} (v_1 + v_2 + v_3 + \ldots + v_N)^2, $$</span></p> <p>whereas:</p> <p><span class="math-container">$$ \langle v^2 \rangle = \frac{1}{N} \sum_{i=1}^{N} v_i^2 = \frac{1}{N} (v_1^2 + v_2^2 + v_3^2 + \ldots + v_N^2). $$</span></p> <p>I hope this helps!</p>
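To see the problem concretely, here is a quick numerical check (a sketch with numpy; the Gaussian samples are just an illustrative choice of velocity distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.0, size=100_000)  # one Cartesian velocity component, zero mean

mean_sq = np.mean(v)**2   # <v>^2 : essentially 0 for a symmetric distribution
sq_mean = np.mean(v**2)   # <v^2> : essentially the variance, here ~1

print(mean_sq, sq_mean)   # the two are wildly different
```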
|
Physics
|
|thermodynamics|fluid-statics|surface-tension|bubbles|
|
How does a bubble pop?
|
<p>According to your video, when the object first touches the bubble, it <strong>creates a hole on one side</strong>. However, the elasticity of the bubble's surface causes it to contract and partially seal around the object. As the object moves through the bubble, it displaces some of the air within it. This displacement leads to a <strong>temporary increase in pressure inside</strong> the bubble. The increase in pressure inside the bubble causes air to flow towards the newly created hole. This <strong>inward airflow</strong> helps maintain the integrity of the bubble by preventing it from bursting immediately. Once the object exits through the opposite side of the bubble, the sudden release of pressure causes the bubble to burst. The airflow dynamics change as the pressure equalizes, and the bubble can no longer sustain its shape.</p>
|
Physics
|
|thermodynamics|statistical-mechanics|ideal-gas|
|
Discrepancy in the Derivation of Maxwell-Boltzmann Distribution
|
<p>You are just wrong in asserting that those are speeds. Those are velocities, which is why everybody else integrates from <span class="math-container">$-\infty$</span> to <span class="math-container">$+\infty$</span>. Your factor-of-8 discrepancy comes from exactly this mistake: integrating from <span class="math-container">$0$</span> to <span class="math-container">$+\infty$</span> is off by a factor of 2 in each dimension, and in 3D space this compounds to a factor of <span class="math-container">$2^3=8$</span>. You have to start with the velocity integrals, and only then convert to polar coördinates so as to work with speeds instead of velocities.</p>
|
Physics
|
|electromagnetism|lagrangian-formalism|symmetry|quantum-electrodynamics|noethers-theorem|
|
Conserved current from a symmetry
|
<p>Let me do the calculation just for the vectorial current to show you the derivation.</p> <p>The field transformations read: <span class="math-container">\begin{eqnarray} \psi &\rightarrow& e^{i\alpha} \psi, \\ \bar{\psi} &\rightarrow& e^{-i\alpha} \bar{\psi}. \end{eqnarray}</span> The Noether current is: <span class="math-container">$$J^\mu = \frac{\partial L}{\partial(\partial_\mu \psi)} \left.\frac{d(\delta\psi)}{d\alpha}\right|_{\alpha=0} + \frac{\partial L}{\partial(\partial_\mu \bar{\psi})} \left.\frac{d(\delta \bar{\psi})}{d\alpha}\right|_{\alpha=0}.$$</span></p> <p>Where <span class="math-container">$\delta\psi = \psi' - \psi = \psi(e^{i\alpha} - 1)$</span> and <span class="math-container">$\delta \bar{\psi} = \bar{\psi}' - \bar{\psi} = \bar{\psi}(e^{-i\alpha} -1)$</span>, then: <span class="math-container">\begin{eqnarray} \left.\frac{d(\delta\psi)}{d\alpha}\right|_{\alpha=0} &=& i\psi, \\ \left.\frac{d(\delta\bar{\psi})}{d\alpha}\right|_{\alpha=0} &=& -i\bar{\psi}. \end{eqnarray}</span></p> <p>On the other hand, the Lagrangian derivatives are: <span class="math-container">\begin{eqnarray} \frac{\partial L}{\partial(\partial_\mu \psi)} &=& i\bar{\psi}\gamma^\mu, \\ \frac{\partial L}{\partial(\partial_\mu \bar{\psi})} &=& 0. \end{eqnarray}</span></p> <p>Putting it all together we obtain: <span class="math-container">$$J^\mu = -\bar{\psi}\gamma^\mu \psi.$$</span></p> <p>T.</p> <p>PS: I checked the course you are reading and saw that the convention for the transformation is <span class="math-container">$\psi \rightarrow e^{-i\alpha}\psi$</span>, which flips the global sign, so the current is: <span class="math-container">$$J^\mu = \bar{\psi}\gamma^\mu \psi$$</span> Q.E.D.</p>
|
Physics
|
|quantum-mechanics|special-relativity|particle-physics|double-slit-experiment|wave-particle-duality|
|
Interference pattern of relativistic particles
|
<p>Do you mean in principle or in practice? In principle the answer is "yes, even fast electrons can in principle be brought into a state with a superposition of directions of motion which would lead to interference fringes." In practice the answer is "the short wavelength makes it very difficult." At <span class="math-container">$0.99c$</span> for electrons I find the de Broglie wavelength is around <span class="math-container">$3.5 \times 10^{-13}\,$</span>m. This is smaller than an atomic diameter, so it is very hard to suggest how to form a structure to act as 'slits' or, more generally, how to form a pair of collimated beams at a slight angle or how to detect the interference.</p>
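For reference, the relativistic de Broglie wavelength <span class="math-container">$\lambda = h/(\gamma m v)$</span> at <span class="math-container">$\beta = 0.99$</span> can be computed directly (CODATA constants):

```python
import numpy as np

h = 6.62607015e-34       # Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # speed of light, m/s

beta = 0.99
gamma = 1.0 / np.sqrt(1.0 - beta**2)        # Lorentz factor, ~7.09
lam_dB = h / (gamma * m_e * beta * c)       # relativistic de Broglie wavelength
print(lam_dB)                                # ~3.5e-13 m
```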
|
Physics
|
|homework-and-exercises|optics|interference|
|
A question related to Newton's Rings (SOLVED)
|
<p>The Newton's rings formula arises from a condition on the optical path length difference <span class="math-container">$\delta$</span> between the ray reflected at the lens/air interface and the ray reflected by the glass plate and refracted at the air/lens interface.</p> <p>This condition is the condition of interference. It is given by <span class="math-container">$N\lambda=\delta$</span> or <span class="math-container">$(N+1/2)\lambda=\delta$</span> if you consider bright or dark fringes (<span class="math-container">$N$</span> is an integer). One can show that <span class="math-container">$\delta=2e$</span> where <span class="math-container">$e$</span> is the vertical height between the glass plate surface and the incidence point of the ray on the convex surface of the lens. You can write <span class="math-container">$e$</span> as a function of the curvature radius of the lens <span class="math-container">$R$</span> and the horizontal distance <span class="math-container">$r$</span> between the incident point on the convex lens and the axis of the lens.</p> <p>I'm pretty sure you can find a derivation online ;)</p>
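As a sketch of where that leads (assuming the standard small-height approximation $e \approx r^2/2R$ and taking the condition $2e = N\lambda$; the wavelength and radius of curvature below are illustrative, not from the question):

```python
import numpy as np

lam = 589e-9   # illustrative wavelength (sodium D line), m
R = 1.0        # illustrative lens radius of curvature, m

N = np.arange(1, 6)
r = np.sqrt(N * lam * R)   # ring radii from 2e = N*lambda with e = r**2 / (2*R)
print(r)                   # successive rings get closer together
```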
|
Physics
|
|quantum-mechanics|time-evolution|open-quantum-systems|
|
Contractivity of the Lindblad Generator Adjoint
|
<p>The property of contractivity depends on the norms that are used. (Unfortunately, that seems to be often glossed over in textbooks.) To answer your question, we have to quickly discuss the involved norms. <span class="math-container">$\newcommand{\tr}{\operatorname{tr}}$</span></p> <p>For an operator <span class="math-container">$X$</span>, the Schatten-p-norms are defined as <span class="math-container">$ \lVert X \rVert_p = \bigl( \tr\bigl[ \lvert X \rvert^p \bigr] \bigr)^{1/p} $</span>. Note that <span class="math-container">$p=1$</span> is the trace norm, <span class="math-container">$p=\infty$</span> is the operator norm, and <span class="math-container">$p=2$</span> is the norm induced by the Hilbert-Schmidt product <span class="math-container">$\langle X, Y \rangle = \tr[ X^\dagger Y ]$</span>. For a superoperator <span class="math-container">$\mathcal L$</span>, we consider the operator norms induced by the Schatten norms: <span class="math-container">$$ \lVert \mathcal L \rVert_p = \sup_X\bigl( \lVert \mathcal L X \rVert_p : \lVert X \rVert_p = 1 \bigr) . $$</span> They satisfy the duality relation <span class="math-container">$\lVert \mathcal L \rVert_p = \lVert \mathcal L^\dagger \rVert_q$</span>, where <span class="math-container">$\mathcal L^\dagger$</span> is the adjoint with respect to the Hilbert-Schmidt product and <span class="math-container">$1/p + 1/q = 1$</span>.</p> <p>Every positive trace-preserving superoperator is contractive with respect to the trace norm. Therefore, Lindbladians generate contractive semigroups, <span class="math-container">$\lVert e^{\mathcal L t} \rVert_1 \leq 1$</span>. By the duality relation above, the adjoint <span class="math-container">$\mathcal L^\dagger$</span> therefore also generates a contractive semigroup, but with respect to a different norm: <span class="math-container">$$ \lVert e^{\mathcal L^\dagger t} \rVert_\infty \leq 1 . 
$$</span> The question when <span class="math-container">$\mathcal L^\dagger$</span> is contractive in the trace norm is answered by a theorem in this paper: <a href="https://arxiv.org/abs/math-ph/0601063" rel="nofollow noreferrer">https://arxiv.org/abs/math-ph/0601063</a>. The following statements are all equivalent:</p> <ul> <li><span class="math-container">$\mathcal L$</span> is contractive for some <span class="math-container">$p > 1$</span> (and by duality, <span class="math-container">$\mathcal L^\dagger$</span> is contractive for some <span class="math-container">$p < \infty$</span>)</li> <li><span class="math-container">$\mathcal L$</span> is contractive for all <span class="math-container">$p$</span> (and by duality, <span class="math-container">$\mathcal L^\dagger$</span> is contractive for all <span class="math-container">$p$</span>)</li> <li><span class="math-container">$\mathcal L$</span> is unital (<span class="math-container">$\mathcal L(1) = 0$</span>)</li> </ul> <p>Regarding your second question, I am not sure if I understand your question. Both <span class="math-container">$\mathcal L$</span> and <span class="math-container">$\mathcal L^\dagger$</span> must preserve Hermiticity (and they do), because otherwise they could transform a state (observable) into something that is not a state (observable) which doesn't make any physical sense.</p>
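For concreteness, the Schatten norms defined above are easy to compute from singular values (a minimal numpy sketch; `schatten_norm` is an ad-hoc helper, not a library routine):

```python
import numpy as np

def schatten_norm(X, p):
    """Schatten p-norm: the l^p norm of the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)  # singular values = spectrum of |X|
    if np.isinf(p):
        return s.max()                      # p = inf: operator norm
    return (s**p).sum()**(1.0 / p)

X = np.eye(2)
print(schatten_norm(X, 1),        # trace norm: 2
      schatten_norm(X, 2),        # Hilbert-Schmidt norm: sqrt(2)
      schatten_norm(X, np.inf))   # operator norm: 1
```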
|
Physics
|
|newtonian-mechanics|free-body-diagram|spring|elasticity|structural-beam|
|
Elasticity and Hooke's law: who is applying the force?
|
<p>You are basically correct. Internal stresses in the beam will be communicated from one infinitesimal element to the next as you describe. For the purposes of longitudinal vibrations you can think of the beam as being made up of a large number of very small springs connected end to end. Young's modulus plays a similar role to the spring constant in Hooke's law. You may also have to take into account external constraints e.g. one end of the beam could be fixed.</p>
|
Physics
|
|general-relativity|metric-tensor|diffeomorphism-invariance|
|
How to see the diffeomorphism invariance of a particular metric
|
<p>The change in the metric is not what you say. The correct change is <span class="math-container">$$ \delta g_{\mu\nu} = ({\mathcal L}_\epsilon g )_{\mu\nu} \equiv \epsilon^\lambda\partial_\lambda g_{\mu\nu}+ g_{\lambda\nu} \partial_\mu \epsilon^\lambda + g_{\mu\lambda} \partial_\nu \epsilon^\lambda\\ = \nabla_\mu \epsilon_\nu+ \nabla_\nu \epsilon_\mu $$</span> i.e. you need covariant derivatives rather than partials. A metric is unchanged by an infinitesimal diffeomorphism only if <span class="math-container">${\mathcal L}_\epsilon g =0$</span>, in which case the vector field <span class="math-container">$\epsilon$</span> is a <em>Killing vector</em>.</p>
|
Physics
|
|quantum-mechanics|homework-and-exercises|quantum-spin|
|
The time-dependence of the expectation values of spin operators
|
<p>In the Schrödinger picture, you can write the Hamiltonian operator in the basis of the eigenstates of <span class="math-container">$\hat{S}_z$</span>: <span class="math-container">$|+\rangle_z=\left(\begin{array}{c}1\\0\end{array}\right)$</span> and <span class="math-container">$|-\rangle_z=\left(\begin{array}{c}0\\1\end{array}\right)$</span> as <span class="math-container">$$ H = \frac{\hbar w}{2} \left(\begin{array}{cc}1 & 0\\0 & -1\end{array}\right) $$</span> and the initial state as <span class="math-container">$$ |\psi(t=0)\rangle = \frac{1}{\sqrt{2}} \left( |+\rangle_z + |-\rangle_z \right) = \frac{1}{\sqrt{2}} \left(\begin{array}{c}1\\1\end{array}\right). $$</span></p> <p>Now from the Schrödinger equation we get the formula for the time evolution of a state: <span class="math-container">$$ i\hbar \partial_t \psi = H \psi \rightarrow |\psi(t)\rangle = \exp{\left[-iHt/\hbar\right]} |\psi(t=0)\rangle. $$</span> The matrix exponential of a diagonal matrix is trivial: you just exponentiate every diagonal term, so: <span class="math-container">$$ |\psi(t)\rangle = \left(\begin{array}{cc} e^{-iwt/2} & 0 \\ 0 & e^{iwt/2} \end{array}\right) \frac{1}{\sqrt{2}} \left(\begin{array}{c}1\\1\end{array}\right) = \frac{1}{\sqrt{2}} \left(\begin{array}{c}e^{-iwt/2}\\e^{iwt/2}\end{array}\right). $$</span> Knowing the time evolved state, you can compute expectation values of any operator, including for instance <span class="math-container">$$\hat{S}_x = \frac{\hbar}{2} \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) \;\;\;\;\;\;\;\; \hat{S}_y = \frac{\hbar}{2} \left(\begin{array}{cc} 0 & -i \\ i & 0 \end{array}\right) :$$</span> <span class="math-container">$$ \langle \psi(t) | \hat{S}_x | \psi(t) \rangle = \frac{\hbar}{2} \cos{(wt)} \;\;\;\;\;\;\; \langle \psi(t) | \hat{S}_y | \psi(t) \rangle = \frac{\hbar}{2} \sin{(wt)} $$</span></p> <p>Hope this helps! :)</p>
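A quick numerical cross-check of these expectation values (a numpy sketch, setting <span class="math-container">$\hbar = 1$</span>; the values of $w$ and $t$ are arbitrary):

```python
import numpy as np

hbar, w = 1.0, 2.0
t = 0.7

Sx = hbar/2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar/2 * np.array([[0, -1j], [1j, 0]])
H  = hbar*w/2 * np.array([[1, 0], [0, -1]], dtype=complex)

psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)
# U(t) = exp(-iHt/hbar); H is diagonal, so exponentiate the diagonal entries
U = np.diag(np.exp(-1j * np.diag(H) * t / hbar))
psi = U @ psi0

sx = np.real(psi.conj() @ Sx @ psi)
sy = np.real(psi.conj() @ Sy @ psi)
print(sx, hbar/2 * np.cos(w*t))   # these agree
print(sy, hbar/2 * np.sin(w*t))   # and so do these
```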
|
Physics
|
|thermodynamics|simulations|equilibrium|
|
How do particles acquire a thermal distribution?
|
<p>To get equilibrium, whatever the initial condition is, the bunch of the particles must <em>forget</em> the initial configuration and velocities. This implies that the dynamics of the system and the observation time must allow the vanishing of the energy auto-correlation function.</p> <p>From the technical point of view, we need a <a href="https://en.wikipedia.org/wiki/Mixing_(mathematics)" rel="nofollow noreferrer"><em>mixing</em> dynamical system</a>. In practice, every interaction characterized by a strong repulsive core at short distances is sufficient for mixing dynamics, provided the density is not too high, and the temperature is not too low.</p> <p>The limiting case of a harmonic solid is an example of a non-mixing system. In the harmonic approximation, each initial velocity distribution corresponds to the excitation of a subset of all the possible normal modes. Without strong anharmonic effects, such a distribution of normal modes remains unchanged in time. In such a case, thermal distribution of velocities is impossible. In the case of strong anharmonicity or fluid phases, the velocity thermalization is quite a fast process.</p>
|
Physics
|
|quantum-mechanics|hilbert-space|symmetry|group-theory|
|
A question from S. Weinberg's book (Sec. 2.7)
|
<p>Clearly <span class="math-container">$\phi(T,1)$</span> is independent of <span class="math-container">$T$</span>. Let <span class="math-container">$\phi(T,1)=\phi_0$</span>. You can rescale your unitary operators by a constant phase <span class="math-container">${\tilde U}(T) = e^{-i\phi_0} U(T)$</span>. The new set of unitary operators satisfy <span class="math-container">${\tilde U}(1) = I$</span>.</p>
|
Physics
|
|classical-mechanics|symmetry|coordinate-systems|
|
How to choose coordinate systems and change between them?
|
<p>You are right in that one must choose coordinates in a way that is specific to the problem at hand; however, this is indeed a non-trivial task and is part of the art of problem solving. Richard Feynman stated that one of the things he could not teach his students was problem solving. He could teach them how to compute certain quantities, but he could not teach them problem solving; to him this was a skill that hopefully could be developed. In some respects, choosing coordinates is a lot like this: it is a skill which is developed through practice. There are some general guidelines, however. Look for <em>symmetry</em>: if the problem exhibits a symmetry like polar, azimuthal, spherical, cylindrical, or other geometric symmetry properties, then choose the respective coordinate system; this will generally simplify the problem greatly. When you are working from the Lagrangian point of view, you will want to identify the <em>configuration manifold</em> associated with the problem and then choose the coordinates that reflect the symmetries inherent in the system.</p>
|
Physics
|
|newtonian-mechanics|rotational-dynamics|angular-momentum|momentum|conservation-laws|
|
Momentum conservation laws and explanation
|
<p>If a (mechanical) system is invariant under arbitrary spatial translations in a certain direction (say, the <span class="math-container">$x$</span>-direction), then the corresponding component of the momentum (<span class="math-container">$p_x$</span> in our case) is conserved (i.e. time-independent).</p> <p>Analogously, if a system is invariant under arbitrary rotations with repect to a certain axis (say, the <span class="math-container">$z$</span>-axis), then the corresponding angular-momentum component (<span class="math-container">$L_z$</span> in this case) is a constant of motion.</p> <p>Likewise, if the system is time-translation invariant, energy is conserved.</p> <p>As an <em>example</em>, consider the motion of a particle described by the Lagrangian function <span class="math-container">$$L(x,y,z,\dot{x},\dot{y},\dot{z})=m(\dot{x}^2+\dot{y}^2+\dot{z}^2)/2-V(z), \tag{1} \label{1}$$</span> where the potential depends only on the coordinate <span class="math-container">$z$</span>. The system is obviously invariant under <span class="math-container">$x\to x+x_0$</span>, <span class="math-container">$y\to y+y_0$</span> for arbitrary <span class="math-container">$x_0, y_0 \in \mathbb{R}$</span>. As a consequence, the momenta <span class="math-container">$p_x=m\dot{x}$</span> and <span class="math-container">$p_y=m \dot{y}$</span> are conserved. This can be seen explicitly from the equations of motion, <span class="math-container">$$\begin{align} \frac{d}{dt} \frac{\partial L}{\partial \dot{x}} &= \frac{\partial L}{\partial x} \quad \Rightarrow \quad \frac{d}{dt} (m \dot{x}) = 0, \tag{2} \label{2}\\[5pt] \frac{d}{dt}\frac{\partial L}{\partial \dot{y}} &= \frac{\partial L}{\partial y} \quad \Rightarrow \quad \frac{d}{dt}(m\dot{y})=0. 
\tag{3} \label{3} \end{align}$$</span> Because of the <span class="math-container">$z$</span>-dependence of the potential, <span class="math-container">$p_z=m\dot{z}$</span> is <em>not</em> conserved: <span class="math-container">$$\begin{align}\frac{d}{dt} \frac{\partial L}{\partial \dot{z}}&=\frac{\partial L}{\partial z} \quad \Rightarrow \quad \frac{d}{dt}(m \dot{z}) =-V^\prime(z) \ne 0. \tag{4} \label{4} \end{align}$$</span> As the Lagrangian \eqref{1} is also invariant under rotations around the <span class="math-container">$z$</span>-axis by an arbitrary angle <span class="math-container">$\alpha$</span>, <span class="math-container">$$ x \to x\cos \alpha + y\sin \alpha , \quad y \to -x\sin \alpha +y \cos \alpha , \quad z \to z, \tag{5} \label{5} $$</span> the angular momentum <span class="math-container">$L_z= m (x \dot{y}-y\dot{x})$</span> is conserved, which can be checked by taking the time-derivative of <span class="math-container">$L_z$</span> and using \eqref{2} and \eqref{3}. On the other hand, neither <span class="math-container">$L_x=m(y \dot{z}-z \dot{y})$</span> nor <span class="math-container">$L_y=m(z \dot{x}-x \dot{z})$</span> are conserved, as the system is <em>not</em> invariant under rotations around the <span class="math-container">$x$</span>- or <span class="math-container">$y$</span>-axis.</p> <p>Finally, as \eqref{1} is invariant under <span class="math-container">$t \to t+t_0$</span> for arbitrary <span class="math-container">$t_0 \in \mathbb{R}$</span>, the energy of the system, given by <span class="math-container">$$ E= \dot{x} p_x+\dot{y} p_y+\dot{z}p_z-L=m(\dot{x}^2+\dot{y}^2+\dot{z}^2)/2+V(z), \tag{6} \label{6}$$</span> is also a constant of motion.</p>
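For the example Lagrangian above with the special case <span class="math-container">$V(z)=mgz$</span>, these conservation statements are easy to verify numerically along the exact trajectory (a sketch; the initial conditions are arbitrary):

```python
import numpy as np

m, g = 1.0, 9.81
x0, y0, z0 = 1.0, 2.0, 3.0       # arbitrary initial position
vx, vy, vz = 0.5, -0.3, 2.0      # arbitrary initial velocity

def state(t):
    """Exact trajectory for V(z) = m*g*z: free motion in x, y; constant fall in z."""
    x, y, z = x0 + vx*t, y0 + vy*t, z0 + vz*t - g*t**2/2
    return x, y, z, vx, vy, vz - g*t

for t in (0.0, 0.4, 0.9):
    x, y, z, dx, dy, dz = state(t)
    px = m * dx                  # conserved (x-translation invariance)
    Lz = m * (x*dy - y*dx)       # conserved (rotation about the z-axis)
    Lx = m * (y*dz - z*dy)       # NOT conserved (z-dependence of V breaks it)
    print(t, px, Lz, Lx)
```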
|
Physics
|
|cosmology|fourier-transform|heisenberg-uncertainty-principle|dimensional-analysis|
|
Why are the distances in real space and Fourier space inverses of each other?
|
<p>There are many possible answers to this question. The duality (large wavenumber = small lengthscales) is very deeply rooted in the Fourier analysis with many applications, see for example Heisenberg's uncertainty principle.</p> <p>You are interested in understanding why a cutoff in frequency corresponds to a smoothing in real space. One can understand this quite directly by introducing a cutoff function <span class="math-container">$0\leq G(k)\leq 1$</span> with the properties <span class="math-container">$G(k\ll \Lambda) =1$</span> and <span class="math-container">$G(k \gg \Lambda)=0$</span>. You might take a box-function, a Fermi-function. For concreteness, the simplest is the Gaussian <span class="math-container">$$ G(k) = e^{-(k/\Lambda)^2}\,.$$</span></p> <p>Having an arbitrary function <span class="math-container">$f(x)$</span> with the Fourier-transform <span class="math-container">$$ F(k) = \int e^{-i kx}f(x) dx;$$</span> the cutoff is implemented by the replacement <span class="math-container">$$ F(k) \mapsto \tilde F(k) = F(k) G(k)\,.$$</span> In real space, this corresponds to (using the convolution theorem) <span class="math-container">$$ \tilde f(x) = \int\!dx' f(x-x') g(x'),$$</span> i.e., the values of the function <span class="math-container">$f$</span> are smeared out over the support given by <span class="math-container">$$g(x) = \frac{1}{2\pi} \int e^{i kx}G(k) dk\,.$$</span> For the example of the Gaussian, we have <span class="math-container">$$ g(x) = \frac{\Lambda}{2\sqrt{\pi}} e^{- \Lambda^2 x^2/4}\,.$$</span> Thus, <span class="math-container">$\tilde f$</span> is smeared out version of <span class="math-container">$f$</span> over a lengthscale <span class="math-container">$\sim 2/\Lambda$</span>.</p>
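The smearing described above can be seen directly with an FFT (a numpy sketch; the step function and the cutoff <span class="math-container">$\Lambda = 2$</span> are illustrative choices):

```python
import numpy as np

n = 1024
x = np.linspace(-10, 10, n, endpoint=False)
f = (x > 0).astype(float)          # a sharp step: needs arbitrarily high-k modes

k = 2*np.pi*np.fft.fftfreq(n, d=x[1] - x[0])
Lam = 2.0                          # cutoff scale Lambda (illustrative)
G = np.exp(-(k/Lam)**2)            # the Gaussian cutoff G(k) from the text

f_smooth = np.fft.ifft(np.fft.fft(f) * G).real
# the jump is now spread out over a lengthscale ~ 2/Lambda
print(f_smooth[n//2], f_smooth[3*n//4])   # ~0.5 at the edge, ~1 far from it
```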
|
Physics
|
|experimental-physics|laser|interference|interferometry|
|
What are the disadvantages of using a single mode Fabry-Perot laser diodes in interferometry, as opposed to distributed feedback laser diodes?
|
<blockquote> <p>I'm specifically interested in potential problems that FB laser diodes could cause when used in Michelson or Mach-Zehnder interferometers, due to a wider spectrum of FB lasers as opposed to DFB lasers.</p> </blockquote> <p>Wider linewidth will mean lower contrast in the interference fringes, and shorter coherence length. Shorter coherence length will mean smaller optical path differences can be tolerated before the contrast drops substantially.</p> <p>A bare FP laser (without some kind of feedback stabilization) will also have greater variation of wavelength due to temperature changes than a DFB, making the interferometer less accurate as a measurement tool for displacement or index of a sample.</p> <p>The FP laser is also going to be less tolerant of back-reflections (although they can also cause problems for DFB's).</p> <blockquote> <p>Are any commercial interferometers employing single mode FB laser diodes?</p> </blockquote> <p>I don't even know of any commercially available FP lasers, but my experience is limited to a fairly small range of wavelengths and operating powers.</p>
|
Physics
|
|special-relativity|lagrangian-formalism|metric-tensor|conventions|klein-gordon-equation|
|
The Klein-Gordon equation and the sign of the mass term
|
<p>It's just due to a difference in metric convention. <span class="math-container">$(\square^2+m^2)$</span> involves a <span class="math-container">$(+---)$</span> signature and <span class="math-container">$(\square^2-m^2)$</span> involves <span class="math-container">$(-+++)$</span>. If you are ever confused which is which, note that <span class="math-container">$$\square^2 e^{i(Et - p\cdot x)} = \pm (E^2-p^2)e^{i(Et - p\cdot x)}$$</span> with the sign depending on the signature. The sign of the <span class="math-container">$p^2$</span> term in the Klein-Gordon equation must match that of the <span class="math-container">$m^2$</span> term.</p>
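One can verify this sign bookkeeping symbolically, e.g. with sympy on a plane wave in 1+1 dimensions (a sketch; here $\square$ is written out for each signature):

```python
import sympy as sp

t, x, E, p = sp.symbols('t x E p', real=True)
phi = sp.exp(sp.I*(E*t - p*x))   # plane wave e^{i(Et - px)}

# (+---): box = d_t^2 - d_x^2  =>  box phi = -(E^2 - p^2) phi
box_pmmm = sp.diff(phi, t, 2) - sp.diff(phi, x, 2)
# (-+++): box = -d_t^2 + d_x^2 =>  box phi = +(E^2 - p^2) phi
box_mppp = -sp.diff(phi, t, 2) + sp.diff(phi, x, 2)

assert sp.simplify(box_pmmm + (E**2 - p**2)*phi) == 0
assert sp.simplify(box_mppp - (E**2 - p**2)*phi) == 0
```

So <span class="math-container">$(\square + m^2)\phi = 0$</span> in <span class="math-container">$(+---)$</span> and <span class="math-container">$(\square - m^2)\phi = 0$</span> in <span class="math-container">$(-+++)$</span> both reproduce <span class="math-container">$E^2 = p^2 + m^2$</span>.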
|
Physics
|
|general-relativity|differential-geometry|metric-tensor|tensor-calculus|mathematical-physics|
|
Conformal Transformation of Torsion
|
<p>This perhaps isn't a very useful answer, but it simply depends on whether you define it to or not. Usually conformal transformations act on the metric alone, and so torsion is invariant. However, if you're working in a broader geometric setting with torsion, you may find it convenient to define a non-trivial transformation for the affine connection (and hence torsion).</p> <p>In the literature, there are usually three different choices, weak, strong and compensating conformal transformations. The weak choice has torsion invariant, while the other two have non-trivial transformations (see <a href="https://arxiv.org/abs/hep-th/0103093v1" rel="nofollow noreferrer">https://arxiv.org/abs/hep-th/0103093v1</a> for their formulae and a general discussion).</p>
|
Physics
|
|quantum-field-theory|mathematical-physics|dirac-delta-distributions|
|
How do we know Schwinger functions exist?
|
<p>Given that the assumption is that we have a Borel measure <span class="math-container">$\mu$</span>, what is needed is that this measure has finite moments of all orders. One can work on <span class="math-container">$\mathscr{D}'(\mathbb{R}^d)$</span> as in the question, but that's not recommended. Suppose the measure is realized on <span class="math-container">$\mathscr{S}'(\mathbb{R}^d)$</span> with the Borel <span class="math-container">$\sigma$</span>-algebra coming from the strong dual topology. Then the only hypothesis should be:</p> <p><span class="math-container">$$ \forall f\in\mathscr{S}(\mathbb{R}^d), \forall p\in[1,\infty),\ (\phi\mapsto \phi(f))\in L^p(\mathscr{S}'(\mathbb{R}^d), d\mu) . $$</span></p> <p>The OS axioms invoked in the other answer have no bearing on the question.</p> <p>Then using the <span class="math-container">$n$</span>-linear generalization of the Hölder inequality, we have that the map <span class="math-container">$$ \phi\longmapsto \phi(f_1)\cdots \phi(f_n) $$</span> is in <span class="math-container">$L^1(\mathscr{S}'(\mathbb{R}^d), d\mu) $</span>, for any fixed Schwartz functions <span class="math-container">$f_1,\ldots,f_n$</span>.
As a result, the map <span class="math-container">$$ (f_1,\ldots,f_n)\longmapsto S_n(f_1,\ldots,f_n)=\int_{\mathscr{S}'(\mathbb{R}^d)} \phi(f_1)\cdots \phi(f_n)\ d\mu(\phi) $$</span> is well defined, and is clearly <span class="math-container">$n$</span>-linear.</p> <p>Now comes the first subtle point: this multilinear map is automatically (jointly) continuous.</p> <p>As far as I know this was first proved by Xavier Fernique in the reference I mentioned in <a href="https://mathoverflow.net/questions/149001/measure-theory-in-nuclear-spaces">https://mathoverflow.net/questions/149001/measure-theory-in-nuclear-spaces</a></p> <p>I don't think one can find this fact in Glimm-Jaffe, nor Gel'fand-Vilenkin.</p> <p>Then the second subtle point is that one needs to use the Schwartz Kernel Theorem which says that this continuous multilinear form corresponds to a continuous linear form on the bigger Schwartz space <span class="math-container">$\mathscr{S}(\mathbb{R}^{n\times d})$</span>. Namely, there is a unique distribution <span class="math-container">$T_n\in\mathscr{S}'(\mathbb{R}^{n\times d})$</span> such that <span class="math-container">$$ T_n(f_1\otimes\cdots\otimes f_n)=S_n(f_1,\ldots,f_n) $$</span> for all <span class="math-container">$f_1,\ldots,f_n$</span> in <span class="math-container">$\mathscr{S}(\mathbb{R}^d)$</span>.</p> <p>Finally, if one wants to go further, one can study the singular support of <span class="math-container">$T_n$</span> and see if it is contained in the big diagonal. Here, the OS axioms become relevant since they imply that on the complement of this diagonal, i.e, at non-coinciding points the correlations are not only infinitely differentiable but even real analytic.</p> <p>Note that I explained this already in my review article <a href="https://link.springer.com/article/10.1134/S2070046618040015" rel="nofollow noreferrer">"Towards Three-Dimensional Conformal Probability"</a>.</p>
|
Physics
|
|homework-and-exercises|waves|
|
Where is the compression for this sound wave?
|
<p>Think of the dots in your diagram as the centre of mass of a small volume element containing a number of molecules (but not necessarily the same molecules) defined when there is no sound wave.<br /> As a sound wave traverses the region the position and shape of that volume element changes.<br /> The simple diagram of a sound wave below, a "photograph" of a sound wave at an instant of time, shows the displacement of the centre of mass of the volume elements from their position when there was no sound wave present.</p> <p><a href="https://i.stack.imgur.com/4kzZO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4kzZO.jpg" alt="enter image description here" /></a></p> <p>Note that in a region where there is a compression <span class="math-container">$C$</span>, the increase in pressure above atmospheric is a maximum and the displacement of the volume element is zero.<br /> The displacement of the volume element at a rarefaction <span class="math-container">$R$</span> is also zero.</p>
|
Physics
|
|kinematics|
|
Is it possible that a ball is thrown downward/upward with zero initial velocity?
|
<p>I think you misunderstand the distinction between 'throw' and 'drop'. When you throw something, you apply a force to it, so both it and your hand are moving when you release it from your grip. When you drop something, you release it from your grip without first having accelerated it, so it does start with zero velocity (more or less).</p>
|
Physics
|
|quantum-mechanics|condensed-matter|hilbert-space|topological-insulators|spin-chains|
|
Can the hybridization of edge states in the 1D SSH model be observed numerically?
|
<p>To answer point 1, you should look at the eigen<em>values</em> for those two modes: They should be very close, but not the same.</p> <p>This might very likely also answer 2 -- the hybridization should go down exponentially with N, and thus vanish once the splitting reaches machine precision. A (log-)plot where you plot this splitting vs. <span class="math-container">$N$</span> should be instructive.</p> <p>What you have to realize is that numerically, there is no preferred eigenbasis in a degenerate subspace -- only if there is a small splitting (i.e., coupling between the almost-degenerate modes, which leads to hybridization) will the numerics pick that eigenbasis.</p> <hr /> <p>EDIT, following the updated question:</p> <p>Machine precision -- the accuracy to which numbers are represented in the computer -- is usually around <span class="math-container">$2\times 10^{-16} \approx e^{-36}$</span>. Anything below that will not be resolved accurately, and you cannot rely on it.</p> <p>This is precisely the value where your <span class="math-container">$\Delta E$</span> starts to saturate, and look more or less random. Once <span class="math-container">$\Delta E$</span> is below that value, you should no longer trust anything which relies on resolving the splitting of the eigenvalues. In particular, the numerics has no way of resolving the approximate degeneracy between the edge modes, and you will get <em>some</em> two vectors (potentially not even orthogonal, depending on your code) spanning that space.</p>
|
Physics
|
|homework-and-exercises|newtonian-mechanics|projectile|drag|
|
Time taken for ball to reach down is more?
|
<p>While going up, both air resistance and gravity decelerate the ball, so the decelerating force is <span class="math-container">$m\alpha=mg+F_{drag} $</span>. While going down, air drag still opposes the motion, but gravity now accelerates it, so the net effect is that air resistance reduces the downward acceleration as per <span class="math-container">$ma=mg-F_{drag} $</span>. And because air drag <span class="math-container">$\propto v^2$</span>, at some point the object will reach terminal velocity, as per the equilibrium condition <span class="math-container">$mg=F_{drag}$</span>.</p> <p>Hence, due to the reduced downward acceleration and the cap on the maximum possible downward speed, the body takes more time to fly down than up. This is why parachutes are so valuable: they slow the descent so much that it is safe to land on one's feet.</p>
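This asymmetry is easy to verify numerically. A rough forward-Euler sketch for a vertical throw with quadratic drag (the per-unit-mass drag coefficient `k` and launch speed are arbitrary illustrative values):

```python
import math

def flight_times(v0=30.0, g=9.81, k=0.02, dt=1e-4):
    """Integrate dv/dt = -g - sign(v)*k*v**2 (per unit mass) for a ball
    thrown straight up from y = 0; return (time_up, time_down)."""
    t, v, y = 0.0, v0, 0.0
    t_up = None
    while y >= 0.0:
        # drag always opposes the velocity, gravity always points down
        a = -g - math.copysign(k * v * v, v)
        v += a * dt
        y += v * dt
        t += dt
        if t_up is None and v <= 0.0:
            t_up = t  # apex reached
    return t_up, t - t_up

t_up, t_down = flight_times()
print(f"up: {t_up:.2f} s, down: {t_down:.2f} s")  # descent takes longer
```

With these numbers the terminal velocity is $\sqrt{g/k} \approx 22\ \mathrm{m/s}$, below the launch speed, so the descent is noticeably slower than the ascent; with `k = 0` the two times come out equal, as expected for drag-free motion.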
|
Physics
|
|quantum-mechanics|quantum-information|quantum-entanglement|bells-inequality|born-rule|
|
Understanding operator product of three mixed states with three projector operators
|
<p>Each party holds two quantum systems, which are parts of two bipartite systems. For example, the Hilbert space <span class="math-container">$H_A$</span> corresponding to the party <span class="math-container">$A$</span> is the tensor product <span class="math-container">$H_A = H_{A1} \otimes H_{A2}$</span> where <span class="math-container">$H_{A1}$</span> corresponds to the system which is part of <span class="math-container">$\rho_{AB}$</span> and <span class="math-container">$H_{A2}$</span> corresponds to the system which is part of <span class="math-container">$\rho_{AC}$</span>.</p> <p>We have three parties <span class="math-container">$A, B, C$</span>, each holding a composite quantum system described by the spaces <span class="math-container">$H_A, H_B, H_C$</span> respectively. Each of these spaces is a tensor product, corresponding to the fact that each party holds parts of two independent bipartite systems: <span class="math-container">$H_A = H_{A1} \otimes H_{A2}$</span>, <span class="math-container">$H_B = H_{B1} \otimes H_{B2}$</span>, <span class="math-container">$H_C = H_{C1} \otimes H_{C2}$</span>. So in total we have <span class="math-container">$6$</span> parts of our quantum system.</p> <p>The bipartite systems are described by density matrices <span class="math-container">$\rho_{AB} \in \mathcal{D}(H_{A1} \otimes H_{B1})$</span>, <span class="math-container">$\rho_{BC} \in \mathcal{D}(H_{B2} \otimes H_{C1})$</span>, and <span class="math-container">$\rho_{AC} \in \mathcal{D}(H_{A2} \otimes H_{C2})$</span> where <span class="math-container">$\mathcal{D}(H)$</span> denotes the space of density matrices on <span class="math-container">$H$</span>.
These three density matrices form the density matrix of the whole triangle system <span class="math-container">$\rho_{AB} \otimes \rho_{BC} \otimes \rho_{AC}$</span> which lies in <span class="math-container">$\mathcal{D}((H_{A1} \otimes H_{B1}) \otimes (H_{B2} \otimes H_{C1}) \otimes (H_{A2} \otimes H_{C2}))$</span>.</p> <p>The order in this last tensor product arises from considering our system as consisting of three bipartite systems. To describe measurements, we instead need to view it as a tripartite system with parties <span class="math-container">$A, B, C$</span>, that is, <span class="math-container">$$\mathcal{D}(H_A \otimes H_B \otimes H_C) = \mathcal{D}((H_{A1} \otimes H_{A2}) \otimes (H_{B1} \otimes H_{B2}) \otimes (H_{C1} \otimes H_{C2})).$$</span> To change from the first description to the second, we conjugate with the isometry <span class="math-container">$$M \colon (H_{A1} \otimes H_{A2}) \otimes (H_{B1} \otimes H_{B2}) \otimes (H_{C1} \otimes H_{C2}) \to (H_{A1} \otimes H_{B1}) \otimes (H_{B2} \otimes H_{C1}) \otimes (H_{A2} \otimes H_{C2})$$</span> which permutes the factors accordingly. Physically, it does nothing; it just changes how we view our <span class="math-container">$6$</span>-partite system.</p> <p>Now we can measure our system using local PVMs <span class="math-container">$\{\Pi_a\}$</span>, <span class="math-container">$\{\Pi_b\}$</span>, <span class="math-container">$\{\Pi_c\}$</span> which consist of projections in <span class="math-container">$H_A$</span>, <span class="math-container">$H_B$</span>, and <span class="math-container">$H_C$</span> respectively, so that <span class="math-container">$\Pi_a \otimes \Pi_b \otimes \Pi_c$</span> is a projection in <span class="math-container">$H_A \otimes H_B \otimes H_C$</span> that we can apply to our tripartite state <span class="math-container">$M^T (\rho_{AB} \otimes \rho_{BC} \otimes \rho_{AC}) M$</span>.</p>
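The conjugation by <span class="math-container">$M$</span> is, concretely, just a reindexing of tensor factors. A small numpy sketch (assuming qubit subsystems and random illustrative states; the factor orders are those used in this answer, from the source order <span class="math-container">$A1,B1,B2,C1,A2,C2$</span> to the party order <span class="math-container">$A1,A2,B1,B2,C1,C2$</span>):

```python
import numpy as np

d = 2  # local dimension of each of the six subsystems

def random_state(dim, seed):
    """A random density matrix, standing in for one bipartite source state."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = x @ x.conj().T
    return rho / np.trace(rho)

rho_AB = random_state(d * d, 0)  # acts on A1 (x) B1
rho_BC = random_state(d * d, 1)  # acts on B2 (x) C1
rho_AC = random_state(d * d, 2)  # acts on A2 (x) C2

# full state in the "three sources" factor order (A1, B1, B2, C1, A2, C2)
rho_sources = np.kron(np.kron(rho_AB, rho_BC), rho_AC)

# conjugation by the permutation isometry M = reindexing the tensor factors
# into the "three parties" order (A1, A2, B1, B2, C1, C2)
perm = [0, 4, 1, 2, 3, 5]  # where A1, A2, B1, B2, C1, C2 sit in the old order
t = rho_sources.reshape([d] * 12)
t = t.transpose(perm + [p + 6 for p in perm])  # same permutation on rows and columns
rho_parties = t.reshape(d ** 6, d ** 6)

print(np.trace(rho_parties).real)  # trace is preserved (≈ 1)
```

Since rows and columns are permuted identically, the result is unitarily equivalent to the original state: trace, Hermiticity, and the spectrum are all unchanged.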
|
Physics
|
|homework-and-exercises|general-relativity|lagrangian-formalism|dirac-equation|variational-calculus|
|
Derivation of Dirac equation in curved spacetime by varying the action
|
<p>The metricity condition <span class="math-container">$\omega_{ab\mu}= - \omega_{ba\mu}$</span> can be used to show that<br /> <span class="math-container">$$ \partial_\mu \gamma^\nu +\frac 12 [{\sigma^{ab}}, \gamma^\nu] \omega_{ab\mu}+ \gamma^\lambda {\Gamma^\nu}_{\lambda\mu}=0. $$</span> This equation can be interpreted as ``<span class="math-container">$\nabla_\mu \gamma^\nu=0$</span>'' or as the freedom to pass gamma matrices through covariant derivatives, provided that suitable connection forms are tacitly or explicitly included.</p> <p>I think you have also forgotten the minus sign from integrating by parts to take the derivative off <span class="math-container">$\bar \psi$</span>.</p>
|
Physics
|
|gravity|black-holes|stars|neutrons|neutron-stars|
|
How do neutron stars overcome neutron degeneracy?
|
<p>In both white dwarfs and neutron stars, collapse occurs because hydrostatic equilibrium cannot be achieved by decreasing the radius. This process occurs even if the electrons or neutrons remain present and unchanged, but might be accelerated if some process removes them.</p> <p><strong>Details</strong></p> <p><strong>White dwarfs and Newtonian gravity</strong></p> <p>Here is a simple Newtonian argument for why a collapse <em>must</em> eventually occur in white dwarfs of increasing mass, governed by relativistic electron degeneracy pressure, <em>even if no neutronisation occurs</em>.</p> <p>As the mass of a white dwarf increases, its radius decreases and its density increases. At low densities, the degenerate electrons have non-relativistic energies, and the pressure <span class="math-container">$P \propto \rho^{5/3}$</span>. As the density increases, the degenerate electrons get pushed to relativistic energies and the relativistic degeneracy pressure goes as <span class="math-container">$P \propto \rho^{4/3}$</span>.</p> <p>What supports a star against gravity is the pressure <em>gradient</em>, <span class="math-container">$dP/dr$</span>, through the equation of hydrostatic equilibrium <span class="math-container">$$ \frac{dP}{dr} = - \rho g\ . $$</span> If we just deal in proportionalities so that <span class="math-container">$dP/dr \propto P/R$</span> and <span class="math-container">$\rho \propto M/R^3$</span>, then in the low density, non-relativistic case, with <span class="math-container">$g \propto M/R^2$</span>, we have <span class="math-container">$$\frac{M^{5/3}}{R^6} = \frac{M^2}{R^5}\ .$$</span> For any given increase in mass it is then possible to decrease <span class="math-container">$R$</span> to keep the LHS equal to the increasing RHS.</p> <p>In the relativistic case, the hydrostatic equilibrium equation gives <span class="math-container">$$\frac{M^{4/3}}{R^5} = \frac{M^2}{R^5}\ . 
$$</span> In this case, the equation can only work for a single mass. Any increase in the mass above that means the RHS would become bigger than the LHS and the star will collapse. This limiting mass is the Chandrasekhar mass.</p> <p><strong>It is worth noting then that the pressure provided by fermion degeneracy will always be "overcome" at some finite mass threshold even in Newtonian physics. Consideration of General Relativity (see below) simply lowers the mass threshold.</strong></p> <p>In practice, the "real" Chandrasekhar mass is a little lower in typical carbon white dwarfs because indeed, electrons are captured by protons in the nuclei once they become highly relativistic; this removes electrons and lowers degeneracy pressure, leading to collapse.</p> <p><strong>Neutron stars and General Relativity</strong></p> <p>In neutron stars, the reason for the upper limit is also because hydrostatic equilibrium cannot be reached, either because of the increasing density (even if the neutrons remain intact) but possibly accelerated if the neutrons are removed.</p> <p>We cannot use the Newtonian hydrostatic equilibrium equation in neutron stars. Its General Relativistic equivalent is the TOV hydrostatic equilibrium equation. <span class="math-container">$$\frac{dP}{dr}=-\left(P+\rho\right)\frac{m(r)+4\pi r^3P }{r\left(r-2m(r)\right)}\ .$$</span> A major difference is that the pressure appears on the RHS. This means that increasing the density at the centre of the star in order to increase the pressure gradient and support a more massive star also increases the pressure gradient required to support that star. Ultimately this is self-defeating and the RHS will always be bigger than the LHS, for any radius, and the star collapses. The mass threshold for collapse is lower than it would be if Newtonian hydrostatic equilibrium were considered.</p> <p>There is considerable debate however as to whether this process might be accelerated by the disappearance of neutrons. 
This might happen because they have enough energy to create heavy hadrons - the hyperons - like <span class="math-container">$\Sigma$</span> and <span class="math-container">$\Lambda$</span> particles. This would have the effect of turning neutron kinetic energy, which is a source of pressure, into additional rest-mass energy, thus decreasing the pressure for a given density, which might destabilise the star.</p> <p>More catastrophic may be the production of mesons via strong force interactions - pions or kaons. These feel the strong nuclear force but are bosons and can form a condensate, so the component of the pressure contributed by neutron (fermion) degeneracy would be removed, which might trigger the collapse.</p> <p>There are other possibilities too - like quark matter, but it is unclear whether that would hinder any collapse. The discovery of neutron stars of mass <span class="math-container">$2M_\odot$</span> possibly means that equilibrium structures featuring quark matter do not exist (e.g. <a href="https://www.annualreviews.org/doi/full/10.1146/annurev-astro-081915-023322" rel="nofollow noreferrer">Ozel & Freire 2016</a>), but I'm sure you can also find papers that disagree.</p>
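The Newtonian scaling argument above can also be checked numerically: with <span class="math-container">$P = K\rho^{4/3}$</span>, integrating hydrostatic equilibrium outward gives the same total mass no matter what central density you choose -- the hallmark of a unique limiting (Chandrasekhar-like) mass. A rough forward-Euler sketch, in arbitrary units with <span class="math-container">$K = G = 1$</span> (step sizes and thresholds are illustrative choices, not a production integrator):

```python
import math

def polytrope_mass(rho_c, K=1.0, G=1.0, n_scale=4e4):
    """Forward-Euler integration of Newtonian hydrostatic equilibrium,
    dP/dr = -G m rho / r**2, dm/dr = 4 pi r**2 rho, with P = K rho**(4/3).
    Returns the total mass once the pressure drops to ~zero (the surface)."""
    # step chosen so stars of different central density are resolved equally
    dr = rho_c ** (-1.0 / 3.0) / n_scale
    r = dr
    rho = rho_c
    m = 4.0 / 3.0 * math.pi * r ** 3 * rho_c
    P = K * rho ** (4.0 / 3.0)
    P_surface = 1e-12 * P
    while P > P_surface:
        P -= G * m * rho / r ** 2 * dr
        if P <= 0.0:
            break
        rho = (P / K) ** 0.75
        m += 4.0 * math.pi * r ** 2 * rho * dr
        r += dr
    return m

m_low = polytrope_mass(1.0)
m_high = polytrope_mass(1.0e3)
print(m_low, m_high)  # nearly identical: the n = 3 polytrope admits one mass
```

Repeating the same exercise with `P = K * rho**(5.0/3.0)` instead gives a mass that grows with central density, matching the non-relativistic case in the text, where equilibrium can always be restored by shrinking.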
|
Physics
|
|condensed-matter|solid-state-physics|dispersion|phonons|
|
Why acoustic phonon dispersion cross $\omega=0$ at $k=0$?
|
<blockquote> <ol> <li>For the acoustic mode, <span class="math-container">$k=0$</span> means all lattice points oscillate in phase. But still, they are oscillating so each lattice point must have some finite frequency (otherwise they would hold still or travel in a single direction indefinitely). How comes this frequency is zero?</li> </ol> </blockquote> <p>It's zero because the dispersion relation forces it to be zero. For the acoustic branch near <span class="math-container">$k=0$</span>, the frequency <span class="math-container">$\omega$</span> is directly proportional to the wavenumber <span class="math-container">$k$</span>. So, as <span class="math-container">$k$</span> goes to zero so does <span class="math-container">$\omega$</span>.</p> <blockquote> <ol start="2"> <li>The energy of the phonon is given by <span class="math-container">$\hbar\omega$</span>. This implies at <span class="math-container">$k=0$</span>, the acoustic phonon has no energy, how?</li> </ol> </blockquote> <p>The energy of the acoustic phonon at <span class="math-container">$k=0$</span> is zero, because <span class="math-container">$k=0$</span> is an infinite wavelength and infinite period "oscillation." I put "oscillation" in quotes because at <span class="math-container">$k=0$</span> the mode is just an overall translation of the crystal in space. The overall translation in space of the crystal can't increase the total energy, so the energy of the associated excitation has to be zero.</p>
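This is easy to see explicitly for the textbook 1D monatomic chain, whose dispersion is <span class="math-container">$\omega(k) = 2\sqrt{K/m}\,|\sin(ka/2)|$</span> (spring constant <span class="math-container">$K$</span>, mass <span class="math-container">$m$</span>, lattice spacing <span class="math-container">$a$</span>; unit values below are illustrative):

```python
import numpy as np

def omega(k, K=1.0, m=1.0, a=1.0):
    """Acoustic dispersion of a 1D monatomic chain:
    omega(k) = 2*sqrt(K/m)*|sin(k*a/2)|."""
    return 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))

print(omega(0.0))          # exactly 0: the k = 0 mode is a rigid translation
print(omega(1e-6) / 1e-6)  # ~ a*sqrt(K/m): linear (sound-like) regime
print(omega(np.pi))        # 2*sqrt(K/m): maximum at the zone boundary
```

The small-`k` slope is the sound velocity, and the exact zero at `k = 0` is the translation mode discussed above.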
|
Physics
|
|electromagnetism|field-theory|speed|
|
How to calculate the time derivative of electromagnetic field?
|
<p>As the problem is posed, this is impossible. Just knowing the spatial variation of an electric field at a specific moment in time does not give you any knowledge of its time variation. I assume you mean <span class="math-container">$\frac{\partial \vec{E}}{\partial x}$</span> and not the total derivative. If you did mean the total derivative, and you want to find the time variation of the electric field experienced by an object moving with some velocity function, you could use the convective derivative <span class="math-container">$$\frac{D\vec{E}}{Dt} = (\vec{V} \cdot \nabla )\vec{E} +\frac{\partial \vec{E}}{\partial t} $$</span> where the partial time derivative must be zero (or at least known) for this to be solvable.</p> <p>Theoretically (although I suspect this is not what you are asking), if we are given specific functions that represent the partial derivatives of the electric field, valid for all moments in time (and thus most likely time dependent), we could construct the divergence and curl, solve those equations for <span class="math-container">$\vec{E}$</span>, and then take its partial derivative.</p>
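The convective-derivative computation can be sketched symbolically. Below, the field components are hypothetical examples of mine (a static field, so <span class="math-container">$\partial \vec{E}/\partial t = 0$</span>), and the code just evaluates <span class="math-container">$(\vec{V}\cdot\nabla)\vec{E}$</span> component by component:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
vx, vy, vz = sp.symbols('v_x v_y v_z')  # the moving object's velocity components

# hypothetical static field: only its spatial variation is specified
Ex = x**2 * y
Ey = sp.sin(z)
Ez = sp.Integer(0)

def convective(f):
    """(V . grad) f -- the convective part of DE/Dt, assuming dE/dt = 0."""
    return vx * sp.diff(f, x) + vy * sp.diff(f, y) + vz * sp.diff(f, z)

DE = [sp.simplify(convective(f)) for f in (Ex, Ey, Ez)]
print(DE)
```

For these example components the result is <span class="math-container">$(2v_x xy + v_y x^2,\ v_z\cos z,\ 0)$</span>: the rate of change of the field as seen by the moving object, even though the field itself is static.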
|