This is a third (and presumably final) text about view-angle tilting (or VAT) MRI concluding the explanations posted on 2020-09-10 and on 2020-09-15. I don’t really like the mathematical approximations that I used in my first text and, therefore, I would like to try again; the simple graphical visualization should correspond to an equally simple mathematical derivation …
Let’s start as before with the MRI signal equation for an excited slice without any distortions
S(t) = ∫dxρ(x,ẑ) exp (–iγGxxt)
with frequency-encoding direction x and frequency-encoding gradient Gx (phase encoding is generously omitted). The x integral is over the complete imaged object (but for a compact object that fits into our field of view, we can integrate from – ∞ to ∞). The slice-selection direction is denoted by ẑ, where I use the hatted (“^”) variable to differentiate the undistorted (original) slice position ẑ from the distorted position z = z(x) analyzed below.
I will ignore the slice thickness, since the essence of the VAT technique can pretty well be explained without it. And everything gets really complicated if we consider varying slice thicknesses – or even slices split into several parts as illustrated in the following figure.
It’s useful to keep in mind the resonance relation between an excitation pulse with frequency ωrf applied after switching on the slice-selection gradient Gz,slc and the slice position ẑ (without field inhomogeneities):
ωrf = γGz,slcẑ or ẑ = (ωrf)/(γGz,slc).
If we now consider inhomogeneity-induced distortions of a slice (due to a field inhomogeneity, ΔB(x,z)), then each original slice position ẑ (corresponding to the excitation frequency ωrf) is “moved” to a new position, z(x), which is, in general, different for each x. The new position, z(x), is the solution of the equation
ωrf = γ(Gz,slcz(x) + ΔB(x, z(x)))
for each (fixed) value of x. The relation to the original slice position, ẑ, can be expressed (after division by γGz,slc) as
ẑ = z(x) + (ΔB(x, z(x)))/(Gz,slc).
(Of course, the previous equation is not an explicit solution describing z(x), since z(x) still appears also on the right-hand side of the equation.)
If we go back to the MRI signal equation and include the distorted slice geometry, z(x), the integral changes to
S(t) = ∫dxρ(x, z(x)) exp(–i ω(x,z(x))t),
where, ω(x,z) is used to describe the Larmor frequencies during readout. Without field inhomogeneities, this is simply (and independent of z) ω(x,z) = γGxx. The inhomogeneity, ΔB(x,z) changes the Larmor frequencies to ω(x,z(x)) = γ(Gxx + ΔB(x, z(x))). If we also include the additional VAT readout gradient, Gz,VAT, we get
ω(x,z(x)) = γ(Gxx + ΔB(x, z(x)) + Gz,VATz(x)).
We can now insert the expression for z(x) derived above into the third addend of this last expression, which gives
ω(x,z(x)) = γ(Gxx + Gz,VATẑ + ΔB(x, z(x))(1 – (Gz,VAT)/(Gz,slc))).
So, by setting Gz,VAT = Gz,slc (as proposed for VAT MRI), the inner parenthesis vanishes and neither ΔB nor z(x) (which implicitly contains ΔB as well) appears in the expression for the Larmor frequency. This means that the artifacts in readout direction (due to changed readout frequencies) are removed:
S(t) = ∫dxρ(x, z(x)) exp(–iγ(Gxx + Gz,VATẑ)t).
Other artifacts caused by the changed slice geometry, z(x), obviously remain (as indicated by the term ρ(x, z(x)) in the integral) and are not corrected by the VAT approach.
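The cancellation can also be checked numerically. The following sketch (all parameter values made up, including an arbitrary smooth inhomogeneity ΔB) solves the implicit slice equation for z(x) by bisection and confirms that, with Gz,VAT = Gz,slc, the readout frequency collapses to γGxx, independent of ΔB:

```python
import math

# Illustrative (made-up) parameters: gradients in T/m, gamma in rad/(s*T)
GAMMA = 267.5e6
G_X = 10e-3        # frequency-encoding gradient
G_Z_SLC = 10e-3    # slice-selection gradient
OMEGA_RF = 0.0     # rf frequency for the undistorted slice at z_hat = 0

def delta_b(x, z):
    """An arbitrary smooth field inhomogeneity (T), purely for illustration."""
    return 20e-6 * math.exp(-(x**2 + z**2) / 0.02**2)

def distorted_slice_z(x):
    """Solve omega_rf = gamma*(G_z_slc*z + dB(x, z)) for z(x) by bisection."""
    f = lambda z: GAMMA * (G_Z_SLC * z + delta_b(x, z)) - OMEGA_RF
    lo, hi = -0.05, 0.05
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

def readout_omega(x, g_z_vat):
    """Larmor frequency during readout at the distorted slice position z(x)."""
    z = distorted_slice_z(x)
    return GAMMA * (G_X * x + delta_b(x, z) + g_z_vat * z)

# With the VAT gradient switched on, omega(x) is exactly linear in x:
for x in (-0.02, -0.005, 0.0, 0.01, 0.02):
    assert abs(readout_omega(x, G_Z_SLC) - GAMMA * G_X * x) < 1e-2
```

Without the VAT gradient (g_z_vat = 0), the same function returns frequencies shifted by γΔB, i.e., the distortion in readout direction.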
Since I’ve omitted the integration (∫dz …) over the finite slice thickness, the meaning of the slice coordinate ẑ in the final results is not completely obvious. Actually, the appearance of ẑ during readout corresponds to a tilted readout (that’s why it’s called view-angle tilting), which becomes clearer if one includes the omitted integration over the slice direction in the resulting MRI signal equation.
Here I would like to extend the introduction to view-angle tilting (or VAT) MRI posted on 2020-09-10. Using the same field inhomogeneity and slice geometry as in the previous text, I’m trying to provide some more (visual) explanations about what’s going on.
Let’s consider two sample slices as shown in the following figure. Both slices have the same thickness Δz and are really meant to be at the same z position (with respect to the field inhomogeneity that I’m going to add later on); they are only plotted at different z positions to combine them into one diagram.
The top slice is a homogeneous slice with the same signal (magnetization density, color-coded in gray) for all x positions (except for the periphery without signal). The bottom slice has alternating signals with two low-signal areas; there is a change from dark to bright signal exactly at the center of the slice. Below each slice, the signal intensity, I(x) – obtained by a readout in x direction – is plotted.
Using the field inhomogeneity defined in the previous text, both slices can be plotted with inhomogeneity-induced distortions in z direction, which is illustrated in the next figure. If we assume (which is generally not the case in real life) that the signal of the underlying object doesn’t vary in slice (z) direction, then the signal intensity I(x) remains exactly the same as before (i. e., the projection of the signals onto the x axis is unchanged). (This is true at least as long as we ignore that the thickness of the excited slice might also change as function of x.)
However, as previously explained, the distorted slice geometry is not the only effect of the field inhomogeneity. Field inhomogeneities during readout influence the spatial encoding in readout (x) direction. This is illustrated in the next figure, in which the slice distortion effect in z direction is omitted for better visualization and only in-slice effects are visualized.
The voxels are scaled in x direction (some are wider, others narrower than in the reference plot above). Since the number of protons in each voxel remains the same, the signal, which is proportional to the proton density, is scaled inversely proportionally to the voxel width, so narrow voxels appear brighter. The strongly increased signal intensity, I(x), in the right half of the plot is called “signal pile-up”.
In addition, the in-slice geometry is distorted which is nicely illustrated in the blue intensity curve: the change from low to high signal is clearly shifted from the exact center to the right-hand side of the plot; this is, of course, a direct consequence of the wider voxels in the left half of the slice.
Both effects, the distorted slice geometry as well as the in-slice signal artifacts are shown together in the following figure. The spatial shift in x direction becomes very obvious as a shift of the minimum of the slice shape to the right. The signal intensity after readout, I(x) remains the same as in the previous figure.
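The in-slice shift and the signal pile-up can be reproduced with a few lines of code. The sketch below (all values invented) places a uniform 1D spin density in a readout field of view, moves each spin to its apparent position x′ = x + ΔB(x) / Gx, and histograms the result; bins where voxels are compressed show pile-up, stretched bins show signal loss:

```python
import math

G_X = 10e-3   # readout gradient in T/m (illustrative)
FOV = 0.06    # 6 cm field of view centered at x = 0

def delta_b(x):
    """Made-up in-slice inhomogeneity along the readout direction (T)."""
    return 30e-6 * math.exp(-x**2 / 0.01**2)

n_spins, n_bins = 200_000, 60
counts = [0] * n_bins
for i in range(n_spins):
    x = -FOV / 2 + FOV * (i + 0.5) / n_spins   # true position (uniform density)
    x_app = x + delta_b(x) / G_X               # apparent (encoded) position
    idx = int((x_app + FOV / 2) / FOV * n_bins)
    if 0 <= idx < n_bins:
        counts[idx] += 1

uniform = n_spins / n_bins
print(max(counts) / uniform)   # > 1: signal pile-up where voxels are compressed
print(min(counts) / uniform)   # < 1: signal loss where voxels are stretched
```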
This situation now is the starting point for artifact correction with the view-angle tilting approach. The artifacts shown in the signal intensity curves, I(x), can – perhaps surprisingly – be corrected by a simple change of the view angle as illustrated below.
Just by projecting the distorted spin density onto a tilted axis (shown at the bottom of the following plot), the signal artifacts (areas with low signal or signal pile-up) are corrected. In this example, the projection is numerically evaluated and yields the almost perfectly straight intensity curve in the bottom plot.
A second visual “proof” of the VAT approach are the gray projection lines (one for each voxel) that appear equidistant in the tilted projection, but very non-equidistant in the conventional projection shown at the top of the figure.
In addition, the sinc-smoothing effect of the VAT pulse sequence is visualized at the periphery of the projected slice, where the signal increases (and decreases) in a relatively long ramp instead of a sharp step.
Finally, it’s interesting to see how the strength, Gz, of the slice-selection gradient influences the VAT approach (assuming that the readout gradient, Gx, is not changed). A lower slice-selection gradient results in more severe through-slice distortions (since the field inhomogeneities are now stronger relative to the gradient). But the increased slice distortion has a surprising advantage in terms of in-slice artifacts as illustrated in the following figure: increased through-slice (z) distortions shift the slice away from the maximum of field inhomogeneities and, thus, reduce the remaining field inhomogeneity in the slice (along the x direction) after excitation.
The reduced in-slice artifacts are shown in the next figure together with the uncorrected signal intensity (top) and VAT-corrected intensity (bottom).
Obviously, the voxels are less distorted along the readout (x) direction, and the uncorrected signal demonstrates reduced signal pile-up. The angle of the tilted axis at the bottom is smaller than above; it’s always arctan(Gz / Gx) which gives 45° in the first case and about 26.565° now. The numerical projection on this axis shows strongly reduced artifacts (compared to the uncorrected readout) as well as reduced sinc-smoothing compared to the VAT readout above.
This text is first of all an explanation and note to myself – but some parts of it might be of more general interest (if this is not the case for you, please just stop reading …).
Metallic implants can cause severe image artifacts in MRI. The problem of these implants in MRI is their magnetic susceptibility, χ, that influences the static magnetic field B0 in their neighborhood. Consequently, the (ideally) perfectly homogeneous static field B0(x,y,z) = B0ez becomes slightly (or not so slightly) inhomogeneous: B0(x,y,z) = B0 + ΔB(x,y,z) (considering only the z component from here on). A simplified two-dimensional example of such an inhomogeneity ΔB(x,z) is shown in the following figure (together with another kind of desired inhomogeneity, namely a linear gradient field).
MR imaging usually relies on the assumption that B0 is homogeneous; the presence of ΔB(x,y,z) results in image artifacts such as voxels moved to wrong positions, signal pile-up, and signal voids.
To simplify things, I ignore the phase-encoding direction (y) and restrict this discussion to the frequency-encoding direction (x, with frequency-encoding gradient Gx; displayed horizontally) and the slice direction (z, with slice-selection gradient Gz; displayed vertically). Slice selection in MRI works by applying a slice-selection gradient Gz during excitation. This means that each position z corresponds to a unique Larmor frequency ω(z) = γzGz, which is used to excite the spins at the desired z positions. A slice is selected by applying a radio-frequency (rf) pulse with a certain frequency (ωrf) and bandwidth (Δωrf). Thus, all spins with Larmor frequencies ωrf ±Δωrf / 2 are excited. For a perfectly linear gradient Gz (i. e., a gradient over a homogeneous background field), these spins form perfect planes (illustrated at the left hand side of the following figure for three different excitation frequencies). In the presence of inhomogeneities, spins are no longer excited in linear planes, but in often very nonlinear structures (illustrated by the central slices on the right hand side).
Now look at the central, most severely deformed slice: During slice selection, all Larmor frequencies in this slice (independent of x) were exactly centered around ωrf (only these Larmor frequencies were in resonance with the excitation pulse). However, after switching off the slice-selection gradient (as shown in the next figure), the Larmor frequencies in this slice are no longer the same at all positions: Obviously, the frequencies close to the center are higher due to the field inhomogeneity.
So, a first effect of field inhomogeneities are deformed slice geometries. A second effect results from the remaining field inhomogeneity in the excited slice shown above. This second effect becomes relevant during frequency encoding, when the spin localizations, x, are encoded by Larmor frequencies that are assumed to depend linearly on x: ω(x) = γxGx. However, in the selected slice, the position x is no longer proportional to the Larmor frequency, ω(x), as shown in the following figure.
The deviation from the linear dependency shifts voxel locations in readout direction resulting in distortion artifacts. This effect (but not the deformed slice geometry) is (at least approximately) corrected by the view-angle tilting (VAT) MRI pulse sequence suggested by Z. H. Cho et al. (1988).
Mathematically, these effects can be analyzed using the basic signal equation of MRI. Without field inhomogeneities, the signal S(t) of the magnetization density ρ(x,z) during the readout is:
S(t) = ∫dx ∫[z1, z2]dz ρ(x,z) exp(–iγxGxt)
Frequently ignored, but here made explicit, is the integral over the slice thickness Δz from z1 to z2 = z1 + Δz. As written above, the position and thickness of the slice are defined (in the case without field inhomogeneities) by the slice-selection gradient Gz and by the frequency (ωrf) as well as bandwidth (Δωrf) of the excitation rf pulse:
z1,2 = (ωrf ∓ Δωrf / 2)/(γGz) and Δz = (Δωrf)/(γGz).
The signal equation also shows that each position x is uniquely encoded by a corresponding Larmor frequency ω(x) = γxGx.
With additional field inhomogeneities, the Larmor frequencies change proportional to ΔB(x,z): during excitation, the frequencies are ω(x,z) = γ(zGz + ΔB(x,z)) and, during readout, they are ω(x,z) = γ(xGx + ΔB(x,z)). In both cases, the field inhomogeneity can be directly transformed into spatial shifts:
ω(z) = γGz (z + (ΔB(x,z))/(Gz)) = γGz (z + Δz(x,z)) with Δz(x,z) = (ΔB(x,z))/(Gz)
ω(x) = γGx (x + (ΔB(x,z))/(Gx)) = γGx (x + Δx(x,z)) with Δx(x,z) = (ΔB(x,z))/(Gx)
In both cases, the distortions (Δz(x,z) and Δx(x,z)) decrease when stronger gradients (Gz, Gx) are used.
So, in the presence of field inhomogeneities, the signal equation must be changed accordingly by adding an x-dependency to the slice positions z1 and z2 (describing, e. g., the curvy slice shape shown above) as well as using the modified readout frequency:
S(t) = ∫dx ∫[z1(x), z2(x)]dz ρ(x,z) exp(–iγ(xGx + ΔB(x,z))t).
The slice positions z1,2(x) – and, more generally, the position(s) z(x) corresponding to an excitation frequency ωrf – are given by the following implicit equation (for each value of x):
ω(x, z(x)) = γ(z(x) Gz + ΔB(x, z(x))) = ωrf
which can give very nonlinear solutions for z(x), since ΔB(x,z) can be a complicated function of x and z. The slice can even split into several components centered at different z positions, which means that the z integral is not over a single interval, but over several intervals.
The view-angle tilting sequence (shown in the following figure) adds an additional readout gradient in slice-selection direction with the same magnitude Gz as the original slice-selection gradient.
The additional gradient during readout must be included into the signal equation as additional component γGzz of the Larmor frequency:
S(t) = ∫dx ∫[z1(x), z2(x)]dz ρ(x,z) exp(–iγ(xGx + zGz + ΔB(x,z))t).
For the following, a significant approximation is made: The complicated implicit equation for the slice position, z(x), given above is substantially simplified by replacing the x-dependent slice position z(x) as argument inside ΔB(x, z(x)) with the original slice position, ẑ:
z(x) ≈ ẑ – (ΔB(x, ẑ))/(Gz).
Inserting this approximation into the signal equation and substituting z = z′ – ΔB(x, ẑ) / Gz in the z integral, the inhomogeneity term cancels in the exponent:
S(t) = ∫dx ∫[z1, z2]dz′ ρ(x, z′ – (ΔB(x, ẑ))/(Gz)) exp(–iγ(xGx + z′Gz)t).
This last equation describes a readout in a tilted readout direction (no longer along Gx, but along the vector (Gx, Gz)), without any distortions (which were related to the canceled term ΔB / Gx) along the readout direction.
Assuming that ρ(x, z′ – ΔB / Gz) does not vary significantly over the z′ integration interval (say, a thin slice), it can be replaced by ρ(x, ẑ – ΔB / Gz). The integral over z′ can now be performed independently of ρ, resulting in a sinc-shaped modulation of the signal over the time t or, after reconstruction, a certain amount of blurring.
If additional corrections of the slice distortions along the z axis are required, techniques such as “Slice Encoding for Metal Artifact Correction” (SEMAC) as proposed by Lu et al. (2009) can be used in combination with the VAT approach (however, at the cost of substantially prolonged scan times).
Relaxation is one of the most basic and obvious effects in nuclear magnetic resonance (NMR) experiments. Relaxation describes how the nuclear magnetization returns to its equilibrium state after an excitation by a pulsed radio-frequency field. The transverse magnetization (precessing in the plane perpendicular to the external magnetic field B0 and yielding the measurable NMR signal) decays exponentially with a time constant called T2; the longitudinal magnetization (parallel to B0) recovers exponentially with a time constant called T1.
For pure (distilled) water, these relaxation times are in the order of 1 to 5 seconds as established by numerous experiments. However, the calculation of these relaxation time constants is known to be notoriously complicated and difficult (requiring detailed quantum mechanical analysis of the spin ensembles and the spin interactions); a typical result of such an analysis is shown in the figure below.
So I wondered if the NMR relaxation time constants T1 or T2 of pure water can be roughly estimated using only some simple assumptions and considerations such that the resulting quantities are at least in the correct order of magnitude.
The main reason for transverse and longitudinal relaxation is the magnetic dipole-dipole interaction between spins (other effects such as interaction with the thermal radiation field or electric interactions with the electrons can be neglected for the water protons). The dipole field of a magnetic moment m is
Bdipole(r) = (μ0)/(4π)(3r(m · r) – mr2)/(r5)
or, which is enough for our order-of-magnitude estimations,
Bdipole(r) ~ (μ0)/(4π)(|m|)/(r3),
where μ0 = 4π × 10–7 T2 m3 / J is the vacuum permeability.
The magnetic moment m of the proton can be easily expressed by the gyromagnetic ratio γ ≈ 267.5 × 106 rad / (sT), which is defined as the ratio between magnetic moment and angular momentum I, m = γI. Approximating the angular momentum I of the proton by I ~ ℏ ≈ 1.055 × 10–34 Js leads to |m| ~ γℏ ≈ 3 × 10–26 J / T and
Bdipole(dpp) ~ (μ0)/(4π)(γℏ)/(dpp3) ≈ 8 × 10–4 T.
A first idea about T2 or transverse relaxation (introduced as early as 1946 in the seminal NMR paper by F. Bloch) is to consider spins (protons) in a rigid lattice, where the magnetic field of the nearest neighboring proton induces a phase shift at the spin of interest leading to signal dephasing. Using the shortest distance that is relevant for the nuclei in water molecules, namely the intramolecular proton-proton distance dpp ≈ 0.15 × 10–9 m, the additional, dephasing dipole field of a single neighboring proton is δB = Bdipole(dpp). This yields a dephasing angular frequency δω = γδB = γBdipole(dpp) (in addition to the Larmor frequency due to the external B0 field). Assuming that transverse relaxation requires an additional phase angle of the order of 1, the time constant T2 can be estimated to be of the order
T2 ~ (1)/(δω) = (1)/(γBdipole(dpp)) ≈ 5 × 10–6 s = 5 μs.
Well, this value for T2 ~ 5 μs is obviously way too small for water – by about 6 orders of magnitude! So we are not even close … but there is still something to learn from this: The very short T2 is in fact quite exactly what we find in solids (as e.g. in water ice with T2 ≈ 4 μs as measured by T. G. Nunes et al.), which shouldn’t be too surprising since we have assumed a rigid spin configuration above. Consequently, we’ve learned now that the constant random motion of water molecules in liquid water must play a very important role for the quantitative understanding of relaxation.
To obtain at least an order-of-magnitude estimation of the actual T2 relaxation time (and in fact also of T1) of liquid water, we have to include some additional information about the random motion of water molecules. A handy physical parameter to describe the effects of this random motion is the correlation time τc, which can be used to describe, e. g., a fluctuating magnetic field component ⟨B(t)B(t + τ)⟩ = ⟨B(t)2⟩ exp(–|τ| / τc), where ⟨ · ⟩ denotes the ensemble average over all spins and the result is assumed to be independent of the time t. So basically, after the correlation time τc the fluctuating field strength becomes statistically independent of its former values.
For the protons of water, this correlation time is extremely short, in the order of picoseconds: τc ≈ 5 × 10–12 s. This value is in fact about the diffusion time of a water molecule over a region with diameter dpp, which is dpp2 / (6D) ≈ 2 × 10–12 s for D ≈ 2 × 10–9m2 / s. It is important to note that there is almost no dephasing over the correlation time, i. e., τcδω ≈ 10–6 ≪ 1. This means that the static dephasing as assumed above cannot take place since the protons move too quickly and the static effects are averaged out. After about τc, the motion has completely changed the orientation of the vector distance r between the protons and, hence, also the magnetic dipole field direction and the magnitude of each field component.
To obtain an estimation of T2 using this correlation time, we now consider the phase angles ϕ(t) within our ensemble of spins after a time t – more exactly, the additional phase angles (as seen in a rotating frame of reference) due to the fluctuating random magnetic fields δB(t). These phase angles exhibit a random distribution with mean value ⟨ϕ(t)⟩ = 0, but the width of this distribution ⟨ϕ(t)2⟩ increases proportional to the time ⟨ϕ(t)2⟩ = αt. Similarly as above, we estimate our relaxation time T2 to be the time required for this standard deviation to become of order 1; i. e., T2 ~ (1)/(α) = (t)/(⟨ϕ(t)2⟩).
What can we say about the value of α? The factor α must have the dimension of 1/time; it should increase with the local magnetic field strength δB = δω / γ and also with the correlation time τc (the longer the correlation time, the less effective becomes the averaging). So, the simplest approach to express α by the given quantities with the correct physical dimension is to set α = δω2τc. With this guess, we find
(1)/(T2) ~ δω2τc ≈ 0.25 s–1, i. e., T2 ~ 4 s,
in surprisingly good agreement with the experimental data.
To obtain a very similar result by an actual calculation, we can quantitatively determine the width of the distribution ⟨ϕ(t)2⟩. The relation between the fluctuating magnetic field component δB(t) seen by an individual spin and the phase evolution of this spin is
⟨ϕ(t)2⟩ = γ2 ∫[0, t]dt′ ∫[–t′, t – t′]dτ ⟨δB(t′) δB(t′ + τ)⟩.
We can now use the correlation time relation from above for the fluctuating random magnetic field component δB(t), namely ⟨δB(t) δB(t + τ)⟩ = ⟨δB(t)2⟩ exp(–|τ| / τc) with the average squared field strength ⟨δB(t)2⟩ ~ Bdipole(dpp)2 ~ δω2 / γ2 to obtain
⟨ϕ(t)2⟩ ~ δω2 ∫[0, t]dt′ ∫[–t′, t – t′]dτ exp(–|τ| / τc).
To solve this double integral without long calculations (which are nevertheless straightforward), we use the fact that τc ≪ t and, therefore, almost always τc ≪ |τ| (the second integration variable), so the exponential function in the integral is almost always ≈ 0.
Hence, the second (inner) integral can be approximated for almost all values of t′ as ∫[–∞, ∞]dτ exp(–|τ| / τc) = 2 ∫[0, ∞]dτ exp(–τ / τc) = 2τc. With this approximation, we find
⟨ϕ(t)2⟩ ≈ δω2 ∫[0, t]dt′ 2τc = 2δω2τct and, hence,
(1)/(T2) ~ 2δω2τc ≈ 0.5 s–1 ≈ (1)/(2 s),
which is again an estimation that agrees well with the observed values.
A final short comment on T1: In non-viscous liquids such as water, T1 and T2 are approximately the same. While loss of phase coherence is required for T2 relaxation, T1 relaxation is caused by energy transfer from individual spins to the liquid. This energy transfer occurs due to transverse fluctuating magnetic fields δBx,y(t) with spectral components agreeing with the Larmor frequency ω = γB0. Since these frequencies are much lower than the inverse correlation time, i. e., γB0 ≪ 1 / τc, they basically occur with the same probability as the dephasing T2 effects considered above. A more detailed analysis of T1 and T2 requires quantitative estimations of the spectral density functions J(ω) of the dynamic processes involved.
Consider the following simple question in magnetic resonance imaging (MRI) or spectroscopy (MRS): Given a fixed total measurement time, Ttotal, (e. g., a typical breath-hold duration of 16 s or a maximum accepted sequence duration of 10 min) and the possibility to fit into this duration an acquisition with several repeated read-outs (that are to be averaged to increase the data quality), what is the optimum balance between the repetition time (TR) and the number of averaged signals (also simply called the number of averages, N, or the number of excitations, NEX)? (For simplicity, let’s consider only pulse sequences with 90° excitations (e. g., spin-echo sequences), in which all available magnetization is flipped to the transverse plane in each repetition.)
The solution (for 90° excitations)
In general, a single MRI acquisition (without averaging) requires M repeated excitations separated by TR (e. g., M could be the number of phase-encoding steps for conventional spin-echo acquisitions or M = 1 for single-shot EPI or one-dimensional MR spectroscopy). Hence, the acquisition time without data averaging is Tacq = MTR, and, neglecting for now that the number of averages must be integer, this number of averages is N = Ttotal / Tacq = Ttotal / (MTR). In the following, the actual relevant time parameter is, thus, Tavail = Ttotal / M, the time available for each required excitation, and the number of averages is N = Tavail / TR. We can also express the repetition time by these parameters as:
TR = Tacq / M = Ttotal / (MN) = Tavail / N.
On the one hand, averaging data increases the obtainable signal-to-noise ratio (SNR or Ψ) proportional to the square root of the number of averages:
Ψ ∝ √(N) = √(Tavail / TR).
On the other hand, increasing the number of averages (at fixed total measurement duration!) results in a shortened time for longitudinal relaxation: TR = Tavail / N and, thus, in less SNR:
Ψ ∝ 1 – exp(–TR / T1) = 1 – exp(–Tavail / (NT1)).
The resulting total SNR is proportional to the product of both factors, i. e.,
Ψ ∝ √(Tavail / TR)(1 – exp(–TR / T1)).
The SNR (as a function of TR at constant total measurement time) has, thus, a maximum in an intermediate range; for very short TR (and, consequently, many averages), the longitudinal magnetization cannot relax sufficiently, which reduces the available signal; for very long TR, the relaxation is approximately complete anyway, but the number of averages decreases.
To calculate the optimum TR (or N), TR and Tavail are best expressed in terms of T1 using τ = TR / T1 and T = Tavail / T1, which gives
Ψ ∝ √((T)/(τ))(1 – exp(–τ)).
The maximum of this expression with respect to τ is obtained by setting its derivative to zero, i. e., by solving exp(τ) = 1 + 2τ, which gives
τopt = –(1)/(2) – W–1(–(1)/(2√(e))) ≈ 1.2564,
where W–1 is the lower branch of the Lambert W-function (or product-log or Omega function, which gives the solution for W in z = Wexp(W), i. e., it is the inverse function of f(W) = Wexp(W)). Hence, the optimal choice for TR is (at least theoretically):
TR ≈ 1.256 T1.
Fortunately, the function to be maximized has a rather broad maximum, and one obtains values between 99 % and 100 % of the maximum SNR for TR between about 0.9937 T1 and 1.5773 T1 and values between 95 % and 100 % of the maximum SNR for TR between about 0.7293 T1 and 2.0882 T1 (values determined by numerical evaluation and illustrated in the first figure at the top).
So, practically, choosing TR to be T1 gives still a nearly optimal SNR; if a larger number of signal acquisitions is preferred, e. g., for statistical evaluation, then TR may even be shortened to 0.75 T1, and if – on the other hand – the base SNR of every single acquisition becomes very low (but is to be reconstructed before signal averaging) then TR may be chosen to be 2 T1 to increase the quality of each single (not averaged) data set.
The presented result is not new; in fact, it has been derived before (at least numerically) in several publications. An early example is, e. g., a publication by R. R. Ernst and R. E. Morgan (1973).
The presented analysis is valid only for simple 90° excitations; FLASH sequences or steady-state pulse sequences (which also refocus the transverse magnetization) will exhibit a different behavior.
Of course, optimizing TR as described above is not an option if T1-weighted or T2-weighted images are to be acquired. In these cases, the optimal value of TR depends on the desired contrast and not on the total SNR.
(This is a shortened version of a slightly longer pdf document that contains some more details and discusses also a few special cases.)
This is a short advertisement for my web document “Diffusion Coefficients of Water”. If you have ever looked for reference values of the self-diffusion coefficient of water at different temperatures, then chances are that you might like to bookmark the link above.
(If you have never heard of molecular (self-)diffusion, then it’s probably not so relevant for you. As an ultra-short explanation: Self-diffusion is a term for the thermal motion of molecules. If we could look at the individual molecules of a liquid such as water (say, with a really big microscope), then we would see all molecules in constant random motion. This molecular motion is known as the cause of Brownian motion, which was observed as early as 1827 by the botanist Robert Brown. The extent of this motion is directly related to the temperature of the liquid or – which is essentially the same – to the thermal kinetic energy of the molecules. This motion can be described by a physical quantity called the (self-)diffusion coefficient D, which has units of m2 / s. The meaning of a diffusion coefficient of, e. g., D = 2 × 10–3 mm2 / s is that after a time t = 1 s the space “visited” by an average diffusing molecule (in a certain statistical sense) has a size s (i. e., a radius) of s = √( 6Dt) = √( 12 × 10–3 mm2) ≈ 0.1 mm.)
Why is it good to know the exact relationship between the temperature and the diffusion coefficient of water? Typical applications are:
Using these data, we can determine temperatures by measuring diffusion coefficients of water, e. g., using diffusion-weighted MRI.
We can check how well our MRI system is calibrated by measuring the temperature conventionally (i. e., with a thermometer) and compare the MRI-determined diffusion coefficient with a reference value.
An elegant variant of the previous approach is to use an ice-water phantom (i. e., a sample of liquid water floating in a mixture of ice and water such that the temperature is known to be 0 °C) as suggested in a publication by T. L. Chenevert et al. (2011).
The web document introduced above provides an interactive interface to calculate self-diffusion coefficients of water at different temperatures (or, alternatively, to calculate the temperature corresponding to a given diffusion coefficient). This calculation is based on the results taken from several published articles with exactly this kind of data (namely, measured diffusion coefficients of water at different temperatures).
The web page has actually existed for quite some time (since about 2002), but it was never made public. No (hyper)links to it existed and it was used only by me (and by some colleagues and our Master or PhD students). Thus, I was somewhat surprised when I recently found out that the page is indexed by Google and has already been cited in several reports and theses. Consequently, I decided to slightly update its content (and to pretty up its appearance). The most relevant change is that I added a non-linear (quadratic) fit that describes the data in the Arrhenius plot much more accurately (and can be used for data over a greater temperature range) than the previously used linear fit. Note that using the (recommended) quadratic fit, calculated diffusion coefficients for intermediate temperatures around 15 to 30 °C will differ from those of the earlier (linear) version by a few tenths of a percent.
Some time ago at lunch, we had a discussion about the advantages of high magnetic field strengths B0 in MRI. We happily agreed that higher field strengths result in higher signal-to-noise ratios (SNR). But then several opinions surfaced about the exact quantitative relation between the SNR and B0 – ranging from linear to quadratic and including some very specific exponents in between such as 7/4. It turns out that more than one correct answer exists … and there are some surprising technical subtleties.
As a starting point for a quantitative discussion, we have to define what we consider as SNR in MRI: The SNR (sometimes given the symbol Ψ) is defined as the amplitude of the image signal (at some point or region of interest) divided by the standard deviation of the noise signal. (There are other SNR definitions used in other disciplines that involve squared amplitudes (signal power) and logarithms (resulting in decibels), but in MRI we generally use the very simple ratio of the signal amplitude and the noise standard deviation.)
To simplify things, we can divide the analysis of field strength and SNR into two parts: the discussion of the signal and of the noise. The signal part is rather straightforward: If the field strength B0 increases, then
the Larmor (precession) frequency ω = γB0 of the nuclei increases proportionally with B0 (the proportionality constant γ is the gyromagnetic ratio)
and the nuclear magnetization MN increases; for realistic field strengths and “normal” temperatures (around 300 K), the nuclear magnetization is approximately proportional to the field strength, MN ≈ χvB0, where χv is the nuclear magnetic (volume) susceptibility given by χv = ((N / V) γ²ℏ²I(I + 1))/(3kT) (with N / V: spin density, I: nuclear spin quantum number, k: Boltzmann constant, T: sample temperature).
The measured signal S is the voltage induced in the radio-frequency (rf) receiver coil by the precessing nuclear magnetization. As known from the theory of electromagnetic induction, this induced voltage is proportional to both the (angular) frequency ω = γB0 and the magnetization MN = χvB0; taking both factors together, we find S ∝ B0².
So, if we stop at this point, we may hope for four times the SNR at double field strength. But we haven’t considered the noise yet. And there the trouble begins …
There are several sources of noise in MRI and, depending on the setup, different sources can dominate the noise generation. Generally, the thermal noise voltage Unoise can be expressed as Unoise = √(4kTRΔf), where Δf is the signal bandwidth and R is a resistance associated with the rf receiver coil. For very small samples (milliliters), this resistance R is dominated by the actual coil resistance Rcoil. Rcoil is proportional to the square root of the Larmor frequency, Rcoil ∝ √(ω) ∝ √(B0), because of the rf skin effect, which reduces the penetration depth and, thus, the conducting area of the coil wires proportionally to 1 / √(ω). Therefore, Unoise ∝ Rcoil^(1/2) ∝ ω^(1/4) ∝ B0^(1/4), and the resulting SNR is proportional to S / Unoise ∝ B0^(7/4) in this case.
However, a different-looking result is presented by A. Abragam in his classic monograph “The principles of nuclear magnetism” (1961). There (on p. 83) we find for the SNR Ψ ∝ √(Q) B0^(3/2) with the coil quality factor Q. This is derived from the shunt resistance R = QLω of a circuit (here the receive coil) with inductance L. This apparently contradictory result can be explained if the frequency dependence of the quality factor, Q ∝ √(ω) (for a Q-optimized solenoid coil), is considered, which then gives the same exponent of 7/4 as before.
Finally, for larger samples – such as human subjects in clinical MRI – additional noise is generated because of the inductive (or magnetic) losses associated with the electrical conductivity of the sample or tissue. Electrical power is dissipated in the sample proportionally to the squared induced voltage in the sample, i. e., proportionally to ω², and proportionally to the squared current I² in the coil. This dissipation of power can therefore be expressed as an additional apparent coil resistance Rsample that is also proportional to the squared Larmor frequency, Rsample ∝ ω². Consequently, considering only this apparent coil resistance associated with the sample, we find Unoise ∝ Rsample^(1/2) ∝ ω ∝ B0, and now the resulting SNR is proportional to S / Unoise ∝ B0.
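Keeping only the proportionalities derived above (with all constant prefactors dropped), the two noise regimes can be summarized in a short Python sketch:

```python
def snr_gain(b_old, b_new, regime):
    """Relative SNR gain when scaling the field from b_old to b_new.
    The signal always scales as S ∝ B0^2; only the noise differs."""
    signal = (b_new / b_old) ** 2
    if regime == "coil":      # Rcoil ∝ sqrt(B0), so Unoise ∝ B0^(1/4)
        noise = (b_new / b_old) ** 0.25
    elif regime == "sample":  # Rsample ∝ B0^2, so Unoise ∝ B0
        noise = b_new / b_old
    else:
        raise ValueError("regime must be 'coil' or 'sample'")
    return signal / noise

print(snr_gain(1.5, 3.0, "coil"))    # 2^(7/4) ≈ 3.36
print(snr_gain(1.5, 3.0, "sample"))  # 2^1 = 2.0
```

So doubling the field strength ideally buys a factor of about 3.4 in the small-sample (coil-noise) regime, but only a factor of 2 when sample noise dominates.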
An obvious question now is: Can’t we simply measure the SNR dependence on the field strength to find out what’s actually going on in MRI? Well, we can try, but there are several complications. One major problem is that we cannot use the same rf coils at different field strengths: there will generally be differences in the coil design for different field strengths, and these must be expected to influence the measured SNRs. Another complication is the influence of the electromagnetic wavelengths in tissue, which approach the dimensions of the samples at about 3 to 7 T. Furthermore, the relaxation times (T1, T2, T2*) change with B0 and may influence SNR measurements.
Nevertheless, there are some publications reporting SNRs at different field strengths, e. g. by D. I. Hoult et al. (1986), J. T. Vaughan et al. (2001), C. Triantafyllou et al. (2005), and recently by R. Pohmann et al. (2016). They all show a clear increase of SNR with B0, but there are still some discrepancies with respect to the exact dependence. Particularly the latest paper by R. Pohmann et al. demonstrates and discusses a slightly better than linear increase of SNR at high fields. In conclusion and as a rule of thumb, assuming an approximately linear relationship between SNR and B0 still appears justified for large (clinical) MRI systems.
If you are familiar with MRI (or NMR in general), then you are probably also familiar with the relaxation time constants T1 and T2. These tissue-specific (or substance-specific) constants describe how fast the nuclear magnetization returns to its equilibrium value, M0, after excitation by a pulsed radio-frequency (rf) field. Shortly summarized, T1 describes the exponential recovery of the longitudinal magnetization ML (i. e., of the magnetization parallel to the external static magnetic field B0), and T2 describes the exponential decay of the transverse magnetization MT (that is precessing in the plane orthogonal to B0).
Typically (as shown in the first figure), T2 values of tissue are considerably lower than T1 values, i. e., the transverse magnetization decays faster than the longitudinal magnetization needs to recover. For most tissues in vivo, T1 varies between about 300 ms and 3 s, while T2 varies between about 10 ms and 200 ms. Longer T2 relaxation times (up to about 3 s as well) are found for liquids.
So one may ask if there are good physical reasons for T2 values being shorter than or – at most – equal to T1. As a physicist, I’d start by checking some extreme cases, e. g., assuming that T2 is much longer than T1, i. e. T2 ≫ T1. Then the longitudinal magnetization can fully recover while, at the same time, some transverse magnetization would be preserved. As a result, the magnitude of the total magnetization √(ML² + MT²) would become greater than the equilibrium value M0 – which is physically impossible.
To analyze these properties of T1 and T2 in more detail, longitudinal and transverse relaxation can also be plotted together in a diagram showing the transverse magnetization on the horizontal axis and the longitudinal magnetization on the vertical axis. These diagrams show the evolution of the magnetization (for experts: in a rotating frame of reference) after a 90° rf pulse (all trajectories start at the lower right corner of the diagram). The left-hand side of the following figure shows three cases T2 = T1 / 2 (blue), T2 = T1 (green), and T2 = 2 T1 [sic!] (cyan); and all three curves show a “benign” behavior in that they lie in the shaded area below the black circle segment ML² + MT² = M0². This means that the magnitude of the total magnetization vector is always smaller than M0.
However, the right-hand side of this figure shows what’s happening if T2 becomes greater than 2 T1 – in this case, T2 = 3 T1 (red curve): Now the magnitude of the total magnetization vector increases above the physical limit of M0, i. e., the red line crosses the black border of physically benign behavior!
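This behavior is easy to check numerically. Here is a minimal Python sketch in arbitrary units (M0 = T1 = 1), using the standard relaxation expressions after a 90° pulse:

```python
import math

M0 = 1.0
T1 = 1.0  # arbitrary units

def magnitude(t, T2):
    """|M| after a 90° pulse: MT = M0*exp(-t/T2), ML = M0*(1 - exp(-t/T1))."""
    MT = M0 * math.exp(-t / T2)
    ML = M0 * (1.0 - math.exp(-t / T1))
    return math.hypot(MT, ML)

times = [i * 0.01 for i in range(1, 1000)]

# T2 = 2*T1: the magnitude never exceeds M0 ...
print(max(magnitude(t, 2 * T1) for t in times) <= M0)  # True

# ... but for T2 = 3*T1 it does, which is unphysical
print(max(magnitude(t, 3 * T1) for t in times) > M0)   # True
```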
In fact, it can be shown (see appendix if interested) that the maximum T2 value, for which the red curve always stays below the black line, is exactly T2 = 2 T1. And as so often, almost everything that is physically possible is also realized in nature (although the case T1 < T2 < 2 T1 is really extremely rare), as described by Malcolm H. Levitt in his highly recommended NMR textbook “Spin dynamics” (2nd ed., section 11.9.2, note 13):
The case where T2 > T1 is encountered when the spin relaxation is caused by fluctuating microscopic fields that are predominantly transverse rather than longitudinal. One mechanism which gives rise to fields of this form involves the antisymmetric component of the chemical shift tensor (not to be confused with the CSA). […] Molecular systems in which this mechanism is dominant are exceedingly rare (see F. A. L. Anet, D. J. O’Leary, C. G. Wade and R. D. Johnson, Chem. Phys. Lett., 171, 401 (1990)).
So, the answer to the title question is: No, T2 can in fact be greater than T1 in very special circumstances, but it can never be greater than 2 T1.
The maximum T2 value, for which the red curve always stays below the black line, is exactly T2 = 2 T1. This can be seen by analyzing the inequality
MT(t)² + ML(t)² = (M0 exp(–t / T2))² + (M0 (1 – exp(–t / T1)))² ≤ M0².
First, we divide by M0² and set T2 = αT1 as well as β = exp(–t / T1), yielding
(β^(1/α))² + (1 – β)² = β^(2/α) + 1 – 2β + β² ≤ 1
which is (after subtraction of 1 and division by β)
β^(2/α – 1) – 2 + β ≤ 0 or β^(2/α – 1) ≤ 2 – β.
β is by definition (for positive t) between 0 and 1, so the right-hand side of the last inequality is a linear function descending from 2 to 1 (i. e., always ≤ 2). Its left-hand side has very different shapes depending on α: it increases from 0 to 1 for 0 < α < 2 and is constantly 1 for α = 2 (since then the exponent 2/α – 1 ≥ 0); but it diverges to infinity for β → 0 if α > 2 (since then the exponent 2/α – 1 < 0). So, the last inequality will not hold in the latter case for sufficiently small values of β, which means that non-physical behavior occurs if α > 2 or, using the definition from above, if T2 > 2 T1.
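This case distinction can also be verified with a brute-force numerical scan over β (a quick sanity check, not a proof):

```python
def holds_for_all_beta(alpha, n=2000):
    """Check beta^(2/alpha - 1) <= 2 - beta on a grid of beta in (0, 1)."""
    for k in range(1, n):
        beta = k / n
        if beta ** (2.0 / alpha - 1.0) > 2.0 - beta:
            return False
    return True

print(holds_for_all_beta(2.0))  # True: T2 = 2*T1 is still allowed
print(holds_for_all_beta(2.5))  # False: violated for small beta
```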
You may have heard that gadolinium-based MRI contrast agents can enhance or increase the signal of tissue. This is generally a good description of what’s going on. Here, however, I would like to argue why this is – strictly speaking – not true: contrast agents cannot really increase the signal available for MRI.
Some basic facts first: gadolinium-based contrast agents are very frequently used in clinical MRI to improve the image contrast as illustrated in the following example.
An important fact about (conventional) MRI contrast agents is that it’s never the contrast agent itself that is visible in MR images. Instead, the contrast agent changes the behavior of the atomic nuclei in its neighborhood – in clinical MRI, these are the nuclei of hydrogen, i. e. the protons. As a consequence, these protons now appear brighter in T1-weighted MRI than protons which are not influenced by the contrast agent.
So, apparently the proton signal is increased by gadolinium? Yes, apparently … Actually, there is always a maximum signal that is available for MRI and that depends on three major factors:
the number of available protons, which is related to the proton density ρ of the tissue: the more protons (per voxel), the higher the signal;
the magnetic field strength B0: the higher the field strength, the higher is also the (thermal) nuclear magnetization and, hence, the measured signal;
the receiver coil: the more efficient the receive system (the radio-frequency coil), the higher the signal.
But the presence of a contrast agent does not increase this maximum signal.
Instead, we are cheating: First, we artificially decrease the MRI signal by choosing short repetition times (TR). And only afterwards, a certain part of this suppressed signal is recovered due to the influence of the contrast agent! This is illustrated in the following diagram:
Obviously, the maximum signal, Smax, with contrast agent is exactly the same as the maximum signal without contrast agent – but we can obtain this maximum signal considerably faster (i. e., at shorter TRs). That’s why gadolinium can be said to increase the speed, but not the signal, of MRI. In agreement with this observation, no additional gadolinium-induced signal enhancement can be found in proton-density-weighted (PD-weighted) MR images (with very long TRs). But, hypothetically, if the contrast agent could be distributed homogeneously in the tissue (which in reality is not possible), then PD-weighted MRI could be accelerated by using shorter TRs without changing the contrast.
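This can be made quantitative with the saturation-recovery expression S(TR) = M0 (1 – exp(–TR / T1)), where gadolinium acts by shortening T1. The two T1 values below are assumed, merely illustrative numbers, not measured tissue values:

```python
import math

M0 = 1.0
T1_native = 1000.0  # ms, assumed tissue T1 without contrast agent
T1_gd = 300.0       # ms, assumed (shortened) T1 with gadolinium

def signal(tr_ms, t1_ms):
    """Saturation-recovery signal S = M0 * (1 - exp(-TR/T1))."""
    return M0 * (1.0 - math.exp(-tr_ms / t1_ms))

# Short TR: the contrast agent recovers part of the suppressed signal
print(signal(500.0, T1_native))  # ≈ 0.39
print(signal(500.0, T1_gd))      # ≈ 0.81

# Very long TR (PD weighting): both approach the same maximum Smax = M0
print(signal(10000.0, T1_native))  # ≈ 1.00
print(signal(10000.0, T1_gd))      # ≈ 1.00
```

At short TR, the shortened T1 roughly doubles the signal in this example, while at very long TR both curves converge to the same Smax.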
In MRI, we are frequently interested in data from a single two-dimensional slice of the imaged subject or object – or actually in data from several such slices. These slices are then displayed as conventional 2D MR images. A procedure called slice selection is used to restrict our data acquisition to each single slice. To understand slice selection, one has to know that MRI is based on the resonant excitation of spins in a static magnetic field B0. Spins have a characteristic, so-called Larmor frequency ω = γB depending on the magnetic field B (the constant of proportionality γ is the gyromagnetic ratio). By applying a radio-frequency (rf) field with exactly this Larmor frequency, spins can be excited (i. e., they can be made to generate a measurable signal).
The basic idea of slice selection is to excite only spins in a single slice by (first) superposing a linear magnetic field gradient gz, e. g., in the z direction, resulting in the spatial field distribution (shown in the first figure):
B(x,y,z) = B0 + gz z.
(These linear magnetic fields or gradient fields are one of the basic ingredients of MR imaging. Each MR imager comes with built-in coils to apply gradient fields in all possible spatial orientations.)
Then, by choosing an rf frequency ωslc = 2πfslc, we can select a plane in space where
ωslc = γB(x,y,z) = γB0 + γ gz zslc,
and only spins in this plane centered around zslc = (ωslc / γ – B0) / gz are excited (see first figure).
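Put into a few lines of Python, the relation zslc = (ωslc / γ – B0) / gz reads as follows; the field strength and the gradient amplitude of 10 mT/m are assumed, but typical, example values:

```python
GAMMA_BAR = 42.577e6  # Hz/T, proton gyromagnetic ratio divided by 2*pi

def slice_position(f_slc_hz, b0_tesla, gz_tesla_per_m):
    """z_slc = (omega_slc / gamma - B0) / gz, written with f = omega / (2*pi)."""
    return (f_slc_hz / GAMMA_BAR - b0_tesla) / gz_tesla_per_m

b0 = 3.0     # T
gz = 10e-3   # T/m (10 mT/m), an assumed gradient amplitude
f0 = GAMMA_BAR * b0  # on-resonance frequency, about 127.7 MHz at 3 T

print(slice_position(f0, b0, gz))           # 0.0 -> slice at the isocenter
print(slice_position(f0 + 4257.7, b0, gz))  # ≈ 0.01 m -> 1 cm off-center
```

So shifting the rf frequency by about 4.3 kHz moves the excited slice by 1 cm for this gradient amplitude.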
More advanced MRI techniques can excite several spatially separated slices at once by applying rf fields with more than one frequency – the difficult part is then to separate data acquired from these slices for reconstruction.
The nice idea realized in the paper by Koray Ertan et al. is to excite multiple slices not by applying rf fields with a mixture of several frequencies, but instead by modifying the gradient field to a spatially non-linear magnetic field. Consequently, the mapping between frequencies ω and spatial positions z is no longer one-to-one, but several positions can correspond to a single frequency:
So, depending on the shape of the magnetic field variation, two or more slices can be excited using a single excitation frequency. This has the advantage that we can use short and simple standard rf pulses for slice excitation. The obvious disadvantage is that it requires additional gradient hardware providing the non-linear magnetic fields, which is currently not available on existing MRI systems.
However, this may change in the future, since there are currently some promising approaches for non-linear encoding fields. In particular, I’m thinking of an impressive MRM paper (doi: 10.1002/mrm.26700) by Sebastian Littin and colleagues, also published this year, in which an 84-channel matrix gradient coil is presented that is capable of providing very flexible linear or non-linear field configurations.
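As a toy model of the multi-slice idea, consider an assumed parabolic field B(z) = B0 + c z² (purely illustrative, not one of the field shapes used in the cited papers). Solving the resonance condition f = (γ/2π)(B0 + c z²) for z immediately yields two symmetric slice positions per excitation frequency:

```python
import math

GAMMA_BAR = 42.577e6  # Hz/T, proton gyromagnetic ratio divided by 2*pi

def excited_positions(f_slc_hz, b0_tesla, c):
    """For a toy non-linear field B(z) = B0 + c*z^2 (c in T/m^2, an assumed
    shape), solve f = GAMMA_BAR * (B0 + c*z^2) for z."""
    delta_b = f_slc_hz / GAMMA_BAR - b0_tesla
    if delta_b * c < 0:
        return []  # no real solution for this frequency
    z = math.sqrt(delta_b / c)
    return sorted({-z, z})

b0, c = 3.0, 0.1                      # T and T/m^2, illustrative numbers
f = GAMMA_BAR * (b0 + c * 0.05 ** 2)  # frequency matching z = +/- 5 cm
print(excited_positions(f, b0, c))    # ≈ [-0.05, 0.05]
```

A single standard rf pulse at this frequency would thus excite two slices at once, at ±5 cm from the isocenter in this toy example.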