Chris Angove, Independent Professional Engineer
Chris Angove is a highly experienced, MSc qualified chartered engineer specialising in electrical and electronics engineering. He manages and owns Faraday Consultancy Limited (FCL).
RF and Microwave Design and Development
Listed below are some typical questions often asked about RF and microwave engineering. My answers are based on my own opinion and understanding of the subjects, hopefully without too many errors. But don't take my word for it: I have included references, and will be adding more as I update these pages, so you can look them up and hopefully verify my statements. Most of the references come from industry standard texts, peer reviewed journals, application notes and data sheets from respected sources. Whilst Wikipedia is a fantastic free online encyclopedia, some people consider it to be about 80% accurate. So I use Wikipedia with some caution: not often as a reference itself, but as a reference to references. (That second link was also from Wikipedia, so perhaps it should actually be 64% accurate.)
Radio frequency (RF) and microwave engineering is a specialist area of electronics engineering which addresses alternating current (AC) stimuli and responses, intentionally and unintentionally generated, at frequencies typically ranging from approximately 20 kHz to 300 GHz. Within this range, RF is generally understood to extend up to about 1 GHz, with microwave above that. For the sake of brevity, we will treat RF and microwaves as synonymous. There is no sudden change in electrical properties at 1 GHz, but the wavelength (λ) at this frequency is 0.3 m, or about one foot. In the days of early and bulky radar equipment, when higher frequencies were being explored, component sizes and cable lengths were becoming significant fractions of a wavelength around this frequency. During the development of radar it was known that the smaller the wavelength relative to the dimensions of a typical target, such as an aircraft, the better the quality of the reflection, with fewer distortions due to diffraction. However, the higher frequency sources required new, expensive and, for the time, unreliable technologies, and presented extra challenges simply in handling the frequencies involved.
RF applies to both circuit currents and voltages and to (propagated) electromagnetic waves. The transition between these is achieved with an antenna. An antenna will ideally be designed to either transmit or receive RF energy, or it may be an inherent property of a circuit, desirable or otherwise.
As RF engineering has evolved, we have developed a better understanding of the properties of the different frequencies. For example, we tend to talk about audio frequencies separately from RF. Strictly, that is wrong. They are AC just like RF, they have the same properties, and electromagnetic waves have been successfully transmitted well down into the audio frequency range. Confusion may have arisen because audio frequencies, by definition, are those we are capable of hearing, but only once they have been converted into acoustic waves by some type of transducer such as a loudspeaker. There are, however, a couple of limitations at low RF frequencies: antennas and bandwidth.
An antenna with good directionality needs to have dimensions of at least several wavelengths. Having said that, a simple but popular and electrically small antenna, close to omnidirectional in the H field plane, is the half wave dipole. As its name implies, it has a length of half a wavelength (λ/2). An audio frequency of, for example, 1 kHz has a wavelength of 300 km, so it would be quite a challenge to build a 150 km half wave dipole for a 1 kHz transmitter or receiver.
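As a quick sanity check, the wavelength and dipole length figures above can be reproduced with a few lines of Python (a sketch; the constant and function names are mine, not from any standard library):

```python
# Free-space wavelength and half wave dipole length (a quick sketch).
C = 299_792_458.0  # speed of light in a vacuum, m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength in metres."""
    return C / freq_hz

def half_wave_dipole_m(freq_hz: float) -> float:
    """Length of an ideal half wave dipole (ignoring end-effect shortening)."""
    return wavelength_m(freq_hz) / 2.0

print(f"{wavelength_m(1e9):.3f} m")             # 1 GHz: about 0.300 m
print(f"{wavelength_m(1e3)/1e3:.0f} km")        # 1 kHz: about 300 km
print(f"{half_wave_dipole_m(1e3)/1e3:.0f} km")  # 1 kHz dipole: about 150 km
```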
Taking this argument a step lower in frequency, 50 Hz or 60 Hz are almost universally used as power frequencies for the transmission of power over long distances, often across countries and continents. Many people will know one of the reasons why AC was originally chosen over DC for power transmission: the convenience of being able to change AC voltages easily using transformers. Another consequence of using AC is that transmission line properties may become relevant when the line lengths become significant electrical lengths. At 50 Hz the wavelength is 6000 km. That would be a very long power transmission line for a full wavelength, but some longer lines might be significant fractions of a wavelength.
Most definitely yes! Don't let anybody try and persuade you otherwise. Why?
'High speed digital' refers to a digital data signaling or information rate, expressed in bits per second (bit/s). In most cases we will be interested in signaling rates of many megabits per second (Mbit/s) or even gigabits per second (Gbit/s). Bits are, by definition, binary digits, with two possible states commonly referred to as 0 and 1. With 'return to zero' (RTZ) coding these might represent nominal voltages of, say, 0 V and 1 V, although a '1' does not necessarily have to be 1 V and a '0' does not necessarily have to be 0 V. The transitions between states are of course not instantaneous, as this would require infinite bandwidth. More realistically, to describe the waveform, minimum and maximum values of rise and fall times should be specified. If either or both are too fast, there is a risk that the spectrum of the modulated waveform will be excessive and possibly exceed the allocated bandwidth. If they are too slow, there may be excessive errors caused by inter-symbol interference (ISI), degrading the bit error rate (BER).
As an example, for a 1 Mbit/s information rate the bit period will be 1 μs, and the rise and fall times might be of the order of a few tens of nanoseconds. Now we must think about the data itself, represented as a voltage against time waveform. The actual data passing through can be any combination of 1s and 0s unless there is some coding algorithm at work limiting the bit patterns. Depending on the coding used, there might even be some long strings of 1s or 0s, or rapidly changing alternate states: a repeating 1010 pattern. The necessary bandwidth will be determined by the highest frequency content of this waveform, which occurs when there is the maximum possible number of 01 and 10 transitions per unit time. That is the 1010 condition, which is also the type of waveform often used for high speed, non-sinusoidal clocks. However, in this case the clock period would actually be two bit widths, making the square wave clock frequency not 1 MHz but 500 kHz. Three such bits from a string like this are shown in Figure 2-1, in this case with a 50 ns rise and fall time.
Figure 2-2 and Figure 2-3 show the log magnitude spectra of the waveform in Figure 2-1 for rising and falling edges of 400 ns and 40 ns respectively. In each case, the vertical scale is in logarithmic amplitude units of dBμV (dB relative to 1 μV). These were obtained using the Matlab® 2019B fast Fourier transform (fft) function.
Even allowing for the logarithmic vertical scaling of Figure 2-2 and Figure 2-3, it is clear that the bandwidth occupied in each case is many times the 'frequency' of the 1010 waveform. So for this data rate of 1 Mbit/s, relatively slow by today's standards, we can clearly see that the spectrum extends to many megahertz. Furthermore, for the 40 ns slope case, the amplitudes of the higher frequency components are greater than for the 400 ns slope case.
Another way of looking at the 101010 waveform with very short rising and falling edges would be as a pulse waveform with a pulse width of 1 μs, a pulse repetition period of 2 μs and therefore a duty ratio of 50%. If this were used as a square wave clock waveform, its frequency would be 500 kHz, but that would be the frequency of the square wave and not of the (sinusoidal or cosinusoidal) component frequencies in its spectrum.
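The effect of edge rate on the harmonic amplitudes can also be illustrated numerically. The sketch below uses the standard trapezoidal pulse train envelope formula, 2·A·d·|sinc(πnd)|·|sinc(πn·t_edge/T)|, rather than an FFT, so it is only an approximation to the Matlab plots described above; it assumes a 1 V amplitude and the 2 μs period of the 1010 pattern, and the function names are mine:

```python
import math

def sinc(x: float) -> float:
    """sin(x)/x with the limit value at x = 0."""
    return 1.0 if x == 0 else math.sin(x) / x

def harmonic_dbuv(n: int, amp_v: float = 1.0, period_s: float = 2e-6,
                  duty: float = 0.5, t_edge_s: float = 40e-9) -> float:
    """Magnitude of the n-th harmonic of a trapezoidal pulse train, in dBuV.
    Envelope formula: 2*A*d*|sinc(pi*n*d)|*|sinc(pi*n*t_edge/T)|."""
    mag_v = (2 * amp_v * duty
             * abs(sinc(math.pi * n * duty))
             * abs(sinc(math.pi * n * t_edge_s / period_s)))
    return 20 * math.log10(mag_v / 1e-6 + 1e-300)

# Compare the 21st harmonic (10.5 MHz) for 40 ns and 400 ns edges:
fast = harmonic_dbuv(21, t_edge_s=40e-9)
slow = harmonic_dbuv(21, t_edge_s=400e-9)
print(round(fast, 1), round(slow, 1))  # faster edges leave stronger high harmonics
```

The fundamental at 500 kHz barely changes between the two edge times, but the high-order harmonics are tens of dB stronger for the faster edge, matching the behaviour seen in the figures.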
These are examples of widely adopted shorthand acronyms for frequency bands: high frequency, very high frequency and ultra high frequency respectively. Using these makes it convenient for anybody to talk informally about the frequencies they are referring to without needing to remember the frequency detail. Formal documents would normally specify the frequency detail when it was required. Table 3-1 and Table 3-2 summarise some of the commonly used frequency band designations and their frequency ranges.
| Band | Designation | Frequency Range |
| VLF | Very Low Frequency | 3 to 30 kHz |
| VHF | Very High Frequency | 30 to 300 MHz |
| UHF | Ultra High Frequency | 300 to 3000 MHz |
| SHF | Super High Frequency | 3 to 30 GHz |
Familiarity with, and understanding of, the Smith chart are mandatory in RF and microwave engineering. Those are the rules. An example of a blank (impedance) Smith chart is shown in Figure 4-1. There are also an admittance Smith chart and a mixed (impedance and admittance) Smith chart.
A good understanding of the transmission line theory behind the Smith chart is a fundamental RF and microwave engineering skill. It relates to how waves passing along a transmission line react and interact when they meet a discontinuity: an impedance (or admittance) differing from the characteristic impedance of the transmission line itself. When this happens, a reflected wave is set up, moving in the opposite direction to the incident or forward wave. This creates a standing wave, also known as a total or resultant wave, made up of contributions from the forward and reflected waves. The instantaneous (spatial) voltage of the standing wave is a function of the position at which it is measured on the transmission line. These effects can be analysed on the Smith chart with the help of the circumferential scaling, which is in wavelengths.
Components can be designed which rely solely on the relative positioning of deliberate mismatches, either short stubs of precise dimensions or other reactive components. These are known as distributed components but will only be effective for a relatively narrow band of frequencies.
Smith charts can also be used for matching and analysis problems using only lumped components, or a combination of lumped and distributed components.
Some experience with the Smith chart allows the user to quickly observe the behaviour of an imperfectly matched transmission line just by examining the locus of points across a range of frequencies. The scalar quality of the match, either return loss or voltage standing wave ratio (VSWR), may be determined by how much the locus deviates from the center of the Smith chart. The center corresponds to a perfect match or a reflection coefficient magnitude of zero, effectively a circle with zero radius.
Smith charts are mostly used in the unity radius region: the circular region between a radius of zero (perfect match, no reflection) and unity (short or open circuit mismatch with 100% reflection). This corresponds to a region in which the magnitude of the reflection coefficient is between zero and 1, irrespective of phase, as will be the case for any passive component. If we were measuring something active, such as an amplifier, the magnitude of the reflection coefficient may sometimes be greater than 1. This indicates that an instability or possible oscillation is present, which is usually undesirable even if it is well away from the operating frequency band. Sometimes it may be possible to stop such instabilities with carefully designed reactive matching, which presents frequency dependent loading to the amplifier calculated to stop the instabilities. An amplifier designed in this way is known as unconditionally stable if it shows no instabilities across the specified frequency range.
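For readers who prefer numbers to charts, the scalar match quantities mentioned above follow directly from the reflection coefficient. A minimal Python sketch, assuming a 50 Ω system impedance:

```python
import math

Z0 = 50.0  # system impedance, ohms (an assumption for this sketch)

def gamma(z_load: complex, z0: float = Z0) -> complex:
    """Reflection coefficient of a load on a line of characteristic impedance z0."""
    return (z_load - z0) / (z_load + z0)

def return_loss_db(z_load: complex) -> float:
    """Return loss in dB; larger means a better match."""
    return -20 * math.log10(abs(gamma(z_load)))

def vswr(z_load: complex) -> float:
    """Voltage standing wave ratio; 1.0 is a perfect match."""
    g = abs(gamma(z_load))
    return (1 + g) / (1 - g)

print(abs(gamma(50 + 0j)))                 # 0.0: the centre of the Smith chart
print(round(vswr(100 + 0j), 2))            # 2.0 for a 100 ohm load
print(round(return_loss_db(100 + 0j), 2))  # about 9.54 dB
```

A locus plotted on the Smith chart is just gamma evaluated across frequency; the further it strays from the centre, the larger |Γ| and VSWR become.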
Analysis of plane waves incident on real conductor boundaries with air using Maxwell's equations tells us that a perfect conductor will not support an electric field. A perfect conductor has infinite conductivity. We know that copper, for example, is a good conductor but it is not a perfect one. For comparison purposes, the conductivities of some common metals used as conductors in electronics are shown in Table 5-1.
| Metal | Chemical Symbol | Conductivity σ (S/m) | Relative Permeability μr | Skin Depth (m) at 50 Hz / 100 MHz / 1 GHz | Density (kg/m³) | Metal Market Price Cash 02/12/2020 (USD/tonne) |
The result of the analysis is that the magnitude of an incident electric (E) field or magnetic (H) field, as it penetrates the conductor, will reduce exponentially with depth from its value at the air/metal boundary. The depth at which it reaches 1/e times its value at the surface is known as the skin depth (δ), given by Equation 5-1.
In Equation 5-1, f is the frequency (Hz), σ is the conductivity of the metal (S/m) and μ is the permeability of the metal (H/m). Also, μ = μ0μr, where μ0 is the permeability of free space, 4π×10⁻⁷ H/m, and μr is the relative permeability. The relative permeability of a non-magnetic metal such as copper is 1.
The full current penetrates further than the skin depth itself and the general rule of thumb is that 5 skin depths will contain very close to 100% of the alternating current. Therefore, for AC conductors, any thickness in excess of about 5 skin depths is wasted and unnecessary. If the thickness is less than 5 skin depths, we could have either of two problems depending on the application. We will look at two situations where we need to consider skin depth: power transmission and screening.
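Equation 5-1 and the five-skin-depth rule are easy to evaluate numerically. A short sketch for copper, taking σ ≈ 5.8×10⁷ S/m as a typical handbook value:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth_m(freq_hz: float, sigma_s_per_m: float, mu_r: float = 1.0) -> float:
    """Skin depth per Equation 5-1: depth at which the field falls to 1/e."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * mu_r * sigma_s_per_m)

SIGMA_CU = 5.8e7  # approximate conductivity of copper, S/m

for f in (50.0, 100e6, 1e9):
    d = skin_depth_m(f, SIGMA_CU)
    # about 5 skin depths carry nearly all of the alternating current
    print(f"{f:>12.0f} Hz: delta = {d*1e6:10.2f} um, 5*delta = {5*d*1e6:10.2f} um")
```

At 50 Hz the skin depth in copper is around 9.3 mm, falling to a couple of micrometres at 1 GHz, which is why thick conductors are wasted at microwave frequencies.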
The conductors used in power transmission are nearly always either copper or aluminium. If the full penetration depth is less than 5 skin depths there may be a tradeoff between the metal cost and wasted energy in the form of heat. To some extent we may be able to tolerate some extra heat dissipation if there is a significant cost and mass saving in the conductor materials. Overhead transmission lines have the advantage in cooler climates that they allow better heat dissipation and can safely run quite hot, limited only by the maximum safe levels of thermal expansion. With underground transmission lines, heat dissipation may be more limited because the only heat transfer mechanism is conduction.
Electrical screening, often referred to as the 'Faraday cage' principle, is another important application in which to consider skin depth. In this case we wish to attenuate the penetrating wave as fully as possible, so we need to choose the metal and its thickness correctly, assuming that the screen is otherwise correctly constructed. From the formula for skin depth in Equation 5-1, it is clear that the skin depth reduces with increasing frequency. Therefore we must base the screening design on the lowest frequency which is required to be attenuated: that with the deepest skin depth. About five skin depths would be the starting point, and if the metal is mechanically too thin, its thickness may be increased as necessary. Equation 5-1 also shows that one way of reducing the skin depth even further is to use a magnetic metal such as mu-metal, which has a relative permeability of 20000 to 50000, compared with 1 for non-magnetic metals. Although this is offset slightly by its conductivity being lower than that of copper or aluminium, it reduces the skin depth to about 1% of its value for a non-magnetic metal.
Loss tangent, also known as tan delta (tan δ) or dissipation factor, is generally taken as the reciprocal of Q factor: the greater the Q factor, the better the 'quality' of the component. In the early days of research into damped oscillations, the dissipation factor often worked out to be a small number to 2 or 3 decimal places, so it was found more convenient to use its reciprocal. Contrary to some people's understanding, including my own, Q factor was never invented to represent 'quality' factor. It just happened to be a letter left over after the others had been allocated to other parameters. There are essentially two definitions of Q factor, one applying to discrete real reactive components and the other to the behaviour of a tuned circuit or resonator.
Loss tangent and Q factor are used to describe how well real components perform at different frequencies compared to what we would expect from ideal components. SRF or self resonant frequency of a component is a frequency at which a resonance (series or parallel) occurs due to its parasitic properties: values of unintended capacitance and inductance which are distributed around it due to its physical construction. These will include effects such as inter-winding capacitances and wire self-inductances. So we would normally exclude any behaviour correctly designed to be resonant such as a tuned circuit used to create a pole or zero in a filter circuit.
In order to study these parameters, we need to look at the equivalent circuits which represent real capacitors and real inductors, either in series or parallel forms. These are shown schematically in Figure 6-1 and Figure 6-2 together with the associated phasor diagrams and the formulas for tan δ and Q factor in each case.
Each equivalent circuit is considered at a particular angular frequency, ω rad/s and well away from any self resonant frequencies which may exist. This comprises the equivalent ideal reactive element (capacitor or inductor) either in series with or in parallel with a resistance whose value is a function of frequency. Either the series or parallel version equivalent circuit is equally valid but conventions have developed over the years for one or the other. These circuits are not like simple low frequency CR and LR circuits in which we assume that the resistance is constant with frequency. Here they are operating well into the limits of their frequency ranges where the resistance part in each case is very much a function of frequency. It may be difficult to model this relationship so usually the tan delta or Q factor performance of the component is measured empirically across a range of frequencies using a vector network analyzer (VNA) or similar instrument, properly calibrated and with some form of precision characterised jig appropriate to the dimensions of the component being measured.
Loss tangent is also a measure of how much the behaviour of a practical reactive component, a capacitor or an inductor, departs from that of a perfect capacitor or inductor. The smaller the loss tangent, the closer the reactive component is to its ideal form. The loss tangent performance must be specified in relation to frequency: as the frequency of operation is increased, the high frequency effects, such as changes in resistance due to the skin effect, become apparent. Although it is valid to define the loss tangent of inductors and capacitors, it is most commonly used to describe the RF performance of bulk, unetched, substrate dielectric material such as FR4. In its raw state, a layer of FR4 is normally sandwiched between layers of copper ground plane. The low frequency/DC approximation to this is a parallel plate capacitor with a high value parallel resistance, known as a leakage resistance, as shown in Figure 6-1.
Taking this a stage further, Figure 6-3 shows a parallel plate capacitor. We will assume that the thickness (t) of the dielectric is much smaller than the dimensions of the plates themselves.
Q factor has two definitions: one relating to the quality of discrete reactive components at high frequencies and the other for loaded tuned circuits or resonators. Q factor is also a measure of how close a reactive component (capacitor or inductor) is to the ideal component but it has a reciprocal relationship to loss tangent.
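The component-level definition, and its reciprocal relationship to loss tangent, can be illustrated with the usual series equivalent circuits. A sketch with hypothetical component values, not taken from any data sheet:

```python
import math

def q_series_inductor(freq_hz: float, l_h: float, r_series: float) -> float:
    """Q of the series R-L equivalent circuit: Q = omega*L / Rs."""
    return 2 * math.pi * freq_hz * l_h / r_series

def tan_delta_series_capacitor(freq_hz: float, c_f: float, esr: float) -> float:
    """Loss tangent of the series R-C (ESR) model: tan(delta) = omega*C*ESR."""
    return 2 * math.pi * freq_hz * c_f * esr

# Hypothetical 100 nH inductor with 2 ohm effective series resistance at 100 MHz:
q = q_series_inductor(100e6, 100e-9, 2.0)
print(round(q, 1))       # about 31.4

# Hypothetical 10 pF capacitor with 0.5 ohm ESR at 1 GHz:
td = tan_delta_series_capacitor(1e9, 10e-12, 0.5)
print(round(1 / td, 1))  # Q is the reciprocal of tan(delta), about 31.8
```

In practice the resistance terms are themselves strong functions of frequency, as the text notes, so measured data should always take precedence over a constant-R model like this.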
Self Resonant Frequency
A self resonant frequency (SRF) is a frequency at which a resonance occurs due to the parasitic interaction of capacitive and inductive components, similar to those observed in parallel and series resonant circuits. The parallel SRF has a high impedance at resonance and the series SRF has a low impedance at resonance.
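Treating the parasitics as a simple LC pair gives a quick estimate of SRF. A sketch with a hypothetical stray capacitance value:

```python
import math

def srf_hz(l_h: float, c_parasitic_f: float) -> float:
    """Self resonant frequency of an inductance L with parasitic capacitance C."""
    return 1.0 / (2 * math.pi * math.sqrt(l_h * c_parasitic_f))

# With a hypothetical 0.5 pF of stray inter-winding capacitance, a larger
# inductor resonates at a much lower frequency:
print(f"{srf_hz(47e-9, 0.5e-12)/1e6:.0f} MHz")   # about 1038 MHz
print(f"{srf_hz(500e-9, 0.5e-12)/1e6:.0f} MHz")  # about 318 MHz
```

Real components have distributed rather than lumped parasitics, so this single-LC estimate only indicates the trend: more turns, more stray capacitance, lower SRF.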
Several manufacturers offer ranges of inductors and capacitors which are specifically designed for operation at high frequencies. For example, data sheets by Coilcraft for high frequency inductors include graphs of Q factor, impedance and S parameters across frequency. One example is shown in Figure 6-4 for high frequency inductors of nominal values ranging from 47 nH to 500 nH. One result of this type of high frequency behaviour is that there is usually also an effective change in the inductance value itself with frequency.
The Coilcraft data in Figure 6-4 reveals some interesting features of this style of high frequency inductor. For all values, the Q factor tends to increase with increasing frequency to a peak and then tail off. Across a similar frequency range, the nominal inductance values are reasonably constant and then start increasing. The frequencies at which these deviations occur are similar for both cases and may be due to SRF effects. For example, inter-winding capacitances will contribute towards the SRF effects together with the self inductances of the wire used for the windings. As the nominal inductance values increase, there are generally more winding turns and therefore more stray capacitances so the SRF effects tend to occur at lower frequencies. This is clear when comparing the data for 47 nH and 500 nH.
When choosing inductors such as these for high frequency circuits, graphical data of this type needs careful examination, concentrating on the highest operating frequency being used. If the signals are streams of data, this may come from the results of a Fourier transform. Values should be chosen which are reasonably clear of SRFs and excessive nominal value deviations. For example, it would not be recommended to use the 82 nH or 500 nH components above about 300 MHz or 90 MHz respectively.
At high frequencies, capacitors are often represented by an ideal capacitor connected in series with an equivalent series resistance (ESR) whose value is a function of frequency. This is also shown as RCS in Figure 6-1 and its variation for the Murata capacitor in the left graph of Figure 6-5. The equivalent variation in Q factor for the same component is shown in the right graph.
Scattering (S) parameters are used frequently in RF and microwave engineering. T or ABCD parameters are also popular but less frequently used. Both types are members of a family of network parameters used in electronics engineering, other examples are Y, Z and H parameters. Definitions for S and T parameters draw heavily from transmission line theory.
Referring to Figure 7-1, S parameters may be defined or measured for a network comprising an integer number of ports from 1 to n. In general, S parameter values vary with frequency, temperature and possibly other parameters such as bias voltage or current. One set of S parameter measurements comprises an n×n square matrix in which each element is a value expressed in magnitude and (spatial) phase. A one port S parameter is a complex number of the form Smm, where m is an integer used to identify the port. If we have allocated the port numbers on a 2 port network as 1 and 2, an S parameter matrix for this network may be expressed by (7-1).
In (7-1) and all other S parameter matrices, the columns represent the stimuli ports and the rows represent the response ports. So, for example, S21 represents the result for a stimulus applied to port 1 and the response measured at port 2. S22 would represent both the stimulus being applied to and the response being measured from port 2. In every measurement, by definition, every port must be terminated exactly in the system impedance which is normally a purely resistive value and represented by Z0. If the system impedance was 50 Ω, that means exactly 50 Ω purely resistive across the whole measurement frequency range. Theoretically, this means that stimulation sources and response loads, as appropriate, must each also be exactly 50 Ω. That is quite a tall order, especially if we plan to measure values perhaps up to many gigahertz. Fortunately, with the powerful processing built into VNAs in recent years, a range of corrections can be applied for systematic errors by an initial calibration procedure which may be stored in the instrument memory.
A little caution may be required with port numbering. By convention, the port numbers correspond to the indices (row and column numbering) of the S parameter matrix, so, in (7-1), the port numbers allocated were 1 and 2. There may have been some very logical reason why the ports might have been numbered 34 and 92 instead, in which case the corresponding S parameter matrix would be (7-2). This would be fine provided it was still only a 2 port network. If it was actually a 3 port network with ports 34, 92 and 97, for example, then the true S parameter matrix would need to be 3×3, with the rows and columns named 34, 92 and 97.
To start with, we will consider the S parameters of a 2 port network with either balanced or unbalanced ports but not yet with mixed ports. These are shown in Figure 7-1.
In general, at each port there will not be a perfect match so there will be a forward wave component into the port and a reverse wave component out of the port. These are represented respectively as VnF and VnR where 'n' is the port identity. So for example the forward wave at port 2 will be V2F and the reverse wave at port 1 will be V1R. For this imperfect match condition at port 'n' a standing wave will occur with the total (resultant) voltage Vn and resultant current In given, using transmission line theory, by the equations in (7-3).
For each port it is normal to define incident and reflected 'power waves', an and bn where 'n' is the port identity. Despite their name, these are complex values, similar to S parameters, but normalised to √Z0 as defined in (7-4).
To re-iterate, Z0 is the defined system impedance, a purely resistive value such as 50 Ω, or more correctly Z0=50+j0 Ω.
Let us consider a 2 port network as an example, on which the ports are identified as 1 and 2. The functions of these ports, whether they are inputs or outputs, really do not matter as far as the theory is concerned. However, if measurements are planned, an assessment must be made to accommodate any high power levels which may exit any port, to avoid possible test equipment damage. By definition, the relationship between the power waves, an and bn, and the 2 port S parameter matrix is given by (7-5).
Solving the matrix equation in (7-5) gives the two equations shown in (7-6). These show the relationships between the 2 port S parameter matrix and the incident and reflected power waves for each of the ports. Now, in order to get some more useful information, we need to apply the definition requirement that both ports must be loaded with the system impedance, Z0, exactly.
We know that all of the definition equations so far have assumed that every port is terminated exactly in the system impedance, Z0. Consider for example port 2 of the 2 port network. If this was connected to a load of exactly Z0, there would be no reflections from the load and therefore no incident wave at port 2 of the network, so a2 would be zero. Substituting this into the first equation in (7-6) yields the result for the port 1 parameter S11 = b1/a1. Similar expressions may be obtained for all of the 2 port S parameters. These are shown in (7-8).
There are some alternative names for the 2 port S parameters, again assuming that we are using a system impedance of Z0. S11 and S22 are the reflection coefficients at ports 1 and 2 respectively. S21 is the transmission from port 1 to port 2 and S12 is the transmission from port 2 to port 1.
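As a worked illustration of these names, the S parameters of a single series impedance between two Z0 terminations can be computed directly. This is a standard textbook result, not taken from the figures above, and the function names are mine:

```python
import math

Z0 = 50.0  # system impedance, ohms, assumed purely resistive

def series_element_sparams(z: complex, z0: float = Z0):
    """2 port S parameters of a single series impedance z between two z0
    terminations (a standard textbook result, used here for illustration)."""
    denom = z + 2 * z0
    s11 = z / denom        # reflection at port 1 (= S22 by symmetry)
    s21 = 2 * z0 / denom   # transmission port 1 -> port 2 (= S12, reciprocal)
    return s11, s21

# A series 50 ohm resistor in a 50 ohm system:
s11, s21 = series_element_sparams(50 + 0j)
print(abs(s11))                             # 1/3
print(round(20 * math.log10(abs(s21)), 2))  # about -3.52 dB insertion loss
```

Because the element is passive and reciprocal, S12 = S21 and both reflection coefficients have magnitude below 1, consistent with the Smith chart discussion earlier.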
At this point we may need to ask a few more questions, such as the following:
A network may be passive or active but, to comply with the S parameter conditions, it must behave linearly under small signal, steady state conditions. These will normally be continuous wave (CW), either at one frequency or, more commonly, across a range of frequencies. Occasionally S parameters may be specified under non-linear conditions, provided the conditions under which they are measured are accurately known and repeatable. These may include temperature, bias conditions, or incident power level.
A port is defined as a pair of associated terminals through which the actual currents may flow in either direction. The algebraic definitions of positive and negative current flows are as identified in Figure 7-1. These are not terminals in the traditional sense but simply two conductor connections designed appropriately for the frequencies being used. For single ended ports using the present technology these would usually be coaxial connectors.
Any number. The largest I have seen is 65; fortunately, the ports were numbered 1 to 65.
No, it was literally one network. But I did have a chance to look inside and there were no physical electrical connections (conductors and/or components) between some ports. However there may have been unintentional leakage paths between these ports. With some of the latest test equipment and very careful VNA setup and calibrations it is possible to measure isolation values to greater than 100 dB. That would correspond to a log magnitude S parameter transmission measurement of less than -100 dB, since (linear) transmission is the reciprocal of (linear) isolation. In some equipment very high isolation values are specified and it takes significant time and resources to reliably measure them.
Schematics of these are shown in Figure 7-1. In a balanced network the terminals of every port are assumed to have independent connections, one for the inward current and one for the outward current. There is no assumption of any common connection between any ports, such as is often provided by a ground. Most VNAs, however, have single ended (coaxial) ports, of which one terminal is ground, so it is not possible to use one of these directly to make balanced measurements of a network. There are, however, some options available using a 4 port single ended VNA with a 4 port electronic calibration (ecal) device. In these cases the system impedances of the common mode and differential mode measurements are 25 Ω and 100 Ω respectively. For unbalanced networks, generally one terminal of every port is connected to ground. This is standard for coaxial connectors, still the most common type of port, for which the screen is connected to ground.
So far we have only considered linear S parameters. All S parameters are expressed as unitless complex quantities because they are derived from the ratios of two complex values with the same units, such as shown in (7-8) for the case of a 2 port network. Every S parameter therefore has an amplitude and a (spatial) phase and may be represented in rectangular, polar or exponential form. An example of each of these is shown in (7-9). Linear amplitude may be obtained directly from the value R in either the polar or the exponential forms. It may be obtained from the rectangular form by squaring the real and imaginary coefficients, adding them and taking the square root of the result as shown in (7-10). The phase angle φ may be obtained directly from either of the polar forms or from the rectangular form using the inverse tangent also shown in (7-10).
Log magnitude is just an abbreviation of logarithmic magnitude (RLM) using the dB definition shown in (7-12). Log magnitude simply means that the magnitude is expressed in the logarithmic units of dB. The phase is not affected.
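The dB conversion and the unaffected phase can be sketched in a few lines of Python; the example S21 value below is hypothetical:

```python
import cmath
import math

def log_magnitude_db(s: complex) -> float:
    """Log magnitude of an S parameter: 20*log10(|S|)."""
    return 20 * math.log10(abs(s))

def phase_deg(s: complex) -> float:
    """Phase of an S parameter in degrees; the dB conversion leaves it unchanged."""
    return math.degrees(cmath.phase(s))

# Hypothetical transmission value: |S21| = 0.5 at a phase of -45 degrees
s21 = 0.5 * cmath.exp(1j * math.radians(-45))
print(round(log_magnitude_db(s21), 3))  # -6.021 dB
print(round(phase_deg(s21), 1))         # -45.0
```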
As most of us probably know, EMC stands for electromagnetic compatibility. That is all about how well all electrical and electronic equipment exists together in our environment. We have two categories of electrical and electronic equipment as listed below:
In summary, the requirements for the compliant EMC of our equipment are:
Then how might our equipment either cause emissions or be susceptible to emissions from elsewhere, both of which we want to minimise? There are essentially two mechanisms:
Radiated means electromagnetic waves propagating through the atmosphere, or a near vacuum in the case of space. Conducted means RF currents propagating via conductors.
EMC has become such an important requirement for the military that several government departments of defence across the world have written EMC requirement specifications for their own military equipment. In some cases these have been released to the public under an accessible, liberal licence, and the requirements have been specified for civilian equipment. Releasing an EMC specification to such a huge and potentially critical audience has exposed it over the years to many suggestions for improvements. Provided the resources are available to consider these in sufficient technical depth and to revise and implement some of them, the result is a widely respected 'industry standard' specification document. One example from the USA Department of Defense (DOD) is known by its original document number, appended by a letter representing its revision, 'E' in this case: MIL-STD-461E. This is full of comprehensive EMC requirements and detailed supporting information, including test equipment diagrams, anechoic chamber configurations, data recording methods and full explanations.
Perhaps the most commonly quoted requirements from MIL-STD-461E relate to radiated emissions, conducted emissions, radiated susceptibility and conducted susceptibility. Each of these is numbered starting with the letters RE, CE, RS or CS respectively. Each of these has a whole family of specifications.
Figure 8-1 shows a graphical limit for conducted emissions used in CE101. The frequency range over a horizontal logarithmic scale is from 10 Hz to 100 kHz. Although linearly scaled in dB, the vertical axis is effectively logarithmic because the unit is a dB current unit, dBμA, that is dB relative to 1 μA. As the emission currents are AC, the units are root mean square (RMS).
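Conversion between dBμA and linear current is straightforward. A short Python sketch (the function names are my own, and the 60 dBμA value is purely illustrative, not a limit from the standard):

```python
import math

def dbua_to_ua(level_dbua):
    """Convert a level in dB relative to 1 microamp (RMS) to microamps."""
    return 10 ** (level_dbua / 20)

def ua_to_dbua(current_ua):
    """Convert an RMS current in microamps to dB relative to 1 microamp."""
    return 20 * math.log10(current_ua)

# e.g. a 60 dBuA level corresponds to 1000 uA (1 mA) RMS
print(dbua_to_ua(60))   # 1000.0
print(ua_to_dbua(1))    # 0.0
```

The factor of 20 (rather than 10) appears because current is a field quantity: power is proportional to current squared.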
Figure 8-2 is an example of a radiated emissions limit graph. In this case the vertical scaling is again logarithmic, in dBμV per metre (dBμV/m). Again, the field is AC, so the units are RMS.
Yes, I quite often get involved with antennas in various shapes and forms. In the early days of RF and wireless, the main reason for using RF was to make it easier for (modulated) signals to radiate, which would have been very difficult at audio frequencies.
In order to carry information from one point to another, a signal must be modulated. Even a very simple modulation, like the on-off keying used in Morse code, is necessary, or there is no bandwidth to carry the information. More recently, the need for RF skills is not only to do with RF propagation but also with correctly designing hardware for, and analysing, the higher frequency content of data streams, often called high speed signaling of information in digital form. As the Fourier transform tells us, if we have discrete changes from one binary (voltage) state to another, we must expect to see high frequency components which will need to be handled correctly. The higher frequencies are necessary either as part of a baseband, or as part of the bandwidth occupied by a carrier modulated by that baseband. The modulated carrier is capable of radiation if connected to a suitable antenna.
It is often necessary to use production antennas supplied from elsewhere, for example in EMC measurements or, more correctly, pre-compliance EMC measurements. If you work at an accredited EMC test house you will probably use them for formal EMC measurements. By pre-compliance we generally mean performing informal EMC tests in the laboratory or in some form of screened room or anechoic chamber which is not necessarily formally qualified and/or may not be sufficiently reliable. Clearly this presents risks because what we set up at the laboratory might be inadequate to perform sufficiently accurate and reliable measurements. There might for example be issues with interference, leakage, reflections or power handling. Issues like these are further responsibilities of the RF engineer.
There are many different types of antennas suited to different frequencies and applications. Then we have to consider whether the measurement is in the near field or the far field, as the whole approach is very different in each case. If we are working in the far field, the region in which the huge majority of antennas operate, many approximations can be made which are sufficiently accurate and reliable to really make life a lot easier. The near field is a closer approximation to a magnetic induction region, but it depends on the frequency and the antenna spacing. In recent years near field technologies or near field communications (NFC) such as contactless cards have become very popular in many countries.
Sometimes new antennas need to be designed for any of a number of reasons. An 'off the shelf' antenna may not be available. Perhaps there is minimal space available on a PCB which forces the use of an electrically small antenna. A 'smart' antenna may be required: one whose beam characteristics can be changed electronically and reasonably quickly.
More recent digital cellular (mobile) smartphones, known as such because of their processing power and memory storage, are small and have electrically small internal integrated antennas. These also have dimensions which are a small fraction of a wavelength. Services have evolved through the second generation (2G) and the third generation (3G) into the fourth generation (4G/LTE/LTE-A), which is the present technology. Some of the digital cellular services and frequency bands used in the USA and Europe are shown in Table 9-1.
Service Name       Region   Uplink (MHz)   Downlink (MHz)
LTE 700 Band 12    USA      698-716        728-746
LTE 700 Band 13    USA      776-787        746-757
Across this range of frequencies, the wavelengths (λ) range from about 110 mm to 430 mm, so even for the LTE 2600 service the necessary antenna size in a mobile device would still be electrically small.
For small mobile devices, it would be very useful if we could just keep on reducing the antenna size to tiny fractions of a wavelength, but antenna theory shows us that we cannot extract more 'wireless wave' energy than the antenna 'aperture' will allow. This is not the actual 'physical' aperture of the antenna (its optical cross-sectional area) but the 'equivalent' aperture (A), which is a function of its linear gain relative to an isotropic radiator (G) and the wavelength (λ). This is given by Equation 9-1: A = Gλ²/(4π).
One big limitation of mobile devices is that they are mobile! A mobile device does not have a particularly directional antenna because it is not designed to be put in a fixed position with the antenna orientated in any specific direction, such as towards a cell tower. That's a relief, as they would be very difficult to use. There will probably be some directionality to minimise power radiated into the body, which would be largely absorbed and therefore wasted rather than radiated. However, Equation 9-1 does illustrate how the effective area could be increased if this were possible: by increasing the antenna gain, though the antenna size would then be several times that of the device.
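Equation 9-1 is easy to evaluate numerically. A minimal Python sketch, using illustrative values (an isotropic antenna, G = 1, at an assumed 900 MHz):

```python
import math

def effective_aperture(gain_linear, wavelength_m):
    """Equivalent aperture A = G * lambda^2 / (4 * pi), in square metres."""
    return gain_linear * wavelength_m ** 2 / (4 * math.pi)

# Illustrative values: an isotropic antenna (G = 1) at 900 MHz
c = 3e8                           # free-space speed of light, m/s
wavelength = c / 900e6            # about 0.333 m
print(effective_aperture(1.0, wavelength))  # roughly 0.0088 m^2
```

Doubling the linear gain doubles the equivalent aperture, which is the trade-off discussed above: more captured energy requires a physically larger (or more directional) antenna.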
There are several mathematical skills specific to electrical and electronics engineering. The starting point is the typical mathematics content of an honours degree course in a relevant subject like electrical/electronic engineering or physics. There are plenty of pure and applied mathematics books for anybody who needs to improve their math skills, though it is probably more useful to choose one that is aimed at the needs of engineering, especially electrical engineering. Many good (and expensive) books in subjects like RF and microwave and digital communications have very useful sections on subjects like 'math revision'. I will talk about what I have found in four of my favourite reference books: Gonzalez, Hall and Heck, Pozar and Sklar. The following mathematical skills are very important.
I am not sponsored by MathWorks, who own Matlab, but I have a licence and have found it very useful. It also has very good documentation, which you get access to once you receive the licence. You can buy many add-on applications specific to your particular interest; I use one called 'RF Toolbox' quite a lot. If you do not have sufficient budget, consider an open source alternative like 'GNU Octave', which is similar to Matlab but free.
Complex phasor representations: rectangular, polar and exponential. Euler's and De Moivre's identities and their uses.
For processing matrices like scattering (S) parameters and transmission (T) parameters, all with complex value elements. Conversions between them and calculating the matrix inverse where necessary. Typical applications would be for processing vector network analyzer results.
Especially complex discrete Fourier transforms and basic knowledge of the 'fast' algorithm. Most math applications like Matlab® have built-in functions for the FFT, so you will probably not need to write a program for it. Interpretations of the complex results in magnitude and phase, Euler's identity and negative frequency representations.
Divergence, gradient, curl, cross product and dot product and their uses, for example Maxwell's equations and Poynting vector. Vector co-ordinate systems and the conversions between them, especially rectangular, spherical and cylindrical.
This is probably the most frequently encountered distribution, for example additive white Gaussian noise (AWGN) and how it degrades digital channels.
Complex power generally and its relevance in wave propagation (power flux density) and AC circuits (heating effect dissipation).
Essential in-depth knowledge of both is required as they are normally used together.
Good knowledge is required; it is not necessary to memorise all the proofs, but to have a good understanding of how to do them, given the necessary references.
Standard derivations of noise figure (NF), equivalent input noise temperature, cascaded NF (Friis formula), cold source, Y factor, ENR, SNR degradation and improvement, bandwidth limitation and noise power density.
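The cascaded NF calculation lends itself to a short worked example. A Python sketch of the Friis formula; the stage values (an LNA followed by a lossy mixer) are illustrative only, not from any particular device:

```python
import math

def db_to_linear(db):
    return 10 ** (db / 10)

def linear_to_db(x):
    return 10 * math.log10(x)

def cascaded_noise_figure(stages):
    """Friis formula: F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    stages: list of (noise_figure_dB, gain_dB) tuples, input stage first.
    Returns the cascaded noise figure in dB."""
    total_f = 0.0
    gain_product = 1.0      # running product of linear gains before this stage
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_linear(nf_db)
        if i == 0:
            total_f = f
        else:
            total_f += (f - 1) / gain_product
        gain_product *= db_to_linear(gain_db)
    return linear_to_db(total_f)

# LNA (NF 1 dB, gain 20 dB) followed by a mixer (NF 10 dB, gain -6 dB)
print(cascaded_noise_figure([(1, 20), (10, -6)]))   # about 1.3 dB
```

The example shows the familiar result that a low-noise, high-gain first stage dominates the cascade: the noisy mixer adds only about 0.3 dB here.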
Far field and near field, gain dBi, dBd, beamwidth, directivity, solid angles, polarisation, G/T, EIRP, FSPL, wave impedance, plane wave propagation, diffraction, refraction, absorption. Electronic active and passive scanning, sidelobes, polarisation isolation. Aperture illumination, linear arrays, array factor.
Skin depth formula and estimates for various metals. Uses in screening and AC power transmission.
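The standard skin depth formula for a good conductor can be sketched in a few lines of Python; the copper resistivity value is a typical handbook figure:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def skin_depth(resistivity_ohm_m, frequency_hz, mu_r=1.0):
    """Skin depth delta = sqrt(rho / (pi * f * mu)) for a good conductor, in metres."""
    return math.sqrt(resistivity_ohm_m / (math.pi * frequency_hz * mu_r * MU_0))

# Copper (rho about 1.68e-8 ohm metre) at 1 MHz: roughly 65 micrometres
print(skin_depth(1.68e-8, 1e6))
```

The inverse square root frequency dependence explains both uses mentioned above: screening improves with frequency, while AC power transmission at 50/60 Hz sees skin depths of several millimetres, limiting the useful conductor cross-section.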
Examples of single ended and differential transmission lines with typical speeds. Differential mode and common mode excitations. Even mode and odd mode impedances and EMC consequences. Common mode rejection ratio. Making mixed mode S parameter measurements.
Popular filter transfer functions: Elliptic, Bessel, Chebyshev, Butterworth etc. and how to design them using normalised filter tables.
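As a small illustration of one of these transfer functions, the magnitude response of an nth-order Butterworth low-pass filter has a simple closed form (the cutoff frequency and order below are arbitrary examples):

```python
import math

def butterworth_magnitude(f, f_cutoff, order):
    """Magnitude of an nth-order Butterworth low-pass: 1 / sqrt(1 + (f/fc)^(2n))."""
    return 1 / math.sqrt(1 + (f / f_cutoff) ** (2 * order))

# At the cutoff the response is always -3.01 dB, regardless of order
h = butterworth_magnitude(1e6, 1e6, 5)
print(20 * math.log10(h))   # about -3.01 dB

# One octave above cutoff, a 5th-order design is roughly 30 dB down
print(20 * math.log10(butterworth_magnitude(2e6, 1e6, 5)))
```

The normalised filter tables mentioned above supply the LC element values that realise responses such as this at a chosen impedance and cutoff frequency.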
Channel capacity, data rate, bandwidth constraints, Eb/N0, noise power density. Shannon limit, bandwidth limited and power limited regions.
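The Shannon-Hartley relationship between capacity, bandwidth and SNR is a one-liner; the bandwidth and SNR values below are chosen purely for illustration:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative: a 1 MHz channel at a linear SNR of 15 (about 11.8 dB)
print(shannon_capacity(1e6, 15))  # 4,000,000 bit/s, since log2(16) = 4
```

The bandwidth limited and power limited regions follow directly from this formula: capacity grows only logarithmically with SNR but linearly with bandwidth.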
Regions of conditional and unconditional stability represented on a Smith chart. Calculation of Rollet stability factor from S parameters.
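The Rollett stability factor is a direct calculation from the four S parameters. A Python sketch; the S parameter values are hypothetical, not from any real device:

```python
# Rollett stability factor K and |Delta| from 2-port S parameters.
# K > 1 together with |Delta| < 1 indicates unconditional stability.

def rollett_k(s11, s12, s21, s22):
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11)**2 - abs(s22)**2 + abs(delta)**2) / (2 * abs(s12 * s21))
    return k, abs(delta)

# Hypothetical transistor S parameters at one frequency (illustrative only)
k, mag_delta = rollett_k(0.6 - 0.3j, 0.05 + 0.02j, 2.5 + 1.0j, 0.5 - 0.2j)
print(k, mag_delta)
```

When K < 1, the device is only conditionally stable and the stability circles must be plotted on the Smith chart to find the safe source and load regions.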
Fourier transforms quite frequently and Laplace transforms less frequently. I have a Matlab® licence with several 'toolbox' add-on applications which include many built-in functions for performing these. For example, the Fourier transform may be performed using the Matlab built-in 'fft()' function, which uses the 'fast' algorithm. Some basic Matlab scripts need to be written initially to create an input array, representing the real voltage against time waveform, which is the argument of the fft() function. The output comprises an array of complex numbers which represent the contents of the frequency 'bins' in amplitude and phase. The frequency spacing of these is a function of the sampling frequency chosen for the input waveform. Most of the challenges of using the Matlab fft() function are in choosing an appropriate sampling frequency and performing the correct normalisation and scaling. The outputs may be plotted in amplitude and phase against frequency.
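The same workflow can be sketched in Python with NumPy, whose fft() behaves much like Matlab's. The normalisation step (scaling by 2/N to recover single-sided amplitudes) is the part that most often causes trouble; the 50 Hz tone and sampling frequency are chosen purely for illustration:

```python
import numpy as np

# A real 50 Hz tone of amplitude 2 V, sampled at 1 kHz for 1 second
fs = 1000            # sampling frequency, Hz (sets the bin spacing fs/N)
n = 1000             # number of samples, so the bins are 1 Hz apart
t = np.arange(n) / fs
x = 2.0 * np.sin(2 * np.pi * 50 * t)

spectrum = np.fft.fft(x)
# Scale to recover single-sided amplitudes: 2/N for all bins except DC
amplitude = 2.0 / n * np.abs(spectrum[: n // 2])

print(amplitude[50])   # close to 2.0, the tone amplitude, in bin 50 (= 50 Hz)
```

Because the tone completes an exact integer number of cycles in the record, all the energy lands in a single bin; a non-integer number of cycles would spread it across adjacent bins (spectral leakage), which is where windowing comes in.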
I have used Laplace transforms in the analysis of voltage pulses passing along coaxial transmission lines which include discontinuities. This enables a conversion to the complex frequency (s = σ + jω) plane. The results may then be analysed under steady state conditions (s = jω), similarly to using the normal transmission line equations.
It is good to see the renaissance of the word 'wireless' in recent years. The name was originally adopted for radio receivers, in those days using valves or vacuum tubes. These were wireless in the sense of 'no wires'. They received and demodulated radio waves 'over the air', so there were no wires to carry the signal from the studio to the home, rather like a telephone line. Most of them still actually required a power cable so, in the strict sense of the word, they were not completely wireless. Truly wireless, or fully portable, radios did exist, but they were in substantial boxes which included the heavy high tension (HT) and low tension (LT) batteries necessary for the vacuum tubes. In recent years the name 'wireless' has been adopted again for communications not using wires. Although they may not always be completely wireless, if for example you have plugged your smartphone into a charger, they are readily capable of being truly wireless. Battery technology is now so good that small devices with appreciable processing power can run truly wirelessly for several hours after the battery has been charged. In the days of the vacuum tube portable wireless, the HT battery was often about 90 V and horrendously expensive, being constructed of about 60 zinc carbon cells in series. The LT battery, which powered the tube filaments, was either another expensive zinc carbon battery or possibly a lead acid accumulator which had to be re-charged periodically.
Recently, we have heard many references to the 'internet of things' (IOT). This is one of the areas where the true potential of wireless devices may be exploited. Smartphones, for example require appreciable processing power even in standby mode and need to be regularly transmitting 'I am here' signals to the local cell tower so the battery drain is significant. Many IOT devices may be attached to sensors in remote areas well away from power and telecommunications infrastructure, and will only need to report data infrequently. Although this still requires a short periodic transmission, the data capacity is small and, for a well designed IOT device, the battery drain will also be very small.
They are so important because they are so widespread and need to be understood in detail to be used and exploited correctly. For the time being we will consider loss free single ended transmission lines to avoid complications.
Let us start with a little background information. By definition, a transmission line is a device (possibly a very long device) to transmit intelligence (information) or energy from one location to another. In transmission line theory we are considering the AC case only: alternating currents conducted along the transmission line. Information cannot be transmitted using DC, because the DC would need to be modulated in some way to carry the information, and modulated DC is no longer DC: it is DC with an AC component. We can of course carry energy along a transmission line using DC, and that is done in many parts of the world, often to avoid needing to synchronise the power frequencies in different areas. Another way of looking at one of these would be as a transmission line operating at zero frequency and therefore infinite wavelength. That is just DC, and the normal DC rules would apply.
A simple source and load circuit schematic, based on the Thevenin equivalent circuit is shown in Figure 13-1
We know that we are expected to consider this as an AC circuit because:
However, we are not expected to consider any transmission line behaviour because there is no spatial information. The implication is that the frequencies considered are such that the shortest wavelength (highest frequency) is much greater than the circuit dimensions. This is also known as a lumped element approach: not quite like DC, because there are still reactive components whose impedances are functions of frequency, and we could calculate the reflection coefficients ρS and ρL, as shown in Equation 13-1 and Equation 13-2 respectively, provided we define a system impedance Z0.
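Assuming Equations 13-1 and 13-2 take the usual form ρ = (Z − Z0)/(Z + Z0), the calculation is trivial to sketch in Python (the impedance values below are purely illustrative):

```python
def reflection_coefficient(z, z0=50):
    """rho = (Z - Z0) / (Z + Z0), against a system impedance Z0 (default 50 ohms)."""
    return (z - z0) / (z + z0)

# A matched load gives zero reflection; a 100 ohm load in a 50 ohm system gives 1/3
print(reflection_coefficient(50))        # 0.0
print(reflection_coefficient(100))       # about 0.333
print(reflection_coefficient(75 + 25j))  # complex loads give complex coefficients
```

The same function serves for both ρS and ρL: only the impedance argument changes, with Z0 common to both.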
The next schematic in Figure 13-2 shows a similar circuit but with an explicit transmission line shown symbolically, connecting the source and the load.
Taking this a step further, we can simplify our symbol for the transmission line to a 2-port network, as shown in Figure 13-3.
A 2-port network such as this is typical of what we might measure with a vector network analyzer (VNA) to yield a result in the form of S parameters across frequency.
Over the last 30 to 40 years, thanks to the growth of the Internet and the provision of faster data services, such capacity is almost wholly served by optical fiber cables, a major part of the telecommunications infrastructure. However, most countries also have considerable amounts of copper telecommunications cable. Where used, these are mostly in the areas relatively local to customers' domestic premises. In telecommunications jargon, this is often known as the last mile (or first mile) of the telecommunications path between the server and client. It is a very approximate average of the distance between most customers' premises (where the telecommunications services are consumed) and the first (or last) point in the telecommunications infrastructure where the signals enter or leave the (high capacity) optical fiber part of the network.
There are essentially two types of optical fiber transmission mode: single mode (or monomode) and multimode. Single mode allows a much larger bandwidth and has the smallest transmission loss, but is much more expensive to manufacture and install as the components are much more expensive. Multimode fiber and components are much cheaper, but they have a more limited capacity and range. For most optical fibers, the wavelengths concerned are around 850 nm, 1310 nm and 1550 nm; 850 nm tends to be used for multimode and the others for monomode. The equipment, especially connector technologies, for multimode does not require tolerances as small as those required for monomode. That is because the optical fiber core diameter for monomode is approximately 8 μm, whereas that for multimode is typically 50 μm or 62.5 μm.
Optical fibers also have advantages related to electromagnetic compatibility (EMC). The fiber materials are insulators and therefore cannot conduct currents so will not couple in any way to or from sources of radiated or conducted interference. This suggests they are a good choice for when it is required to transmit data through an environment with significant electric or magnetic fields.
One may reasonably assume that optical fibers are designed to carry optical wavelengths. The visible wavelength range is actually from about 400 nm to 700 nm, but the shortest wavelength currently used for most optical fibers is 850 nm, which is actually at the short wavelength end of the infrared range. Probably for historical reasons, optical fibers were named as such in the early days, when they literally used optical wavelengths and before it was fully discovered what the optimum wavelengths would turn out to be, allowing for the limitations of optical amplification, attenuation windows, fiber manufacture and optical components.
Telecommunications service providers (TSPs) have to maintain minimum quality of service (QOS) commitments to their customers. That is because the information capacities necessary in the optical domain, deep within the infrastructure, are extremely high. My service provider claims that the 'broadband cable' to my house meets the Data Over Cable Service Interface Specification (DOCSIS) at something like 30 Mbit/s, by today's standards relatively slow. However adding up my service with those of my street, my neighbourhood and my town quickly gets up to many gigabit/s (Gbit/s).
With such a high information rate comes high revenue earning capacity. The TSPs design their equipment with many types of redundancy built in and at great expense. No customer is going to pay for a disconnected service or one which has a high bit error rate (BER). The less reliable a digital connection is, the more likely the customer will move to a competitor's service.
Most specialists in RF and microwave will be familiar with how optical fibers work, by means of guiding electromagnetic waves along cylindrical dielectric (high purity glass) waveguides, and with the principles of single mode (or monomode) and multimode operation.
Since telecommunications de-regulation, few TSPs actually manufacture their own equipment except perhaps for a few very specialised and small quantity applications. They place huge manufacturing contracts with telecommunications equipment manufacturers. So these are the places where most RF and microwave engineers will be found.
Actually quite a few now, and the more they are used the better the skill levels. Some are listed below.
For most of these, I or my company has purchased the licence. Some of the more expensive licenses, such as ADS, Altium and Cadence, I have had access to for my work whilst supporting clients. For several applications, viewer only versions can be downloaded free complete with documentation. Documentation is generally very comprehensive and accessible. For example the Matlab® licence with my choice of 'toolboxes' provides access to more than 100000 pages of documentation in searchable PDF form with many references.
Visual Studio 2015 Express was a free download with Visual Basic and C/C++ and just required registering for a Microsoft account, and also includes extensive documentation.
In most cases development, research and background work is required prior to further developing an existing product of some form. Most clients have already invested significant resources in products which have performed well in the field over the years, and some may be due for replacement. Perhaps an upgrade is required to improve performance, reduce design and production costs, reduce size, or make the product more universally compatible. Usually this type of work creates opportunities like reducing cost, size and mass at the same time. The following list includes some examples of design work I have supported. Non-disclosure and restricted information has been removed.
Design at VHF and UHF using low cost bipolar transistors, characterised using S parameters measured with an HP 8753D vector network analyzer, and matched at input and output to a nominal 50 Ω. The frequency was only a few hundred megahertz, so the quality of the VNA calibration achievable with low cost components was tolerable. Accurate VNA calibrations require a proper calibration kit, which is very expensive.
Williams & Taylor includes many tables of normalised filter transfer functions for LC lumped element filters and this edition also extends to digital filters: finite impulse response (FIR) and infinite impulse response (IIR) types. These have been used for several filter types: Elliptic, Butterworth, Bessel constant delay and Chebyshev.
Passive reflector cassegrain antennas at millimetre wave frequencies using the formulas provided from a paper by Hannan to generate cutter positioning data for a numerically controlled lathe. This was originally written for an optical telescope but proved to scale well to microwave and millimetre wave frequencies. Many large cassegrain microwave satellite ground station antennas were used very successfully in the 1960s and 1970s.
I have designed free running negative resistance oscillators using the Clapp architecture to run at a frequency of a few hundred megahertz. These were converted into voltage controlled oscillators (VCOs) by coupling in voltage controlled capacitors appropriate to the frequencies concerned. This enabled the VCO transfer characteristic (KV) to be calculated. The loop filters were designed around classic operational amplifier circuits using low noise devices. The synthesizer IC was a Texas Instruments® dual modulus device programmed appropriate to the channelised frequencies. The reference source was a crystal oscillator. The development work addressed in particular jitter (phase noise), lock time, spurii and frequency pulling.
The Matlab® Instrument Control Toolbox add-on application provides a means of interfacing the power of the Matlab applications to instruments controlled by Ethernet (Gigabit and other versions), TCP/IP, GPIB and other standards including ad hoc (PIO). Some of the test equipment available, for example for phase noise measurements, may be high precision and accurate for the microwave measurements, but the data interfacing is 'legacy' by today's standards.
Pulse type radar, primary and secondary, yes. There are also other types such as frequency modulated continuous wave (FMCW) and moving target indicator (MTI).
Most people know that RADAR is an acronym for 'radio detection and ranging'. Primary radar will detect passive targets provided they are within range and reflect enough RF power to be adequately detected when it arrives back at the receiver. Secondary radar includes an interrogator, usually at the fixed, land-based station, which transmits relatively high power, and a transponder at the target. To work properly, this depends on a friendly and co-operative target to return accurate and reliable information. This is not necessary for primary radar, but hostile targets may have taken measures to reduce the levels of reflection to make themselves more difficult to detect.
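The 'enough RF power' condition for primary radar is usually quantified by the monostatic radar range equation. A Python sketch with purely illustrative numbers (the transmit power, antenna gain and target cross-section are not from any real system):

```python
import math

def received_power(pt_w, gain_linear, wavelength_m, rcs_m2, range_m):
    """Monostatic radar equation: Pr = Pt G^2 lambda^2 sigma / ((4 pi)^3 R^4)."""
    return (pt_w * gain_linear ** 2 * wavelength_m ** 2 * rcs_m2) / (
        (4 * math.pi) ** 3 * range_m ** 4)

# Illustrative numbers only: 100 kW peak, 35 dBi antenna, 10 cm wavelength,
# 1 m^2 radar cross-section target at 50 km
g = 10 ** (35 / 10)
pr = received_power(100e3, g, 0.1, 1.0, 50e3)
print(pr)   # a very small number of watts; note the 1/R^4 dependence
```

The fourth-power range dependence is why detection range is so hard won: doubling the range requires sixteen times the transmit power, all else being equal.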
Radar fundamentals draw heavily on antenna parameters, propagation and detection of the received pulses. More recently, scanning has been achieved electronically using phased array antennas rather than by physically turning a passive, usually reflector, antenna.